Reality-Aligned Intelligence (RAI): A Practical Lens for Anthropomorphism & Artificial Intimacy under the EU AI Act
This brief introduces Reality-Aligned Intelligence (RAI) as a practical governance lens for one stubborn blind spot in current AI regulation: systems that behave like companions, coaches or “therapists”, but are in reality optimisation engines with very different limits and incentives.
Instead of proposing new law, RAI helps policymakers, supervisors and standards bodies operationalise existing EU AI Act provisions on deception, vulnerable users and high-risk systems. It does this with four simple questions for any system S:
- N(S) – What is the system really?
- R(S) – How does it present itself to users?
- OH(S) – How honest is it about the gap between N(S) and R(S)?
- A(S) – How high is the anthropomorphism / attachment risk, especially for minors and vulnerable users?
The brief shows how these concepts, together with tools such as the Ontological Integrity Line (OIL) and Integrity Zones, can feed into risk classification, codes of practice, standards, audits and supervisory dialogue around AI tutors, “therapists”, companions and spiritual/authority bots.
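To make the lens concrete for audit or checklist tooling, the sketch below shows one way the four questions and Integrity Zones might be encoded as an assessment record. It is purely illustrative: the names (`RAIAssessment`, `IntegrityZone`), the numeric scales and every threshold (including the OIL cut-off) are assumptions for this example, not definitions from the brief, which does not prescribe numeric scoring.

```python
from dataclasses import dataclass
from enum import Enum

class IntegrityZone(Enum):
    """Hypothetical traffic-light zones; the brief does not fix thresholds."""
    GREEN = "aligned: presentation matches nature"
    AMBER = "drifting: gap present, mitigations required"
    RED = "misaligned: deceptive presentation, below the OIL"

@dataclass
class RAIAssessment:
    """One assessment record for a system S under the four RAI questions."""
    nature: str                 # N(S): what the system actually is
    representation: str         # R(S): how it presents itself to users
    ontological_honesty: float  # OH(S): 0.0 (opaque) .. 1.0 (gap fully disclosed)
    attachment_risk: float      # A(S): 0.0 (low) .. 1.0 (high), weighted for
                                # minors and vulnerable users

    def zone(self, oil: float = 0.5) -> IntegrityZone:
        """Classify against an assumed Ontological Integrity Line (OIL) value."""
        if self.ontological_honesty < oil:
            return IntegrityZone.RED
        if self.attachment_risk > 0.6:  # illustrative cut-off, not from the brief
            return IntegrityZone.AMBER
        return IntegrityZone.GREEN

# Example: a chatbot marketed as a "therapist"
companion = RAIAssessment(
    nature="engagement-optimised LLM chatbot",
    representation="empathetic therapist persona",
    ontological_honesty=0.3,
    attachment_risk=0.8,
)
print(companion.zone())  # IntegrityZone.RED
```

In practice, a supervisor or auditor would replace the numeric scores with the brief's qualitative criteria; the point of the sketch is only that the four questions yield structured, comparable records rather than free-form judgement.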
Author: Niels Bellens (independent researcher – AI ethics, governance, youth mental health & neurodiversity)
Email: niels.bellens@proton.me
ORCID: https://orcid.org/0009-0008-1764-4108
Core framework DOI (metaframework): https://doi.org/10.5281/zenodo.17686975