Reality-Aligned Intelligence (RAI): A Practical Lens on Anthropomorphic & “Companion” AI under the EU AI Act

I’m sharing a short policy brief that translates the Reality-Aligned Intelligence (RAI) framework into concrete tools for EU AI Act implementation.


The focus: AI systems that present themselves as caring “almost-persons” – tutors, companions, coaches, therapists – while in reality they are statistical models with very different limits and incentives. The brief introduces four simple questions (the N/R/OH/A lens) together with the Ontological Integrity Line (OIL) and Integrity Zones, and shows how these can make existing provisions on deception, transparency and the protection of minors more operational.


I’d be very interested in feedback from policymakers, supervisors and practitioners on whether this framing is useful for guidance, standards and codes of practice around anthropomorphism, affective AI and AI “companions”.

Reality-Aligned Intelligence (RAI): A Practical Lens for Anthropomorphism & Artificial Intimacy under the EU AI Act
Tags
AI Governance, AI regulation, blog discussion

Comments

Submitted by Niels Bellens on Mon, 08/12/2025 - 14:00

Thank you for the opportunity to share this. The attached brief translates my open Reality-Aligned Intelligence (RAI) framework into practical questions for AI systems that present themselves as tutors, companions, coaches or “therapists”, with a focus on anthropomorphism, artificial intimacy and attachment risk under the EU AI Act.


I’d be especially interested in feedback on:

• how the N(S) / R(S) / OH(S) / A(S) lens could support risk assessment and enforcement (e.g. Art. 5 prohibited practices, transparency obligations, protection of minors);

• whether simple N/R/OH/A checklists and OIL/Integrity Zone concepts might be useful in upcoming guidance, standards or codes of practice;

• opportunities for small pilots with child-facing or mental-health-adjacent systems.


All RAI materials are open-access on Zenodo, and I’m very open to critique, refinement and collaboration.