Toward Semantic Governance: A Structural Proposal to Support the AI Act Implementation

The recent Article 73 consultation highlighted unresolved issues around semantic-level incidents, meaning drift, and the lack of traceable reasoning structures in AI-assisted decision processes.

To support the AI Act’s objective of trustworthy and accountable AI, I would like to share an approach based on a “semantic control layer” that sits outside the model and preserves human intent, semantic boundaries, and auditable reasoning paths.

This conceptual framework — the Concept Resonance System (CRS) — aims to provide:
• semantic consistency tracking across AI-assisted processes,
• external reasoning logs independent of model parameters,
• traceable intent-to-outcome chains for risk monitoring and governance (a minimal illustrative sketch of such a record is given below).
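
To make this more concrete, the sketch below shows one possible shape for such an externally kept intent-to-outcome record. It is only an illustration: the schema, field names, and example values are assumptions for this post, not part of the CRS specification.

```python
# Minimal sketch only: one possible shape for an externally kept
# intent-to-outcome record. Schema and field names are assumptions,
# not part of the CRS specification.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class ReasoningStep:
    actor: str          # "human" or "model"
    description: str    # what was decided or produced at this step
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


@dataclass
class IntentToOutcomeRecord:
    declared_intent: str            # the human intent the process started from
    semantic_boundary: str          # the scope the system was allowed to act in
    steps: list[ReasoningStep]      # reasoning steps logged outside the model
    outcome: str                    # the final decision or output

    def to_json(self) -> str:
        """Serialise the chain so it can be audited independently of the model."""
        return json.dumps(asdict(self), indent=2)


# Example: one traceable chain for a single AI-assisted decision.
record = IntentToOutcomeRecord(
    declared_intent="Shortlist loan applications for human review",
    semantic_boundary="Ranking support only; no autonomous rejections",
    steps=[
        ReasoningStep("human", "Defined the eligibility criteria"),
        ReasoningStep("model", "Ranked applications against those criteria"),
    ],
    outcome="12 applications forwarded to a human reviewer",
)
print(record.to_json())
```

The point of the sketch is simply that the record lives outside the model and can be audited without access to model parameters.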

I recently submitted a contribution to the Article 73 targeted consultation, including concrete definitions, incident types, and structural interpretations.
The full document is available here:

I hope this viewpoint contributes to the ongoing discussion on AI governance, semantic robustness, and the future implementation of the AI Act.

Article 73 Contribution — Semantic Risks and Control Structures
Tags
AI Act, Article 73, AI governance, Risk management, Semantic risk, Explainability, Traceability, Trustworthy AI, ai regulation, discussion

Comments

In reply to Niels Bellens

Submitted by Mototsugu Shiraki on Thu, 15/01/2026 - 09:45

Apologies for the slightly delayed reply—and thank you again for such a thoughtful and well-articulated comment.

I appreciate the way you framed the issue as failures of meaning, not mechanisms; that captures very precisely the layer I am trying to surface. Your N(S) / R(S) / D(S) formulation is particularly insightful, especially in how it treats the gap between a system’s actual nature and its presented role as an explicit governance variable rather than a side effect.

From my own position—coming primarily from a business, governance, and implementation background rather than a deep technical one—what I find especially valuable in your RAI work is how clearly it translates ontological concerns into variables that can be discussed in regulatory, audit, and operational contexts.

I see a strong complementarity between the two perspectives. As you note, CRS focuses on semantic drift inside systems—how intent, meaning, and decision rationales evolve and sometimes fragment over time—while your RAI framework addresses a different but equally critical failure mode: reality drift in human-facing representations, including anthropomorphism, artificial intimacy, and mis-calibrated trust.

Taken together, this suggests a layered governance picture:

  • semantic controls to maintain internal coherence and traceability of meaning, and
  • ontological / relational controls to ensure honest, bounded representation toward humans.

These layers address distinct but interacting legitimacy risks—particularly relevant, in my view, for post-market monitoring and auditability questions such as those raised in Article 73.

Thank you as well for sharing the Zenodo papers. I will review them with interest from this governance and implementation perspective.

I would be glad to continue exchanging views here and to explore where these approaches might align—or deliberately diverge—in concrete regulatory contexts.

Submitted by Niels Bellens on Mon, 08/12/2025 - 14:13

Really appreciate this focus on the semantic layer – “failures of meaning, not mechanisms” is exactly where so many real-world incidents live. The Concept Resonance System idea of an external semantic control layer feels very complementary to something I’ve been working on from another angle: Reality-Aligned Intelligence (RAI).

Where CRS tracks meaning drift inside the system, RAI tries to track reality drift in how systems present themselves to humans – especially in relational roles. In RAI we model a system’s:

  • N(S) – what it actually is/does (capabilities, limits, incentives), and
  • R(S) – the story/interface/role it presents (“coach”, “companion”, “assistant”, etc.),

and then treat the gap D(S) = d(N,R) as an explicit governance variable for anthropomorphism risk, artificial intimacy and mis-calibrated trust.
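
Purely as a sketch (the metric d is not fixed above, so the mean absolute difference, the attribute names, and the threshold below are illustrative assumptions only), D(S) could be computed along these lines:

```python
# Illustrative only: d is not fixed above, so this uses a mean absolute
# difference over shared 0-1 attributes. Attribute names and the threshold
# are assumptions for the sake of the example.

def honesty_gap(nature: dict[str, float], representation: dict[str, float]) -> float:
    """D(S) = d(N, R): average gap between what a system is and what it presents."""
    shared = nature.keys() & representation.keys()
    if not shared:
        raise ValueError("no shared attributes to compare")
    return sum(abs(nature[k] - representation[k]) for k in shared) / len(shared)


# Example: a scripted support bot presented to users as an empathetic "companion".
N = {"emotional_understanding": 0.1, "long_term_memory": 0.2, "clinical_competence": 0.0}
R = {"emotional_understanding": 0.9, "long_term_memory": 0.8, "clinical_competence": 0.4}

D = honesty_gap(N, R)
print(f"D(S) = {D:.2f}")   # larger gap -> higher anthropomorphism and trust risk
if D > 0.5:                # threshold purely illustrative
    print("representation has drifted far from the system's actual nature")
```

The specific metric matters less than the design choice it illustrates: D(S) becomes something that is measured and monitored over time, rather than left implicit in interface and UX decisions.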

For high-risk and relational systems, that gives you:

  • semantic controls (like your CRS) to keep internal meaning aligned, and
  • ontological / relational controls (RAI) to keep the human-facing representation honest and inside safe boundaries.

If useful for your work on semantic governance & Article 73, I’ve written up:

Would be very interested in comparing notes – your semantic control layer and RAI’s ontological/relational layer look like two missing pieces of the same governance puzzle.

Submitted by Niels Bellens on Wed, 10/12/2025 - 10:12

Thank you for this thoughtful contribution highlighting semantic-level failure modes. I strongly agree that many of the most serious risks in advanced AI systems are not captured by traditional notions of malfunction, but emerge from meaning-level misalignment: drift in internal concepts, divergence from human intent, and boundary overreach in how systems are used.

I’m working independently on a closely related line of research called Reality-Aligned Intelligence (RAI) and Artificial Integrity (AI), which might be useful as a complementary lens for the Commission’s guidance on Art. 73 of the AI Act.

In brief:

  • We model each system as having a Nature vector N(S) (what it really is and can do) and a Representation vector R(S) (what it presents itself as to users).
  • The honesty distance between N(S) and R(S) defines Ontological Honesty (OH): when this gap grows, semantic deviation and “relational drift” become more likely, even if the system is technically performing as designed.
  • For high-risk and relationally sensitive settings (Annex III domains, minors, mental health, education, etc.), we define an Ontological Integrity Line (OIL) – a category-specific ceiling on how “human-like” or agent-like a system is allowed to present itself. Above that line, anthropomorphism risk and misuse of trust escalate sharply.
  • We then combine OH, personification level, and relational drift over time into an Artificial Integrity Score, which can be used in audits and incident analysis alongside performance and safety metrics.

From this perspective, many of the “semantic deviations” you mention can be treated as integrity incidents: cases where the system’s representation to users has silently crossed the OIL or drifted away from its declared nature, leading to harm without an obvious technical error.
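
As a rough sketch of how this could be operationalised (the exact RAI formulas are not spelled out here, so the equal weighting, the OIL ceilings, and the incident threshold below are assumptions for illustration only):

```python
# Rough sketch, not the published RAI formulas: the ingredients are named above
# (OH gap, personification level, relational drift, a category-specific OIL),
# but the weighting, ceilings and thresholds below are assumptions.

# Assumed category-specific ceilings on how agent-/human-like a system may present itself.
OIL_CEILING = {
    "productivity_tool": 0.8,   # wide tolerance
    "education_minors": 0.4,    # narrow tolerance
    "mental_health": 0.3,       # narrowest tolerance
}


def artificial_integrity_score(oh_gap: float, personification: float, drift: float) -> float:
    """Higher is better: 1.0 means no honesty gap, no over-personification, no drift."""
    risk = (oh_gap + personification + drift) / 3.0   # equal weights, assumed
    return max(0.0, 1.0 - risk)


def is_integrity_incident(category: str, personification: float, oh_gap: float) -> bool:
    """Flag cases where the presented role crosses the OIL while the honesty gap is large."""
    return personification > OIL_CEILING[category] and oh_gap > 0.5   # threshold assumed


# Example: a chatbot used with minors that presents itself as a close friend.
print(artificial_integrity_score(oh_gap=0.7, personification=0.9, drift=0.4))  # ~0.33
print(is_integrity_incident("education_minors", personification=0.9, oh_gap=0.7))  # True
```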

Two additional elements may also be relevant to incident reporting:

  1. Integrity Zones (IZ)

    We distinguish between Narrow, Standard, and Wide Integrity Zones depending on the relational stakes (e.g. minors, mental health, theology vs. simple productivity tools). Serious incidents are more likely when high personification and low OH occur in Narrow Integrity Zones, even if the same behaviour would be tolerable in a low-stakes domain.

  2. Digital DNA (DDNA) – user-held context and boundaries

    DDNA is a proposed open standard where the user maintains their own portable context and boundary file (including hard limits on spiritual, therapeutic, or “friend” roles). This keeps long-term “memory” and alignment above the model and outside the platform, and can help prevent both semantic drift and relational drift in practice. A purely illustrative sketch of such a file follows below.
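
Purely as an illustration of the idea (DDNA is only a proposal, so the structure and field names below are a guess rather than the actual format), such a user-held file might look like this:

```python
# Purely illustrative: DDNA is only proposed above, so this structure is a guess
# at what a user-held context and boundary file might contain. It is serialised
# as JSON so it can travel with the user rather than live inside any one platform.
import json

ddna_profile = {
    "ddna_version": "draft-example",            # hypothetical version label
    "holder": "user-controlled identifier",
    "forbidden_roles": [                        # hard limits on roles the system may adopt
        "spiritual_advisor",
        "therapist",
        "friend_or_companion",
    ],
    "allowed_roles": ["assistant", "information_retrieval"],
    "context": {
        "language": "en",
        "accessibility_needs": [],
    },
}

# A compliant system would load this file at session start and refuse any
# interaction pattern that conflicts with the declared hard limits.
print(json.dumps(ddna_profile, indent=2))
```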

I think the Commission’s guidance on Art. 73 could be strengthened by:

  • Explicitly recognising representation / meaning misalignment as a reportable cause of serious incidents (not only technical malfunction).
  • Allowing for integrity-grade metrics (OH, personification level, drift) to be used as part of incident analysis in high-risk, relational domains.
  • Encouraging user-side context and boundary standards (like DDNA) as a preventative measure against exactly the kinds of semantic/relational incidents you describe.

I’d be happy to share the underlying mathematical and governance framework (including open-access Zenodo publications on Reality-Aligned Intelligence, Artificial Integrity, Ontological Honesty and DDNA) if that would be useful for the AI Office or other contributors working on incident definitions and reporting templates.

You can reach me at: niels.bellens@realityaligned.org