Following recent engagement with the European AI Office (Jan 22) regarding the need for technical precision in setting thresholds, I am submitting this proposal to address the definitional gap between AI-assisted and AI-generated content.
To resolve this ambiguity, I propose the "State Discrepancy" framework as a practical solution.
1. The Core Issue: The "Invisible" Boundary
The current AI Act framework distinguishes between AI-assisted and AI-generated content using qualitative language. However, the exact boundary at which human judgment is effectively ceded to an automated system remains largely undefined in practice.
This creates a systemic risk for both compliance and human agency: users cannot objectively perceive when they are merely receiving assistance versus when judgment has been delegated.
2. Proposal: Visualizing the Boundary via “State Discrepancy”
I argue that the central challenge is making the boundary between human intent and system intervention perceptible. To address this, my research proposes the concept of State Discrepancy:
Definition: A measurable divergence between a user’s original intent state and the AI-mediated system state.
This approach treats the boundary not merely as a matter for policy debate but as a measurable threshold. Defining such thresholds can help prevent alarm fatigue while preserving transparency and legal certainty.
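To make the idea concrete, here is a minimal, purely illustrative sketch of how a State Discrepancy score and a delegation threshold might be operationalised. The vector representation of intent and system states, the cosine-distance metric, and the numeric threshold are all my own assumptions for illustration, not specifics drawn from the white paper.

```python
# Illustrative sketch only: one hypothetical way to compute a "State Discrepancy"
# score. The feature-vector representation, the cosine-distance metric, and the
# 0.35 threshold are assumptions made for this example.
import math


def cosine_distance(a, b):
    """Return 1 - cosine similarity of two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - (dot / norm if norm else 0.0)


def state_discrepancy(intent_state, system_state):
    """Measurable divergence between the user's original intent state and the
    AI-mediated system state (here modelled as cosine distance)."""
    return cosine_distance(intent_state, system_state)


# Illustrative value only; a real threshold would need empirical calibration.
DELEGATION_THRESHOLD = 0.35


def judgment_delegated(intent_state, system_state):
    """True when the divergence crosses the threshold, i.e. the result reflects
    system judgment rather than mere assistance."""
    return state_discrepancy(intent_state, system_state) > DELEGATION_THRESHOLD
```

The specific metric is interchangeable; the point is that once intent and system states are represented in a comparable form, the assisted/generated boundary becomes a calibratable threshold rather than a purely qualitative judgment.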
3. Normative Contribution: Judgment Transparency Principle (JTP)
Beyond specific metrics, I propose the Judgment Transparency Principle (JTP):
Whenever judgment is delegated to an automated system, this delegation must be perceivable to the user.
This ensures perceptual transparency at the human–AI interface layer, protecting "Cognitive Sovereignty."
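As a hedged illustration of how the JTP could be enforced at the interface layer, the sketch below gates the application of AI-mediated output on a user-visible notice whenever the discrepancy crosses the threshold. It reuses the hypothetical `state_discrepancy` and `DELEGATION_THRESHOLD` from the previous sketch; `notify_user` and `apply_output` are placeholder hooks, not part of any existing system.

```python
# Illustrative sketch only: enforcing the Judgment Transparency Principle at the
# human-AI interface layer. `notify_user` and `apply_output` are placeholder
# callbacks supplied by the host application.
def mediate_output(intent_state, system_state, output, notify_user, apply_output):
    """Apply the AI-mediated output, but make any delegation of judgment
    perceptible to the user before it takes effect (JTP)."""
    score = state_discrepancy(intent_state, system_state)
    if score > DELEGATION_THRESHOLD:
        # Delegation must never be silent: surface it before the output lands.
        notify_user(f"AI judgment applied (state discrepancy {score:.2f})")
    return apply_output(output)
```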
4. Open Framework for EU Standards
To support rapid adoption, I have released the underlying architecture related to State Discrepancy into the public domain.
(Note: I am committed to open licensing to ensure this framework can be freely used as a basis for EU standards.)
Read the full White Paper (DOI)
I hope this contribution supports the discussion on operationalizing transparency in agentic AI systems. I welcome feedback on how this approach could strengthen Europe’s Trustworthy AI ecosystem.
Daiki Kadowaki
Independent Researcher, Japan