How can we build institutions for AI governance? Focusing on a European AI Agency.

Governance efforts for artificial intelligence (AI) are taking increasingly concrete forms, drawing on a variety of approaches and instruments, from hard regulation to standardisation, aimed at mitigating the challenges posed by high-risk AI systems. To implement these and other efforts, new institutions will need to be established at both the national and international levels. This recent paper sketches a blueprint for such institutions and conducts in-depth investigations of three key components of any future AI governance institution, exploring the benefits and drawbacks of each: (1) “purpose”, relating to the institution’s overall goals and scope of work or mandate; (2) “geography”, relating to questions of participation and the reach of jurisdiction; and (3) “capacity”, the infrastructural and human make-up of the institution.

Subsequently, the paper highlights noteworthy aspects of various institutional roles, focusing on questions of institutional purpose, and illustrates what these could look like in practice by placing the debate in a European context and proposing different iterations of a European AI Agency. This could also be particularly interesting with regard to recent discussions on European AI Lighthouses. The paper concludes by proposing future research directions.

You can read the full analysis and proposal here: https://link.springer.com/article/10.1007/s43681-021-00093-w

Tags
AI ethics, European AI Agency, Institution Building, study

Comments

Submitted by Jola DERVISHAJ on Fri, 15/10/2021 - 14:51

Great work, thanks for sharing!

Submitted by Niels Bellens on Mon, 08/12/2025 - 14:10

Thanks for sharing this — really appreciate the way you break down “purpose / geography / capacity” as design dimensions for future AI institutions.


I’ve been working on a complementary line of work called Reality-Aligned Intelligence (RAI) that tries to answer: what exactly should these institutions be supervising, and with which levers? In particular, I focus on ontological honesty (how systems represent themselves versus what they actually are) and artificial intimacy, which is becoming a cross-cutting governance problem.


Three open-access pieces that might be relevant to your blueprint work:

Reality-Aligned Intelligence (RAI) Governance & Ecosystems – institutional roles, power flows and “who does what” in a RAI-style ecosystem (DOI: 10.5281/zenodo.17691268)

Reality-Aligned Auditing (RAA): A Governance Stack for Ontologically Honest, Relationally Safe AI – proposes a layered audit architecture (L0–L3) that regulators or a future EU AI Agency could adopt (DOI: 10.5281/zenodo.17814922)

The Right to Safe AI Use: A Draft Normative Proposal and Implementation Path via Asymptotic Intelligence (AsI) – sketches a rights-based framing that could sit inside or alongside an EU AI Agency mandate (DOI: 10.5281/zenodo.17631093)


I’d be very interested in how your “purpose / geography / capacity” lens and a RAI/RAA-style audit stack might fit together in concrete proposals for a European AI Agency. Happy to share a short summary or compare notes if that’s useful.