I’m exploring a governance question adjacent to the EU AI Act:
how organizations can later demonstrate, with objective evidence, why they relied on (or rejected) AI output in a consequential decision, and who owned that judgment at the time.
Most existing work focuses on model and system controls. The gap I’m examining sits one layer above the model: governing human judgment when AI is advisory or predictive, especially as use scales.
I’ve published an initial framework and developed a maturity model, and would welcome critique or practitioner perspectives from an EU implementation or supervisory context. The executive summary and public-facing maturity model are attached.
Background reading (optional): https://doi.org/10.5281/zenodo.18216166