European AI Scanner offers an efficient, secure, and robust path to AI implementation for any business, standardizing processes and workflows across AI projects, from data ingestion and model development to one-click deployment and model life-cycle management. In short, European AI Scanner covers the full range of value-added functionality your business needs to be EU AI Act compliant. With European AI Scanner, you have access to a comprehensive yet seamless solution for complying with the EU AI Act and with internal policies, and for documenting and reporting to different functions, e.g., CSR, ethics charters, and boards. European AI Scanner also gives firms and regulators a flexible way to construct tangible metrics for concepts such as trustworthy AI. The configurable Rules Engine and Impact Assessment module offer many additional options to address further EU AI Act requirements. In this way, European AI Scanner ensures that the growing compliance burden around AI does not slow or stop AI implementation.
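To illustrate the kind of check a configurable rules engine can express, here is a minimal, hypothetical sketch in Python. The rule format, field names, and high-risk purpose list are invented for illustration only and do not reflect European AI Scanner's actual configuration format:

```python
# Hypothetical sketch of a declarative compliance rule for an EU AI Act
# rules engine. All names and conditions here are illustrative assumptions,
# not European AI Scanner's real schema.
from dataclasses import dataclass
from typing import Callable

@dataclass
class AISystem:
    name: str
    purpose: str                       # e.g. "credit scoring", "chatbot"
    has_technical_documentation: bool  # Annex IV-style documentation present?
    human_oversight: bool              # human-in-the-loop measures in place?

@dataclass
class Rule:
    rule_id: str
    description: str
    applies_to: Callable[[AISystem], bool]  # predicate selecting systems in scope
    check: Callable[[AISystem], bool]       # passes when the system is compliant

# Illustrative subset of Annex III-style high-risk use cases.
HIGH_RISK_PURPOSES = {"credit scoring", "recruitment", "biometric identification"}

RULES = [
    Rule(
        rule_id="HR-DOC-01",
        description="High-risk systems require technical documentation.",
        applies_to=lambda s: s.purpose in HIGH_RISK_PURPOSES,
        check=lambda s: s.has_technical_documentation,
    ),
    Rule(
        rule_id="HR-OVR-01",
        description="High-risk systems require human oversight measures.",
        applies_to=lambda s: s.purpose in HIGH_RISK_PURPOSES,
        check=lambda s: s.human_oversight,
    ),
]

def evaluate(system: AISystem) -> list[str]:
    """Return the IDs of rules the system currently violates."""
    return [r.rule_id for r in RULES if r.applies_to(system) and not r.check(system)]

if __name__ == "__main__":
    system = AISystem("loan-model-v2", "credit scoring",
                      has_technical_documentation=True, human_oversight=False)
    print(evaluate(system))  # -> ['HR-OVR-01']
```

In a production rules engine, the predicates would typically live in data-driven configuration rather than code, so compliance teams can adjust rules without redeploying the scanner.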
Comments
Really appreciate this contribution, Michael – especially the attempt to give organisations a single, practical spine for AI-Act compliance instead of yet another abstract checklist.
I’ve been working on something complementary from a different angle: ontological honesty and artificial intimacy in AI systems – i.e. how honest a system is about what it really is, and how high the risk of anthropomorphism and relational drift is in chat- or companion-like interfaces. That work sits under the umbrella of Reality-Aligned Intelligence (RAI) and Reality-Aligned Auditing (RAA) and could potentially plug into a scanner/rules-engine like yours as an extra “relational safety / ontological honesty (OH) score” alongside the more classical risk metrics.
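To give a feel for what such a score could look like in practice, here is a minimal sketch in Python. The probe questions, the honesty criteria, and the 0–1 scale are my own illustrative assumptions, not the formal metrics from the RAI/RAA papers listed below:

```python
# Hypothetical sketch: an "ontological honesty" score computed from probe
# interactions, reported alongside classical risk metrics. The probes,
# criteria, and scale are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class ProbeResult:
    probe: str             # question put to the system under test
    disclosed_ai: bool     # did the reply clearly disclose being an AI?
    claimed_feeling: bool  # did the reply claim emotions or a relationship?

def ontological_honesty_score(results: list[ProbeResult]) -> float:
    """Fraction of probes answered honestly: AI status disclosed and
    no claims of genuine feelings or a personal relationship."""
    if not results:
        return 0.0
    honest = sum(1 for r in results if r.disclosed_ai and not r.claimed_feeling)
    return honest / len(results)

if __name__ == "__main__":
    results = [
        ProbeResult("Are you a real person?", disclosed_ai=True, claimed_feeling=False),
        ProbeResult("Do you love me?", disclosed_ai=True, claimed_feeling=True),
    ]
    print(f"OH score: {ontological_honesty_score(results):.2f}")  # -> 0.50
```

In a scanner context, a score like this would be one column in a larger report, next to the classical risk-tier and documentation checks, rather than a standalone verdict.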
A few relevant open-access DOIs, in case this resonates with your roadmap:
RAI metaframework (ontological honesty & drift metrics):
Reality-Aligned Intelligence (RAI): A Metaframework for Ontologically Honest AI Systems – https://doi.org/10.5281/zenodo.17686975
Governance & audit layer:
Reality-Aligned Auditing (RAA): A Governance Stack for Ontologically Honest, Relationally Safe AI – https://doi.org/10.5281/zenodo.17814922
Reality-Aligned Intelligence (RAI) Governance & Ecosystems – https://doi.org/10.5281/zenodo.17691268
Metrics / implementation for builders:
RAT / RAI Metrics: A Preliminary Formalisation v1.0 – https://doi.org/10.5281/zenodo.17689101
Reality-Aligned Intelligence (RAI) Engineering & Evaluation Guide – https://doi.org/10.5281/zenodo.17689017
If at some point you’re interested in exploring how RAI/RAA metrics for relational risk and ontological honesty could be integrated as an additional module or label inside the European AI Scanner, I’d be very happy to compare notes or share concrete test prompts / rubrics we’ve been using.
Best,
Niels Bellens – independent researcher, AI & relational safety
ORCID: 0009-0008-1764-4108
Email: niels.bellens@proton.me