How can the economic and social benefits of artificial intelligence (AI) be strengthened while its ethical and human rights risks are addressed? This question drives the current policy debate, exercises researchers and companies, and interests citizens and the media. It is the question to which Bernd Carsten Stahl's upcoming book provides a novel answer. Drawing on the work of the EU project SHERPA, Stahl's book proposes that the theoretical lens of innovation ecosystems can make sense of empirical observations of the role of AI in organisations and society. This perspective also allows practical and policy conclusions to be drawn that can guide action to ensure that AI contributes to human flourishing.
Comments
Thank you for sharing this; the ecosystem lens really resonates with what I’ve been working on from a complementary angle.
In my own work I’ve been trying to map how AI systems, organisations, users and regulators fit together as one “alignment ecosystem”, with a special focus on relational and trust dynamics (anthropomorphism, artificial intimacy, vulnerable users, etc.). Instead of asking only “is this model accurate or fair?”, we also ask:
- What is the system really doing and optimising for (its nature)?
- How does it present itself in people’s lives (its representation)?
- Where are the gaps between those two, and who pays the price for them?
I call this Reality-Aligned Intelligence (RAI). A few pieces that might be of interest alongside your ecosystem approach:
- RAI Atlas: Mapping the World of Reality-Aligned Intelligence – Systems, People, and Artificial Intimacy (ecosystem overview, roles, failure modes, actors)
- Reality-Aligned Intelligence (RAI) Governance & Ecosystems (how different institutions, incentives and responsibilities interlock)
- Reality-Aligned Auditing (RAA): A Governance Stack for Ontologically Honest, Relationally Safe AI (practical audit layers L0–L3, labels, enforcement paths)
I’d be very happy to explore how your innovation-ecosystem framing and the RAI / RAA ecosystem map might speak to each other, especially around institutional roles, audit practice and long-term human flourishing.