Recent announcements of significant investments by leading tech companies in artificial intelligence (AI), accompanied by increasing marketing activities, indicate that AI is on the verge of entering the business arena on a large scale. At the same time, public scrutiny challenges the quality and reliability of prominent showcases such as chatbots and reveals serious flaws that endanger the usability of these applications. Furthermore, content-creating professionals are realizing the threats posed by AI systems and are beginning to demand that their economic interests be protected. And the admission of experimental use cases with autonomous cars into public traffic environments brings the debate about technical and legal governance into focus again. All this calls for the concept of accountability in AI to be introduced at the operational level now.
Accountability has been named as one of the seven principles the European Union seeks to see ascribed to ethically founded AI [1]. In its White Paper on Artificial Intelligence [2], the European Commission pointed out that its approach to risk in AI focuses on rights protection, safety, and liability. Both of these elements have found their way into the EU's Artificial Intelligence Act, which is to be passed soon.
The underlying building blocks of accountability are responsibility and liability. Moral responsibility refers to the ethically grounded (personal) status of an agent, while in business, professional responsibility usually means a task-oriented, functional "being in charge of" something. Liability, in the business context, is mostly used in the sense of legal liability, meaning the obligation to follow rules and regulations laid down in law; it can also include the financial obligation to compensate for damage caused by an action.
Responsibility in AI. As in other technological fields, responsibility rests upon the several kinds of agents involved, in particular developers, users, and intermediary agents. In the case of autonomous vehicles, it is the subject of lively debate which of these agents must bear which responsibility in practice. The situation is even more complex when it comes to generative AI. When an algorithm generates content, be it textual, visual, or audible, who bears responsibility for the outcome? This issue is well known from the dissemination of content on social media, where human agents stand behind the content, but it may become a massive problem once generative AI applications come into use on a large scale.
Valid approaches to these problems can start from organizational measures on the developer side; an example is the Practical Organizational Framework for AI Accountability suggested on this platform [3]. While it appears suitable for large enterprises, it might be adjusted to meet the requirements of small players with less organizational capacity, as well as those of intermediaries. Or, from a different angle, they can start from the risk management concept suggested in Applied Artificial Intelligence: Risk Mitigation Matters [4]. Both of these approaches – and there are probably others around – can comply with what the European Commission's AI Act requires for high-risk AI applications, or recommends for those of medium risk.
Liability in AI. For the examples mentioned above – autonomous driving and generative AI –, which are currently the most prominent types of AI-based algorithmic systems, the need for regulatory governance cannot be contested. Public traffic and car driving are perhaps the most regulated key practices of our social and economic life, so there is much familiarity with regulation in this field. That regulation will, however, need smart extension to cover the novel challenges that come with machines that make automated decisions in a social context and pose diverse, severe dangers within and to their ecosystem. Liability legislation here will require thorough deliberation – even more so as the fundamental philosophies of liability regimes differ significantly around the globe: in the US, for instance, the focus is more on reactive liability, while in Europe it is more proactive. This difference also holds in the case of generative AI. There would be strong demand for a unified liability regime, but at present this seems to be wishful thinking. In any case, soft guidance via moral or professional responsibility is a must as one building block of accountable AI, but it has to be combined with strong guidance via liability legislation. This seems to be the common understanding now developing in Europe and the US, although the best way forward is still to be found.
____________________
Norbert JASTROCH
eMail norbert.jastroch@metcommunications.de
____________________
[1] High Level Expert Group on Artificial Intelligence (AI HLEG): Ethics Guidelines for Trustworthy AI. European Commission, Brussels (2019)
[2] European Commission: White Paper on Artificial Intelligence. A European approach to excellence and trust. Brussels (2020)
[3] https://futurium.ec.europa.eu/en/european-ai-alliance/best-practices/practical-organizational-framework-ai-accountability
[4] Jastroch, N.: Applied Artificial Intelligence: Risk Mitigation Matters. Organizations (2022). https://doi.org/10.1007/978-3-030-94335-6_20