Thoughts on a Risk Governance Framework for Artificial Intelligence

The pillars of applied artificial intelligence (AI), data and machine learning, are each subject to risk considerations to a different extent and from specific perspectives. As AI applications are increasingly deployed for use by non-experts, risk governance in general will become more relevant, because the inner workings of such applications are opaque. Proactive risk mitigation in particular comes into focus, because the potential of AI can only be realised if the technology finds broad acceptance. Our research into the opportunities and limitations of AI over recent years leads us to argue for the development of an actionable risk governance method as a means of generating the needed trust. The foundation shall be laid by collecting and investigating sample AI use cases and their risk management approaches, from which a Risk Management Framework can be derived. We suggest further elaborating this into an AI Risk Management Experience base and a corresponding Risk Management Library, openly accessible on an appropriate platform. An outline of the why, what, and how follows.

  1. Risk governance in AI: Why

General reservations toward artificial intelligence relate to the utilization of sensitive private data and to the opaque implications of machine learning operations. Acceptance of AI applications can only be achieved if trust can be established. This calls for strong data protection regimes on the one hand, and for supervision of autonomous decision-making processes within machine learning functions on the other. AI applications rest upon statistical calculation. Outcomes therefore depend on the empirical data used as well as on the algorithmic procedures employed. And they are probabilistic in nature, which requires appropriate interpretation. Of the numerous challenges this implies, four qualify as key issues:

  1. For all data utilized, quality must be ensured and privacy protected; these often conflicting requirements must be balanced.
  2. Misuse, whether negligent or fraudulent, is a latent threat that must be addressed.
  3. The sharing of expertise needs to be organized, especially in the horizontal dimension.
  4. Neutral human supervision of automated decisions and of the interpretation of results must control the resulting actions.

These issues translate into basic risks involved in applied AI. The management of risk, though, is well established in science, business, and technology. In the health sector, for example, research as well as application in medicine and pharmacology are based on pertinent methodological approaches to identifying and governing undesired effects in diagnosis and treatment. It is therefore anything but surprising that risk management becomes an essential requirement for applied AI in health, and not only there.

  2. Risk governance in AI: What

Numerous standards have been established for risk management, with few fundamental differences in their basic approaches. AI applications demand special attention because they may evolve over time in non-transparent ways. Risk governance must therefore address the whole lifecycle of an application, from development to operational use. In addition to this temporal dimension, a categorical dimension is helpful. We suggest considering three risk categories, namely technical (or functional), societal, and cybersecurity risks. In a first-order approach, they can be fine-grained into:

Technical:

  • Malfunction, due to false input, failing output, or application faults.
  • Misfunction, caused by deficient (though not false) input or algorithmic flaws.
  • Emergence of unintended, unforeseeable output.

Societal:

  • Insufficient compliance with regulations or legal constraints.
  • Operative effects such as bias or noise.
  • Propagation of flaws along connected or embedded elements of an application.
  • Ethical dilemmas in the case of autonomous systems and automated decision making.

Cybersecurity:

  • External attacks on, or exploiting, AI systems.
  • Accidental logical or physical impact on humans or the environment.
  • Physical failure.

The two dimensions, lifecycle time and risk category, can be used to further structure actionable risk governance into risk events, sources, and responses. A corresponding framework can thus be introduced as an operational tool for risk governance in AI (AI-RG); a minimal sketch of such a structure follows below.
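To illustrate how these two dimensions might be operationalized, the following Python sketch models a simple risk register indexed by lifecycle phase and risk category, with each entry holding an event, its source, and a response. All names (Phase, Category, RiskEntry, RiskRegister) and the sample entry are illustrative assumptions, not part of any published AI-RG specification.

    from dataclasses import dataclass, field
    from enum import Enum

    class Phase(Enum):
        # Lifecycle dimension: from development to operational use
        DEVELOPMENT = "development"
        OPERATION = "operation"

    class Category(Enum):
        # Categorical dimension: the three suggested risk categories
        TECHNICAL = "technical"
        SOCIETAL = "societal"
        CYBERSECURITY = "cybersecurity"

    @dataclass
    class RiskEntry:
        # One cell of the grid: an event, its source, and a generic response
        event: str
        source: str
        response: str

    @dataclass
    class RiskRegister:
        # Risk entries indexed by the two dimensions (phase, category)
        entries: dict[tuple[Phase, Category], list[RiskEntry]] = field(default_factory=dict)

        def add(self, phase: Phase, category: Category, entry: RiskEntry) -> None:
            self.entries.setdefault((phase, category), []).append(entry)

    # Example: one technical risk identified during development
    register = RiskRegister()
    register.add(
        Phase.DEVELOPMENT,
        Category.TECHNICAL,
        RiskEntry(
            event="malfunction due to false input",
            source="deficient data pipeline",
            response="input validation and data quality gates",
        ),
    )

Such a register makes the framework queryable: for any combination of lifecycle phase and risk category, the recorded events, sources, and responses can be retrieved and reviewed.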

  3. Risk governance in AI: How

Drawing on trust as the crucial prerequisite for the acceptance of applied AI, we argue that risk governance must be proactive. An initial outline of the methodology separates the phases of system development and operation. It begins, at the entry into development, with risk identification, including the assessment of relevance and the specification of the actors involved, of logical or physical procedures, and of coupling interfaces, followed by the elaboration of appropriate generic responses. It continues, during operation, with evaluating the maturity of the AI functions and the relevance of operative risks, and with tailoring the generic responses accordingly.
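As a rough illustration of this two-phase methodology, the sketch below (reusing the RiskEntry type from the previous example) separates risk identification at the entry into development from the tailoring of generic responses during operation. The function names, the maturity score, and the tailoring rule are assumptions made purely for illustration.

    def identify_risks(actors: list[str], interfaces: list[str]) -> list[RiskEntry]:
        # Development entry: derive risk entries with generic responses
        # from the specified actors and coupling interfaces.
        entries = []
        for actor in actors:
            entries.append(RiskEntry(
                event=f"misuse by {actor}",
                source="negligent or fraudulent handling",
                response="access control and audit logging",  # generic response
            ))
        for interface in interfaces:
            entries.append(RiskEntry(
                event=f"propagation of flaws via {interface}",
                source="coupling between connected components",
                response="isolation tests and monitored rollout",  # generic response
            ))
        return entries

    def tailor_responses(entries: list[RiskEntry], maturity: float) -> list[RiskEntry]:
        # Operation phase: adjust the generic responses to the observed
        # maturity of the AI functions (0.0 = immature, 1.0 = mature).
        suffix = " with relaxed monitoring" if maturity >= 0.8 else " with intensified monitoring"
        return [RiskEntry(e.event, e.source, e.response + suffix) for e in entries]

    # Usage: identify at development entry, tailor while in operation
    risks = identify_risks(actors=["operator"], interfaces=["patient data API"])
    operational = tailor_responses(risks, maturity=0.6)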

The complete body of work needed to build actionable, proactive risk governance as we propose it shall be based on the outlined AI-RG framework*. Its logical extension will be to establish an AI-RG experience base of sample applications and their risk governance concepts. Our interest is to continue working on this topic and to make the results available through an appropriate platform. In the longer run, and as a reference for the development of new applied AI systems, we envisage setting up a corresponding AI-RG library.

 

____________________

Norbert JASTROCH

E-mail: norbert.jastroch@metcommunications.de

___________________

*Cf. Jastroch, N.: Applied Artificial Intelligence: Risk Mitigation Matters. https://doi.org/10.1007/978-3-030-94335-6_20

 
