Embedding trust into every facet of AI - Principles designed to foster confidence

The first step in minimizing the risks of AI is to promote awareness of them at the executive level as well as among the designers, architects and developers of the AI systems that the organization aims to deploy.

Then, the organization must commit to proactively designing trust into every facet of the AI system from day one. This trust should extend to the strategic purpose of the system, the integrity of data collection and management, the governance of model training and the rigor of techniques used to monitor system and algorithmic performance.

Adopting a set of core principles to guide AI-related design, decisions, investments and future innovations will help organizations cultivate the necessary confidence and discipline as these technologies evolve.

Remember, AI is constantly changing, both in how organizations use it and in how it evolves and learns once it is operating. That continuous innovation is exciting and will undoubtedly yield tremendous new capacities and impacts, but conventional governance principles are simply insufficient to cope with AI’s high stakes and its rapid pace of evolution. These twin challenges require a more rigorous approach to governing how organizations can harness AI for the best outcomes, now and in the future.

In our ongoing dialogues with clients, regulators and academia — as well as in our experience developing early uses and risk assessments for AI initiatives — we have observed three core principles that can help guide AI innovation in a way that builds and sustains trust:

  1. Purposeful design: Design and build systems that purposefully integrate the right balance of robotic, intelligent and autonomous capabilities to advance well-defined business goals, mindful of context, constraints, readiness and risks.
  2. Agile governance: Track emergent issues across social, regulatory, reputational and ethical domains to inform processes that govern the integrity of a system, its uses, architecture and embedded components, data sourcing and management, model training and monitoring.
  3. Vigilant supervision: Continuously fine-tune, curate and monitor systems to achieve reliability in performance, identify and remediate bias, and promote transparency and inclusiveness.

What makes these principles specific to AI? It’s the qualifiers in each one: purposeful, agile and vigilant. These characteristics address the unique facets of AI that can pose the greatest challenges. 

For example, the use of AI in historically “human-only” areas is challenging the conventional design process. After all, the whole point of AI is to incorporate and, in effect, emulate a human decision framework, including considerations for laws, ethics, social norms and corporate values that humans apply (and trade off) all the time. These unique expectations demand that organizations adopt a more purposeful approach to design that will enable the advantages of AI’s autonomy while mitigating its risks.

Similarly, as the technologies and applications of AI are evolving at breakneck speed, governance must be sufficiently agile to keep pace with its expanding capabilities and potential impacts. And lastly, while all new innovations benefit from monitoring and supervision, the sheer stakes at play, plus the ongoing, dynamic “learning” nature of AI (which means it continues to change after it has been put in place), require more vigilance than organizations have typically adopted.

With these guiding principles at the core, the organization can then move purposefully to assess each AI project against a series of conditions or criteria. Evaluating each AI project against these conditions, which extend beyond those used for legacy technology, brings much-needed discipline to the process of considering the broader contexts and potential impacts of AI.

Republished from ey.com, 3 September 2018

Tags
trust governance design AI Supervision

Comments

Submitted by Andreas MAIER on Tue, 19/03/2019 - 16:39

There is an interesting paper that discusses the integration of known operations into deep networks. I think this might also be an opportunity to make AI safer and more interpretable:

https://arxiv.org/abs/1712.00374

What do you think about this approach?

In reply to Andreas MAIER

Submitted by Maikel Groenewoud on Sat, 23/03/2019 - 16:01

I find the approach described in the paper quite interesting. It seems sensible to at least explore the option of including prior knowledge in domains where many constraints are already known. This could certainly add to the explainability of the model and the interpretability of the results, making it somewhat easier to convey why it arrives at certain outcomes. It is also promising that predictive accuracy improved in the study. I can also see how including prior knowledge could lead to a neural network with fewer layers, which would further aid explainability and interpretability.
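
To make the "known operator" idea in this discussion concrete, here is a minimal sketch, assuming a PyTorch setup, of a network in which a fixed, domain-derived linear operator is registered as a non-trainable buffer and only the surrounding layers are learned. The operator A, the layer sizes and the name KnownOperatorNet are illustrative placeholders, not details taken from the paper linked above.

import torch
import torch.nn as nn

class KnownOperatorNet(nn.Module):
    def __init__(self, known_operator: torch.Tensor, hidden: int = 32):
        super().__init__()
        # Store the known operator as a non-trainable buffer, so it is
        # never updated during training and its behaviour stays fixed.
        self.register_buffer("A", known_operator)
        out_dim = known_operator.shape[0]
        # Only these layers are learned from data.
        self.head = nn.Sequential(
            nn.Linear(out_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Apply the fixed, known transform first, then the learned part.
        z = x @ self.A.T
        return self.head(z)

# Illustrative usage with a made-up 8x16 "known" operator; in practice
# A would come from domain knowledge (e.g. a known physical model).
A = torch.randn(8, 16)
model = KnownOperatorNet(A)
x = torch.randn(4, 16)       # batch of 4 input vectors
print(model(x).shape)        # torch.Size([4, 1])

Because the known operator is frozen, its contribution remains fully interpretable and the number of trainable parameters shrinks, which is the intuition behind the explainability benefit discussed in the comments.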