The Alan Turing Institute publishes a comprehensive guide for the responsible design and implementation of AI systems in the public sector

The Alan Turing Institute, the UK's national institute for data science and AI, recently published Understanding Artificial Intelligence Ethics and Safety: A guide for the responsible design and implementation of AI systems in the public sector - comprehensive guidance on ethics and safety in the development and implementation of AI tools in the public sector. The document includes a framework of commonly accepted values and principles, as well as concrete guidance on how to apply them in practice.

The framework was adopted as part of the Office for AI’s Guide to using artificial intelligence in the public sector and is intended to support those involved in the creation and use of AI tools in the public sector, from the earliest stages of conceiving an AI project through to its use, review, and maintenance.

The framework is among the most comprehensive and operationalisable guidance available on digital ethics (if not the most comprehensive), covering how ethical values find expression throughout the design and development of AI tools, as well as in their implementation, use, review, and maintenance. It can form the basis of an in-depth, practice-focused conversation about digital ethics in the public sector in Europe and beyond.

 

A summary of the framework and its proposed three-tiered approach is available below. 

First, the SUM Values are values that Support, Underwrite, and Motivate (SUM) a responsible innovation ecosystem. They serve as guiding values throughout the innovation lifecycle and provide an accessible framework of ethical criteria for considering and deliberating on the ethical permissibility of a prospective AI project and its ethical impacts. The SUM Values are:

  • Respect the dignity of individual persons (reflecting the values of autonomy and authority of persons, their self-realisation and flourishing)

  • Connect with each other sincerely, openly, & inclusively (encompassing the integrity of the interpersonal relationship, as well as participation-based innovation and stakeholder inclusion)

  • Care for the wellbeing of each and all (representing the values of beneficence, safety, and non-harm, as well as stewardship of the biosphere)

  • Protect the priorities of social values, justice, and the public interest (capturing justice and prioritisation of the public interest and common good)

 

Second, the FAST Track Principles facilitate an actionable orientation to the ethical design and use of AI systems. Since AI systems themselves can be held neither directly responsible nor immediately accountable for the consequences of their behaviour, the FAST Track Principles give those developing and using algorithmic systems operationalisable means of applying ethical standards in practice throughout the innovation process. The FAST Track Principles cover Fairness, Accountability, Sustainability, and Transparency.

The principles of fairness and sustainability describe qualities of algorithmic systems and are crucial aspects of the design, implementation, and outcomes of such systems. They establish normative criteria against which designers and implementers can be held accountable through mechanisms of transparency.

  • According to the principle of fairness, designers and implementers are held accountable for being equitable and for not harming anyone through bias or discrimination. 

  • According to the principle of sustainability, designers and implementers are held accountable for producing AI innovation that is safe and ethical in its outcomes and wider impacts.

The principles of accountability and transparency are end-to-end governing principles. They provide procedural mechanisms and means through which AI systems can be justified and by which their producers and implementers can be held responsible. 

  • Accountability entails that humans are answerable for the parts they play across the entire AI design and implementation workflow. It also demands that the results of this work are traceable from start to finish.

  • The principle of transparency entails that design and implementation processes are justifiable through and through. It demands as well that an algorithmically influenced outcome is interpretable and made understandable to affected parties.
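
To make this a little more concrete, the minimal sketch below (not taken from the guide itself) shows one way a team might turn the fairness and transparency principles into an operational check: selection rates are compared across groups, and the result is recorded in a form that can be explained to affected parties. The group labels, decisions, and tolerance threshold are illustrative assumptions only.

```python
# Illustrative sketch (hypothetical, not from the guide): operationalising
# a simple fairness check with an interpretable, recordable output.

from collections import defaultdict

def selection_rates(outcomes):
    """Rate of favourable decisions per group.

    `outcomes` is a list of (group_label, decision) pairs,
    where decision is 1 for a favourable outcome and 0 otherwise.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, decision in outcomes:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(rates):
    """Largest difference in selection rate between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

if __name__ == "__main__":
    # Hypothetical decisions from an algorithmically assisted process.
    decisions = [("group_a", 1), ("group_a", 1), ("group_a", 0),
                 ("group_b", 1), ("group_b", 0), ("group_b", 0)]

    rates = selection_rates(decisions)
    gap = demographic_parity_gap(rates)

    # Transparency: record a summary that can be shared with affected
    # parties and reviewed later. The 0.2 tolerance is an assumption.
    print(f"Selection rates by group: {rates}")
    print(f"Demographic parity gap: {gap:.2f}")
    print("Within tolerance" if gap <= 0.2 else "Review required: potential bias")
```

Whatever metric and tolerance a team chooses, the point of the sketch is that each check produces an interpretable, recorded output against which designers and implementers can be held accountable.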

 

Finally, the Process-Based Governance Framework (PBG Framework) is the third component of the framework. The PBG Framework encompasses both technical and non-technical tools (such as processes, procedures, guidelines, and records) that help AI developers and intended users put the ethical values and principles into practice and strike, and record, appropriate balances and trade-offs. The tools of the PBG Framework will need to be tailored to the particular features of the sectors in which the use of AI tools is being considered (e.g. criminal justice, housing benefit allocation, tax fraud identification) in order to strike the appropriate balances.
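
As an illustration only, and not a structure prescribed by the guide, the hypothetical sketch below shows how a process-based governance record might be kept in code: each entry captures the lifecycle stage, the decision or trade-off made, its justification in terms of the values and principles above, and the person accountable for it, so that the work remains traceable from start to finish. All field names and the example entry are assumptions.

```python
# Illustrative sketch (hypothetical): a minimal record structure for
# process-based governance of an AI project.

from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class GovernanceRecord:
    stage: str            # e.g. "design", "implementation", "review"
    decision: str         # what was decided or traded off
    justification: str    # why, in terms of the values and principles
    responsible: str      # accountable person or role
    recorded_on: date = field(default_factory=date.today)

    def to_json(self) -> str:
        """Serialise the record so it can be stored and audited later."""
        record = asdict(self)
        record["recorded_on"] = self.recorded_on.isoformat()
        return json.dumps(record, indent=2)

if __name__ == "__main__":
    # Hypothetical entry for a housing-benefit allocation model.
    entry = GovernanceRecord(
        stage="design",
        decision="Excluded postcode as an input feature",
        justification="Proxy for protected characteristics; fairness principle",
        responsible="Data science lead",
    )
    print(entry.to_json())
```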

 
