The Problem with a "One-Size-Fits-All" Assessment List

Together with the Danish Business Authority, I am personally developing the Danish equivalent of the Trustworthy AI Assessment List.

Initially, I was appointed by the government to a working group tasked with providing the Danish government with recommendations on data ethics. As an AI specialist, the recommendation I envisioned and championed was that AI developers be given a toolbox that could help them develop ethical solutions.

This Data Ethical Toolbox is what I am developing now: together with the Danish Business Authority, I am devising an assessment list targeting AI developers specifically. We are putting together a scoring system inspired by the Canadian model, in which the AI solution is scored on three main focus areas: "Bias", "Personal Data", and "Transparency/Explainability".

In short, the questions/answers will lead to a profile and actual recommendations on what to do in practice.
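To make the idea concrete, the answers-to-profile-to-recommendations flow could be sketched roughly like this. The three parameter names come from the post; the 0–4 answer scale, the threshold, the helper names (`score_answers`, `recommend`), and the recommendation texts are purely illustrative assumptions, not the actual Danish toolbox:

```python
# Hypothetical sketch: questionnaire answers -> score profile -> recommendations.
# Parameter names are from the post; everything else is an illustrative assumption.

RECOMMENDATIONS = {
    "Bias": "Audit training data for skewed or unrepresentative distributions.",
    "Personal Data": "Minimise data collection and document the legal basis.",
    "Transparency/Explainability": "Prefer interpretable models or add explanations.",
}

def score_answers(answers):
    """Average each parameter's 0-4 answers into a profile score."""
    return {param: sum(vals) / len(vals) for param, vals in answers.items()}

def recommend(profile, threshold=2.0):
    """Return concrete recommendations for parameters scoring below threshold."""
    return [RECOMMENDATIONS[p] for p, s in profile.items() if s < threshold]

# Example questionnaire answers for one AI solution.
answers = {
    "Bias": [1, 2, 1],
    "Personal Data": [3, 4],
    "Transparency/Explainability": [2, 1],
}
profile = score_answers(answers)
print(profile)
print(recommend(profile))
```

The point of the sketch is the shape of the output: not just a score, but a profile per focus area plus actionable recommendations wherever the profile falls short.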

I just completed the Survey on the Trustworthy AI Assessment List 2.0 (developer and deployer questionnaire), and it seemed apparent that the assessment list tries to address all issues at all levels of an AI solution being developed and deployed. The problem with this is that it has to address technical and non-technical project participants in the same wording and context. I believe a version focused on programmers and developers is needed.

Programmers and developers are the people actually shaping the solutions and much of their ethical fabric 

My own experience as an AI project manager, AI architect, and AI developer has taught me that guidelines and good intentions from a managerial perspective are easily diluted and have little impact. Such guidance is better delivered as tools and references aimed at the people actually building these AI- and data-driven solutions that we keep talking about. I would love to engage in further discussions on how I see this being done, so feel free to reach out.

AI, Data ethics, In practice


Submitted by faiz ikramulla on Thu, 31/10/2019 - 16:22

Hi, I would also love to engage in further discussion on your last paragraph: "Programmers and developers are the people actually shaping the solutions and much of their ethical fabric". Ethics can easily be overlooked, forgotten, or mistakes made, yet solutions built on these oversights are deployed and are potentially (at least slightly) flawed as a result. How can oversight and guidelines play into the AI development world, from a technical perspective? It is one of my areas of interest, and I am interested in discussing and learning from others' experiences.

In reply to faiz ikramulla

Submitted by Erik David Johnson on Thu, 31/10/2019 - 16:30

I am happy to - I sent you a message

Submitted by Maximilian J. … on Wed, 06/11/2019 - 16:37

Dear Erik,

I deeply share your concerns regarding the EU's assessment list. I also see the difficulty with one-size-fits-all solutions, especially when it comes to ethical standards, which vary from place to place and from time to time. In general, I am not a big fan of "operationalizing" ethical behaviour into single action recommendations. It bears the danger of losing sight of real ethical considerations and only focusing on ticking checklists. If a list of requirements is needed, then this list should focus on the design process and not the product. It should encourage the responsible designer to continuously check for possible effects and side effects of the AI algorithm. An AI algorithm cannot act ethically itself; only an AI designer can. Like you said: "Programmers and developers are the people actually shaping the solutions and much of their ethical fabric".

We formulated our own principles as an example for this shift of perspective.

A few days ago the US DoD also released ethical guidelines that are based on individual responsibility and a sense of ownership.…