Together with the Danish Business Authority, I am personally developing the Danish equivalent of the Trustworthy AI Assessment List.
Initially, I was appointed by the government to a working group tasked with providing the Danish government with recommendations on Data Ethics. As an AI specialist, the recommendation I envisioned and championed was that AI developers be given a toolbox that could help them develop ethical solutions.
This Data Ethical Toolbox is what I am developing now. Together with the Danish Business Authority, I am devising an assessment list targeting AI developers specifically. We are putting together a scoring system inspired by the Canadian model, in which the AI solution is scored on three main focus areas: "Bias", "Personal Data" and "Transparency/Explainability".
In short, the questions and answers will lead to a profile and concrete recommendations on what to do in practice.
I just completed the Survey on the Trustworthy AI Assessment List 2.0 (developer and deployer questionnaire), and it seemed apparent that the assessment list tries to address all issues at every level of an AI solution being developed and deployed. The problem with this is that it must address technical and non-technical project participants in the same wording and context. I believe a version focused on programmers and developers is needed.
Programmers and developers are the people actually shaping the solutions, and much of their ethical fabric.
My own experience as an AI project manager, AI architect and AI developer has taught me that guidelines and good intentions from a managerial perspective are easily diluted and have little impact. Such guidance is better delivered as tools and references aimed at the people actually building these AI- and data-driven solutions that we keep talking about. I would love to engage in further discussions on how I see this being done, so feel free to reach out.