The final push is on and we need your help. This short blog post was my first priority on returning to my desk after my summer vacation: I am reaching out to members of the European AI Alliance to ask for your help in making a final push to maximise the impact and utility of our Ethics Guidelines for Trustworthy AI, the first deliverable of the High-Level Expert Group on AI (AI HLEG), which we published in April of this year.
In that deliverable we defined seven key requirements that AI systems must satisfy in order to be regarded as trustworthy. To help organisations, developers, and other key stakeholders operationalise the guidelines, and in particular the seven key requirements, we proposed an assessment list approach and included a draft of such a list. This assessment list is a critical tool for ensuring that the seven key requirements are satisfied in practice, so it is extremely important that we get it right. We are therefore seeking your help in gathering as much feedback as possible through the multi-faceted piloting process we have set in motion.
All stakeholders are invited to test the assessment list we released in April and provide practical feedback on how it can be improved through the three different tracks outlined below. We are especially interested in understanding how an assessment list approach can be implemented in organisations and how it should be tailored in domain-specific or application-specific ways.
Feedback will be received through three different tracks:
- We have opened an online survey to support a “quantitative analysis” of the assessment list. All those who register to contribute to the piloting process through this link should complete the survey before December 1st 2019.
- A team from the AI4EU AI-on-demand Platform consortium has been tasked with conducting a series of in-depth interviews with a selected set of representative organisations to gather more detailed feedback from different sectors. These organisations were selected from a cohort of companies that registered their interest in the process earlier in the year, and they are now being contacted to schedule dates for these one-to-one deep dives.
- Organisations are also invited to submit feedback on the assessment list and best practices for Trustworthy AI for consideration by the AI HLEG via the AI Alliance Best Practices Tab. Feedback under this mechanism can take any form, ranging from specific reflections on the assessment list to formal analyses and reviews.
The entire piloting phase, which was launched on the 26th of June, will run until the 1st of December 2019.
Alliance members are in an excellent position to contribute directly through any of these three mechanisms. The process is open, and all feedback is strongly welcomed and encouraged. Please consider providing input through the online survey or the best practices mechanism. This is a very important opportunity to influence how Trustworthy AI is assessed in practice in Europe.
A similarly valuable contribution would be to share this opportunity within your national network to ensure that the views of organisations from all EU countries are well represented. Only with pan-European coverage from organisations of different sizes and from a variety of sectors can we hope to develop a truly representative and robust assessment list for Trustworthy AI.