As made clear by our principles for Trust and Transparency, IBM has always understood that we need to lead by demonstrating responsible adoption and use of the new technologies we develop and bring to the world. That is why we are a strong supporter of the EU’s work to develop Ethics Guidelines for Trustworthy AI, now nearing completion. The guidelines will cement Europe’s position as a global pioneer for artificial intelligence that is developed and deployed responsibly and ethically.
I have the privilege of being a member of the High-Level Expert Group on AI selected by the European Commission and mandated to develop ethics, policy and investment recommendations for AI. Work is ongoing in all three areas. Since last summer we have been working on the ethics guidelines, gathering the components that are prerequisites for trustworthy AI and considering the most effective ways to make trustworthy AI operational. We have also defined the two components of Trustworthy AI: 1) it should respect fundamental rights, applicable regulation and core principles and values, ensuring an “ethical purpose,” and 2) it should be technically robust and reliable since, even with good intentions, a lack of technological mastery can cause unintentional harm.
The EU’s process to develop the guidelines has been remarkably consultative. Far from being observers or occasional advisors to a European Commission drafting process, the Expert Group was empowered to be hands-on in drafting each section of the text. In a second consultative phase, the draft guidelines are now open for feedback from the EU AI Alliance, a broad forum to which anyone with an interest in open discussion around artificial intelligence development can apply and provide comments.
The EU has also ensured a multidisciplinary approach – the expert group and the AI Alliance reflect many backgrounds and perspectives across multiple academic fields, industries, and human/consumer rights associations. AI is no longer the sole domain of tech researchers but extends to philosophers, psychologists, sociologists, economists and more. For AI to be human-centric and as beneficial as possible for as many people as possible, multidisciplinary discussions about its societal implications are essential.
The choice to focus on trustworthy AI is closely aligned with IBM’s approach: to advance and deliver a technology that people can trust, and to shape design, development, and deployment around transparency and responsibility. We also chose to start from fundamental rights, linking them to essential principles and values, and then using those principles and values to define requirements for trustworthy AI.
The guidelines recognize that there is no “one-size-fits-all” solution to AI ethics. Different situations raise different challenges – importantly, the final guidelines will include use-cases, demonstrating how the guidelines can be applied in different AI contexts.
During the development process, I shared IBM’s experience of bringing AI ethics from theory to reality, that is, from research to concrete products and services. For example, the draft guidelines address the principle of justice and fairness. Last year, IBM developed the AI Fairness 360 toolkit and made it available to the open source community. The toolkit is a library of novel algorithms, code, and tutorials that gives academics, researchers, and data scientists the tools and knowledge to integrate bias detection and mitigation as they build and deploy machine learning models. Explainability is also crucial for building trustworthy AI. Last year, IBM launched Trust and Transparency capabilities for AI on the IBM Cloud that provide visibility into how AI is making decisions and give recommendations on how to mitigate any potentially damaging bias.
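To make "bias detection" concrete, here is a minimal sketch of one widely used fairness metric, statistical parity difference, which compares the rate of favorable outcomes across groups. This is an illustration of the kind of check such toolkits automate, not AI Fairness 360's own API, and the data below is hypothetical.

```python
# Illustrative sketch (not AI Fairness 360's API): statistical parity
# difference, one common group-fairness metric. Values near 0 suggest the
# model grants favorable outcomes to both groups at similar rates.

def statistical_parity_difference(outcomes, groups, privileged):
    """P(favorable | unprivileged group) - P(favorable | privileged group)."""
    priv = [y for y, g in zip(outcomes, groups) if g == privileged]
    unpriv = [y for y, g in zip(outcomes, groups) if g != privileged]
    rate = lambda ys: sum(ys) / len(ys)  # fraction of favorable (1) outcomes
    return rate(unpriv) - rate(priv)

# Hypothetical model predictions (1 = favorable) for two groups "A" and "B".
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(statistical_parity_difference(outcomes, groups, privileged="A"))  # -0.5
```

Here group "B" receives favorable outcomes at a 50-percentage-point lower rate than group "A" (0.25 vs. 0.75), the kind of disparity a bias-detection step would flag for mitigation.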
Finally, good AI Governance is a key element of the guidelines. IBM has procedures in place to ensure that our business decisions reflect our values. We have established internal mechanisms to coordinate and synchronize AI ethics initiatives across all divisions of the company.
We are now entering the final – and possibly toughest – phase of bringing the guidelines to fruition. Feedback on the draft has been flowing in from the broader AI Alliance community. I look forward to seeing the valuable perspectives this feedback brings, and to further strengthening key elements such as the assessment processes for trustworthy AI. Over the next few months, my colleagues from the Expert Group and I will work together with the European Commission to put in place final guidelines that are ambitious in vision, yet pragmatic and impactful.
With a recent IBM Institute for Business Value study of 5,000 C-level executives showing that 60 percent are hindered from implementing AI by concerns over trust and compliance, the guidelines couldn’t be more timely. It is my firm belief that trustworthy and inclusive AI systems are and will be the most competitive and the most successful.