Ethics guidelines as a means of delivering technology that people can trust

As made clear by our principles for Trust and Transparency, IBM has always understood that we need to lead by demonstrating responsible adoption and use of new technologies that we develop and bring to the world. That is why we are a big supporter of the EU’s work to develop Ethics Guidelines for Trustworthy AI, now nearing completion. The guidelines will cement Europe’s position as a global pioneer for artificial intelligence that is developed and deployed responsibly and ethically.

I have the privilege of being a member of the High Level Expert Group on AI selected by the European Commission and mandated to develop ethics, policy and investment recommendations for AI. Work is ongoing in all three areas. Since last summer we have been working on the ethics guidelines: we have gathered the components that are prerequisites for trustworthy AI, and have also considered the most effective ways to make trustworthy AI operational. We have also defined the two components of trustworthy AI: 1) it should respect fundamental rights, applicable regulation and core principles and values, ensuring an “ethical purpose,” and 2) it should be technically robust and reliable since, even with good intentions, a lack of technological mastery can cause unintentional harm.

The EU’s process to develop the guidelines has been remarkably consultative. Far from being observers or occasional advisors to a European Commission drafting process, the Expert Group was empowered to be hands-on in drafting each section of the text. In a second consultative phase, the draft guidelines are now open for feedback from the EU AI Alliance, a broad forum to which anyone with an interest in open discussion around artificial intelligence development can apply and provide comments.

The EU has also ensured a multidisciplinary approach – the expert group and the AI Alliance reflect many backgrounds and perspectives across multiple academic fields, industries, and human/consumer rights associations. AI is no longer the sole domain of tech researchers but extends to philosophers, psychologists, sociologists, economists and more. For AI to be human-centric and as beneficial as possible for as many people as possible, multidisciplinary discussions about its societal implications are essential.

The choice to focus on trustworthy AI is closely aligned with IBM’s approach: to advance and deliver a technology that people can trust, and to shape design, development, and deployment around transparency and responsibility. We also chose to start from fundamental rights, linking them to essential principles and values, and then using those principles and values to define requirements for trustworthy AI.

The guidelines recognize that there is no “one-size-fits-all” solution to AI ethics. Different situations raise different challenges – importantly, the final guidelines will include use-cases, demonstrating how the guidelines can be applied in different AI contexts.

During the development process, I shared IBM’s experience of bringing AI ethics from theory to reality, that is, from research to concrete products and services. For example, the draft guidelines address the principle of justice and fairness. Last year, IBM developed the AI Fairness 360 toolkit and made it available to the open source community. The toolkit is a library of novel algorithms, code, and tutorials that gives academics, researchers, and data scientists the tools and knowledge to integrate bias detection and mitigation as they build and deploy machine learning models. Explainability is also crucial for building trustworthy AI. Last year, IBM launched Trust and Transparency capabilities for AI on the IBM Cloud that provide visibility into how AI is making decisions and give recommendations on how to mitigate any potentially damaging bias.
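To give a flavor of what bias detection means in practice, here is a minimal hand-rolled sketch of a disparate-impact check – the kind of group-fairness metric that toolkits like AI Fairness 360 formalize. This is not the toolkit’s actual API; the groups and outcomes are hypothetical.

```python
# Illustrative sketch of a disparate-impact fairness metric (not the
# AI Fairness 360 API). Disparate impact is the ratio of the
# favorable-outcome rate for the unprivileged group to that of the
# privileged group; a common rule of thumb flags ratios below 0.8.

def favorable_rate(outcomes):
    """Fraction of favorable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(unprivileged, privileged):
    """Ratio of the two groups' favorable-outcome rates."""
    return favorable_rate(unprivileged) / favorable_rate(privileged)

# Hypothetical loan-approval outcomes (1 = approved) for two groups.
group_a = [1, 0, 0, 1, 0, 0, 0, 1]   # unprivileged group: 3/8 approved
group_b = [1, 1, 0, 1, 1, 0, 1, 1]   # privileged group: 6/8 approved

ratio = disparate_impact(group_a, group_b)
print(round(ratio, 2))  # 0.5 -> well below the 0.8 threshold
```

A real toolkit computes many such metrics over a dataset and, crucially, also offers mitigation algorithms (for example, reweighing training examples) rather than just flagging the disparity.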

Finally, good AI governance is a key element of the guidelines. IBM has procedures in place to ensure that our business decisions reflect our values. We have established internal mechanisms to coordinate and synchronize AI ethics initiatives across all divisions of the company.

We are now entering the final – and possibly toughest – phase of bringing the guidelines to fruition. Feedback on the draft has been flowing in from the broader AI Alliance community. I look forward to seeing the valuable perspectives this feedback brings, and to further strengthening key elements such as the assessment processes for trustworthy AI. Over the next few months, my colleagues from the Expert Group and I will work together with the European Commission to put in place final guidelines that are ambitious in vision, yet pragmatic and impactful.

With a recent IBM Institute for Business Value study of 5,000 C-level executives showing that 60 percent are hindered from implementing AI by concerns over trust and compliance, the guidelines couldn’t be more timely. It is my firm belief that trustworthy and inclusive AI systems are and will be the most competitive and the most successful.




Posted by Nicolas Beaume, Sat, 19/01/2019 - 12:24

I read with great interest the working version from the AI HLEG and totally agree with the core ideas of the group. Having trustworthy AI is in line with the fundamental values of Europe and most probably the best way to gain citizens’ trust and effectively achieve the tech leap we want to make.

It is interesting to know that some companies have already thought about this and developed a policy around ethical AI.

To ensure that Europe develops a policy that fits its core values and the will of its citizens, shouldn’t we look at the ethical policies of more than one company, and especially try to include European companies? I am convinced that IBM is doing a great job, but multiple models bring multiple points of view, which is highly desirable for a topic as important as this.

Posted by Kristof Kloeckner, Mon, 21/01/2019 - 17:03

I like that the document is built on foundational principles of human rights and that it references related efforts to establish ‘ethical purpose’ for other technologies. While I agree with the description of risks and the need for robust processes and tests, I think the risk of a proliferation of ‘low quality’ AI-based applications may be the biggest threat to successful and beneficial adoption.

There is clearly a responsibility for professional bodies like the BCS in the UK or the GI in Germany to (help) ensure that practitioners are well qualified. I believe well-defined skill profiles and perhaps even certification (as for civil engineers or architects) should be considered. I would be interested to see whether any of those communities are commenting on the guidelines.