Working together for the uptake of trustworthy AI: The example of the Belgian AI Week

I had the pleasure of opening the fourth day of the Belgian Artificial Intelligence Week this morning, from my office in DG Connect. It was one of the few times I have come to the office in quite some months; the corridors were empty, but the digital streams were vibrant with life and brilliant ideas.

Living in times of a global pandemic, with almost exclusively online interaction, initiatives for the promotion and uptake of AI make me look towards the future with optimism. The Belgian Artificial Intelligence Week is a great example of such an initiative, the first of its kind in Europe.

As the Belgian Prime Minister, Alexander De Croo, mentioned in his keynote: “Today, more than ever, people and politicians understand the value of evidence-driven decisions.” Such decisions can be facilitated by artificial intelligence and the use of data. “New possibilities and needs offer the most fertile ground to speak about artificial intelligence”.

AI is not a vaccine; it will not make the pandemic disappear. However, it has helped us in many different ways during this health crisis: AI has supported the hard work of many radiologists, helping them make quicker diagnoses and prognoses of COVID-19. AI was used to carry out huge amounts of computation, predicting which molecules were most likely to fight the virus through vaccines and drugs. AI has also helped us analyse data on Europeans’ mobility during the first wave and predict the spread of the virus.

As a supremely versatile technology, AI can bring innovation to a vast set of applications ranging from manufacturing and agriculture to healthcare, energy and many others. This is why we expect the importance of AI technology to increase exponentially over time, making our daily lives much easier and improving our economies.

However, the application of AI can also bring risks.

Last summer, in a European country, AI was used to assess the performance of students. Given the restrictions imposed by the pandemic, examinations had to be taken online. The algorithm used in the evaluation process was programmed to track not only grades, but also the general record of the school. This had an impact on good students from low-income neighbourhoods who, despite their high performance, were not evaluated as highly as students from other schools. This example shows that AI is useful, but if the data used for training are biased, badly chosen, or not properly representative, the results can be discriminatory.

Other examples concern the safety implications of AI used in self-driving cars, which will soon be on our roads, or AI embedded in collaborative robots that interact with humans, in operating rooms or on the factory floor. These systems have to meet the highest possible safety standards.

The EU, together with its Member States, aims to harness the full potential of AI, while creating safeguards to address the potential risks. This can be achieved through key actions and investment, clear and balanced rules, and the availability of high-quality datasets.

But investment is not enough. Citizens and businesses will not use AI unless they trust it. As President von der Leyen said in the State of the Union speech in September 2020, ‘Algorithms must not be a black box and there must be clear rules if something goes wrong.’ That’s why the Commission is working on rules to enhance the transparency and traceability of AI systems. In doing so, we must keep in mind that the rules need to be proportionate and must not stifle innovation. Therefore, as anticipated in our White Paper, there should be rules specifically targeting high-risk applications. They should include audits and checks to be put in place before AI systems can be used in Europe. These obligations will be inspired by the principles proposed by the High-Level Expert Group on AI in their Ethics Guidelines for Trustworthy AI. They aim, among other things, to ensure that datasets are representative and of high quality, that AI systems are properly documented, that appropriate information is provided to the user, that systems are accurate and robust, and that human oversight is foreseen, including by design.

During the event, I presented these aspects, which we set out in some detail in our White Paper on AI Excellence and Trust. This work will now be followed up by the revised Coordinated Plan with Member States as well as by our proposal for a regulatory framework next month.

You can watch the recordings of my intervention here.

Join the Belgian AI Week’s “European AI Day”

During the previous four days, with the full participation of public partners, the regions and the federal level, Belgian citizens (experts or not) were involved in over 60 discussions. The discussions focused on the concrete benefits of AI in everyday life, on ethical and responsible AI, and on the business and research ecosystem being built in the country. The event will close today with discussions on the role of Europe in the global AI landscape, under the working title “European AI Day”.

You can register and follow the event via this link.