Developing AI at the heart of European Rights and Freedoms

by Cecilia Bonefeld-Dahl, Director General of DIGITALEUROPE and Member of the AI HLG

I enjoyed the first meeting of the European Commission's high-level expert group on AI, and I would like to share some thoughts on AI and ethics through this platform.

First of all, I was truly honoured to be accepted to take part in such an important dialogue on how AI can contribute to our society, an area I am passionate about. The benefits of Artificial Intelligence are limited only by our imagination. Healthcare and manufacturing are two vertical sectors in Europe that will benefit greatly from human-centric AI, boosting everything from safety to efficiency, driving down costs and saving lives.

I am both excited and humbled by the challenging task ahead of us in developing guidelines on the important ethical concerns AI technologies can pose. I firmly believe that what we develop as a group, leveraging the great minds that make up the expert group, can deliver concrete and practical guidance that will assist AI development in the EU and prove valuable to the international dialogue.

AI technology, both existing and in development, can provide advanced educational services that adapt to people's abilities and needs, react faster to humanitarian disasters or, more importantly, prevent them, and detect and investigate fraud.

I hope the benefits are obvious to you, but of course we can neither diminish nor ignore the potential dangers that the use of AI could produce. The priority for our industry is to address how these technologies will respect European values and legal standards, in order to gain the broad social acceptance on which the success of AI in Europe depends.

The General Data Protection Regulation (GDPR), which recently came into force, offers EU citizens the highest level of data protection and privacy that has ever existed. Many of the largest companies developing AI technologies have built privacy-by-design features into their core development processes.

The same applies to cybersecurity. The technology industry has developed protocols within its engineering teams, as we understood over a decade ago that the success of our industry depends on the confidence and trust we can secure in our products and services.

Defining ethical standards is a moral imperative and the means of preserving our values. Every day we work on technical and procedural solutions to achieve this.

Technology companies are listening closely to governments and civil society on issues of fairness, accountability and transparency to address concerns about bias and discrimination. To avoid such problems, especially with regard to implicit biases, and to ensure ethical, fair and accountable AI, many companies have put in place constant re-evaluation processes. These processes can detect divergences and anomalies, and quickly correct such flaws.
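As an illustrative sketch of what such a re-evaluation process might check (the decision data, group labels and 0.8 threshold here are entirely hypothetical, invented for this example), a periodic job could compare a model's approval rates across demographic groups and flag a divergence for human review:

```python
# Illustrative sketch of a periodic fairness re-evaluation check.
# All data and the threshold are hypothetical, for illustration only.

def approval_rate(decisions):
    """Fraction of positive ('approved') outcomes in a batch of decisions."""
    return sum(decisions) / len(decisions)

def divergence_flag(group_a, group_b, threshold=0.8):
    """Flag when the lower group's approval rate falls below
    `threshold` times the higher group's rate."""
    ra, rb = approval_rate(group_a), approval_rate(group_b)
    low, high = sorted((ra, rb))
    return low / high < threshold

# Hypothetical batch of recent model decisions, split by group:
group_a = [True, True, True, False]    # 75% approved
group_b = [True, False, False, False]  # 25% approved

print(divergence_flag(group_a, group_b))  # True: rates diverge, review the model
```

A flagged divergence is not proof of discrimination, but it is exactly the kind of anomaly a re-evaluation process should surface for investigation and correction.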

We believe the best way to reduce bias is through robust data quality policies that can make available high-quality and diverse data sets. The higher the quality of the input, the higher the quality of the output. This, combined with training and education of the designers and consumers of AI and algorithms, will help to prevent discrimination and to better detect and correct such issues.

We also need a good balance of genders, cultures, sexual orientations and ethnic backgrounds in developing AI applications, or we risk creating solutions that are not suitable for everyone going forward. According to Eurostat, over 83% of ICT specialists in Europe are men. We need a continued focus not only on ensuring that computer coding is built into the education system as a core area, but also on developing it in a way that is more attractive to women. Only then will we have all citizens on board in the digital era. I have been part of the secretariat for the Digital Skills and Jobs Coalition, where I have urged governments to make the necessary investments and changes in the educational systems to ensure our society as a whole is inclusive, and to work with employers to re-skill and up-skill the workforce.

I also very much welcome the broad stakeholder engagement that is being developed in the AI Alliance to ensure that the technical and practical expertise is matched with the political, ethical and social views of Europe. This should be paired as well with an eye towards the global context, to form a robust policy framework and take a leadership role for others to follow.

I look forward to hearing your views and to sharing ideas with you during the next meetings of the HLG.

Comments

Posted by Pawel RZESZUCINSKI, Thu, 05/07/2018 - 08:56

Thank you for your thoughts. Something that came to my mind as I read your post is the necessity to work on tools to help non-technical citizens accept the onset of AI. I'm a data scientist myself, but it's not difficult for me to visualize people rejecting all the great things that come from AI purely out of fear of the unknown. This can mostly be solved by early-stage education, as you mentioned, but not everyone has this scientific mindset. For these non-technical citizens, some high-level abstractions could be helpful. This also applies to the elderly, who will have to face the changes (e.g. talking to a chatbot in a local branch of a bank), especially those who don't have the curiosity to stay up-to-date with the latest developments in technology.

Posted by Norbert JASTROCH, Thu, 05/07/2018 - 13:05

It has been a most convincing move by the EU to introduce the GDPR. We should strive to make it a global asset in the context of AI.

As it appears that AI development is much more than a mere technical issue, and taking into view who are the ones that are or will be producing AI systems, shouldn't "basic ethics" become part of the standard curricula in ICT specialist education?

In reply to Norbert JASTROCH

Posted by Esteban GARCÍA…, Fri, 06/07/2018 - 12:40

Hi Norbert and thanks Cecilia for your post,

   I agree with Norbert's thought of making (what you called) "basic ethics" part of the curricula in ICT education. Indeed, I included a two-hour class on this in my Machine Learning subject at MSc level, showing some misleading real uses of AI algorithms, and the feedback from the students was very positive. I also believe that including it earlier in the curricula should be a must, given the importance and impact that a misleading use of AI could have for both industry and society. There are also industrial initiatives that try to ensure the ethical use of AI, which also shows the industry's interest in doing things the right way. Therefore, including this in the educational part of the process would help it later become embedded in industry as an acquired value rather than something new to be developed.

   It may be another topic, but my thought on how industry should handle the ethical part of AI is exactly what we are discussing in these posts: provide knowledge and education about AI to present and future professionals, and also implement these policies in AI teams' day-to-day working life.

      Best,

             Esteban.

Posted by Andres Abad Ro…, Mon, 09/07/2018 - 22:05

Very interesting post. It is also necessary to address how AI developments are perceived and used by elderly people.

Introducing AI in some public services can make a difference for good or for bad, depending on the interface provided to the user and the trust the user has in the results. It is not only about providing solutions but about creating a trust relationship with the recipient of such solutions.

Posted by Malay Upadhyay, Tue, 17/07/2018 - 00:21

Dear Cecilia,

Thank you for sharing your well-rounded thoughts. A few points cropped up as I was reading your post that I have encountered in my time spreading AI awareness among the corporate audience:

1. The decision makers - Those involved in producing and adopting AI solutions, i.e. corporate leaders and board members in organizations, face issues on two fronts:

a) A lack of understanding of what AI really is or how it works. This has an impact on two fronts: 

First, while AI or Machine Learning consists of many different tools, the use of one or a few in a solution can technically define it as AI-enabled. However, that doesn't make it effective. Buyers often find it difficult to distinguish between "narrow" and "deep" AI solutions, which can then lead to disappointment when AI proves ineffective for the business.

Second, as a result of this, disillusioned boardroom executives have come to perceive AI as more of a buzzword, still in the early stages of the Gartner technology hype cycle.

b) The second issue comes from a lack of process knowledge in deploying and maintaining AI. In working with some of the big banks, I have found that most offices purchase an AI solution first and then try to figure out how best to use it, instead of first deciding on the problem they wish to solve and then looking for a suitable AI tool. Moreover, maintaining the use of AI in an organization requires specific managerial best practices, without which the deployment cannot be sustained or is ineffective.

2. Data - I applaud your point on the need for robust, high-quality data and the need to make AI unbiased. As you already know, these are closely related. Even the most meticulously designed AI today will incorrectly predict that men are likely to earn more than women, simply due to the skewed salary data at its hands, since men have traditionally been the earning workforce across the world for so much of our modern history. While that may be the more measurable problem, we also need diversity in building AI to lend virtues like empathy and social sensitivity to its decision-making algorithms. To your point, you may find initiatives to teach girls to code, such as that of Doina Oncel at hEr VOLUTION or of CoderGirl, of interest. However, the problem is not so much getting women to pursue careers in STEM as keeping them in it. For instance, it has been observed that women can be more resistant to a sustained life of coding due to the loneliness and lack of social outreach that often accompany the work of a professional coder over time.
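The salary example can be sketched with hypothetical numbers: a naive model that learns only from historically skewed records will simply reproduce the skew rather than correct it. The records and the group-mean "model" below are invented purely for illustration.

```python
# Illustrative sketch: a naive model trained on historically skewed
# salary data learns gender as a salary predictor. All figures are
# hypothetical, chosen only to illustrate the skew.
from statistics import mean

# Hypothetical historical records: (gender, salary). The imbalance mirrors
# decades in which men dominated the higher-paid workforce.
records = [
    ("m", 62000), ("m", 58000), ("m", 71000), ("m", 65000),
    ("f", 48000), ("f", 51000),
]

def predict_salary(gender):
    """A 'model' that predicts the historical group mean --
    it reproduces the bias in the data, it does not correct it."""
    return mean(s for g, s in records if g == gender)

print(predict_salary("m") > predict_salary("f"))  # True: the bias is learned
```

This is why high-quality, diverse data sets matter: the model above is not malicious, it is simply faithful to a skewed sample.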

Cecilia, I hope this comment lends further depth to the ground realities of the challenges being faced in addressing AI funding, adoption, data and inclusion. I will be happy to clarify any point and certainly, excitedly, look forward to our contributions in creating an ethical and effective AI landscape. Thank you once again, and I wish you a great day ahead.

 

Regards,

Malay

In reply to Malay Upadhyay

Posted by Stefano Quintarelli, Tue, 17/07/2018 - 10:45

I believe the issue of unbiased data may, in many contexts, be intractable, as socially desirable behaviours, collectively devised, often differ from the actual behaviours of groups of individuals.