by Andrea Renda, Senior Research Fellow and Head of Global Governance, Regulation, Innovation and the Digital Economy at CEPS and member of the AI HLG
After its official launch on June 18, the CEPS Task Force on AI met on 9 July to discuss aspects of ethics as well as future investment in various AI sectors. We had a stellar lineup including Vince Conitzer (Duke), Michal Pechouchek (Czech Technical University), Koen Simoens (Sentiance), and two fellow members of the Commission High Level Group on AI: Virginia Dignum (TU Delft) and Luciano Floridi (Oxford Internet Institute). As the coordinator of the Task Force, I opened, moderated and then closed the day by proposing an outline for a draft report that should see the light by the end of the year, in time to contribute to the future deliverables of the Commission High Level Group.
Vince Conitzer opened the day with a marvellous presentation on “moral AI”, drawing in particular on his recent work on algorithms used to identify beneficiaries in kidney exchanges. Should younger beneficiaries have priority over older ones? Should other health conditions matter? Should drinking or eating habits be considered? And what if the majority of citizens argue that nationality or ethnicity should also be considered? His was a concrete example of the challenges of aligning AI with human values: which values to choose, which criteria to use, which stakeholders to involve, and how to strike a balance between a top-down, technocratic definition of the values behind algorithms used in critical sectors such as healthcare, and the need to reflect citizens’ preferences and values when delegating decisions to algorithms. This “technocracy v. democracy” tension echoes similar trends in other policy domains and is likely to remain pivotal in the future.
Michal Pechouchek proposed a very interesting definition of AI as a family of technologies, and a scientific field, that enables (and studies) the automation, acceleration and extreme scalability of human perception, decision-making and reasoning. His enlightened perspective, fuelled by years of experience in the private sector and academia in the US and the EU, pointed to successes, use cases and emerging trends in AI, including high-performance machine learning on privacy-preserving datasets, learning actionable intelligence, explainable artificial intelligence, AI combined with complex physical principles, adversarial machine learning, and more. He argued that Europe can play a key role in the global “race for AI”, but should not pretend to lead in all fields. An effective strategy should start with significant support for academic activities, much better career prospects for AI researchers, a focus on areas such as healthcare, manufacturing and transportation, playing catch-up on defence and cybersecurity, and embracing, rather than resisting, job automation by promoting high-income jobs and the related skills.
Koen Simoens explained how Sentiance, a Belgian company with a global presence, uses big data and AI to process sensor data and build user profiles, which it then provides to its corporate and institutional customers. User habits and movements are tracked to improve people’s choices, such as remembering to take a medicine or receiving fitness-related advice. Sentiance sees the GDPR as an opportunity: rather than advocating deregulation, it expressed a preference for the creation of a level playing field through a coherent, consistent regulatory framework.
Virginia Dignum led us into the world of “Responsible AI”, with key reflections for the future work of the High-Level Group on AI, subgroup on Ethics. She asked whether the EU really needs yet another set of ethical guidelines, or rather “meta” guidelines describing how to ensure ethical AI. Her preference goes to the latter: this requires taking and allocating responsibility for how we design, implement and use AI. More specifically, it requires (i) ethics in design (ensuring that development processes are aligned with ethical principles, considering the ethical implications of AI as it integrates with and replaces traditional systems and social structures); (ii) ethics by design (integrating ethical reasoning abilities into the behaviour of artificial autonomous systems); and (iii) ethics for design(ers) (the research integrity of researchers and manufacturers as they design, construct, use and manage AI systems). She concluded that optimal AI is explainable AI, and that ethical values should become a basis for innovation, possibly embedded in future certification and regulation.
Finally, Luciano Floridi provided an ethical perspective on AI, with insights from several recent publications, including one on the grand challenges of science robotics. He advocated making AI environment-friendly and human-friendly, and illustrated ways of making stupidity work for intelligence, making predictability work for self-determination, and making technology make us more human. In dwelling on the interrelations between governance, ethics and regulation, he introduced us to what he called the space of “soft ethics”, constrained by human rights, compliance and feasibility, and likely to expand over time. It is within that space, Luciano argued, that ethical AI should move. Soft ethics has the dual advantage of enabling both risk management and the identification of areas of opportunity. And it should be built on the very solid basis provided by international law, in particular when it comes to human rights and sustainable development.
This was a very rewarding day, coupled with cautious optimism about Europe’s potential to acquire and preserve relevance on the global AI scene. For the High Level Group and for the AI Alliance, these experts are an important reference. The debate pointed to the need to frame AI within the overall sustainable development (Agenda 2030) and human rights debates; the need to promote ethical AI by focusing on all steps of AI development (similar to the “from farm to fork” approach the EU takes in other fields such as agri-food); the need to use certification as a tool to signal and monitor compliance with EU values, as a key “quality stamp” in the global context, rather than shutting the door to international cooperation; and the need to engage in AI development together, rather than trying to win the race. As an old proverb goes, “if you want to go fast, go alone; if you want to go far, go together.”
The next meeting will take place on 3 September, with experts from all over the world (in particular, Silicon Valley) and a deep dive into bio-robotics. Stay tuned!