Addressing Ethical Dilemmas in AI: Listening to Engineers

“Ditch the algorithm!” became one of the battle cries of 2020, encapsulating the fear that humanity is being subordinated to technology[1]. At the same time, the rapid development of artificial intelligence (AI) systems is increasingly seen as a solution for making ever more life-defining decisions, be it on the climate crisis, your bank credit score, or the roll-out of vaccinations. AI systems bring immense opportunities as well as considerable ethical concerns.

But whose fault is it when AI discriminates against someone? Who is to blame when a computer makes an unethical decision? Who, in the end, is responsible for making sure that AI does no evil?

Engineers across the world face these questions every day in their role as designers of the systems used to make cheaper and faster decisions in an increasingly automated society.

In September 2020, similar questions were put to engineers from across Europe and beyond in a hackathon organised by the Association of Nordic Engineers (ANE), the Data Ethics ThinkDoTank (DataEthics.eu), the Institute of Electrical and Electronics Engineers (IEEE) and researchers from the Department of Computer Science at the University of Copenhagen. The organisations came together in international collaboration because ethical challenges in AI extend beyond borders and must be addressed not only at the national level but also at the regional and international levels.

The results of the discussions at the hackathon are published today in the report “Addressing Ethical Dilemmas in AI: Listening to Engineers”. The report voices engineers’ concerns and presents their recommendations. Engineers assert that they are ready to take responsibility for tackling ethical dilemmas in AI, but they can’t shoulder the responsibility alone.

These discussions are not new, but if we are to harness the potential of AI systems to automatically make our lives better, then we need to be able to trust the algorithms, the system owners and the developers. To build this trust, engineers are calling for guidance, governance and space to voice concerns and to sustain discussions. Here are the main recommendations from the report:

  • Put in place a governance framework and define and distribute the responsibilities,
  • Engage with stakeholders during system design,
  • Develop and share standards and best practices for addressing ethical challenges,
  • Create spaces to help engineers identify, discuss and deal with ethical issues, 
  • Create new and better technical documentation and certification, and
  • Push for explainability and testing of AI systems before system launch and throughout the system life cycle.

These ethical challenges can only be addressed through collaboration. To secure ethical AI development, there is a need for public debate and urgent legislation on frameworks for governance, responsibility, and security. Engineers and other AI professionals should be able to rely on their professional organisations and trade unions to create spaces for discussion and shared deliberation.

If we don’t take these steps, who will be to blame when AI makes the wrong decisions?

It is time to take responsibility for the algorithms. Together.

[1] The Guardian, 19/08/2020: https://www.theguardian.com/commentisfree/2020/aug/19/ditch-the-algorithm-generation-student
