AI HLEG - Definition of AI

This is the definition that the High-Level Expert Group on AI (AI HLEG) proposes for the purpose of its deliverables. It accompanies the draft AI Ethics Guidelines, which are now subject to a Stakeholder Consultation.

Comments

Submitted by Anders Arpteg on Wed, 26/12/2018 - 12:05

Is it possible to provide feedback about this definition of AI, similar to the stakeholder consultation form for the ethics guidelines document? It is great that we are trying to find a common definition of AI, but I believe some formulations in the suggested definition could be improved.

Submitted by Nathalie Smuha on Thu, 27/12/2018 - 13:26

Hi Anders,

As the purpose of this document is to define "AI" merely for the purpose of the AI HLEG's deliverables, no consultation similar to the one on the draft Ethics Guidelines is foreseen. Feedback is, however, always useful, so we would certainly welcome your suggestions for improvement - please feel free to post them below - and we will ensure that the AI HLEG drafters of the definition take them into consideration.

Submitted by Richard Krajčoviech on Sat, 29/12/2018 - 21:12

Because of the blurry border between AI and non-AI, it might be helpful to include in the definition a section defining what is not considered AI. E.g. is the use of a trained neural network (with fixed coefficients) artificial intelligence, or is it an algorithm created by an AI approach?
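To illustrate the distinction: once the coefficients are frozen, running the network is just a deterministic function evaluation. A minimal sketch (my own illustration with made-up weights, not part of the proposed definition):

```python
import numpy as np

# Hypothetical "trained" network: the coefficients below are fixed constants,
# invented purely for illustration. With frozen weights, inference is a
# deterministic function evaluation -- no learning takes place at runtime.
W1 = np.array([[0.5, -0.3], [0.8, 0.2]])   # fixed hidden-layer weights
b1 = np.array([0.1, -0.1])
W2 = np.array([[1.0], [-1.0]])             # fixed output-layer weights
b2 = np.array([0.05])

def predict(x):
    """Forward pass through the frozen network (illustrative values only)."""
    hidden = np.tanh(x @ W1 + b1)
    return hidden @ W2 + b2

print(predict(np.array([0.2, 0.7])))  # always the same output for the same input
```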

Submitted by Richard Krajčoviech on Tue, 08/01/2019 - 10:36

Two more examples, which fall within the proposed definition of AI, but might not be considered AI applications:

1. A PID controller (e.g. a thermostat) senses the environment (temperature), makes decisions, even taking past responses into account, which mimics learning and memory, and acts on an actuator (the heating system) to achieve a specific goal (the preset temperature) - see the sketch below this list.

2. Optical Character Recognition - there are opinions that OCR is not considered AI anymore.
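For the PID case, a minimal sketch of such a controller (an illustrative toy thermostat loop of my own, not taken from the definition document) shows that it is entirely conventional control code, yet it senses, remembers and acts toward a goal:

```python
# Illustrative toy thermostat: a plain PID loop that senses a temperature,
# keeps a memory of past error (integral and derivative terms) and drives
# a heater towards a preset goal -- conventional control code, no learning.
def pid_step(setpoint, measured, state, kp=2.0, ki=0.1, kd=0.5, dt=1.0):
    """One PID update; `state` carries the accumulated integral and previous error."""
    error = setpoint - measured
    state["integral"] += error * dt
    derivative = (error - state["prev_error"]) / dt
    state["prev_error"] = error
    # Heater power, clamped to [0, 1]
    return max(0.0, min(1.0, kp * error + ki * state["integral"] + kd * derivative))

state = {"integral": 0.0, "prev_error": 0.0}
temperature = 18.0
for _ in range(10):
    power = pid_step(setpoint=21.0, measured=temperature, state=state)
    temperature += 0.5 * power - 0.1  # toy room model: heating minus heat loss
```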

 

Submitted by Richard Krajčoviech on Wed, 09/01/2019 - 07:38

I think there are (at least) two competing views on what AI is:

1. The marketing view (both in science and business), which wants to put the AI label on nearly every product, which is legitimate. My understanding is that the guidelines do not have the ambition to regulate the usage of the term AI.

2. The ethics guidelines view, which should focus on the technologies and approaches that bear significant or new risks compared to traditional computer systems. By a traditional computer system I understand a system which is designed, developed and tested by well-understood techniques and approaches to provide

- reasonable predictability of its behaviour,

- defined reliability in its operation

- etc.

so the developers, owners and users can UNDERSTAND and share responsibility for its actions.

We probably want to keep the guidelines acceptable for business and limit regulation (and business promises) to the risky areas, so I propose to focus the guidelines mainly on those AI techniques and approaches which are risky (in line with the bold goal in the executive summary) and which are not "traditional" in this sense. This might help with some disputes around the guidelines.