[Discussion] Recommendations of the Data Ethics Commission for the Federal Government’s Strategy on Artificial Intelligence

The recommendations, in brief.

Key points:

1) mandatory classification of AI systems according to their risk of harm (on a five-level scale);

2) a mandatory category-designation (labelling) scheme and risk assessment;

3) an obligation for companies to establish a contact position for algorithmic systems, similar to the Data Protection Officer under the GDPR.
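The classification scheme above could be sketched roughly as follows. This is only an illustration of the idea of mapping risk levels to regulatory responses; the level names and the responses for levels 1-4 are assumptions, not taken from the Commission's text (the source only specifies a five-level scale with a ban at the top level).

```python
from enum import IntEnum

class RiskLevel(IntEnum):
    """Hypothetical five-level risk-of-harm scale (names are illustrative)."""
    NEGLIGIBLE = 1
    SOME = 2
    REGULAR = 3
    SERIOUS = 4
    UNTENABLE = 5

def regulatory_response(level: RiskLevel) -> str:
    """Map a risk level to an assumed regulatory response.

    Only the level-5 ban is stated in the recommendations; the
    other responses are placeholders for the sake of the sketch.
    """
    responses = {
        RiskLevel.NEGLIGIBLE: "no special measures",
        RiskLevel.SOME: "transparency and monitoring obligations",
        RiskLevel.REGULAR: "ex-ante approval / licensing procedure",
        RiskLevel.SERIOUS: "continuous oversight by a supervisory body",
        RiskLevel.UNTENABLE: "complete or partial ban",
    }
    return responses[level]
```

The open question in the thread below is precisely who would assign the `RiskLevel` for any given system, and on what criteria.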

 

------------

Questions:

Who would manage and rate such systems?



For example, a city decides to implement a smart traffic-control system at its crossings, and then some expert declares: risk level 5, risk too high, don't do it..

And the number of such systems could reach hundreds of thousands in the next few years.

Who would manage and rate such systems? There is a risk that certification turns into lobbying: you want to install an AI system, you need a certificate, it costs money, some company issues it, you just pay. And once again, the question of who those experts would be.

 

Your thoughts?

Tags
AI, Artificial Intelligence, AI Alliance

Comments

In reply to by Anonymous (not verified)

Submitted by Linas Petkevičius on Sat, 02/11/2019 - 12:05

Hey Maximilian,

Really interesting notes and points; I really appreciate the answer.

As far as my personal opinion goes, it's not that we don't need regulation, but we don't need overregulation of everything. AI research is already slowing down in the health and public sectors due to the GDPR, and under such regulations the USA and China would eat AI companies and university research in these fields alive. Anyway, many problems, such as de-biasing and the like, are mathematical problems that can be formulated and solved, yet some of the claims sound as if we already had general artificial intelligence. And the proposed ban on AI systems at level 5 scares the hell out of me as a researcher:

 

User
Submitted by Anonymous (not verified) on Thu, 31/10/2019 - 20:27

User account was deleted