[Discussion] Recommendations of the Data Ethics Commission for the Federal Government’s Strategy on Artificial Intelligence

The recommendations.

Key points:
1) mandatory classification of AI systems according to their risk of harm (on a five-level scale);

2) a mandatory category designation scheme and risk assessment;

3) the obligation for companies to establish a contact position for algorithms, similar to that of the Data Protection Officer under the GDPR.




Who would manage and rate such systems?

For example, a city decides to implement a smart traffic-control system at crossings, and some expert then declares: risk level 5, risk too high, don't do it...
And the number of such systems could reach hundreds of thousands in the next few years.

Who would manage and rate such systems? There is a risk that this turns into lobbying: if you want to install an AI system, you need a certificate, it costs money, and some company will issue it if you just pay. And once again, the question of who those experts would be.


Your thoughts?


Submitted by Maximilian J. … on Thu, 10/31/2019 - 20:27

Hi Linas,

I was at the German Digital Summit, where representatives of that commission presented their work. I must admit that I deeply share your concern, and since then I have been really worried about the German (and European!) approach. Ethics in AI was a huge topic, but seemingly rather out of despair, because the German digital industry has no idea how to survive between data capitalism and data totalitarianism. Germany's position can be summed up as follows: "China has data, the US has money, we have... well, at least we are trustworthy people!" Supposedly, people would prefer products and solutions with trustworthy AI and would even pay more for them. (If this were true, we would all have switched from WhatsApp to other free but better-encrypted messengers.)

Anyways, ethics shall be the joker and USP of AI made in Germany. Unfortunately, when it comes to implementing ethics in AI, the speakers did not get far beyond the mere repetition of common-sense demands like "AI should be unbiased, understandable", etc. It was refreshing to hear State Secretary of the Ministry of Justice Gerd Billen explain: "Ethics works without laws, but laws won't work without ethics". That might explain why it is so tempting to ask for the technical solution of "ethics by design", i.e. the AI designer is supposed to program an AI algorithm that will behave ethically correctly FOREVER... without knowing a) from which data it will learn next, b) how exactly it interprets its data... AND, most importantly, c) without knowing how to define "ethically correct". As if they knew this whole ethics plan is flawed, they came up with a backup plan: "Gaia-X", the new state-funded and state-controlled cloud platform service to be launched in late 2020 in cooperation with France (that's basically all they have said so far)... It will be a tough challenge to make this thing a real ALTERNATIVE to "evil" data capitalism and data totalitarianism instead of just a COMBINATION of both. I wish them and us all the best.

In reply to Maximilian J. …

Submitted by Linas Petkevičius on Sat, 11/02/2019 - 12:05

Hey Maximilian,

Really interesting notes and points, I really appreciate the answer.

As far as my personal opinion goes, it's not that we don't need regulation, but we don't need overregulation. AI research is already slowing down in the health and public sectors due to the GDPR, and under such regulations the USA and China would eat European AI companies and university research alive in these fields. Anyway, many problems like bias are mathematical problems that can be formulated and solved, but some of the claims sound as if we already had artificial general intelligence. And that ban on AI systems at level 5 scares me as a researcher like hell:
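To illustrate the point that bias can be formulated mathematically: a minimal sketch of one common fairness criterion, demographic parity, which compares positive-outcome rates between two groups. All names and the toy data below are purely illustrative assumptions, not anything from the commission's report or this thread.

```python
# Hypothetical sketch of demographic parity: the absolute difference
# in positive-outcome rates between two groups. A value near 0 means
# both groups receive positive decisions at roughly the same rate.

def demographic_parity_difference(outcomes, groups):
    """outcomes: list of 0/1 decisions (1 = positive, e.g. loan approved)
    groups:   list of group labels ("A" or "B"), aligned with outcomes"""
    rate = {}
    for g in ("A", "B"):
        decisions = [o for o, grp in zip(outcomes, groups) if grp == g]
        rate[g] = sum(decisions) / len(decisions)
    return abs(rate["A"] - rate["B"])

# Toy example: group A gets 3/4 positive decisions, group B gets 1/4,
# so the demographic parity difference is |0.75 - 0.25| = 0.5.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(outcomes, groups))  # 0.5
```

Once the criterion is written down like this, it becomes an optimisation constraint one can measure and enforce during training, which is exactly the sense in which such problems are "mathematical" rather than matters for an external certification board.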