The Impact of the EU’s New Data Protection Regulation on AI


The EU’s new data privacy rules, the General Data Protection Regulation (GDPR), will have a negative impact on the development and use of artificial intelligence (AI) in Europe, putting EU firms at a competitive disadvantage compared with their competitors in North America and Asia. The GDPR’s AI-limiting provisions do little to protect consumers, and may, in some cases, even harm them. The EU should reform the GDPR so that these rules do not tie down its digital economy in the coming years.

Tags: Artificial Intelligence, GDPR, AI

Comments

By Richard Krajčoviech on Sat., 22/09/2018 - 13:52

Every regulation that protects customers imposes obligations on businesses and raises barriers to market entry. It is about striking the proper balance between those aspects. I think the GDPR is a well-written but horribly interpreted regulation.

One point for all: the GDPR does not "[Require] companies to manually review significant algorithmic decisions". This is a huge simplification made by someone extremely conservative, and you should question the advisor who made this statement. Manual review would be a viable solution, but it is definitely not what the GDPR asks for.

Article 22 of the GDPR gives the data subject (e.g. a customer) the right not to be subject to a solely automated decision with significant legal effects, unless it is necessary for a contract or is based on the data subject's explicit consent, while implementing "suitable measures to safeguard the data subject's rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision".

Nothing requires manual review; there is a right of the customer to ask for human review, i.e. to complain to a human about an automated decision with significant legal effects. How many relevant complaints you receive then depends on the quality of the automated decisions.
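The mechanism described above (decisions stay fully automated, and a human is involved only when the data subject contests one) can be sketched as a minimal workflow. All class, field, and method names below are illustrative, invented for this sketch; they are not taken from the GDPR text or from any real compliance system:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    subject_id: str
    outcome: str           # e.g. "loan_denied"
    contested: bool = False

@dataclass
class DecisionService:
    """Illustrative sketch: automated decisions are recorded as made,
    and a human-review queue is populated only when a data subject
    contests one (an Article 22-style safeguard, not upfront review)."""
    decisions: dict = field(default_factory=dict)
    review_queue: list = field(default_factory=list)

    def decide(self, decision_id: str, decision: Decision) -> Decision:
        # The decision itself remains solely automated.
        self.decisions[decision_id] = decision
        return decision

    def contest(self, decision_id: str) -> None:
        # Human intervention is triggered after the fact, on request:
        # the contested decision is routed to a reviewer queue.
        decision = self.decisions[decision_id]
        decision.contested = True
        self.review_queue.append(decision_id)

service = DecisionService()
service.decide("d1", Decision("alice", "loan_denied"))
service.decide("d2", Decision("bob", "loan_approved"))
service.contest("d1")  # only the contested decision reaches a human
```

Under this reading, the workload on human reviewers scales with the number of complaints, not with the number of decisions.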

Actually, this requirement is very similar to the US ACM's principle #2: "Regulators should encourage the adoption of mechanisms that enable questioning and redress for individuals and groups that are adversely affected by algorithmically informed decisions."

The major weakness of the GDPR is its poor PR, which has resulted in extremely conservative, business-adverse interpretations.

In reply to Richard Krajčoviech

By Daniel Castro on Sat., 22/09/2018 - 16:38

Thank you for this thoughtful comment. Respectfully, however, I disagree with your conclusion that the GDPR does not impose a significant obligation on organizations to be able to conduct manual (i.e. human) review of certain solely automated decisions.

As described in the WP29 guidance, "The controller cannot avoid the Article 22 provisions by fabricating human involvement. For example, if someone routinely applies automatically generated profiles to individuals without any actual influence on the result, this would still be a decision based solely on automated processing. To qualify as human involvement, the controller must ensure that any oversight of the decision is meaningful, rather than just a token gesture. It should be carried out by someone who has the authority and competence to change the decision. As part of the analysis, they should consider all the relevant data."

If your point is merely that a particular review must occur only after a complaint, then I agree with you (and indeed, while your quote is from a section heading of the report's summary, the actual text of the report describes this point in more detail). However, the burden of human review is substantial and requires upfront consideration on the part of businesses. Moreover, the potential for a significant volume of requests for manual review necessarily limits businesses from automating certain processes. This limit is not tied to the accuracy of those decisions: the data subject has no incentive not to seek a human review of an automated decision that has an appropriate, but adverse, effect (e.g. denying a loan to someone who is already overextended on credit). Without some countervailing measure, there is no backstop to prevent excessive requests for human review.

In evaluating any regulation, it is important to look not only at its intent but also at how it can lead to unintended consequences or even be abused. In this regard, the GDPR falls short.


In reply to Daniel Castro

By Richard Krajčoviech on Sat., 22/09/2018 - 18:06

We should be more precise about which decisions are subject to Article 22 according to WP29. These are:

- decisions affecting legal rights (i.e. something guaranteed by law). The examples given are too serious to be handled by artificial intelligence at its current stage or in the near future.

- decisions with "similarly significant effects". WP29 admits that it is difficult to be precise here and gives examples such as affecting someone's financial circumstances, access to health services, employment opportunities, access to education, etc. Again, these are serious matters that deeply affect the lives of the people involved. I appreciate the enthusiasm for AI and I support research in this area, but in my opinion the areas mentioned are not a playing field for such experimenting. I doubt this prevents advances in AI. There are plenty of other applications that do not affect people's lives so seriously, where AI can be freely applied and where it can advance.

- special cases of general usage, where all the given examples prevent abusive targeting of vulnerable groups or societies and other unfair business practices. Again, I doubt this prevents advances in AI.


WP29 then gives examples of automated decisions exempt from the human-review requirement. The most relevant to our discussion is the case where a company receives a huge number of applications and uses automated processing to make a shortlist.


There are many cases in between, but the message seems quite clear: AI is not mature enough to make serious decisions about human lives, and whoever wants to apply it must do so responsibly. In my opinion, these are fair requirements for the responsible conduct of business and the responsible application of artificial intelligence. I believe this does not prevent further research and development of artificial intelligence in areas where it might be of help.