“Black letter” Ethics for AI: Added Value and Limitations

Imposing an ethical duty on AI systems and stakeholders to comply with binding rules of law, regulation and principles adopted by legislative, executive or judicial organs – hereafter, “black letter” ethics – looks like a modest ambition. It is quite the contrary.

AI compliance with black letter law is not a given.  Take the human rights protected by the Charter of Fundamental Rights of the European Union (“the EU Charter”)[1].  The provisions of the EU Charter are a source of obligation only for the EU institutions and Member State actors, not for private parties[2].  AI systems, developers, producers and users are thus not bound to observe basic human rights, such as the right to dignity, when they design, experiment with, roll out and use technology in society.  A requirement to build, operate and maintain AI systems in compliance with the EU Charter would no doubt constitute significant progress.

Similarly, provisions of the black letter law are subject to explicit or implicit limitations.  These restrictions are often the byproduct of the complex negotiation dynamics inherent in all rule-making processes.  Take Article 22 of the General Data Protection Regulation (“GDPR”)[3].  The right of individuals to resist automated decision-making can be excluded when automated processing is necessary for the conclusion of a contract between a data controller and a data subject.  Such limitations are, however, no longer required when we envision the introduction of ethical duties on AI systems and stakeholders.  This is because the purpose of ethical duties is not to provide enforceability by design to the black letter law, but to set guidelines of good behavior for AI systems and stakeholders.  Limitations found in black letter law may thus be legitimately relaxed.

At the same time, black letter ethics for AI has two downsides:

First, AI systems and stakeholders that are compliant by design with existing laws and regulations may claim wholesale immunity from litigation, indemnification and remediation on the ground that due care was exercised at the compliance stage.  Blanket legal immunity in cases of diligent black letter compliance is, however, inappropriate.  Legal rules and principles can be bypassed, circumvented and tweaked.  In such cases, victims of harm would be denied redress, both material and moral.

Second, if applied across the board, black letter ethics may negate some of the opportunities for ethical improvement generated by the introduction of AI systems in society.  While our common assumption when we think of AI is one of legal “gaps”, the existing legal system may also display legal “redundancies”.  This can be seen in areas where the legal system establishes “human preserves”, granting humans a monopoly over decision-making (think, for instance, of professions like lawyers, cab drivers and doctors).  When this is the case, our laws may deny citizens the benefits of AI-assisted decision-making even where it could remove human biases.  Moreover, unlike computer code, existing laws and regulations are abstract, messy and impractical.

It is therefore unclear whether the black letter law can be translated with the degree of instructional and interpretational accuracy required by AI systems.


[1] Charter of Fundamental Rights of the European Union [2012] OJ C326/02, available at https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:12012P/T… (hereafter “the EU Charter”).

[2] Association de médiation sociale v Union locale des syndicats CGT [2014] ECLI:EU:C:2014:2, available at https://eur-lex.europa.eu/legal-content/FR/TXT/?uri=CELEX%3A62012CJ0176.

[3] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC [2016] OJ L119, Vol. 59, available at https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32016R06… (hereafter “GDPR”).