Are artificial intelligence courts a discrimination risk?

AI, if properly implemented, might be of great help in creating true equality for vulnerable groups before the justice system.

Many have highlighted the risks of discrimination against vulnerable groups when utilizing artificial intelligence (AI) for judicial decision-making.

They explain how AI algorithms learn by observing and inferring from human behavior, and conclude that involuntary racial or gender prejudices are embedded in people’s preferences and decisions. AI would then be poised to reproduce these unfair preconceptions, an even greater risk if AI were somehow to replace human judges.

There are genuine concerns that AI systems could amplify discrimination learnt by watching humans. However, such perspectives may fail to appreciate how AI, if properly implemented, could be of great help in creating true equality for vulnerable groups before the justice system.

Better societies are built on the separation of powers and access to impartial and objective justice. However, things are far from perfect, particularly for the weakest.

Because life is finite, rights denied during years of legal proceedings have a lifetime impact. Cases take too long, lawyers are too expensive, and the long-term uncertainty of litigation is difficult to bear. Human justice is too often unable to provide fast, affordable decisions free of uncertainty. When justice is delayed, justice is in some way denied.

Impartial and fair judges are an essential element of any peaceful society. Still, we probably expect too much from them — no feelings, no personal agendas, no political or religious ideals, no needs, no hates, no passions, no weaknesses, no mistakes, round-the-clock work, no worries about poverty in retirement — just the cold reading of the applicable law and jurisprudence, and their application through an objective and deep understanding of the facts presented. The fact is that not all judges attain such high professional and personal standards, particularly in less developed countries.

If we want judges to be like the wonderfully objective entity we have in mind, AI-assisted courts — which seem technically feasible today — would be of great assistance. But instead of celebrating the prospects and making sure the technology works as we want, we raise barriers to it. Warnings of the impending dangers of racial profiling and discrimination are voiced often, but the real risk is not taking advantage of this new technological opportunity to drastically reduce discrimination.

We deem access to prompt, impartial, and objective justice to be a fundamental right, or at least one that should take priority over the right to be heard by a human judge. But we do not all agree on that prioritization, and this is the discussion to be held before AI can be truly embraced in courts.

The risk posed by the lack of transparency of AI-assisted decisions is, quite frankly, overstated. True, only specialists understand the machines’ processes, but reading inside a judge’s brain is entirely impossible. If opacity is a real problem, we already have it with human judges, where it is unsolvable; with machines it is not. Although the facts are presented in a courtroom, we simply cannot tell what goes on in a judge’s mind or what led to their verdict. Machines, on the other hand, can be audited at any time and do not have the privilege of hiding their real motives.
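To make the auditability point concrete, here is a minimal sketch of what one basic check could look like: comparing favorable-outcome rates across groups in a system’s decision log. The data and function names here are hypothetical illustrations, not any real court system’s records or API.

```python
# Minimal sketch of a fairness audit over hypothetical decision logs,
# where each entry is (group label, outcome) with outcome 1 = favorable.
from collections import defaultdict

def favorable_rate_by_group(decisions):
    """Return the share of favorable outcomes for each group."""
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favorable[group] += outcome
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest favorable-outcome rate.
    Values well below 1.0 flag a disparity worth investigating."""
    return min(rates.values()) / max(rates.values())

# Hypothetical log extracted from case records.
log = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = favorable_rate_by_group(log)
print(rates)                          # per-group favorable rates
print(disparate_impact_ratio(rates))  # 0.5 for this toy data
```

No equivalent systematic check can be run against the private reasoning of a human judge.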

Then again, computers are seen as lacking humanity. But if humanity meant truly listening to what a case is about and what is at stake, the better talker and the faster replier would not so often gain an unfair advantage before a human judge. Powerful parties with deep pockets and better lawyers can prepare for a case well in advance or drag a lawsuit out for years. Justice would be better served when judges’ emotions cannot be manipulated, and rich and poor, powerful and weak litigants would quite likely stand on an equal footing.

The weak and the poor do everything possible to avoid a legal battle with the rich and powerful, for several valid reasons: fear of being labeled troublemakers by potential employers, lack of funds for a potentially lengthy legal battle and, above all, litigation-related uncertainty. Being poor and vulnerable also means that risks are just riskier. So they drop the case, settling at best or being coerced into submission at worst. In many cases, they accept unfair resolutions and face the crude but concrete reality that the law is not so equal after all.

On the other hand, the rich and powerful can afford a legal battle, are less likely to waive their rights (except when there is a reputation risk), are rarely coerced into submission, and do not have to keep silent in unfair situations. Most importantly, they are not as afraid as the poor of facing the judiciary. Human justice as we know it, even when functioning and independent, protects and defends the poor and the weak when they are right, but only up to a certain point. The chances of being properly defended in court are often a matter of money, and the time it takes to get justice makes the situation much harder for the poor to bear. It is no shocker that there may be forces against changing the status quo of lengthy human court procedures.

Groups discriminated against on grounds of race, gender, religion, or sexual orientation are, on average, more vulnerable, poorer, and in many cases psychologically weaker. Discriminated racial groups may also have, in general, fewer years of formal schooling. In many countries, it is socially shameful to be associated with some vulnerable groups (for example, to be of a minority sexual orientation). The fusion of fear, risk, weakness, poverty, stigma, and unacceptance triggers complex effects: one of them is that discriminated groups are less likely to file complaints before the courts than non-discriminated groups.

This is probably one of the most significant sources of invisible discrimination: unequal real access to justice. A victim of discrimination based on sexual orientation who accepts their fate in silence suffers double damage: first from the perpetrator, second from institutions that do not provide a judicial environment in which they could feel adequately protected. The fact that cases are rarely reported and that courts rarely condemn such actions encourages perpetrators to continue, under a well-founded feeling of impunity, even if the antidiscrimination laws say otherwise.

Incorporating AI into the judicial system has the potential to change that. Cases would be handled much faster and more cheaply, without the fear or shame of facing a covertly racist judge, and without the likely disparity of having less well-paid lawyers to present the case. AI could lift some of the invisible barriers that prevent discriminated people from getting justice.

Studies may try to catalog all the risks AI poses and may discover new ones each day, but the discussion should also, and mainly, focus on striving for a better, faster, fairer, and more accessible judicial process, particularly for the weakest.

Discrimination is a complicated problem for the law to tackle because it is so difficult to demonstrate, so strong efforts are required to fight it. Even if implementing AI-assisted courts becomes quite complex along the way, it seems necessary if we are serious about obtaining true equality — affordable, fast, objective justice for all. Concerns about AI’s potential for discrimination are valid, but they do not outweigh the advantages of bringing AI into the judicial process, especially for vulnerable groups.

Most of the research on AI in the judiciary has focused on courts in developed countries. What about countries where corruption is rampant in the judiciary and law enforcement, and where financial resources are insufficient to pay decent salaries to judges and other law-enforcement officials? What about nations where the police or corrupt politicians attack lawyers, prosecutors, judges, or their families if they do not do what is expected of them? If judges live in fear, what about the vulnerable and poor? The judicial system in those nations is ultimately ineffective and largely part of the problem.

The objective is to render justice faster and more affordable, thus making it available to all, while making it even more objective and transparent. That means radically changing the culture and traditions of litigation and the legal profession, and incorporating leap-forward AI into court proceedings.

Comments

Submitted by Gustavo Ariel … on Tue, 31/08/2021 - 18:58

I published this a few days ago at www.medium.com and thought I would share it with the group as well.

Submitted by Matthieu Vergne on Tue, 30/11/2021 - 22:11

Although AI systems seem to have great potential in legal matters, they cannot be compared directly to judges. Either they are so powerful and reliable that they replace the judge, or they can only serve as support to the judge, who always has the last word.

In the second case, there is not much AI can do besides allowing a willing judge to exploit some technological features to help them do the job.

For a real breakthrough, AI would have to be far more advanced. But even if it becomes such an interesting tool, we should not forget that judges are judges because they are considered competent in judging, based on expert evaluations. If an AI must be transparent, it is first of all so that those experts can judge the performance of the AI system in judging human affairs.

More globally, this is all about knowing which requirements an AI should fulfil in the legal domain to be used as a tool there. But so far, not much has been done to focus on requirements elicitation and satisfaction for AI systems. We mainly judge them on a statistical basis: does the system seem to provide reasonable performance on a subset of cases? If it seems so, then let's use it. That is weak. Too weak to rely on to decide human affairs which, as you mention, may have lifetime impacts. Or at least too weak compared to how we evaluate human judges before trusting them to do their job.

I would recommend taking a look at the Requirements Engineering (RE) field, which has already produced a lot of research regarding requirements, including on law-related matters. And since there is a workshop that specifically focuses on RE research for AI systems, I would specifically recommend starting there:

https://futurium.ec.europa.eu/european-ai-alliance/open-discussion/requ…