The War in Ukraine and AI Regulation: Some Controversial Takeaways

Since the beginning of the war, I have often been asked what the Ukrainian Expert Committee on Artificial Intelligence is doing now. Are we still doing policy-making? Certainly not.

Since the committee is a social institution, the same thing happened to it as to every other healthy structure during the war: it was urgently reorganized and focused on solving the country’s main problem. As a result, some members of the committee are now in the front-line trenches.

IMAGE: Serhiy Yarmolenko, Lawyer, Member of the AI Committee since 2020, pictured in military uniform on the battlefield

Others volunteer, work for the authorities, or advise the military on the use of modern technologies (including AI) and directly implement specific projects to reduce the number of russian soldiers.

But in this post, I would like to discuss how the war and its very real challenges have changed my understanding of AI regulation.

 

Background 

In December 2020, the Cabinet of Ministers of Ukraine approved the AI development concept and its implementation plan. This was preceded by a year of public discussion and significant revisions to the original draft, as civil society organizations and activists mainly defended human rights, privacy, and ethics rather than questions of security or the use of AI in case of war.

This is not to say that human rights, AI ethics, and privacy are not a priority. Without a doubt, they are truly the #1 priority. But the problem is that among the experts trying to regulate AI, there were many specialists in ethics, human rights, and privacy, and very few who had a practical understanding of how AI works and what its limitations are.

As the Head of the Expert Committee on Artificial Intelligence, I often had to endure, during the process of agreeing on the document, public accusations bordering on speculation: of “inhumanity”, a “desire to subdue the world”, “chipization”, and so on, from people whose understanding of AI is more “magical” and “esoteric” than realistic. With the beginning of the war, everything fell into place, and some of my public opponents even apologized privately, explaining that their behavior was the job they were paid for, not their personal opinion of how an AI ecosystem should work in the country.

 

What did the war show? 

War is good and bad at the same time: you stop paying attention to small details and start paying attention to essentials.

  1. Russian agents

During the development of the document, a lot of dubious (unqualified) people tried to wedge themselves into the process. A law firm from Russia even sent us a proposal to develop the concept...

All these characters took up a lot of time, offered idiotic or manipulative initiatives, tried to regulate the industry, and delayed the adoption of the document in every possible way for various reasons… mainly based on ethics and human rights. With the beginning of the war, many of these people declared a public pro-russian position. Now we can clearly say that these were russian agents whose goal was simply to reduce all sound initiatives to zero.

  2. Questions about the use of AI for military purposes should not be removed but discussed separately

During the development process, we tried to create an ideal document that would draw no complaints from the public. Because of this, many things had to be removed from the defense and cybersecurity sections. Looking back, I think that was a mistake.

Instead of removing these parts, we should have designed the document for two scenarios: “peacetime” and “wartime”. In that case, all the useful provisions would have been ready to apply under “Plan B”, and there would have been no need to water them down “as if for peacetime”.

  3. Data privacy and AI regulation requirements for small countries should be different from those for large countries

Since the beginning of the war in 2022, the bulk of the work on creating the AI systems in Ukraine that are now used for military purposes has been taken over by the private sector: activists and private companies.

Why did this happen? Small and medium-sized countries don’t have the resources to create state AI systems in advance for such crises. It is not difficult to understand that many systems were created without regard for privacy requirements and the like, because the goal was to survive.

If these activists and private companies had strictly followed the laws, Ukraine might not be doing so well on the battlefield right now. Therefore, small and medium-sized countries shouldn’t blindly copy the legislation of large countries; they should provide for “special regimes” for such situations (including an understanding of what to do with the personal data accumulated during the war and after it ends).

As practice shows, a country’s ability to quickly create AI systems for use in war is key to its survival.

  4. In discussions about the observance of human rights, we shouldn’t forget the country’s competitiveness

When discussing AI regulation, I often see a desire, mostly from activists and civil society representatives, to regulate the industry and to protect everything and everyone as fully as possible.

But as soon as I bring into the discussion the issue of competition with other countries, and the fact that the Western world can’t afford to lose the AI race to China and russia, this argument shifts my opponents from dogmatism to the search for compromise and practical solutions.

The case of Ukraine, where both sides actively use AI, shows how important it is to find practical solutions that allow a country to develop AI rather than compete in regulation.

 

Summary

This topic could be developed in much more detail, but the main thing the war has shown is that it cannot be won with piles of documents on the regulation of AI. A war can only be won with genuinely advanced AI technologies, developed infrastructure, and qualified engineering and management personnel.

Therefore, if Western countries want to compete with China and other potential adversaries, it is worth significantly revising the balance between spending on regulation and direct investment in technology development, as well as removing players with conflicts of interest from the regulatory process.

 

Author:  

Vitaliy Goncharuk 

LinkedIn: http://www.linkedin.com/in/vactivity

Head of the Expert Committee on Artificial Intelligence of Ukraine, 2020-2022

Tags: Artificial Intelligence, EUstandswithUkraine, Ukraine, war, war in Ukraine