Recent years have seen a steady increase in discussion about AI and governance, enabling trail-blazing developments in AI regulation. However, one domain remains something of a figurative minefield: the application of AI for military purposes. Perspectives vary: some argue that many forms of military AI should be banned internationally, while others assert that remaining competitive is of utmost importance and that any regulation may ultimately be harmful. So, how should we approach the regulation of military AI, and is there already something out there that can point a potential way forward?
Artificial Intelligence (AI) is spurring great digital transformations. It offers numerous benefits, yet it also brings challenges and potential risks stemming from the specific features of the technology and the contexts in which it can be used. This duality of AI, the balance of advantages and risks, may be especially amplified in a military or defence context. Just as AI can be used as a tool to optimise the lethal capacity of systems, as in the case of kamikaze drones, it can also be used to save the lives of civilians and soldiers and to defend against numerous threats, including physical and cybersecurity attacks.
Military applications of AI are often deployed in high-stakes situations, introducing new risk factors that need to be accounted for and raising the stakes of existing ones. For instance, there are not only ethical risks surrounding human rights, moral responsibility and accountability, but also significant operational and domain-specific risks, such as the reliability of such systems in extreme situations and the severe consequences should they prove vulnerable to attack. In perhaps no other domain is it more crucial to rely on safe, robust and trustworthy AI systems.
Yet, today, there are no formal certification processes, universally applicable standards or governance frameworks assuring such characteristics of AI in military contexts. International discussions are beginning to focus on these issues more seriously. The REAIM 2023 conference on Responsible AI in the Military Domain, organised by the Government of the Netherlands and co-hosted by the Republic of Korea in February 2023, marked an important step towards international discussion. The conference brought together participants from over 80 countries and a wide range of stakeholders and institutions to begin discussing the challenges of AI in military use. Governance frameworks for the responsible application of AI in the military domain were among the areas of focus. During its presentation on this topic, the European External Action Service (EEAS) highlighted the lack of international, multilateral agreements or any existing governance framework covering the use of AI in the military. Although countries like France and the US are exploring the military use of AI at the strategy level, and NATO has already adopted its own AI Strategy, most countries have not yet taken such steps, and current efforts are still far from the comprehensive multilateral frameworks (such as the UN Treaty on the Prohibition of Nuclear Weapons, for instance) that are crucial for ensuring continued global safety, peace and security.
Given AI’s novel nature, such concrete governance frameworks or international arrangements for responsible AI for military purposes will still require time to develop. And as we wait for discussions to formalise into international agreements, pre-existing state-of-the-art civilian regulations can provide an important source of inspiration. One such piece of regulation will be the European AI Act, once it is adopted.
In April 2021, the European Commission proposed a regulation laying down harmonised rules on AI (the AI Act), the Union’s legal framework addressing specific uses of AI, with the twin objectives of ensuring people’s safety and fundamental rights while promoting trust in, and uptake of, AI across the EU Single Market. The proposal follows a horizontal, proportionate, risk-based approach: it prohibits certain particularly harmful AI practices and introduces specific requirements and conformity assessment procedures for “high-risk” AI systems that pose significant risks to the health, safety or fundamental rights of persons.
A short disclaimer is necessary, however: while the AI Act prohibits or sets requirements for AI systems placed on the EU market, it explicitly excludes from its scope AI systems used exclusively for military purposes. The Council’s General Approach on the AI Act confirmed this exclusion by referring to systems developed for defence and military purposes as subject to public international law (see also Title V, Chapter 1 of the Treaty on European Union). However, there are “dual-use goods”, such as drones or biometric recognition systems, that can be used for both military and non-military purposes. Where such civilian uses are possible, these systems may also become available on the European market and thus necessitate appropriate safeguards.
Despite the exclusion mentioned above, the framework and horizontal requirements of the AI Act can also be of some relevance for AI in the military domain and could lead to a certain alignment of relevant standards.
- Risk-Tiered Approach: As noted during the discussions, the classification of AI applications into categories of risk may also present a suitable approach to the regulation of military AI applications. While the AI Act’s prohibitions and high-risk classification are unlikely to be directly applicable to military AI systems, the Act’s general tiered structure might be relevant as a regulatory approach. A categorisation under which some military systems are prohibited (such as Lethal Autonomous Weapons or other AI systems that directly facilitate war crimes) while other critical applications are subject to requirements ensuring trustworthiness and accountability could provide a suitable structure for regulating military AI (a conceptual sketch of such a taxonomy follows after this list).
- Relevant Requirements: The AI Act’s requirements on risk management, documentation, transparency, auditability, robustness and cybersecurity are essential for high-quality AI in critical contexts. Exchanging knowledge on the state-of-the-art solutions proposed by the AI Act, and on how to deal with challenges and risks, could therefore also be useful for the defence domain, while taking due account of its specificities. The military domain might, for example, require metrics regarding resource use or efficiency: a resource like electricity may be scarce in some military contexts, making it necessary for users to be aware of how much power an AI system consumes (this is not a strict requirement in the AI Act; an illustrative sketch follows after this list). Similarly, cybersecurity requirements may need to be stricter, as the dangers of cyber warfare are projected to be significant, and it becomes increasingly important that systems can operate independently of AI developers or technicians in situations where the latter may not be present.
- Human-Centric Approach: Most importantly, it is imperative that weapons, including those supported by AI systems, continue to respect International Humanitarian Law (IHL). This is especially apparent where autonomous weapon systems are concerned. The AI Act’s requirements are grounded in respect for fundamental rights, and while the requirements outlined above remain largely relevant for military systems, military contexts may involve justified restrictions on those rights that will have to be taken into account, while ensuring they remain within the limits of what is strictly permitted under international human rights law. Furthermore, the requirements for AI systems in military applications will need to account for IHL, in particular in times of conflict, ensure that systems cannot be used to target civilians or persons outside of combat, and in many other ways help save lives.
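
As a conceptual illustration of the tiered structure discussed in the first point above, the sketch below models a hypothetical risk taxonomy for military AI in Python. All tier names, example classifications and obligations here are invented for illustration; they do not reproduce the AI Act’s actual categories or any agreed military framework.

```python
# A minimal, hypothetical sketch of a risk-tiered taxonomy, loosely
# inspired by the AI Act's structure. None of these tiers, examples or
# obligations are taken from the AI Act or any military regulation.
from enum import Enum, auto


class RiskTier(Enum):
    PROHIBITED = auto()    # e.g. systems that directly facilitate war crimes
    HIGH_RISK = auto()     # permitted, but subject to strict requirements
    LIMITED_RISK = auto()  # lighter transparency and documentation duties
    MINIMAL_RISK = auto()  # no specific obligations


# Hypothetical example classifications, for illustration only.
EXAMPLE_CLASSIFICATION = {
    "fully_autonomous_lethal_targeting": RiskTier.PROHIBITED,
    "targeting_decision_support": RiskTier.HIGH_RISK,
    "logistics_route_planning": RiskTier.LIMITED_RISK,
    "training_simulation": RiskTier.MINIMAL_RISK,
}


def obligations(tier: RiskTier) -> str:
    """Map a tier to its (illustrative) regulatory consequence."""
    return {
        RiskTier.PROHIBITED: "may not be developed or deployed",
        RiskTier.HIGH_RISK: "conformity assessment, risk management, human oversight",
        RiskTier.LIMITED_RISK: "transparency and documentation duties",
        RiskTier.MINIMAL_RISK: "no mandatory requirements",
    }[tier]
```

The value of such a structure is that obligations scale with risk: a single framework can accommodate an outright ban on the worst applications without imposing the same burden on low-risk tools.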
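To make the power-awareness point in the second item concrete, here is a minimal sketch of how a deployment could measure the energy footprint of an AI workload. It assumes an NVIDIA GPU and the `pynvml` package (Python bindings for the NVIDIA Management Library); the function name `average_power_watts` and the sampling strategy are illustrative choices, not anything prescribed by the AI Act or an existing military standard.

```python
# Sample GPU power draw while a workload runs; assumes an NVIDIA GPU
# and `pip install pynvml`. Illustrative only.
import threading
import time

import pynvml


def average_power_watts(workload, interval_s: float = 0.1) -> float:
    """Run `workload` while sampling GPU power; return the mean draw in watts."""
    pynvml.nvmlInit()
    try:
        handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU only
        samples = []
        stop = threading.Event()

        def sampler():
            while not stop.is_set():
                # NVML reports instantaneous board power in milliwatts.
                samples.append(pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0)
                time.sleep(interval_s)

        thread = threading.Thread(target=sampler)
        thread.start()
        try:
            workload()  # e.g. a batch of model inferences
        finally:
            stop.set()
            thread.join()
        return sum(samples) / len(samples) if samples else 0.0
    finally:
        pynvml.nvmlShutdown()
```

Multiplying the average draw by the run time gives an energy estimate that could feed into the kind of resource metric described above.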
Finally, regardless of how the military or defence sector is regulated in the future, the AI Act may create beneficial spill-over effects. Namely, the sector will be able to procure high-risk AI systems certified for the EU market and produced for civilian or dual-use purposes, thus indirectly increasing the reliability and trustworthiness of defence and military systems. Dialogue between the military and civilian sectors would be highly useful for an early exchange of ideas on how the two sectors interact, and for drawing inspiration from horizontal principles on AI to help increase the trustworthiness of military AI systems.