The European Commission's proposal for the AI Act represents a significant milestone in the regulation of Artificial Intelligence (AI). The attached report (recently published by JRC and DG CNECT) focuses on the cybersecurity requirement for high-risk AI systems, as set out in Article 15 of the regulation. It presents a high-level analysis in the context of the rapidly evolving AI landscape and provides a set of key guiding principles for achieving compliance with the AI Act.
The proposed AI Act focuses on AI systems, whose internal structure involves a range of components. Although AI models are essential components of AI systems, they do not constitute AI systems on their own. The AI Act's cybersecurity requirement therefore applies to the AI system as a whole, not directly to its internal components.
To ensure compliance, a security risk assessment should be conducted that takes into account the design of the system, identifies the relevant risks, and leads to the implementation of the necessary mitigation measures. This process requires an integrated and continuous approach, using proven cybersecurity practices and procedures combined with AI-specific controls.
Although the state of the art for securing AI models has limitations, AI systems may still achieve compliance with the AI Act's cybersecurity requirement as long as their cybersecurity risks are effectively mitigated through other measures, not exclusively those deployed at the AI model level. However, this may not always be possible. Indeed, for some high-risk AI systems using emerging AI technologies, compliance with the cybersecurity requirement of the AI Act may not be feasible unless these systems additionally introduce, in their design, new cybersecurity controls and mitigation measures of proven effectiveness.