AI systems, as digital technologies, never function in isolation; they are part of a larger ICT infrastructure and are subject to cyber-attacks in the same way as other ICT systems. For that reason, AI systems must implement security measures that are proportional to the identified risks and threats.
The European Union Agency for Cybersecurity (ENISA) recognised early on the need for work in this area. Over the last few years, the Agency has published multiple reports on AI and cybersecurity, starting with a Threat Landscape for AI, which elaborated new avenues for manipulation and attack methods, as well as new privacy and data protection challenges.
One area of particular significance identified in the AI Threat Landscape was the need to ensure that ML algorithms are resilient to adversarial machine learning, data manipulation and data poisoning. Consequently, ENISA published a study on how to prevent cyberattacks on machine learning systems, which identified a number of mitigating security controls for organisations to deploy.
Continuing this work, the Agency developed a multilayer cybersecurity framework for AI, together with guidelines on how to use existing knowledge and cybersecurity best practices to secure AI systems, operations and processes. Recognising that each business domain has specific cybersecurity requirements and threats, two use cases on cybersecurity and privacy in AI (medical imaging diagnosis and forecasting demand on electricity grids) were published at the same time. Both acknowledge that security and privacy are equally important and closely related; therefore, to achieve a secure and trustworthy AI ecosystem, a balance between functionality, security and privacy must be struck in each scenario.
In anticipation of the cybersecurity implementation requirements and conformity assessment of high-risk AI systems defined in the AI Act proposal, ENISA conducted an analysis of cybersecurity standards for AI. Last but not least, a recent ENISA study identified five key research needs as proposals for future policy and funding initiatives at EU and Member State level.
As new cybersecurity challenges and related issues emerge, ENISA plans to continue its work on developing guidelines in support of AI stakeholders within the EU.