Guidelines for Secure Development and Deployment of AI Systems

As part of the annual Internet Governance Forum (IGF) 2024, held in Riyadh from December 15 to 19, Kaspersky presented guidelines for the secure development and deployment of artificial intelligence (AI) systems. The document aims to help organizations avoid risks associated with the adoption of AI technologies by providing cybersecurity requirements that should be considered when implementing these systems.

The document is a resource for developers, administrators, and AI DevOps teams, and provides detailed, practical advice to address technical gaps and operational risks. The guidelines are particularly important for organizations relying on third-party AI models and cloud-based systems, where vulnerabilities can lead to significant data breaches and reputational damage. 

By embedding security-by-design principles, the guidelines help organizations align AI deployment with environmental, social, and governance (ESG) expectations as well as international compliance requirements. The paper addresses key aspects of developing, deploying, and operating AI systems, including design, security best practices, and integration, without focusing on foundational model development.

Kaspersky presents this document as a set of best practices and welcomes feedback from the community to develop the principles further. We are happy to discuss the guidelines and receive suggestions and ideas.

Please find the principles here:

https://content.kaspersky-labs.com/se/media/en/business-security/guidelines-for-secure-development-and-deployment-of-ai-systems.pdf


Tags: AI in cybersecurity, cybersecurity, secure development of AI systems