Requirements for High-Risk AI Systems

The EU is leading the way in regulating the use of AI with its proposed AI Act, which aims to ensure that AI systems placed on the EU market are safe and do not pose a risk to citizens' rights. The Act classifies AI systems into minimal, limited, high, and unacceptable risk categories, and systems deemed high-risk must meet the requirements set out in Chapter 2 of the Act. These cover areas such as data governance, documentation, transparency, human oversight, robustness, accuracy, and security.

This paper focuses on the Chapter 2 requirements and highlights a gap between what compliance demands and what will be considered sufficient to avoid liability. How high-risk AI systems can comply with the EU AI Act is a critical question for everyone involved in developing and deploying them.

Download our white paper, Requirements for High-Risk AI Systems.
