Harmonised standards as a key tool for the implementation of the future AI legislation

Artificial intelligence (AI) is recognised as the most promising “general purpose technology” of recent years: AI can constitute a product in itself, and it is also embedded in many other products across a variety of sectors. The tremendous impact of AI on the way people live and work is evident, both in terms of benefits and of possible negative effects on society, such as concerns around fairness, transparency, reliability and trust.

Since 2018, the European Commission (EC) has been defining its AI strategy and agenda along three key pillars:

- Boosting the EU's technological and industrial capacity and AI uptake across the economy;

- Preparing for the socio-economic changes brought about by AI;

- Ensuring an appropriate ethical and legal framework for AI.

From that point on, the EC put in place actions to pursue the economic development opportunities linked to AI while addressing its social, ethical and legal implications. The main milestones of this journey were the Coordinated Plan on Artificial Intelligence[1], the Communication on AI strategy entitled Building Trust in Human-Centric AI[2], and the establishment of the High-Level Expert Group on AI (HLEG)[3], which made major contributions to the global AI community, including the Ethics Guidelines for Trustworthy AI[4] and the Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self-assessment[5]. Moreover, with the White Paper on Artificial Intelligence[6] the European Commission recognised the need for an improved regulatory framework, including a possible new Regulation addressing the risks AI poses to safety and fundamental rights.

In April 2021, the European Commission proposed an AI Act (AIA), which aims to introduce new rules to turn Europe into the global hub for trustworthy AI. As Kilian Gross, Head of Unit for Artificial Intelligence at DG CONNECT, European Commission, pointed out[7], the combination of the first-ever legal framework on AI and a new Coordinated Plan with Member States will guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across the EU.

The proposed Artificial Intelligence Act is structured as New Legislative Framework (NLF) type legislation. The NLF is a well-established regulatory scheme that has proved effective in building an internal market for safe and compliant products: the main legal act sets out high-level provisions and essential requirements, while economic operators can achieve compliance with those requirements through the use of harmonised standards. Harmonised standards are European standards produced by the European standardisation organisations (notably CEN/CENELEC and ETSI) in response to a standardisation request from the Commission, providing the technical detail necessary to meet the ‘essential requirements’ of harmonisation legislation. Although harmonised standards are not legislation themselves (see picture below), they are key enablers of the European Single Market: they ensure that legal compliance is achieved through market-driven technical solutions, thereby empowering the digital transformation of society as a whole, boosting market development and increasing international competitiveness.

Standards vs legislation

The requirements of the proposed new regulatory framework on AI are set out as technical objectives that providers of AI systems will be expected to fulfil. Harmonised standards can thus be a key tool for the implementation of the future AI legislation: by providing the detailed technical specifications needed for actual compliance with the AI regulatory framework, they contribute to the specific objective of ensuring that AI systems are safe and trustworthy across Europe. It should also be noted that harmonised standards, which are subject to regular review, allow for technological evolution and the uptake of the latest state of the art.

Although the AI Act is still at a very preliminary stage and under negotiation, many efforts to take stock of existing AI standardisation are completed or ongoing. In this context, the report AI Watch: AI Standardisation Landscape state of play and link to the EC proposal for an AI regulatory framework[8], recently published by the Joint Research Centre (JRC) of the European Commission, is particularly important. It provides a first high-level analysis (to be complemented in a second version) of current European and international standardisation initiatives dealing with AI, analysing their relationship with the requirements of the AI Act and their suitability and operational feasibility. Based on this mapping and analysis, the report identifies essential and core standards, and formulates preliminary recommendations on recognised gaps and under-elaborated AIA requirements.

The results are quite interesting. First, it is evident that AI standardisation can already build, in general terms, on work done within several European and international standardisation organisations. The general population of AI-related standards consists of around 140 specifications, encompassing both standards that directly address AI-specific issues and standards more tangentially related to AI, such as those on enabling technologies like Big Data. This number is expected to grow further in the near future, as the number of AI standards published per year (see picture below) is forecast to peak at around 20 in 2021 and 2022 and to remain significant until 2024[9].

Yearly distribution of standard publication

Secondly, although some gaps were recognised for the AIA requirements on data and data governance, technical documentation, and the risk management system, the results show that there is a core group of standards which are suitable and usable for the eight identified key requirements.

Summary of relevant standards of AIA key requirements

Other examples of ongoing efforts are the Road Map on Artificial Intelligence defined by the CEN-CENELEC Focus Group, the interactive mapping of the global standardisation landscape across a set of technologies (see figure below), and the Report of TWG AI: Landscape of AI Standards[10] published by the European Observatory for ICT Standardisation (EUOS)[11] within StandICT.eu 2023.

 

Monitoring of standards

The monitoring and analysis of published standards should remain a continuous effort to ensure effective alignment between standards and the upcoming regulation, and thus support organisations in achieving compliance. An open discussion among relevant stakeholders should be established to explore the standardisation challenges and needs arising from the new AI framework, and to identify the role and contribution of the different actors over the next few years.

For example, during the “High-Level Conference on AI: From Ambition to Action”[12], organised on 14-15 September by the Slovenian Presidency of the Council of the European Union and the Slovenian Ministry of Public Administration together with the European Commission (DG CONNECT), a panel on AI standardisation was held. The panel gathered experts from the standardisation and research fields as well as from industry. It showed a general appreciation for the choice of the New Approach scheme, as harmonised standards are deemed an appropriate tool to deal with the regulatory challenges associated with AI. The early mobilisation and engagement of the standardisation community and the constructive dialogue between the ESOs and the Commission in this area were also noted. Some key challenges emerged, namely:

1. The need for technical expertise and broad stakeholder engagement (notably of SMEs) to develop AI standards that adequately reflect the concerns of the AIA and can easily be applied by operators;

2. The importance of having a large set of standards ready within 3-4 years from now;

3. The possible need for further research activity as a precondition for standardisation in certain specific areas.

 


 

In preparation for the future standardisation request, and to overcome these challenges, the Commission has actively promoted early engagement with standardisation organisations (notably the ESOs) to ensure that common views on future AI standardisation activities are agreed well ahead of the adoption of the AI Act.

For the future Regulation on AI to succeed, it will be key to ensure that the standards necessary for its full implementation are in place by the time the Regulation becomes fully applicable.

 

Resources

[1] https://ec.europa.eu/digital-single-market/en/news/coordinated-plan-artificial-intelligence

[2] COM/2019/168 final

[3] https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence

[4] https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai

[5] https://futurium.ec.europa.eu/en/european-ai-alliance/pages/altai-asses…

[6] COM/2020/65 final

[7] Report of TWG AI: Landscape of AI Standards, StandICT.eu 2023, 2021

[8] https://publications.jrc.ec.europa.eu/repository/handle/JRC125952

[9] AI Watch: AI Standardisation Landscape state of play and link to the EC proposal for an AI regulatory framework

[10] https://zenodo.org/record/5011179#.YVwN6ZpBwgw

[11] https://www.standict.eu/euos

[12] https://ai-from-ambition-to-action.com/

Tags
Standards Artificial Intelligence AIA AI Act monitoring European Standardisation Organisations (ESOs)