The Governance of the AI Act: your questions answered

At the EU High-Level Conference on AI - From Ambition to Action in September 2021, we took a closer look, among other topics, at the governance framework of the European Commission’s Proposal for a Regulation on Artificial Intelligence (AI Act).

The discussion thrived on the exchange of different perspectives and offered three main lessons.

  • Much could be learned from the implementation of the General Data Protection Regulation (GDPR) and applied to the implementation of the AI Act;
  • The implementation of the AI Act requires adequate expertise and capacity;
  • A balance has to be struck between protecting human rights and facilitating AI innovation.

You can find the full outcomes of the breakout session on “AI and Governance”, as well as the recordings of the session, in the blog post published by Kilian Gross.

Public discussions on this important aspect of the AI Act did not end there. While the Council of the European Union and the European Parliament are negotiating the final legal text, we took some time to look into the numerous and pertinent questions and observations that members of the European AI Alliance and other stakeholders submitted in the context of September’s conference.

Concerning the risk-based approach, the Commission has attempted to find a balance by addressing the risks linked to certain uses of AI in a proportionate manner, without unduly constraining or hindering technological development, or otherwise excessively increasing the costs of placing AI solutions on the market. We believe that the AI Act sufficiently addresses the different levels of risk that AI systems may pose to health, safety or fundamental rights by laying out four distinct categories depending on the intended use of the AI system:

  • Unacceptable risk arising from certain AI practices which are to be prohibited;
  • High risk, which entails a number of obligations for providers and users of AI systems falling within this category. This category includes specific use cases where risks have materialised or are likely to materialise in the near future, based on solid, explicitly defined criteria for determining the level of risk;
  • Transparency risk, which entails obligations to inform affected persons that they are exposed to an AI system; and
  • Minimal or no risk, which covers the large majority of AI systems currently in use and does not entail mandatory requirements for operators. Providers of non-high-risk AI systems are nevertheless encouraged to draw up codes of conduct, which would foster the development, deployment and use of reliable AI systems beyond the mandatory rules. This process is to be facilitated by the Commission, as provided for in Article 69 of the AI Act proposal.

With respect to the governance framework, the audience underscored, for instance, that it should reflect the fact that AI and its ethical implications are constantly evolving and cross-border in nature. In this respect, the proposed Regulation provides for a two-layer governance framework, at EU and at national level. This will ensure the harmonised implementation of the AI Act across the EU, while leaving room for Member States to designate competent bodies as they deem fit for effective implementation. At the EU level, the envisaged AI Board will gather cross-country expertise, serve as a competence centre for national authorities, and ensure good coordination among them. An important role is also foreseen for external experts, advisory groups and other third parties, which may inform the activities of the Board to an appropriate extent.

The Commission paid particular attention to the evolving nature of the technology. The proposal therefore provides for a flexible mechanism to update the list of high-risk standalone AI systems laid down in Annex III through delegated acts, in order to respond to relevant technological and market developments. This shall be done in accordance with specific criteria for assessing the level of risk that these AI systems may cause. According to the proposal, the need for amendments to Annex III is to be assessed by the Commission once a year. Furthermore, an important possibility for dynamic regulatory responses is provided with respect to the definition of ‘AI system’, whereby the list of specific techniques and approaches used for the development of AI systems, as referred to in Article 3(1), may be updated in a similar fashion through delegated acts. We believe that these measures will make the legal framework future-proof and thus capable of taking into account the swift developments in the field of AI.

A number of questions revolved around the relationship between the AI Act and the General Data Protection Regulation (GDPR), which has been carefully assessed by the Commission. Notably, the two acts focus on different matters and complement each other: while the primary aim of the GDPR is to enhance individuals’ control and rights over their personal data, the raison d’être of the AI Act is to make sure that AI systems are trustworthy and compliant with harmonised rules across the internal market. The AI Act therefore treats AI systems as products, and its enforcement system is based on the EU product legislation framework - well known by the Member States - with ex-ante conformity assessment and ex-post market surveillance activities. Furthermore, the scope of the AI Act is limited to AI systems, which represent a subset of the automated decision-making systems regulated by the GDPR, rather than an equivalent category. It should also be noted that, insofar as AI systems are treated as products by the AI Act, the Commission did not deem it necessary or appropriate to provide them with legal personality. Finally, with respect to the question on anonymous training of AI systems, the GDPR provides for a requirement to minimise personal data with a view to preserving persons’ privacy. This also applies to AI systems processing personal data. At present, truly anonymous training of AI systems is very difficult to achieve with the existing state of the art. However, research in privacy-preserving technologies is advancing, and the Commission is following the promising developments in this area with attention.

With regard to the advice of the European Data Protection Supervisor and the European Data Protection Board that national data protection authorities be designated as supervisory authorities under the AI Act, we note that Article 59, paragraph 2, entitles Member States to determine which national competent authority shall be empowered with this function.

The proposed AI Act should not only build upon but also smoothly integrate into the existing regulatory framework in different sectors. In particular, a number of questions addressed the interaction of the proposal with the Medical Device Regulation and the possible additional burden on authorities and the industry. In this respect, we note that the envisaged harmonised requirements for AI systems will complement the existing essential requirements for medical devices and form an integrated and comprehensive framework. This will harmonise practices and create a level playing field in relation to AI systems in the medical device sector and beyond. Furthermore, the AI Act relies on the enforcement system laid out under the Medical Device Regulation (MDR), and compliance of relevant AI systems with the new requirements will be assessed in the framework of the already existing conformity assessment procedure under the MDR. Consequently, the market surveillance authorities designated under the MDR will also be responsible for the enforcement of the applicable rules of the AI Act. This avoids duplication and facilitates a smooth interplay between the two legal acts. Concerning the suggestion that the supervisory authorities under the Medical Device Regulation should be designated as supervisory authorities under the AI Act, we note that the national supervisory authorities envisaged in the AI Act will be responsible for the overall horizontal implementation of the Act and their designation will be at the discretion of the Member States. In any event, these national supervisory authorities may well rely on the competent authorities under the MDR for the appropriate supervision of AI applications used in the medical sector.

With regard to standardisation activities, the Commission is exploring ways to support the work of relevant organisations at a horizontal level, including in relation to AI. At the same time, in our work on AI we have noted that a significant number of diverse stakeholders have already proactively engaged in planning and producing relevant standards for the next few years. In any event, the standardisation process in the framework of the AI Act will see the active involvement and monitoring of the different services concerned, so as to ensure the smooth integration of future harmonised standards into the existing framework on medical devices. Last but not least, the database for stand-alone high-risk AI systems envisaged in the AI Act aims to facilitate the monitoring work of the Commission and national authorities by requiring providers to register relevant data (see Annex VIII) before placing AI systems on the market or otherwise putting them into service. Simultaneously, it will increase transparency towards the wider public, which would also have access to the register. The Commission has opted for a step-by-step approach so that ambitions and timelines remain aligned; this database would thus be significantly simpler than EUDAMED insofar as it would be a registration database only.

With regard to the question on assessment of AI systems intended to be applied in the public sector, we note that when a high-risk AI system is intended to be put into service by law enforcement, immigration or asylum authorities, or EU institutions, bodies, or agencies, the market surveillance authorities specified in Article 63, paragraphs 5 and 6, shall act as notified bodies. This will avoid a situation where third parties would evaluate these potentially sensitive AI solutions.

As far as the question about access to datasets is concerned, market surveillance authorities may, in the context of their activities, access the training, validation, and testing datasets used by providers of high-risk AI systems, but the proposed AI Act does not provide for any right of these authorities to grant third parties such access.

With respect to the question of labelling, Article 49 provides that a CE marking of conformity shall be affixed to each compliant high-risk AI system once it has successfully undergone the applicable conformity assessment procedure. If the nature of the high-risk AI system does not allow this, the CE marking shall be affixed to the packaging or to the accompanying documentation, as appropriate. This will effectively allow high-risk AI systems to move freely within the internal market.

Both the panellists and the audience rightly pointed out the importance of skills in terms of generating trust in AI among the wider public and in terms of ensuring an effective governance framework. Purposeful action is necessary to increase AI awareness and knowledge, in addition to sound legislation. In the 2021 review of the Coordinated Plan on AI, the Commission pledges to fulfil its mission to facilitate, together with the Member States, the acquisition of broad computing skills through all levels of education and to further the understanding of AI for all citizens. Notably, all Member States that have already adopted national AI strategies have integrated the skills dimension into their AI policies. Measures proposed include, for example, reforms of formal education systems to strengthen the teaching of computational thinking, computing, and AI foundations at primary or secondary school, as well as initiatives to adopt lifelong learning and reskilling policies. Education systems will of course also play a paramount role in managing the impact of AI and robotisation on jobs by enabling the workforce to take on new types of activities (for an overview and analysis of the challenges related to the world of work and possible policy responses, see the study “The future of work? Work of the future!”). Furthermore, the Commission’s Digital Education Action Plan for the period 2021-2027 integrates specific actions for the improvement of AI skills into the larger context of promoting digital skills, such as the development of ethical guidelines on AI and data usage in teaching and learning for educators, which will be accompanied by a training programme for researchers and students on the ethical aspects of AI.

Adequate competence and understanding of relevant AI technologies, data, and data computing on the side of national authorities is undoubtedly essential for the effective implementation of the AI Act. This is a key legal requirement for national competent authorities and notified bodies, laid down in the AI Act to ensure that they will be able to carry out their work with the necessary professional integrity and competence. In order to facilitate compliance, the Commission and the Member States are in the process of establishing a network of European Digital Innovation Hubs and Testing and Experimentation Facilities, which, together with the AI-on-demand platform, may provide technical and scientific support and thus contribute to the sound implementation of the AI Act. More information about these initiatives can be found in the 2021 review of the Coordinated Plan.

Last but not least, attendees inquired about strategic aspects of European AI policy in an international context. Notably, the aim pursued by the AI Act is to make sure that AI systems are used responsibly in Europe, rather than to try to control AI systems, in order to generate the appropriate trust among the wider public and foster uptake of the technology. In this respect, the place of development of AI systems is irrelevant as long as these AI systems (or their outputs) are used in the EU. The focus of the AI Act is on those AI systems that present high risks to the safety and fundamental rights of persons. The Regulation is therefore likely to have a limited impact on B2B applications, insofar as the risks it aims to mitigate are less likely to emerge in this domain.

Tags
HLCAI AIAct AIGovernance AITrust