AI regulation: A pro-innovation approach – EU vs UK

Date: DD May 2023

In this article, the writers compare the United Kingdom's ("UK") plans for implementing a pro-innovation approach to AI regulation (the "UK Approach") with the European Union's ("EU") proposed Artificial Intelligence Act (the "EU AI Act").

 

Authors: Sean Musch, AI & Partners and Michael Borrelli, AI & Partners

AI – The opportunity and the challenge

 

AI currently delivers broad societal benefits, from medical advances to mitigating climate change.  As an example, an AI technology developed by DeepMind, a UK-based business, can predict the structure of almost every protein known to science.  Government frameworks consider the role of regulation in creating the environment for AI to flourish.  AI technologies have not yet reached their full potential.  Under the right conditions, AI will transform all areas of life and stimulate economies by unleashing innovation and driving productivity, creating new jobs and improving the workplace.

 

The UK has indicated a need to act quickly to continue to lead the international conversation on AI governance and to demonstrate the value of its pragmatic, proportionate regulatory approach.  In its report, the UK government identifies a short window for intervention to provide a clear, pro-innovation regulatory environment and make the UK one of the top places in the world to build foundational AI companies.  In a similar vein, EU legislators have signalled an intention to make the EU a global hub for AI innovation.  On both fronts, responding to risk and building public trust are important drivers for regulation.  At the same time, clear and consistent regulation can support business investment and build confidence in innovation.

What remains critical for the industry is winning and retaining consumer trust, which is key to the success of innovation economies.  Neither the EU nor the UK can afford to be without a clear, proportionate approach to regulation that enables the responsible application of AI to flourish.  Without such consideration, they risk creating cumbersome rules that apply to all AI technologies.

What are the policy objectives and intended effects?

 

Similarities exist in terms of the overall aims.  As shown below, the core similarities revolve around growth, safety and economic prosperity.

EU AI Act:

- Ensure that AI systems placed on the market and used are safe and respect existing law on fundamental rights and Union values.

- Enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems.

- Ensure legal certainty to facilitate investment and innovation in AI.

- Facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.

UK Approach:

- Drive growth and prosperity by boosting innovation, investment and public trust to harness the opportunities and benefits that AI technologies present.

- Strengthen the UK's position as a global leader in AI, by ensuring the UK is the best place to develop and use AI technologies.

 

What are the problems being tackled?

 

Again, similarities exist in terms of a common focus: the end-user.  AI's involvement in multiple activities of the economy, from simple chatbots to biometric identification, inevitably means that end-users are affected.  Protecting them at all costs seems to be the presiding theme.

EU AI Act:

- Safety risks.  Increased risks to the safety and security of citizens caused by the use of AI systems.

- Fundamental rights risks.  Use of AI systems poses an increased risk of violations of citizens' fundamental rights and Union values.

- Enforcement.  Competent authorities lack the powers and/or procedural framework to ensure that AI use complies with fundamental rights and safety requirements.

- Legal uncertainty.  Legal uncertainty and complexity about how to comply with the rules applicable to AI systems dissuade businesses from developing and using the technology.

- Mistrust.  Mistrust in AI would slow AI development in Europe and reduce the global competitiveness of EU economies.

- Fragmentation.  Fragmented measures create obstacles to a cross-border AI single market and threaten the Union's digital sovereignty.

UK Approach:

- Market failures.  A number of market failures (information asymmetry, misaligned incentives, negative externalities, regulatory failure) mean AI risks are not being adequately addressed.

- Consumer risks.  These include damage to physical and mental health, bias and discrimination, and infringements of privacy and individual rights.

 

 

What are the differences in policy options?

 

A variety of options have been considered by the respective policymakers.  On the face of it, a pro-innovation approach requires a holistic examination to account for the variety of challenges that new ways of working generate.  The EU sets the standard with Option 3.

EU AI Act (decided):

- Option 1: EU voluntary labelling scheme.  An EU act establishing a voluntary labelling scheme, with one definition of AI applicable only on a voluntary basis.

- Option 2: Ad hoc sectoral approach.  Ad hoc sectoral acts (revised or new); each sector can adopt a definition of AI and determine the riskiness of the AI systems covered.

- Option 3: Horizontal risk-based act on AI.  A single binding horizontal act on AI, with one horizontally applicable AI definition and a methodology for determining high risk (risk-based).

- Option 3+: Codes of conduct.  Option 3 plus industry-led codes of conduct for non-high-risk AI.

- Option 4: Horizontal act for all AI.  A single binding horizontal act on AI, with one horizontal AI definition but no methodology or gradation (all risks covered).

UK Approach (in process):

- Option 0: Do nothing.  Assume the EU delivers the AI Act as drafted in April 2021; the UK makes no regulatory changes regarding AI.

- Option 1: Delegate to existing regulators, guided by non-statutory advisory principles.  A non-legislative option with existing regulators applying cross-sectoral AI governance principles within their remits.

- Option 2 (preferred option): Delegate to existing regulators with a duty to regard the principles, supported by central AI regulatory functions.  Existing regulators have a "duty to have due regard" to the cross-sectoral AI governance principles; no new mandatory obligations for businesses.

- Option 3: Centralised AI regulator with new legislative requirements placed on AI systems.  The UK establishes a central AI regulator, with mandatory requirements for businesses aligned to the EU AI Act.

 

What are the estimated direct compliance costs to firms?

 

Both the UK Approach and the EU AI Act will apply to all AI systems designed or developed, made available or otherwise used in the EU/UK, whether they are developed in the EU/UK or abroad.  Both businesses that develop and deploy AI ("AI businesses") and businesses that use AI ("AI adopting businesses") are in scope of the frameworks.  These two types of firm face different expected costs per business under the respective frameworks.

UK Approach: Key assumptions for AI system costs

 

Key finding: the cost of compliance for high-risk systems (HRS) is highest under Option 3.

Metric | Option 0 | Option 1 | Option 2 | Option 3
% of businesses that provide high-risk systems (HRS) | - | 8.1% | 8.1% | 8.1%
Cost of compliance per HRS | - | £3,698 | £3,698 | £36,981
% of businesses with AI systems that interact with natural persons (non-HRS) | - | 39.0% | 39.0% | 39.0%
Cost of compliance per non-HRS | - | £330 | £330 | £330
Assumed number of AI systems per AI business (2020) | - | Small: 2, Medium: 5, Large: 10 (Options 1-3)
Assumed number of AI systems per AI adopting business (2020) | - | Small: 2, Medium: 5, Large: 10 (Options 1-3)

 
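To make the mechanics of these assumptions concrete, the sketch below combines them into an expected per-business compliance cost under Option 3.  It is purely illustrative: the probability-weighted aggregation formula is our own assumption for exposition, not the methodology of the UK government's impact assessment, and the figures are taken from the table above.

```python
# Illustrative only: combines the UK table's Option 3 assumptions into an
# expected per-business compliance cost. The aggregation formula below is
# our assumption for exposition, not the UK impact assessment's method.

# Assumed number of AI systems per business (2020), from the table above.
SYSTEMS_PER_BUSINESS = {"small": 2, "medium": 5, "large": 10}

P_HRS = 0.081           # share of businesses providing high-risk systems
COST_PER_HRS = 36_981   # Option 3 compliance cost per HRS (GBP)
P_NON_HRS = 0.390       # share with systems interacting with natural persons
COST_PER_NON_HRS = 330  # compliance cost per non-HRS (GBP)

def expected_cost(size: str) -> float:
    """Probability-weighted compliance cost for a business of a given size."""
    per_system = P_HRS * COST_PER_HRS + P_NON_HRS * COST_PER_NON_HRS
    return SYSTEMS_PER_BUSINESS[size] * per_system

for size in SYSTEMS_PER_BUSINESS:
    print(f"{size}: £{expected_cost(size):,.0f}")
# small: £6,248, medium: £15,621, large: £31,242 (hypothetical aggregation)
```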

 

EU AI Act: Total compliance cost of the five requirements for each AI product

 

Key finding: Information provision represents the highest cost incurred by firms.

Administrative Activity | Total Minutes | Total Admin Cost (hourly rate = €32) | Total Cost
Training Data | 5,180.5 | - | -
Documents & Record Keeping | 2,231 | - | -
Information Provision | 6,800 | - | -
Human Oversight | 1,260 | - | -
Robustness and Accuracy | 4,750 | - | -
Total | 20,581.5 | €10,976.8 | €29,276.8

 
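The figures hang together once the per-activity values are read as minutes of administrative effort.  The sketch below, our own illustrative check rather than anything published with the Act, converts the table's stated total minutes into the administrative cost at the €32 hourly rate.

```python
# Recomputes the EU table's total administrative cost from minutes at the
# stated hourly rate of EUR 32. Per-activity minute figures come from the
# table above; their sum (20,221.5) is slightly below the table's stated
# total (20,581.5), which is kept here as given in the source.

HOURLY_RATE_EUR = 32

minutes_per_activity = {
    "Training Data": 5_180.5,
    "Documents & Record Keeping": 2_231,
    "Information Provision": 6_800,  # the single largest cost driver
    "Human Oversight": 1_260,
    "Robustness and Accuracy": 4_750,
}

stated_total_minutes = 20_581.5
admin_cost = stated_total_minutes / 60 * HOURLY_RATE_EUR
print(f"Total admin cost: EUR {admin_cost:,.1f}")  # EUR 10,976.8, matching the table
```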

In light of these comparisons, the EU appears to estimate a lower cost of compliance than the UK.  Lower costs do not imply a less stringent approach; rather, they reflect an itemised approach to cost estimation and the use of a standard pricing metric, hours.  In practice, firms are likely to seek efficiencies by reducing the number of hours required to achieve compliance.

Lessons from the UK Approach for the EU AI Act

 

The forthcoming EU AI Act is set to place the EU at the global forefront of regulating this emerging technology.  Even so, models for the governance and mitigation of AI risk from outside the region can provide insightful lessons and issues for EU decision-makers to account for before the EU AI Act is passed.

This is certainly applicable to Article 9 of the EU AI Act, which requires developers to establish, implement, document, and maintain risk management systems for high-risk AI systems.  There are three key ideas for EU decision-makers to consider from the UK Approach.

 

 

AI Assurance techniques and technical standards

 

Under Article 17 of the EU AI Act, providers of high-risk AI systems must put in place a quality management system designed to ensure compliance.  To do this, providers must establish techniques, procedures and systematic actions to be used for development, quality control and quality assurance.  The EU AI Act only briefly covers the concept of assurance; it could benefit from published assurance techniques and technical standards, which play a critical role in enabling the responsible adoption of AI by ensuring that potential harms at all levels of society are identified and documented.

To assure AI systems effectively, the UK government is calling for a toolbox of assurance techniques to measure, evaluate and communicate the trustworthiness of AI systems across the development and deployment life cycle.  These techniques include impact assessment, audit, and performance testing along with formal verification methods. To help innovators understand how AI assurance techniques can support wider AI governance, the government plans to launch a Portfolio of AI assurance techniques in Spring 2023.  This is an industry collaboration to showcase how these tools are already being applied by businesses to real-world use cases and how they align with the AI regulatory principles.
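As a flavour of what one such assurance technique might look like in practice, the sketch below shows a minimal performance test that produces an auditable record for an AI system.  Everything in it is a hypothetical illustration: the system name, the accuracy metric and the 90% threshold are our assumptions, not requirements of either framework.

```python
# A minimal, hypothetical example of one assurance technique: performance
# testing with an auditable record. The accuracy metric, threshold and
# system name are illustrative assumptions, not prescribed by either regime.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AssuranceRecord:
    system_name: str
    metric: str
    value: float
    threshold: float
    passed: bool
    timestamp: str

def performance_test(system_name: str, predictions: list, labels: list,
                     threshold: float = 0.90) -> AssuranceRecord:
    """Score accuracy against a threshold and emit a record for audit."""
    accuracy = sum(p == l for p, l in zip(predictions, labels)) / len(labels)
    return AssuranceRecord(
        system_name=system_name,
        metric="accuracy",
        value=accuracy,
        threshold=threshold,
        passed=accuracy >= threshold,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

record = performance_test("credit-scoring-v2", [1, 0, 1, 1], [1, 0, 1, 0])
print(record)  # passed=False at 75% accuracy: a documented, communicable result
```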

Similarly, assurance techniques need to be underpinned by available technical standards, which provide a common understanding across assurance providers.  Technical standards and assurance techniques will also enable organisations to demonstrate that their systems are in line with the regulatory principles enshrined in the EU AI Act and the UK Approach.  The two initiatives are at a similar stage of development on this front.

Specifically, the EU AI Act defines common mandatory requirements applicable to the design and development of certain AI systems before they are placed on the market, which will be further operationalised through harmonised technical standards.  In equal fashion, the UK government intends to play a leading role in the development of international technical standards, working with industry, international and UK partners.  It plans to continue to support the role of technical standards in complementing its approach to AI regulation, including through the UK AI Standards Hub.  These technical standards may help firms demonstrate compliance with the EU AI Act.

 

A harmonised vocabulary

 

All relevant parties would benefit from reaching a consensus on the definitions of key terms related to the foundations of AI regulation.  While the EU AI Act and the UK Approach are still under development or in the incubation stage, decision-makers for both initiatives should seize the opportunity to develop a shared understanding of core AI ideas, principles, and concepts, and codify these into a harmonised transatlantic vocabulary.  Below, we identify where the two initiatives agree and where they diverge.

 

Shared (both the EU AI Act and the UK Approach): Accountability, Safety, Privacy, Transparency, Fairness

Divergent:

- EU AI Act: Data Governance, Diversity, Environmental and Social Well-Being, Human Agency and Oversight, Technical Robustness, Non-Discrimination

- UK Approach: Governance, Security, Robustness, Explainability, Contestability, Redress

 
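One way to see the structure of the comparison is to treat each initiative's vocabulary as a set.  The minimal sketch below, with term lists transcribed from the table above, recovers the shared and divergent groupings via set operations; it is illustrative only and implies nothing about how either regime interprets these terms.

```python
# Illustrative comparison of the two principle vocabularies using set
# operations; term lists are transcribed from the table above.
eu_ai_act = {"Accountability", "Safety", "Privacy", "Transparency", "Fairness",
             "Data Governance", "Diversity", "Environmental and Social Well-Being",
             "Human Agency and Oversight", "Technical Robustness", "Non-Discrimination"}
uk_approach = {"Accountability", "Safety", "Privacy", "Transparency", "Fairness",
               "Governance", "Security", "Robustness", "Explainability",
               "Contestability", "Redress"}

shared = eu_ai_act & uk_approach   # terms both initiatives use
eu_only = eu_ai_act - uk_approach  # divergent EU AI Act terms
uk_only = uk_approach - eu_ai_act  # divergent UK Approach terms
print(sorted(shared), sorted(eu_only), sorted(uk_only), sep="\n")
```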

How AI & Partners can help

 

We can help you start assessing your AI systems using recognised metrics ahead of the expected changes brought about by the EU AI Act.  Our leading practice is geared towards helping you identify, design, and implement appropriate metrics for your assessments.

Website: https://www.ai-and-partners.com/
