
How Is the United Kingdom Tackling AI Regulation?

As governments around the world grapple with the ‘promise and perils’ of artificial intelligence, efforts to design and implement regulations and policies in the UK to ensure safe and trustworthy AI systems are rapidly evolving.

Different sovereign jurisdictions will adopt slightly different approaches to AI regulation. On one end of the spectrum, there’s a clear intention to adopt pro-innovation policy, while on the other, there’s a more cautious approach that will demand rigorous governance and regulation. Each government will aim to establish what they deem is an appropriate balance between these two extremes.

Efforts at regulating AI in the UK currently differ from those in the EU. Whereas the EU is implementing a comprehensive risk-based legal framework to govern the development and use of AI, the UK has sought to establish a cross-sector, outcome-based framework that leans more towards the pro-innovation approach. “Every jurisdiction is looking into what they need to do to rein in the potential abuse of AI without dampening the enthusiasm for innovation,” Dr. Meghan Anzelc, Chief Data and Analytics Officer at Three Arc Advisory, told us in a recent Diligent podcast about AI ethics.

As boards, CEOs, CCOs, CTOs and legal and compliance leaders prepare for compliance with new global AI regulations, it’s crucial for decision-makers to properly understand the diverse national and industry-specific AI regulations in the jurisdictions where they operate or plan to expand.

Here, we discuss the state of UK AI regulation and compare the UK’s approach to that of other global jurisdictions.

Understanding the EU AI Act is a critical first step for UK companies

While the European Union’s AI Act is not a regulation specific to the UK, it does cover any UK-based company doing business in the EU and using an AI system in the EU or making AI systems available in the EU. Understanding the EU AI Act is therefore crucial for UK companies, especially those operating within or engaging with the European market.

UK organisations providing AI-driven products or services in the EU must ensure compliance with the AI Act to avoid penalties, which could include fines or operational bans and reputational risk. Additionally, aligning with EU requirements will help maintain competitiveness and ensure products are attractive to EU consumers and businesses.

The AI Act aims to strengthen Europe’s position as a global hub of excellence in the field of AI. It sets a common framework for the use and supply of AI systems in the EU and establishes a risk-based classification of those AI systems. The final text was agreed upon in March 2024, formally approved by the European Council in May, and published in the EU Official Journal in July. The Act will come into force on 1 August 2024. The prohibitions on certain AI practices apply from February 2025, general-purpose AI providers need to be compliant by August 2025, and the Act will be fully applicable for providers (developers) and deployers (users) by August 2026.

Enforcement penalties can be severe. Fines reach €35 million or 7% of worldwide annual turnover (whichever is higher) for using prohibited AI systems, and up to €15 million or 3% for other violations — for example, failing to comply with the requirements for high-risk AI systems, or, for companies outside the EU, such as those in the UK, failing to appoint an authorised representative in the EU before making a high-risk AI system available.

Calling the legislation “really smart,” Anzelc points out that “it takes a broader view. How can we make sure that AI doesn’t fall into prohibited practices but leave the door open for a lot of innovation?”

The Act has been compared to the introduction of traffic lights in the early days of driving: declaring the rules of the road while aiming to accommodate more vehicles in the future. In much the same way that the EU’s GDPR inspired global data protection legislation, the AI Act is very likely to set an early high-water mark and standard for subsequent global AI regulations.

How are UK AI regulations different?

Just as there are multiple ways to direct and control traffic, there are multiple ways to regulate AI. Comprehensive, overarching rules are one way to keep traffic flowing in the right direction. As noted above, the EU intends to govern through a centralised oversight structure (the European AI Office, supported by the European AI Board) and one all-encompassing piece of regulation in the form of the AI Act.

In contrast, the UK has taken a more decentralised approach to AI regulation under the previous Conservative government.

Building on the National AI Strategy originally published in 2021, the UK government set out its ambitions for AI governance in a white paper entitled “A pro-innovation approach to AI regulation” in March 2023. The approach signalled an intention to diverge from the EU’s model in order to better support innovation, while still providing a framework to ensure risks are identified and addressed. It warned that a ‘heavy-handed and rigid approach’ would stifle innovation and slow AI adoption.


5 core principles of AI regulation in the UK

The UK aims to leverage the expertise of existing regulators who, the government argued, are best placed to understand the risks in their sectors and take a proportionate approach to regulating AI. The goal is to make responsible innovation easier and to attract AI businesses to the UK as a global trusted leader in AI. Think of it as a city council providing driver guidance for a specific region, like a steep grade warning in the mountains or a high-occupancy rule for a commuter-heavy freeway.

The approach sets out five principles for regulators to interpret and apply within their remit.

These are:

  • Safety, security and robustness
  • Appropriate transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress

The Department for Science, Innovation and Technology (DSIT) has subsequently released guidance for regulators, setting out considerations they may wish to incorporate when developing tools and guidance to implement the UK’s approach to AI regulation. DSIT has also started to establish a central function to support the UK’s sectoral regulators: fostering harmonisation across regulators, promoting information sharing, and analysing and reviewing potential gaps in existing regulatory powers and remits.

UK sectoral regulators are now focused on issuing AI-specific guidance to regulated entities in line with the government’s ongoing voluntary guidance and the principles-based approach.

The Artificial Intelligence Bill has passed through the House of Lords and has now been sent to the Commons. It was introduced as a Private Members’ Bill in November 2023 with the aim of putting the AI regulatory principles on a statutory footing and establishing a central UK AI Authority responsible for overseeing the regulatory approach to AI.

Data protection and AI in the UK

Because many of the data sets used to train AI models and the AI use cases that organisations aim to implement tend to involve personal data and the personalisation of services, the Information Commissioner’s Office (ICO), which is the supervisory authority for data protection in the UK, is poised to become a crucial regulator for AI governance.

Global data protection regulations, led by GDPR, have already sought to govern what is referred to as automated decision-making. Under GDPR, data subjects already have the right, with exceptions, not to be subject to a decision based solely on automated processing, including profiling, where that decision produces legal effects or similarly significantly affects the data subject. This introduces an interesting dynamic: organisations cannot look at compliance with local AI regulation in isolation. They also need to simultaneously adhere to local data protection legislation.

The ICO, which has a strong reputation for data protection guidance materials that are used globally, has also released ongoing AI and data protection guidance. The guidance focuses on topics such as fairness, accountability, lawfulness and transparency in AI.

A recent consultation focuses on how the accuracy principle of data protection applies to the outputs of generative AI models and the impact that the accuracy of training data has on those outputs. Inaccurate training data can lead to inaccurate results, with negative consequences for data subjects.

Inaccurate data would leave a developer or deployer in breach of the accuracy principle — fundamental to global data protection regimes — and at risk of enforcement from a well-established and very active regulator.

The way ahead with UK AI regulations

As highlighted in the King’s Speech on 17 July 2024, the election of a Labour government could herald a move towards a more centralised and prescriptive approach to AI regulation. The government’s commitment to fostering innovation while ensuring robust regulatory oversight was clearly articulated. For instance, the King stated, ‘We are dedicated to advancing our technological frontiers, ensuring that AI development aligns with national interests and public welfare.’ This underscores the strategic importance of AI in the UK’s future economic and social landscape.

Additionally, the Labour Party’s Manifesto set out a strategy that would support the AI sector and an intention to create a new Regulatory Innovation Office that will equip regulators to update regulation and support the development of new technologies more effectively. Watch this space.

Looking ahead, directors, CEOs, and legal, compliance and risk leaders should monitor regulatory developments in their specific line of work, such as cybersecurity or intellectual property. However, it’s also crucial to consider the five overarching principles as guidance on what UK regulators deem important and what they will be pursuing in well-governed AI operations.

Finally, in times of regulatory uncertainty and ambiguity, it is always prudent to base decisions on principles that will stand the test of time. As organisations consider new use cases for AI and the need to balance those initiatives with evolving regulatory risks, being guided by good principles will enable boardrooms and leadership teams to look back in the rear-view mirror with more comfort.

Are you interested in how the Diligent platform can bring your organisation to the next level of compliance?