Understanding the EU AI Act and its impact on the UK
The UK government estimates that artificial intelligence (AI) will be worth around £1 trillion to the UK economy by 2035. With its potential advances and efficiencies at the forefront of nearly every industry’s agenda, mapping out how that technology can be used is essential.
The European Union has been the first out of the blocks to put a legislative framework in place to cover AI, providing a common legislative playing field whilst ensuring that there remains a human element to otherwise artificial means. Whilst the new UK government prepares to launch its consultation on how it should legislate, we take a look at the EU AI Act, how it impacts UK businesses, and what is currently in place in the UK.
Who does the EU AI Act affect?
Similar to the GDPR, the EU AI Act has extraterritorial reach: it affects both those present in the EU and those based outside of the EU who look to trade in the EU and/or whose AI systems produce output that is used within the EU. Equally, if a user of an AI system is based in the EU, then they will have rights against the AI system provider, even if that provider is based outside of the EU.
In short: The EU AI Act has a very broad reach and will directly impact UK businesses.
If you operate within the UK but sell an AI-assisted tool into the EU and/or have users of that tool based in the EU, then the EU AI Act will impact you, and you will therefore need to be aware of the level of risk associated with your tool and the dates by which you need to be compliant.
What does the EU AI Act seek to do?
The EU AI Act implements a tiered, risk-based approach to regulating AI systems deployed for use in the EU. Each tier determines the rules that apply to an AI system, based on its societal impact, as follows:
Tier 1: Minimal/no risk
- Regulation: None.
- Examples: Spam filters, AI-enabled video games.
- Action: No real action required, although transparency is best practice.
Tier 2: Limited risk
- Regulation: Transparency obligations, including watermarking requirements, so that users know they are dealing with AI and can make genuinely informed decisions.
- Examples: Chatbots, audio/video used in deepfakes.
- Action: Make it transparent that the tool and/or its results are created by using AI.
Tier 3: High risk (i.e. where decisions and actions have a profound impact on people’s lives)
- Regulation: Quality, transparency, human supervision, safety obligations.
- Examples: Critical infrastructure, education, employment, law enforcement, democratic processes, driverless cars, surgery or medical diagnosis.
- Action: AI system providers will need to undertake conformity assessments, sign up to a register, and sign a declaration before the product can go on the market.
Tier 4: Unacceptable risk
- Regulation: Outright banned activity.
- Examples: AI that creates social scoring systems, subliminal or manipulative techniques that distort behaviour, compiling ID databases based on automated facial recognition.
- Action: Avoid operating in these areas and err on the side of caution for any solutions that come near them.
The EU AI Act also sets requirements for “general-purpose AI” (GPAI), covering tools like ChatGPT, DALL-E, etc. Here the provider of the GPAI must take a series of evidence-based and protective measures, such as: drawing up technical documentation, ensuring that information can be provided to downstream providers, respecting copyright law, and publishing a summary of the data used to train the model.
What are the immediate implications of the EU AI Act?
The EU AI Act came into force on 1 August 2024, starting the clock ticking on numerous provisions: the ban on prohibited practices takes effect from February 2025; codes of practice need to be finalised by May 2025; and all EU member states have until 2 August 2025 to appoint national authorities to manage their respective state’s supervision and enforcement of AI regulation, which is also the date by which most of the GPAI obligations come into effect. All remaining provisions of the EU AI Act will then come into full effect in August 2026.
To implement and enforce the new law, the EU AI Act establishes several bodies at the EU level: the AI Office, the European Artificial Intelligence Board, the Advisory Forum, and the Scientific Panel of Independent Experts. These bodies will support the national authorities created within each member state.
As with similar EU-wide legislation, the potential fines for corporate breaches are linked to the turnover of the organisation, with the most serious breaches attracting fines of up to 7% of annual turnover.
The EU AI Act has a sliding scale and timeline for when your business and solution need to be compliant, based on the tier of risk that you sit in. For those businesses selling into the EU and/or making their digital products available to EU citizens, action needs to be taken quickly, with new processes and checkpoints enacted on a business-wide level. The exact frameworks required are unclear; however, it is estimated that running such compliance processes could ultimately cost SMEs between 1% and 3% of their turnover.
It is a telling sign, however, that the largest players in the AI game have embraced the measures, with OpenAI having signed up to the three core commitments of the EU AI Pact – a voluntary initiative to align organisations with the EU AI Act ahead of its full implementation.
What’s in place in the UK now?
There are currently no express general legislative acts in the UK that address AI. The topic was under discussion under the UK’s previous Conservative government; however, both the AI (Regulation) Bill and the AI (Employment and Regulation) Bill have stalled due to the general election and resultant change in government. A private members’ bill aimed at protecting the public sector, the Public Authority Algorithmic and Automated Decision-Making Bill, was introduced in the House of Lords in September.
In its manifesto for the general election, Labour stated that it would introduce “binding regulation on the handful of companies developing the most powerful AI models”. At the following King’s Speech on 17 July 2024, amongst the 40 or so new bills to be submitted by the new government, it came as a surprise that an express UK AI Bill was not mentioned; however, the King declared that the Labour government would “seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models”.
Though not expressly covering AI, the King’s Speech did reference the introduction of the Digital Information and Smart Data Bill, the Cyber Security and Resilience Bill, and the Product Safety and Metrology Bill, each of which will inevitably have to deal with aspects of AI, and further upcoming bills are also expected to touch on AI when dealing with their respective sectors.
In September, the UK signed the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, its first legally binding treaty governing the safe use of AI.
What’s coming in the UK AI Act?
We can’t know for sure just yet, but it is predicted that the UK AI Act will be significantly more targeted than the general framework launched by the EU AI Act.
That said, Prime Minister Keir Starmer has referenced the need for a regulator, and with Labour expressly noting its desire to grow the AI sector, it would be wise to align with (and not diverge from) the EU AI Act. Striking the right balance between innovation and restriction will be key.
In November 2023, the UK Conservative government founded the AI Safety Institute (AISI) as a research organisation within the Department for Science, Innovation and Technology (DSIT), with its key focus being ‘systemic AI safety’, and it is widely reported that under Labour’s plans the AISI will be converted into an independent body to set the standards for AI development. The ICO will also continue to monitor AI in line with data protection laws.
On 26 July 2024, DSIT launched a consultation on the AI Opportunities Action Plan, looking at how the government can best utilise AI. Further, on 6 November 2024, DSIT launched a consultation on its AI Management Essentials (“AIME”) framework, a self-assessment tool for organisations to create a baseline for their AI management. The consultation is due to close in January 2025, and whilst AIME will not become a legal requirement, it will set out the basics for businesses to comply with – similar to how the Cyber Essentials scheme set out minimum standards for cybersecurity back in 2014.
On 17 December 2024, the Government published a list of the AI technologies that it uses within its processes, making it clear that transparency will be an important aspect of the legislation to come.
As with recent legislation in this area, however, a deep dive into your own processes, recording your digital assets, and reviewing your contract provisions will be essential to staying at the forefront of compliance.
For further information, please email Mark Hughes or Philip Bowers or call 0151 906 1000.