EU AI Act Compliance for UK Organisations: The Definitive FAQ
Practitioner FAQ for UK GRC Managers on EU AI Act scope, risk classification, enforcement timeline, penalties and where to start.
The EU AI Act is now in force and its obligations reach beyond the EU’s borders. UK organisations placing AI systems on the EU market, or whose AI outputs are used within the EU, fall within scope regardless of where the provider is established.
This FAQ answers the questions UK GRC Managers and Practitioners are asking us most often: who is in scope, what risk classifications mean in practice, when key deadlines fall and where to start. It is written for compliance, risk and security leads in regulated mid-sized organisations.
## Does the EU AI Act apply to UK companies?
Yes, in many cases it does.
The EU AI Act applies extraterritorially. A UK organisation falls within scope if it places an AI system or general-purpose AI model on the EU market, puts an AI system into service in the EU, or produces AI outputs that are used within the EU. The Act applies regardless of where the provider is established. A UK SaaS vendor selling into Germany, a UK consultancy whose model outputs inform decisions made by an EU customer, and a UK manufacturer embedding AI in products sold across the EU are all potentially in scope. Brexit does not remove this exposure.
UK organisations should treat the EU AI Act as an active compliance obligation rather than a foreign regulation.

## What are the EU AI Act compliance requirements for UK organisations?

Obligations depend on two factors: your role under the Act and the risk classification of your AI system. Providers (those who develop and place systems on the market) carry the heaviest obligations. Deployers (those who use AI systems in a professional capacity) carry lighter but material duties.
On top of that, the Act applies a risk-based model with four tiers. High-risk systems require conformity assessment, technical documentation, risk management systems, data governance, human oversight, accuracy and cybersecurity controls, post-market monitoring and registration in the EU database. Limited-risk systems trigger transparency duties.
Prohibited practices, such as social scoring and certain biometric uses, should not be deployed at all. UK GRC Managers should map every in-scope system to its classification and role, then build the documentation and governance stack accordingly. Our AI Act Preparedness hub covers each obligation in detail.
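If you maintain this mapping in a register or lightweight tooling, a minimal Python sketch of the role-and-tier lookup might look like the following. The role names, tier names and obligation lists are illustrative summaries of the duties described above, not an exhaustive or authoritative statement of the Act.

```python
# Illustrative sketch only: obligation lists summarise the duties described
# above and are not an exhaustive or authoritative statement of the Act.

ROLE_TIER_OBLIGATIONS = {
    ("provider", "high"): [
        "conformity assessment",
        "technical documentation",
        "risk management system",
        "data governance",
        "human oversight design",
        "accuracy and cybersecurity controls",
        "post-market monitoring",
        "EU database registration",
    ],
    ("deployer", "high"): [
        "use in line with provider instructions",
        "human oversight in operation",
        "monitoring and incident reporting",
    ],
    ("provider", "limited"): ["transparency disclosures"],
    ("deployer", "limited"): ["transparency disclosures"],
    ("provider", "minimal"): [],
    ("deployer", "minimal"): [],
}

def obligations_for(role: str, tier: str) -> list[str]:
    """Look up the summarised obligation checklist for a role/tier pair."""
    if tier == "prohibited":
        raise ValueError("Prohibited practices must not be marketed or used at all.")
    return ROLE_TIER_OBLIGATIONS.get((role, tier), [])
```

The value of even a rough table like this is that it forces an explicit role and tier decision for every system before any documentation work begins.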
## What are the four risk classifications under the EU AI Act?
The Act sets out four tiers.
Prohibited AI practices are banned outright. They include manipulative techniques that cause significant harm, untargeted scraping of facial images to build facial recognition databases and certain real-time biometric identification in public spaces.
High-risk AI systems are permitted but heavily regulated; they cover AI used in critical infrastructure, education, employment, essential services, law enforcement, migration and the administration of justice, alongside AI as a safety component in regulated products.
Limited-risk AI systems, such as chatbots and generative content tools, must meet transparency obligations so users know they are interacting with AI.
Minimal-risk AI faces no specific obligations under the Act, though voluntary codes of conduct are encouraged.
Most enforcement attention, and most of your compliance budget, will sit with the high-risk tier.
## When do EU AI Act obligations take effect?
The Act entered into force on 1 August 2024 and phases in over several years. Prohibitions on unacceptable-risk AI practices and AI literacy obligations applied from 2 February 2025. Obligations on general-purpose AI models followed from 2 August 2025. The bulk of high-risk system obligations apply from 2 August 2026, with a further extension to 2 August 2027 for high-risk AI embedded in products already covered by EU product safety legislation.
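For deadline tracking, the phased dates above can be captured in a simple milestone table. A minimal sketch, assuming the commonly cited application dates; confirm against the Official Journal text before relying on them:

```python
from datetime import date

# Commonly cited application dates for the phases described above.
# Sketch only: verify against the Official Journal before relying on them.
MILESTONES = {
    date(2024, 8, 1): "Act entered into force",
    date(2025, 2, 2): "Prohibitions and AI literacy obligations apply",
    date(2025, 8, 2): "General-purpose AI model obligations apply",
    date(2026, 8, 2): "Bulk of high-risk system obligations apply",
    date(2027, 8, 2): "High-risk AI embedded in regulated products",
}

def upcoming(today: date) -> list[tuple[date, str]]:
    """Return milestones on or after a given date, soonest first."""
    return sorted((d, label) for d, label in MILESTONES.items() if d >= today)
```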
UK organisations should not wait for the August 2026 date. Building a defensible compliance position, including inventory, classification, documentation and conformity assessment readiness, takes 12 to 18 months for organisations with non-trivial AI estates. Starting now reduces retrofit costs and enforcement exposure.
## What are the penalties for non-compliance?
The Act sets tiered administrative fines that exceed GDPR maximums in absolute terms. Breaches of the prohibited-practices rules attract the highest fines, up to EUR 35 million or 7% of total worldwide annual turnover, whichever is higher.
Breaches of most other obligations, including high-risk system requirements, attract fines up to EUR 15 million or 3% of worldwide annual turnover. Supplying incorrect, incomplete or misleading information to authorities can be fined up to EUR 7.5 million or 1% of turnover.
SMEs and start-ups face the lower of the two figures rather than the higher.
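The tiered caps translate directly into a simple calculation. A minimal sketch of the ceilings described above; the breach labels are assumptions for illustration, and actual fines are set by the relevant authorities rather than by these caps alone:

```python
def max_fine_eur(breach: str, worldwide_turnover_eur: float, is_sme: bool = False) -> float:
    """Upper bound on the administrative fine tiers described above.

    Sketch only: breach labels are illustrative, and real penalties are
    determined by the relevant authorities, not by these caps alone.
    """
    tiers = {
        "prohibited_practice": (35_000_000, 0.07),
        "other_obligation": (15_000_000, 0.03),
        "misleading_information": (7_500_000, 0.01),
    }
    fixed_cap, turnover_share = tiers[breach]
    turnover_cap = worldwide_turnover_eur * turnover_share
    # SMEs and start-ups face the lower of the two figures; others the higher.
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# A provider with EUR 2bn turnover breaching the prohibited-practices rules:
# max_fine_eur("prohibited_practice", 2_000_000_000) -> 140,000,000.0 (7% of turnover)
```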
For UK organisations, enforcement risk also includes loss of EU market access, contractual liability to EU customers who require compliance attestations and reputational damage in regulated sectors.
## How does the EU AI Act interact with UK AI regulation?
The UK has taken a principles-based, sector-led approach rather than a single horizontal AI statute.
Existing UK regulators (the ICO, FCA, MHRA, Ofcom and others) apply their remits to AI within their sectors, guided by the government’s pro-innovation framework. This means UK organisations often face two overlapping regimes: domestic regulator expectations and, where they operate in or sell to the EU, the EU AI Act. The two are not equivalent.
EU AI Act obligations are more prescriptive, particularly on conformity assessment and technical documentation. For UK organisations already working toward ISO 42001, the management system foundations transfer; the EU AI Act adds prescriptive deliverables on top. Note that a compliance programme built only around UK regulator guidance will not satisfy EU AI Act requirements for in-scope systems.
GRC Managers should design a single AI governance framework that meets the higher bar of the two regimes for each system.
## Can we handle EU AI Act compliance in-house, or do we need external support?
It depends on the size and complexity of your AI estate and the maturity of your existing GRC function. Organisations with a small number of clearly classified AI systems, a strong second line and existing ISO 27001 or ISO 42001 foundations can often manage in-house with targeted legal input. Organisations with larger or unclear estates, high-risk classifications, or limited AI-specific risk and security expertise usually benefit from external support for the initial inventory, classification and gap assessment.
The pattern we see most often is in-house ownership of the ongoing governance programme combined with external help for the initial readiness assessment and high-risk conformity work. Whichever route you choose, accountability must sit clearly within the organisation.
## Where should we start?
Start with an AI system inventory. You cannot classify what you have not catalogued, and most organisations underestimate the number of AI systems already in use across shadow procurement, embedded vendor features and internal tooling.
Once the inventory exists, classify each system against the four risk tiers and confirm your role (provider, deployer, importer or distributor) for each. From that point you can scope the obligations that apply, identify documentation gaps and build a remediation plan against the August 2026 deadline.
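A basic inventory record can be as simple as the sketch below. The field names and triage helper are illustrative assumptions, not a standard schema; adapt them to whatever register or GRC tooling you already run.

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative record structure; field names are assumptions, not a standard schema.
@dataclass
class AISystemRecord:
    name: str
    business_owner: str
    source: str                      # "vendor", "embedded feature" or "internal"
    role: str                        # provider | deployer | importer | distributor
    risk_tier: Optional[str] = None  # prohibited | high | limited | minimal
    eu_exposure: bool = False        # placed on the EU market or outputs used in the EU
    documentation_gaps: list[str] = field(default_factory=list)

def triage(records: list[AISystemRecord]) -> list[AISystemRecord]:
    """EU-exposed systems still lacking a risk classification come first."""
    return [r for r in records if r.eu_exposure and r.risk_tier is None]
```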
Treat this as a 12-to-18-month programme rather than a one-off project. Our AI Act Preparedness hub sets out the full readiness pathway step by step.
Related resources: AI Act Preparedness hub
Ready to scope your exposure? Schedule a 30-minute EU AI Act Readiness Assessment with our team and leave with a prioritised view of where the material compliance work sits in your estate.
EU AI Act Readiness Assessment
Walk through your AI inventory, identify the systems most likely to fall in scope, and map a commercial readiness pathway against the August 2026 deadline.