Compliance checker
Find out if you're in scope of the EU AI Act
Five plain-English questions. We classify your system, list the obligations that apply to your role, and tell you where the evidence usually lives. No signup, no PII collected.
Informational only, not legal advice. The classification logic follows Regulation (EU) 2024/1689 as it stands; the verdict is a starting point for an actual classification memo, not a substitute for one.
Step 1 of 5
What kind of system are we looking at?
The AI Act only applies to AI systems. Software that follows fixed rules is out of scope.
What does this actually mean?
An AI system, in the regulation's words, is a machine-based system that operates with some degree of autonomy and infers from its inputs how to generate outputs (predictions, content, recommendations, decisions) that can influence physical or virtual environments. Pure rules-based software (a calculator, a sort algorithm, a deterministic eligibility check) is not AI.
Common questions about the EU AI Act
What we wish more people knew before they ran the checker. Five questions, plain English, no signup.
What is the EU AI Act?
The EU AI Act (Regulation (EU) 2024/1689) is the European Union's risk-based law on artificial intelligence. It tiers AI systems into four buckets: prohibited practices, high-risk systems with heavy documentation and oversight obligations, limited-risk systems with transparency notices, and minimal-risk systems with no specific obligations beyond AI literacy. Most production AI in commercial use lands in the minimal-risk bucket. The Act entered into force on 1 August 2024 and its obligations are phased in through 2027.
When does the EU AI Act apply to my business?
The Act applies if you develop, deploy, import, or distribute AI systems on the EU market, including non-EU companies whose AI affects people in the EU. The phased timeline: prohibited practices are banned from 2 February 2025, general-purpose AI obligations apply from 2 August 2025, and high-risk obligations from 2 August 2026. Transitional arrangements soften the cut-over: high-risk systems already on the market before 2 August 2026 are only caught if they are significantly modified afterwards, and general-purpose AI models already on the market by 2 August 2025 must comply by 2 August 2027.
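The phased dates above lend themselves to a simple lookup. The sketch below is an illustrative helper, not part of any official tooling; the milestone labels are our shorthand, not the regulation's wording.

```python
from datetime import date

# Key application dates from the phased timeline described above.
MILESTONES = [
    (date(2025, 2, 2), "prohibited practices banned"),
    (date(2025, 8, 2), "general-purpose AI obligations apply"),
    (date(2026, 8, 2), "high-risk obligations apply"),
]

def obligations_in_force(today: date) -> list[str]:
    """Return the obligation waves whose application date has passed."""
    return [label for d, label in MILESTONES if today >= d]

print(obligations_in_force(date(2025, 9, 1)))
# ['prohibited practices banned', 'general-purpose AI obligations apply']
```

Note that being in force is not the same as being in scope: transitional arrangements for systems already on the market still apply on top of these dates.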
What are the fines under the EU AI Act?
Three tiers, all measured against worldwide annual turnover. Prohibited practices: up to €35 million or 7%, whichever is higher. Most other breaches of the Act's obligations, including the high-risk ones: up to €15 million or 3%. Supplying incorrect or misleading information to authorities: up to €7.5 million or 1%. National market surveillance authorities impose the fines; the EU AI Office enforces the rules for general-purpose AI models. SMEs and start-ups face the lower of the two amounts; larger firms face the higher.
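The tier arithmetic above is a straightforward max/min over a fixed cap and a turnover percentage. The function below is a hypothetical sketch of that arithmetic, not a legal calculator; tier names are ours.

```python
# Hypothetical sketch of the fine ceilings described above.
# Amounts in EUR; `turnover` is worldwide annual turnover.
TIERS = {
    "prohibited_practice": (35_000_000, 0.07),  # up to EUR 35M or 7%
    "other_obligation":    (15_000_000, 0.03),  # up to EUR 15M or 3%
    "misleading_info":     (7_500_000, 0.01),   # up to EUR 7.5M or 1%
}

def max_fine(tier: str, turnover: float, is_sme: bool = False) -> float:
    """Maximum possible fine for a tier: larger firms face whichever
    amount is higher, SMEs and start-ups whichever is lower."""
    fixed_cap, pct = TIERS[tier]
    pick = min if is_sme else max
    return pick(fixed_cap, pct * turnover)

# A firm with EUR 1bn turnover breaching a prohibited practice:
# max(EUR 35M, 7% of 1bn) = EUR 70M.
print(max_fine("prohibited_practice", 1_000_000_000))  # 70000000.0
```

For the same breach, an SME with the same turnover would face the lower of the two amounts, i.e. the €35 million cap.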
Who is responsible under the EU AI Act, the AI provider or the company using the AI?
Both, with different obligations. The provider (the company that develops the AI and places it on the market) carries the technical-file, conformity-assessment, and risk-management obligations. The deployer (the company that uses the AI in its operations) carries transparency, human-oversight, and incident-reporting obligations. A company that substantially modifies a third-party AI system becomes the provider of the modified system. Importers and distributors carry lighter, mostly verification-focused obligations.
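The role split above is essentially a lookup table. Here is a hypothetical sketch of it; the obligation labels are illustrative shorthand, not Article titles.

```python
# Hypothetical mapping of AI Act roles to the obligation families
# described above; labels are our shorthand, not the regulation's.
ROLE_OBLIGATIONS = {
    "provider":    ["technical file", "conformity assessment", "risk management"],
    "deployer":    ["transparency", "human oversight", "incident reporting"],
    "importer":    ["verification"],
    "distributor": ["verification"],
}

def obligations_for(role: str, substantially_modified: bool = False) -> list[str]:
    """A company that substantially modifies a third-party system
    takes on the provider's obligations for the modified system."""
    if substantially_modified:
        role = "provider"
    return ROLE_OBLIGATIONS[role]

print(obligations_for("deployer", substantially_modified=True))
# ['technical file', 'conformity assessment', 'risk management']
```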
What is a high-risk AI system under the EU AI Act?
A high-risk AI system is one that is a safety component of a product covered by EU product-safety law (Annex I) or used in a context listed in Annex III of the Act: biometrics, critical infrastructure, education, employment, access to essential public or private services (including credit and life and health insurance), law enforcement, migration, or the administration of justice. High-risk classification triggers a technical file (Annex IV), a risk-management system (Article 9), post-market monitoring (Article 72), human oversight (Article 14), and conformity assessment. Most production AI in commercial use is not high-risk.
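A first-pass tiering along the lines above can be sketched as a membership check over the Annex III contexts. This is a deliberately naive illustration of how a checker like ours might triage, assuming our own context labels; real classification needs a case-by-case memo.

```python
# Hypothetical sketch of an Annex III context check, mirroring the
# list above. Context names are our shorthand, not the Annex's text.
ANNEX_III_CONTEXTS = {
    "biometrics",
    "critical_infrastructure",
    "education",
    "employment",
    "essential_services",  # incl. credit and life/health insurance
    "law_enforcement",
    "migration",
    "administration_of_justice",
}

def provisional_tier(context: str, prohibited_practice: bool = False) -> str:
    """Very rough first-pass tiering; a starting point, not legal advice."""
    if prohibited_practice:
        return "prohibited"
    if context in ANNEX_III_CONTEXTS:
        return "high-risk"  # triggers Annex IV file, Art. 9, 14, 72
    return "minimal-or-limited-risk"

print(provisional_tier("employment"))        # high-risk
print(provisional_tier("customer_support"))  # minimal-or-limited-risk
```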