EU AI Act compliance, built by engineers
Be ready when your enterprise customer asks for your AI Act technical file.
Fixed-price, two weeks. Written from the artefacts already in your ML stack — not from a policy template.
Most ML teams already have what they need: validation runs in MLflow, slice metrics in W&B, traces in Langfuse. We translate it into the documentation the regulation expects.
No sales pitch. We confirm classification and gaps on the call, free.
First, the regulation in 60 seconds
Three obligations. The rest is detail.
The EU AI Act doesn't regulate all AI. It singles out specific use cases — HR screening, credit scoring, school admissions, medical triage, biometric ID, a few others — as high-risk. If your product is one of them, the law gives you three jobs.
01 / Classify
Confirm where you sit
Show, in writing, whether each AI system you ship is high-risk under Annex III — and on which paragraph. Roughly half the work in a Diagnostic engagement.
02 / Document
Keep an evidence pack
Maintain a technical file describing the system, the data, what you tested, and what you found. The structure is fixed by Annex IV; the content already lives in your MLflow, W&B and Langfuse.
03 / Monitor
Watch it after release
Run a written post-market monitoring plan and report serious incidents on Article 73's clocks. If you have on-call, drift alarms and an incident process, this is mostly paperwork on top.
Fines reach €35M or 7 % of global turnover at the top end. A defensible technical file is what spares you from that exposure — and that file is the work we do. Most clients still arrive here for the procurement reason. That comes next.
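The monitoring obligation above is mostly process, but the Article 73 reporting clocks are concrete enough to encode. A minimal sketch, assuming our reading of the 15/10/2-day deadlines (15 days for a serious incident, 10 if it caused a death, 2 for a widespread infringement or critical-infrastructure disruption, counted from when the provider becomes aware) — the function and category names are ours, not the regulation's:

```python
from datetime import date, timedelta

# Assumed mapping of Article 73 incident categories to initial-report deadlines.
# The clock starts when the provider becomes aware of the incident.
REPORTING_DAYS = {
    "serious_incident": 15,
    "death": 10,
    "widespread_or_critical_infrastructure": 2,
}

def reporting_deadline(awareness_date: date, category: str) -> date:
    """Latest date the initial report can be filed for a given incident category."""
    return awareness_date + timedelta(days=REPORTING_DAYS[category])

# On-call becomes aware of a serious incident on 3 March 2026
print(reporting_deadline(date(2026, 3, 3), "serious_incident"))  # 2026-03-18
```

Wiring this into your paging tool as an SLA timer is the "integrated with your on-call" part of the Programme.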
What's already in your customer's RFP
Procurement teams are faster than the regulator.
Most providers think they have until 2 August 2026. They don't. EU enterprise buyers are already adding AI Act questions to RFPs and DDQs. The Digital Omnibus may delay enforcement, but it won't delay procurement. The artefacts you need are the same either way.
For any AI system used in the engagement, please provide:
- (a) Annex III classification under the EU AI Act
- (b) Annex IV technical documentation, redacted as necessary for confidentiality
- (c) post-market monitoring plan per Article 72
- (d) most recent serious incident report log
Lifted from a real Q1 2026 vendor questionnaire, paraphrased for confidentiality.
The regulator's calendar
When each piece becomes enforceable.
The August 2026 date is the one most providers know. The earlier dates apply too — prohibitions are already live, and the GPAI rules came into force last summer.
Enforcement dates
Feb 2025
Prohibitions live
Aug 2025
GPAI rules live
Aug 2026
Annex III high-risk obligations live (you are here)
Aug 2027
Annex I product-route live
Penalties under Art. 99
Up to €15M or 3 % of global turnover for high-risk obligations · Up to €35M or 7 % for prohibited AI.
Who we work with
Four product shapes. If yours is here, you're in scope.
These four use cases concentrate the bulk of our work because the regulator listed them by name. If your product is one of them, the low-risk exemption rarely saves you, your enterprise buyers will start asking, and the documentation burden is real.
HR tech
Candidate matching · sourcing · screening · performance reviews
Profiling kicks in the moment you rank or filter candidates. The low-risk exemption is rarely available because the output materially affects who gets hired.
In scope · classified · Annex III §4
Insurtech
Life & health pricing · underwriting · claims triage · emergency dispatch
Life and health are explicitly listed. Property and motor are typically out of scope unless the model touches health or life signals.
In scope · classified · Annex III §5(c)
Edtech
Admissions ranking · automated assessment · proctoring · course allocation
Anything that scores, ranks, or routes a learner — especially across institutional thresholds — sits inside the scope.
In scope · classified · Annex III §3
Lending & credit
Creditworthiness · credit scoring · BNPL underwriting · automated loan decisions
Even a model that only proposes a decision a human signs off counts. GDPR Article 22 already partly covered this; the AI Act adds the documentation layer.
In scope · classified · Annex III §5(b)
Outside these four? Health, biometric ID, critical infrastructure and access to public services are also high-risk under Annex III — same engagement pattern, slightly different scope analysis.
What if you do nothing
The penalty isn't a fine. It's a sales cycle.
Most teams brace for the regulator. The regulator is slow. Your enterprise customers' security and procurement teams are not — and they're already in motion. The realistic Q3 2026 picture, in order of likelihood:
Most likely
Your deals stall a quarter
An AI/ML section appears in your prospect's vendor security review. You can't answer it. The deal goes back to procurement for a second look. You lose the quarter, sometimes the deal.
Likely
Your renewals get a new gate
Existing customers add an AI Act addendum at renewal. "Provide your Annex IV technical documentation upon request." If you can't, it becomes a contractual exception, then a renewal risk.
Possible
An EU customer triggers an audit
A complaint or incident inside a deployer's organisation pulls you into a national authority's review. You spend a quarter answering questions instead of shipping product.
Last
Then, eventually, fines
Up to €15M or 3 % of global turnover for high-risk obligations, €35M or 7 % for prohibited AI. By the time this becomes the binding constraint, you've already lost more in stalled pipeline.
None of this requires the August 2026 date to land. Procurement teams set their own calendar — and they're 9–12 months ahead of the regulator.
Three ways to start
Pick the engagement that fits where you are.
Diagnostic
€3,500 · 3–5 days
Still asking — does this even apply to us?
After: you know if you're in scope, where the gaps are, and what the next 90 days should cost.
What we deliver
- Annex III classification memo: a signed memo confirming whether each of up to three AI systems is high-risk, and on which Annex III paragraph
- Annex IV gap snapshot: a one-page snapshot of where you stand on the three core obligations
- Diagnostic readout: a 30-minute readout to your leadership team, with questions answered live
Readiness Sprint
€9,500 · 2–3 weeks
You know you're in scope. You need a credible plan you can show your board and your customers.
After: you can answer the AI/ML section of any enterprise vendor review — and you have a 26-week plan that says how you close the gaps.
What we deliver
- Annex IV gap analysis: a full gap analysis of your evidence against what the regulation asks for, with severity ratings
- Article 9 RMS template: a reusable framework for the risk reviews regulators (and big-customer security teams) will ask for
- Article 72 post-market monitoring plan: a written plan for monitoring the AI in production and what to do when it drifts
- Roadmap to 2 Aug 2026: a 26-week roadmap with named owners and engineering effort, ending at the August 2026 deadline
Programme
from €30,000 · 8–12 weeks
You want to ship to enterprise EU buyers without procurement objecting.
After: the technical file, monitoring, and incident process are built into your engineering workflow — not a binder you update at audit time.
What we deliver
- Sprint deliverables: everything in the Readiness Sprint, plus the items below
- RMS + QMS operational: risk-management and quality systems wired into your existing engineering process
- EU Declaration of Conformity: drafted and ready to sign
- EU database registration: support registering the systems in the EU database
- Article 73 incident workflow: incident reporting on the 15/10/2-day clocks, integrated with your on-call
- Team training: engineering and PM training so the documentation stays current
Prices are floors, not ceilings. We work with two clients at a time to keep quality high — check the calendar before assuming availability.
How the work actually flows
Annex IV doesn't ask for policies. It asks for evidence.
Most of what the regulation asks for already exists somewhere in your stack — it just isn't shaped the way the regulator expects. Here's a representative slice of the mapping, and what changes hands during a Sprint.
| What the regulation asks for | Where you already have it | Cite |
| --- | --- | --- |
| Validation and testing logs, signed and dated | MLflow / W&B run history, exported with checksums | Annex IV §1(g) |
| Validation procedures, metrics by demographic subgroup | Slice-level metrics in your training pipeline (fairlearn, evalml) | Annex IV §1(h) |
| Post-market monitoring plan and operational evidence | LangSmith / Langfuse traces, drift alarms, and incident logs | Annex IV §9 / Art. 72 |
| Serious incident reporting (15/10/2-day clocks) | Wired into your alerting + on-call workflow, not a binder | Art. 73 |
View the full Annex IV mapping (all 30 rows) → (coming soon — included in every Sprint)
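"Exported with checksums" is the part teams tend to skip. The idea is small: hash every exported artefact so an auditor can verify nothing changed after the fact. A minimal stdlib sketch — the directory name and stubbed CSV are ours for illustration; in practice the export would come from something like `mlflow.search_runs(...).to_csv(...)`:

```python
import hashlib
import json
from datetime import date
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hex SHA-256 of a file, so an auditor can verify the export is untouched."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_manifest(export_dir: Path) -> dict:
    """Dated manifest covering every exported CSV artefact in a directory."""
    return {
        "exported_on": date.today().isoformat(),
        "files": {p.name: sha256_of(p) for p in sorted(export_dir.glob("*.csv"))},
    }

# Stand-in for a real run-history export (hypothetical directory and content)
export_dir = Path("annex_iv_export")
export_dir.mkdir(exist_ok=True)
(export_dir / "validation_runs.csv").write_text("run_id,accuracy\nabc123,0.94\n")

manifest = build_manifest(export_dir)
print(json.dumps(manifest, indent=2))
```

Sign the manifest (even a PGP signature over the JSON) and the §1(g) "signed and dated" requirement is met by your pipeline, not by hand.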
We work the same way every time, and we keep the technical file current as your models change. The Sprint produces the first version of the file; the Programme builds the workflow that keeps it alive.
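Mechanically, "metrics by demographic subgroup" is a group-by over your evaluation set. Libraries like fairlearn (`MetricFrame`) do this properly; here is the idea in a stdlib-only sketch, with toy data of our own invention:

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Accuracy computed per subgroup: the slice-level view Annex IV-style files cite."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        totals[g] += 1
        hits[g] += int(t == p)
    return {g: hits[g] / totals[g] for g in totals}

# Toy example: a screening model evaluated across two subgroups
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 0]
groups = ["A", "A", "A", "B", "B", "B"]
by_group = accuracy_by_group(y_true, y_pred, groups)
print(by_group)
```

If your eval pipeline already emits this per slice, the documentation step is an export, not new engineering.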
What makes us different
Three things you might be choosing between. Here's where we sit.
AI Act compliance is a translation problem. The regulation is written in legal language; the evidence lives in engineering systems. Most firms can do one side. We do both.
vs. Big 4 / management consulting
Capgemini · PwC · Deloitte AI risk practices
They have brand and scale. They are slow, expensive, and the senior team rarely touches the actual deliverable. We are faster, cheaper, and the person on the kickoff call is the person writing the technical file.
vs. AI governance platforms
Credo AI · Holistic AI · Trustible
They sell software with policy packs. The customer still has to do the work; the platform organises it. We do the work. We're complementary — many clients will use both.
vs. GDPR / DPO consultancies pivoting to AI Act
DPO Consulting · GDPR Local · VeraSafe · Dynamic Comply
They have the legal frame but not the ML chops. They produce policies, not test logs. We sit where the artefacts actually live — in MLflow, W&B, Langfuse — and write the technical file from there.
Why this exists
Most AI Act consulting today is GRC-shaped: questionnaires, policy templates, attestation workflows. That works for an article-by-article checklist.
But Annex IV doesn't ask for policies. It asks for evidence. Test logs by subgroup. Validation procedures with metrics. Predetermined changes. Signs the system actually works as documented.
Those artefacts already exist — in your MLflow runs, your W&B experiments, your Langfuse traces. The job isn't to invent compliance documentation from scratch. It's to translate what's already there into the form Annex IV asks for, and to keep doing it as your models change.
Questions we get
What people ask before they book.
What is the EU AI Act, in plain English?
How do I know if my AI is "high-risk"?
What does an Annex IV technical file actually look like?
Are we a "provider" or a "deployer"?
What if we just… don't do anything?
We're not in the EU. Does the AI Act apply to us?
The Digital Omnibus might delay the deadline. Should we wait?
Can't a law firm do this?
A law firm can write the legal analysis; it can't produce the Annex IV §2(g) test logs from your MLflow runs. We do both. Most engagements need both layers — and we're happy to work alongside your existing counsel.

How is this different from Credo AI or Holistic AI?

What does a Sprint actually produce?

An Annex III classification memo per system, an Annex IV gap analysis with severity ratings, a 26-week remediation roadmap, and templates for your Art. 9 risk-management system and Art. 72 post-market monitoring plan. Two-week turnaround.

Do you handle GDPR too?

Where it overlaps with the AI Act, yes: Article 22 GDPR, profiling, DPIAs for high-risk systems, data governance. For pure GDPR work without an AI dimension, we'll refer you.

Do we need a notified body?

Usually not: most Annex III systems self-assess under Annex VI — no notified body required. Remote biometric ID is the main exception. We confirm this in the first call.