EU AI Act compliance, built by engineers

Be ready when your enterprise customer asks for your AI Act technical file.

Fixed-price, two weeks. Written from the artefacts already in your ML stack — not from a policy template.

Most ML teams already have what they need: validation runs in MLflow, slice metrics in W&B, traces in Langfuse. We translate it into the documentation the regulation expects.

Book a 30-min call · See a sample memo → (coming soon)

No sales pitch. We confirm classification and gaps on the call, free.

First, the regulation in 60 seconds

Three obligations. The rest is detail.

The EU AI Act doesn't regulate all AI. It singles out specific use cases — HR screening, credit scoring, school admissions, medical triage, biometric ID, a few others — as high-risk. If your product is one of them, the law gives you three jobs.

  1. 01 / Classify

    Confirm where you sit

    Show, in writing, whether each AI system you ship is high-risk under Annex III — and on which paragraph. Roughly half the work in a Diagnostic engagement.

  2. 02 / Document

    Keep an evidence pack

    Maintain a technical file describing the system, the data, what you tested, and what you found. The structure is fixed by Annex IV; the content already lives in your MLflow, W&B and Langfuse.

  3. 03 / Monitor

    Watch it after release

    Run a written post-market monitoring plan and report serious incidents on Article 73's clocks. If you have on-call, drift alarms and an incident process, this is mostly paperwork on top.

Fines reach €35M or 7 % of global turnover at the top end. A defensible technical file is what spares you from that exposure — and that file is the work we do. Most clients still arrive here for the procurement reason. That comes next.

What's already in your customer's RFP

Procurement teams are faster than the regulator.

Most providers think they have until 2 August 2026. They don't. EU enterprise buyers are already adding AI Act questions to RFPs and DDQs. The Digital Omnibus may delay enforcement, but it won't delay procurement. The artefacts you need are the same either way.

Representative DDQ §47.3 — what enterprise procurement is starting to ask

For any AI system used in the engagement, please provide:

  (a) Annex III classification under the EU AI Act
  (b) Annex IV technical documentation, redacted as necessary for confidentiality
  (c) post-market monitoring plan per Article 72
  (d) most recent serious incident report log

Lifted from a real Q1 2026 vendor questionnaire, paraphrased for confidentiality.

The regulator's calendar

When each piece becomes enforceable.

The August 2026 date is the one most providers know. The earlier dates apply too — prohibitions are already live, and the GPAI rules came into force last summer.

Enforcement dates

  1. Feb 2025

    Prohibitions live

  2. Aug 2025

    GPAI rules live

  3. Aug 2026

    Annex III high-risk obligations live (you are here)

  4. Aug 2027

    Annex I product-route live

Penalties under Art. 99

Up to €15M or 3 % of global turnover for high-risk obligations · Up to €35M or 7 % for prohibited AI.

Who we work with

Four product shapes. If yours is here, you're in scope.

These four use cases — HR screening, credit scoring, school admissions, and medical triage — concentrate the bulk of our work because the regulator listed them by name. If your product is one of them, the low-risk exemption rarely saves you, your enterprise buyers will start asking, and the documentation burden is real.

Outside these four? Health, biometric ID, critical infrastructure and access to public services are also high-risk under Annex III — same engagement pattern, slightly different scope analysis.

What if you do nothing

The penalty isn't a fine. It's a sales cycle.

Most teams brace for the regulator. The regulator is slow. Your enterprise customers' security and procurement teams are not — and they're already in motion. The realistic Q3 2026 picture, in order of likelihood:

  1. Most likely

    Your deals stall a quarter

    An AI/ML section appears in your prospect's vendor security review. You can't answer it. The deal goes back to procurement for a second look. You lose the quarter, sometimes the deal.

  2. Likely

    Your renewals get a new gate

    Existing customers add an AI Act addendum at renewal. "Provide your Annex IV technical documentation upon request." If you can't, it becomes a contractual exception, then a renewal risk.

  3. Possible

    An EU customer triggers an audit

    A complaint or incident inside a deployer's organisation pulls you into a national authority's review. You spend a quarter answering questions instead of shipping product.

  4. Last

    Then, eventually, fines

    Up to €15M or 3 % of global turnover for high-risk obligations, €35M or 7 % for prohibited AI. By the time this becomes the binding constraint, you've already lost more in stalled pipeline.

None of this requires the August 2026 date to land. Procurement teams set their own calendar — and they're 9–12 months ahead of the regulator.

Three ways to start

Pick the engagement that fits where you are.

Diagnostic

€3,500 · 3–5 days

Still asking — does this even apply to us?

After: you know if you're in scope, where the gaps are, and what the next 90 days should cost.

What we deliver

  • Annex III classification memo: a signed memo confirming whether each of up to three AI systems is high-risk, and on which Annex III paragraph
  • Annex IV gap snapshot: a one-page snapshot of where you stand on the three core obligations
  • Diagnostic readout: a 30-minute session with your leadership team, with questions answered live
Most popular

Readiness Sprint

€9,500 · 2–3 weeks

You know you're in scope. You need a credible plan you can show your board and your customers.

After: you can answer the AI/ML section of any enterprise vendor review — and you have a 26-week plan that says how you close the gaps.

What we deliver

  • Annex IV gap analysis: a full gap analysis of your evidence against what the regulation asks for, with severity ratings
  • Article 9 RMS template: a reusable framework for the risk reviews that regulators (and big-customer security teams) will ask for
  • Article 72 post-market monitoring plan: a written plan for monitoring the AI in production, and for what to do when it drifts
  • Roadmap to 2 Aug 2026: a 26-week roadmap with named owners and engineering effort, ending at the August 2026 deadline

Programme

from €30,000 · 8–12 weeks

You want to ship to enterprise EU buyers without procurement objecting.

After: the technical file, monitoring, and incident process are built into your engineering workflow — not a binder you update at audit time.

What we deliver

  • Sprint deliverables: everything in the Sprint, plus the items below
  • RMS + QMS operational: risk-management and quality systems wired into your existing engineering process
  • EU Declaration of Conformity: drafted and ready to sign
  • EU database registration: support registering the systems in the EU database
  • Article 73 incident workflow: incident reporting on the 15/10/2-day clocks, integrated with your on-call
  • Team training: engineering and PM training so the documentation stays current

Prices are floors, not ceilings. We work with two clients at a time to keep quality high — check the calendar before assuming availability.

How the work actually flows

Annex IV doesn't ask for policies. It asks for evidence.

Most of what the regulation asks for already exists somewhere in your stack — it just isn't shaped the way the regulator expects. Here's a representative slice of the mapping, and what changes hands during a Sprint.

  • Validation and testing logs, signed and dated

    MLflow / W&B run history, exported with checksums

    Annex IV §2(g)

  • Validation procedures, metrics by demographic subgroup

    Slice-level metrics in your training pipeline (fairlearn, evalml)

    Annex IV §1(h)

  • Post-market monitoring plan and operational evidence

    LangSmith / Langfuse traces, drift alarms, and incident logs

    Annex IV §9 / Art. 72

  • Serious incident reporting (15/10/2-day clocks)

    Wired into your alerting + on-call workflow, not a binder

    Art. 73
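The 15/10/2-day clocks in the last row can be encoded directly in alerting logic. A minimal sketch — the category-to-deadline mapping below is our simplification, so confirm the exact triggers against Article 73 before relying on it:

```python
# Sketch: turn the Article 73 reporting clocks into a deadline your
# alerting can track. The category-to-days mapping is a simplification
# of Art. 73; verify the exact triggers against the regulation.
from datetime import datetime, timedelta

REPORTING_DAYS = {
    "serious_incident": 15,        # default clock
    "death": 10,                   # shorter clock
    "widespread_infringement": 2,  # shortest clock
}

def reporting_deadline(category: str, became_aware: datetime) -> datetime:
    """Latest date to notify the national authority, from awareness."""
    return became_aware + timedelta(days=REPORTING_DAYS[category])

# Example: awareness on 1 Sep 2026, death category -> report by 11 Sep 2026.
print(reporting_deadline("death", datetime(2026, 9, 1)))
```

Wired into on-call, this is a field on the incident ticket, not a separate process.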

View the full Annex IV mapping (all 30 rows) → (coming soon — included in every Sprint)
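The first row — run history exported with checksums — reduces to hashing every exported file. A stdlib sketch; the directory layout is an assumption, and in practice the input tree would come from something like MLflow's artifact download or W&B's export API:

```python
# Sketch: fingerprint an exported run directory so the validation logs in
# the technical file are verifiably the ones produced at training time.
import hashlib
from pathlib import Path

def checksum_tree(root: str) -> dict[str, str]:
    """Return {relative_path: sha256_hex} for every file under root."""
    manifest = {}
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            manifest[str(path.relative_to(root))] = digest
    return manifest
```

Store the resulting manifest alongside the export; anyone holding the file later can re-hash the tree and confirm nothing changed since the run.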

We work the same way every time, and we keep the technical file current as your models change. The Sprint produces the first version of the file; the Programme builds the workflow that keeps it alive.
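The subgroup-metrics row in the mapping above is, at its core, grouping validation predictions by a demographic feature. In a real pipeline fairlearn's MetricFrame does this; here is a dependency-free sketch with illustrative data:

```python
# Sketch: accuracy per demographic subgroup, stdlib only. The data and
# group labels are illustrative, not from any real system.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Return {group: accuracy} for subgroup-level validation reporting."""
    hits, totals = defaultdict(int), defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        totals[g] += 1
        hits[g] += int(t == p)
    return {g: hits[g] / totals[g] for g in totals}

acc = accuracy_by_group(
    y_true=[1, 0, 1, 1, 0, 1],
    y_pred=[1, 0, 0, 1, 0, 1],
    groups=["A", "A", "B", "B", "A", "B"],  # e.g. an age-band feature
)
print(acc)  # one accuracy figure per subgroup
```

The same grouping applied to recall, false-positive rate, and so on is what the subgroup-metrics entry of the technical file reports.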

What makes us different

Three things you might be choosing between. Here's where we sit.

AI Act compliance is a translation problem. The regulation is written in legal language; the evidence lives in engineering systems. Most firms can do one side. We do both.

vs. Big 4 / management consulting

Capgemini · PwC · Deloitte AI risk practices

They have brand and scale. They are slow, expensive, and the senior team rarely touches the actual deliverable. We are faster, cheaper, and the person on the kickoff call is the person writing the technical file.

vs. AI governance platforms

Credo AI · Holistic AI · Trustible

They sell software with policy packs. The customer still has to do the work; the platform organises it. We do the work. We're complementary — many clients will use both.

vs. GDPR / DPO consultancies pivoting to AI Act

DPO Consulting · GDPR Local · VeraSafe · Dynamic Comply

They have the legal frame but not the ML chops. They produce policies, not test logs. We sit where the artefacts actually live — in MLflow, W&B, Langfuse — and write the technical file from there.

Why this exists

Most AI Act consulting today is GRC-shaped: questionnaires, policy templates, attestation workflows. That works for an article-by-article checklist.

But Annex IV doesn't ask for policies. It asks for evidence. Test logs by subgroup. Validation procedures with metrics. Predetermined changes. Evidence that the system actually works as documented.

Those artefacts already exist — in your MLflow runs, your W&B experiments, your Langfuse traces. The job isn't to invent compliance documentation from scratch. It's to translate what's already there into the form Annex IV asks for, and to keep doing it as your models change.

Questions we get

What people ask before they book.

What is the EU AI Act, in plain English?

It's a 2024 EU regulation (Regulation (EU) 2024/1689) that tiers AI by risk. Most AI is unregulated. A handful of uses are banned (social scoring, real-time biometric ID in public). A specific list — HR screening, credit scoring, school admissions, medical triage, biometric ID and a few others — is "high-risk" and carries documentation, monitoring and incident-reporting duties. That's the bucket nearly all our clients land in.

How do I know if my AI is "high-risk"?

Two questions, in order. (1) Does your AI fall inside one of the eight Annex III categories — HR, education, biometric ID, critical infrastructure, public services, law enforcement, migration, or justice? If no, you're not high-risk under Annex III. If yes, go to (2). (2) Does your specific use case only do narrow procedural work, prepare a human review, or detect deviation patterns? If so, the Article 6(3) exemption may apply — but most ranking and scoring AI does NOT qualify for it. We confirm in 30 minutes on the call.

What does an Annex IV technical file actually look like?

It's a documentation pack — typically 30–80 pages plus appendices — covering: system description, intended purpose, training and test data lineage, model architecture and validation results, risk management, post-market monitoring plan, and human-oversight design. You keep it on file and produce it on request from a national authority or an enterprise customer's procurement team. It's not a single PDF — it's a maintained set of documents pointing back to artefacts that already live in your stack.

Are we a "provider" or a "deployer"?

If you build the AI and put it on the market under your name (most SaaS, most AI startups), you're a provider. If you only use someone else's AI inside your operations (e.g. a recruiter using a third-party screening tool), you're a deployer — lighter obligations. The grey zone: if you fine-tune a foundation model and ship the result, you become a provider for that fine-tuned system.

What if we just… don't do anything?

In order of likelihood: deals stall in vendor security review, renewals add an AI-Act addendum you can't satisfy, and eventually a customer-side incident pulls you into a national-authority review. Fines come last and require an actual investigation. The realistic 12-month risk for a Series B/C AI vendor isn't a fine — it's losing a quarter of pipeline because procurement teams added a column you can't fill in.

We're not in the EU. Does the AI Act apply to us?

Probably yes. The Act applies to any provider whose AI system's output is used in the EU, regardless of where the provider is located (Regulation (EU) 2024/1689, Article 2). If you have EU customers, deployers, or users, you are in scope.

The Digital Omnibus might delay the deadline. Should we wait?

No. The artefacts you need are the same regardless of when they're enforced, and your enterprise customers are already asking for them in procurement. Deadlines slip; procurement requirements don't.

Can't a law firm do this?

Law firms write the legal interpretation. They don't write Annex IV §2(g) test logs from your MLflow runs. We do both. Most engagements need both layers — and we're happy to work alongside your existing counsel.

How is this different from Credo AI or Holistic AI?

Those are software platforms. We're consultants. Our deliverable is a defensible technical file, not a dashboard. We integrate with whatever tooling you already use.

What does a Sprint actually produce?

A signed Annex III classification memo per system, an Annex IV gap analysis with severity ratings, a 26-week remediation roadmap, and templates for your Art. 9 risk-management system and Art. 72 post-market monitoring plan. Turnaround is two to three weeks.

Do you handle GDPR too?

Where it overlaps with the AI Act — Article 22 GDPR, profiling, DPIAs for high-risk systems, data governance. For pure GDPR work without an AI dimension, we'll refer you.

Do we need a notified body?

Most Annex III systems use internal conformity assessment under Annex VI — no notified body required. Remote biometric ID is the main exception. We confirm this in the first call.

How quickly can you start?

Diagnostic engagements start within a week. Sprints within two weeks of signed SOW. We work with two clients at a time to keep quality high, so check the calendar before assuming availability.