Annex III §5(b)
EU AI Act compliance for lending and credit
Credit decisions are where the AI Act, the GDPR, and the prudential framework all meet. Creditworthiness assessment, credit scoring, BNPL underwriting, and automated loan decisions all sit inside Annex III §5(b) — and the technical file has to read coherently to three different supervisors at once.
Lending sits at an unusual intersection. Annex III §5(b) brings AI Act high-risk classification; Article 22 GDPR already gave borrowers a right against solely automated decisions with legal or similarly significant effect, and the case law on credit scoring has tightened (Schufa, ECJ C-634/21). On top of that, banks and licensed lenders carry CRD/CRR model-governance obligations and EBA guidelines on internal models that overlap heavily with Annex IV. BNPL providers usually escape the prudential overhead, but the revised Consumer Credit Directive (CCD2) that brought them inside consumer credit, plus the AI Act, plus Article 22, adds up to a comparable compliance load.
Annex III §5(b) covers AI systems intended to evaluate the creditworthiness of natural persons or to establish their credit score, with a narrow carve-out for systems used solely for fraud detection. The carve-out is read tightly — a fraud-detection model that also feeds back into the credit decision is in scope. The natural-persons restriction matters: pure SME credit-decision models targeting legal entities are out, though the line gets fuzzy when the SME has a personal guarantor. BNPL underwriting is squarely in. The interesting edge case is second-look models that take an applicant rejected by a primary lender and re-underwrite: second-look underwriting is still creditworthiness assessment, and still in scope, even though the model never sees the primary decision.
Bank-grade buyer questions follow CRD/CRR vocabulary: independent model validation, backtesting of model performance, concentration risk, and benchmarking. The AI Act overlay adds fairness across protected categories, treatment of vulnerable consumers, and the Article 86 right to an explanation for the borrower. BNPL platforms are getting a lighter touch, but their downstream bank partners are starting to push the AI Act questionnaire upstream because their own CRD compliance now references AI Act Annex IV evidence. The right answer is to ship the technical file once and let it satisfy both audiences.
The most common gap is model documentation that exists for CRD purposes but is not in Annex IV format. Most established lenders have validation reports, backtests, and an internal-model approval pack. The gap is structural — Annex IV asks for things the validation report does not split out: predetermined changes (Annex IV §1(f)), system architecture interaction (§1(c)), and a post-market monitoring plan that maps to Article 72. The Sprint produces an Annex IV-shaped overlay that references the existing validation pack rather than duplicating it. The second gap is the Article 22 GDPR meaningful-information layer: borrowers are entitled to a meaningful explanation, and most lenders ship a generic decline reason that does not survive Schufa-style challenge.
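One way to picture the overlay-that-references approach is as a cross-reference index: each Annex IV item points at an existing CRD artifact where one exists, and flags the genuinely new documents. A minimal sketch; the item labels, artifact names, and the `(new)` convention are illustrative assumptions, not the actual deliverable format.

```python
# Hypothetical Annex IV overlay: a cross-reference index that points each
# Annex IV item at an existing validation artifact instead of duplicating it.
# Items marked "(new)" have no CRD counterpart and must be written fresh.
OVERLAY = {
    "Annex IV 1(c) system architecture":    "architecture note (new)",
    "Annex IV 1(f) predetermined changes":  "change-management addendum (new)",
    "Annex IV 1(h) validation and testing": "independent validation report, sections 3-5",
    "Art. 72 post-market monitoring":       "risk-monitoring cadence doc (new)",
}

def missing_items(overlay):
    """Return Annex IV items still pointing at a not-yet-written artifact."""
    return [item for item, artifact in overlay.items() if "(new)" in artifact]

# Three of the four items above are structural gaps; only the validation
# item can lean on the existing CRD pack.
print(missing_items(OVERLAY))
```

The point of keeping the overlay as a thin index is that the validation pack stays the single source of truth; the technical file only has to stay consistent with it, not duplicate it.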
Lending evidence has a CRD spine you can build on.

- Validation reports → Annex IV §1(h).
- Internal-audit trail and four-eyes principle → Art. 12 logging and Art. 14 human-oversight evidence.
- Decision logs → already exist in your loan-origination system; need a redacted export with reason codes.
- Drift monitoring on default rates and approval rates by demographic slice → typically the gap; we wire it into your existing risk-monitoring cadence.

For BNPL, where the prudential spine does not exist, the gap is wider: we usually start by formalising the existing model-monitoring spreadsheet into a documented, automated pipeline before mapping it to the technical file.
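The drift-monitoring piece can start very small. A minimal sketch, assuming decisions arrive as `(slice, approved)` pairs; the field names and the five-percentage-point alert threshold are illustrative assumptions, not regulatory figures.

```python
# Sketch: approval-rate drift monitoring by demographic slice.
# Compares a reference window against the current window and flags
# slices whose approval rate moved beyond a threshold.
from collections import defaultdict

def approval_rates(decisions):
    """Aggregate approval rate per slice from (slice_label, approved) pairs."""
    counts = defaultdict(lambda: [0, 0])  # slice -> [approved, total]
    for slice_label, approved in decisions:
        counts[slice_label][0] += int(approved)
        counts[slice_label][1] += 1
    return {s: a / t for s, (a, t) in counts.items()}

def drift_alerts(reference, current, threshold=0.05):
    """Flag slices whose approval rate moved more than `threshold`
    (absolute) against the reference window."""
    alerts = {}
    for s, ref_rate in reference.items():
        cur_rate = current.get(s)
        if cur_rate is not None and abs(cur_rate - ref_rate) > threshold:
            alerts[s] = (ref_rate, cur_rate)
    return alerts

# Example: reference quarter vs. current month.
ref = approval_rates([("A", True), ("A", True), ("A", False),
                      ("B", True), ("B", False)])
cur = approval_rates([("A", True), ("A", False), ("A", False),
                      ("B", True), ("B", False)])
# Flags slice "A", whose approval rate fell from roughly 0.67 to 0.33.
print(drift_alerts(ref, cur))
```

Once this runs on a schedule against the loan-origination export, the output feeds directly into the Article 72 post-market monitoring evidence rather than living in a spreadsheet.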
Diagnostic for lending AI providers
We work alongside your model-risk team and, where it exists, your CRD/CRR documentation pack. The Diagnostic confirms Annex III §5(b) classification across up to three models, identifies the Article 22 GDPR explanation gap, and produces a one-page snapshot for your bank partners. Fixed price.