
The EU AI Act Engineering Checklist: What Your Dev Team Needs to Do by August 2026
The EU AI Act's compliance deadline for high-risk systems is August 2, 2026. That's less than six months away. If your product uses AI in any meaningful way and you serve European users, this affects you.
Most AI Act content is written by lawyers for lawyers. This post is different. We're going to tell you what your engineering team actually needs to build.
Does This Apply to You?
First, the honest answer: most MVPs and internal tools probably fall under "minimal risk" and need almost nothing. But if your AI system does any of the following, you likely have obligations under the Act, either the full high-risk requirements or at least the transparency duties:
- Makes decisions about people (hiring, credit scoring, insurance)
- Interacts with users in a way they might mistake for human (chatbots without disclosure; this triggers transparency obligations rather than the full high-risk regime)
- Processes biometric data
- Operates in critical infrastructure, education, or law enforcement
If your AI system influences a decision that affects a person's rights, access, or opportunities — it's probably high-risk. If it generates marketing copy or summarizes documents — it's probably minimal risk.
The Engineering Checklist
1. Risk Classification — Make It a Product Decision
Before writing any compliance code, classify your AI features. This isn't a legal exercise — it's a product decision that shapes your architecture.
// types/ai-risk.ts
export enum AIRiskLevel {
  MINIMAL = "minimal",           // Spam filters, autocomplete, recommendations
  LIMITED = "limited",           // Chatbots, emotion recognition, deepfakes
  HIGH = "high",                 // Credit scoring, hiring tools, medical devices
  UNACCEPTABLE = "unacceptable", // Social scoring, real-time biometric surveillance
}

export interface AIFeature {
  id: string;
  name: string;
  description: string;
  riskLevel: AIRiskLevel;
  model: string;
  version: string;
  dataCategories: string[];
  humanOversightRequired: boolean;
  lastAssessmentDate: string;
}
Document every AI feature in your product with its risk classification. This isn't busywork — it's the foundation everything else builds on.
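A minimal sketch of what that registry could look like (the file path and both entries are illustrative; only the shape follows the AIFeature interface above):
// ai/feature-registry.ts (hypothetical path)
import { AIFeature, AIRiskLevel } from "../types/ai-risk";

// Every AI feature in the product, with its current classification.
// Reviewed whenever a model, prompt, or use case changes.
export const aiFeatures: AIFeature[] = [
  {
    id: "support-chatbot",
    name: "Support chatbot",
    description: "Answers customer questions; hands off to a human on request",
    riskLevel: AIRiskLevel.LIMITED,
    model: "hosted-llm",
    version: "2026-01",
    dataCategories: ["support tickets"],
    humanOversightRequired: false,
    lastAssessmentDate: "2026-02-10",
  },
  {
    id: "credit-scoring",
    name: "Credit score predictor",
    description: "Assists credit officers in loan decisions",
    riskLevel: AIRiskLevel.HIGH,
    model: "CreditScorePredictor",
    version: "2.1.0",
    dataCategories: ["financial history"],
    humanOversightRequired: true,
    lastAssessmentDate: "2026-02-01",
  },
];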
2. Transparency — Tell Users When They're Talking to AI
This one is straightforward and applies to almost everyone. If your AI system interacts with users, you must tell them they're dealing with AI.
// components/AIChatDisclosure.tsx
export function AIChatDisclosure() {
  return (
    <div className="text-sm text-muted-foreground border-b pb-2 mb-4">
      This conversation is powered by AI. Responses are generated
      automatically and may not always be accurate. A human agent
      is available upon request.
    </div>
  );
}
Not just chatbots. AI-generated content, automated decisions, synthetic media — all need disclosure. The rule is simple: if a person might reasonably believe they're interacting with a human or with human-created content, you must make the AI involvement clear.
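The same logic extends to generated content: keep the "AI-generated" flag attached to the output itself, so whatever renders it can always show a disclosure. A minimal sketch (the type and helper names are hypothetical):
// lib/ai-content.ts (hypothetical)
export interface GeneratedContent {
  body: string;
  aiGenerated: boolean;   // set at generation time, never stripped downstream
  modelVersion: string;
  generatedAt: string;
}

// Wrap raw model output so the disclosure flag travels with the content.
export function markAsAIGenerated(body: string, modelVersion: string): GeneratedContent {
  return {
    body,
    aiGenerated: true,
    modelVersion,
    generatedAt: new Date().toISOString(),
  };
}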
3. Model Cards — Documentation as Code
High-risk systems need technical documentation that describes how the model works, what data it was trained on, and what its known limitations are. The best way to maintain this is as code — not as a PDF that gets outdated in a week.
# ai/model-cards/credit-scoring-v2.yaml
model:
  name: "CreditScorePredictor"
  version: "2.1.0"
  type: "gradient-boosted classifier"
  provider: "internal"
  last_trained: "2026-01-15"
  last_evaluated: "2026-02-01"

purpose:
  description: "Predicts creditworthiness based on financial history"
  intended_use: "Assisting human credit officers in loan decisions"
  out_of_scope: "Autonomous loan approval without human review"

data:
  training_set: "anonymized_credit_data_v3"
  size: "2.4M records"
  demographics: "EU residents, 18-75, balanced gender/age"
  known_gaps: "Limited data for self-employed individuals"

performance:
  accuracy: 0.89
  false_positive_rate: 0.07
  false_negative_rate: 0.04
  fairness_audit_date: "2026-01-20"
  demographic_parity_gap: 0.03

limitations:
  - "Lower accuracy for applicants with <2 years of credit history"
  - "Does not account for non-traditional income sources"
  - "Performance degrades on applications from outside the EU"

human_oversight:
  required: true
  mechanism: "All predictions reviewed by credit officer before decision"
  override_rate: "12% of predictions overridden in last quarter"
Store these in your repo. Version them with git. Review them in PRs. This is your audit trail.
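You can also make an incomplete model card fail the build. A sketch of a CI check, assuming zod for schema validation and the yaml package for parsing (swap in whatever tooling you already use):
// scripts/validate-model-cards.ts (hypothetical)
import { readFileSync } from "node:fs";
import { parse } from "yaml";
import { z } from "zod";

// Minimal schema; extend it with whatever fields your audits require.
const ModelCardSchema = z.object({
  model: z.object({
    name: z.string(),
    version: z.string(),
    last_evaluated: z.string(),
  }),
  limitations: z.array(z.string()).min(1),
  human_oversight: z.object({ required: z.boolean() }),
});

const card = parse(readFileSync("ai/model-cards/credit-scoring-v2.yaml", "utf8"));
ModelCardSchema.parse(card); // throws, and fails the CI job, if the card is incomplete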
4. Logging — Build an Audit Trail
Every AI decision needs to be traceable. Not just "what did the model output" but "what input did it receive, what version of the model ran, and what did the human do with the result."
// lib/ai-audit-log.ts
import { prisma } from "./prisma"; // shared Prisma client instance (adjust the import to your setup)

interface AIAuditEntry {
  id: string;
  timestamp: string;
  featureId: string;
  modelVersion: string;
  input: Record<string, unknown>; // sanitized — no raw PII
  output: Record<string, unknown>;
  confidence: number;
  humanDecision?: string;
  humanOverride: boolean;
  userId?: string; // the operator, not the subject
}

export async function logAIDecision(entry: AIAuditEntry) {
  // Store in append-only table — never delete audit logs
  await prisma.aiAuditLog.create({
    data: {
      ...entry,
      input: JSON.stringify(entry.input),
      output: JSON.stringify(entry.output),
    },
  });
}
The audit log should contain enough to reconstruct the decision, but not raw personal data. Hash or anonymize user-identifying information. You're creating a compliance record, not a surveillance database.
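A minimal sketch of that sanitization step, assuming the salt comes from your secret store rather than being hard-coded:
// lib/ai-audit-sanitize.ts (hypothetical)
import { createHash } from "node:crypto";

// Replace a user identifier with a stable, non-reversible token so the
// decision stays traceable without storing raw personal data.
export function pseudonymize(identifier: string, salt: string): string {
  return createHash("sha256").update(`${salt}:${identifier}`).digest("hex");
}
Run identifying input fields through it before building the AIAuditEntry.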
5. Bias Testing — Automate It in CI
The AI Act requires that high-risk systems be tested for bias across protected characteristics. Don't do this manually. Automate it.
// tests/ai/bias-check.test.ts
describe("Credit scoring model bias check", () => {
const testCases = generateDemographicTestSet({
ages: [25, 35, 45, 55, 65],
genders: ["male", "female", "non-binary"],
income_brackets: ["low", "medium", "high"],
});
it("should have demographic parity gap < 5%", async () => {
const results = await runModelOnTestSet(testCases);
const parityGap = calculateDemographicParity(results);
expect(parityGap).toBeLessThan(0.05);
});
it("should have equal false positive rates across groups", async () => {
const results = await runModelOnTestSet(testCases);
const fprGap = calculateFPRGap(results);
expect(fprGap).toBeLessThan(0.03);
});
});
Run these in your CI pipeline. If bias thresholds are exceeded, the build fails. This isn't optional for high-risk systems — it's a legal requirement.
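For reference, the demographic parity gap in the test above is just the spread in positive-prediction rates across groups. A sketch of what a helper like calculateDemographicParity might compute (the result shape is an assumption):
// tests/ai/metrics.ts (hypothetical)
interface GroupedPrediction {
  group: string;      // e.g. "female:low"
  positive: boolean;  // did the model predict the favorable outcome?
}

// Gap between the highest and lowest positive-prediction rate across groups.
export function calculateDemographicParity(results: GroupedPrediction[]): number {
  const counts = new Map<string, { positive: number; total: number }>();
  for (const r of results) {
    const c = counts.get(r.group) ?? { positive: 0, total: 0 };
    c.positive += r.positive ? 1 : 0;
    c.total += 1;
    counts.set(r.group, c);
  }
  const rates = [...counts.values()].map((c) => c.positive / c.total);
  return Math.max(...rates) - Math.min(...rates);
}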
6. Human Oversight — Build the Override
High-risk systems require a human in the loop. This doesn't mean a human rubber-stamps every decision — it means a human can intervene, override, and shut down the system.
Build three things:
- Override UI — a way for operators to change the AI's decision
- Kill switch — a way to disable the AI feature entirely without redeploying
- Escalation path — when the model's confidence is below a threshold, route to a human automatically
// Feature flag for AI kill switch
const creditScoringEnabled = await featureFlag("ai.credit-scoring.enabled");

if (!creditScoringEnabled) {
  return { decision: "MANUAL_REVIEW", reason: "AI scoring disabled" };
}

const prediction = await model.predict(applicantData);

// Low confidence → automatic escalation
if (prediction.confidence < 0.7) {
  await escalateToHuman(prediction, applicantData);
  return { decision: "ESCALATED", reason: "Low confidence" };
}
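The override path should feed the same audit trail from step 4. A sketch of a handler, assuming logAIDecision is imported from lib/ai-audit-log.ts (the handler name and the way the original decision is linked back are hypothetical):
import { randomUUID } from "node:crypto";
import { logAIDecision } from "./lib/ai-audit-log"; // adjust the path to your layout

// Record a human override as its own audit entry, linked to the original decision.
async function recordOverride(originalEntryId: string, operatorId: string, newDecision: string) {
  await logAIDecision({
    id: randomUUID(),
    timestamp: new Date().toISOString(),
    featureId: "credit-scoring",
    modelVersion: "2.1.0",
    input: { overrides: originalEntryId }, // reference back to the AI decision being overridden
    output: { decision: newDecision },
    confidence: 1,                         // human decision, not a model score
    humanDecision: newDecision,
    humanOverride: true,
    userId: operatorId,                    // the operator, not the subject
  });
}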
7. Data Governance — Know What You're Training On
If you're fine-tuning models or using RAG with your own data, you need to document:
- Where the training data came from
- Whether consent was obtained for AI training specifically
- How data subjects can request deletion
- How you handle data from minors
This is where the AI Act and GDPR overlap. If you already have solid GDPR practices, you're halfway there.
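As with everything else in this list, this documentation works best as code in the repo. A sketch of a record type covering the four points above (the interface and field names are hypothetical):
// types/data-governance.ts (hypothetical)
export interface TrainingDataRecord {
  datasetId: string;
  source: string;                  // where the data came from
  collectedAt: string;
  consentScope: "ai_training" | "service_only" | "none"; // was AI training consented to?
  deletionProcess: string;         // how data subjects get their records removed
  containsMinorsData: boolean;
  minorsHandling?: string;         // required when the flag above is true
  gdprLegalBasis: string;          // ties the record back to your GDPR documentation
}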
The Compliance Timeline
| Deadline | Requirement | Engineering Work |
|---|---|---|
| Feb 2025 (done) | Prohibited practices banned | Remove any social scoring or manipulative AI |
| Aug 2025 (done) | GPAI providers must comply | If you build foundation models (you probably don't) |
| Aug 2026 | High-risk system compliance | Everything in this checklist |
| Aug 2027 | Full enforcement for all categories | Remaining system updates |
Why This Is Actually an Advantage
Here's the contrarian take: the AI Act is a competitive moat.
US companies shipping AI products into Europe will need to bolt compliance on after the fact. EU companies building it from day one will have:
- Better documentation (because it's required)
- More reliable models (because bias testing is automated)
- Higher user trust (because transparency is built in)
- Lower risk of fines (up to 7% of global revenue)
We've integrated AI compliance into our MVP development process. It adds about 2–3 days to a 4-week sprint. That's a small price for being able to ship legally in the world's largest regulated market.
The Bottom Line
The AI Act isn't going away. The deadline isn't moving. And "we'll figure it out later" is how you end up with a 7% revenue fine and a compliance team that hates your engineering team.
Start with risk classification. Build the audit trail. Automate bias testing. Document your models as code.
Six months is enough time if you start now. It's not enough if you start in July.
Need help building AI features that are EU AI Act compliant from day one? See our AI & LLM Integration services.

