It has been a year since the first wave of the EU AI Act started biting. The banned-practices list went live in February 2025, the general-purpose AI rules followed in August 2025, and the heavyweight high-risk obligations are scheduled for August 2026, with embedded medical-device AI getting a longer runway under Article 6(1). I run engineering across Eir Tec, MediVox and a handful of Skjld Labs projects, all of which sit squarely in the high-risk healthcare bucket. This post is what the rules have actually meant for us, not what the slide decks promised.
This is not a legal explainer. Talk to a notified body and a regulatory lawyer for the binding version. What follows is the operational reality.
The phased timeline, briefly
By most published guidance, the dates that matter are:
- 2 February 2025: prohibited practices and AI literacy obligations apply.
- 2 August 2025: governance rules and obligations on general-purpose AI model providers apply.
- 2 August 2026: the bulk of high-risk obligations apply for stand-alone systems.
- 2 August 2027: high-risk AI embedded in products that already require third-party conformity assessment, including most Class IIb/III MDR devices and Class C/D IVDs, falls under the full regime.
There has also been Council-level discussion in early 2026 about pushing some of these dates further. I am planning against the original calendar and treating any extension as upside.
What "high-risk" means when you actually build clinical AI
If your model influences a clinical decision, scores a patient, triages a referral, transcribes a consultation into the record, or behaves as a safety component of a medical device, you are almost certainly inside Annex III or Article 6(1). For Eir Tec's clinical decision-support work and MediVox's clinical voice pipeline, that question was answered before we wrote the first line of code.
The practical consequence: you inherit the AI Act's documentation regime on top of MDR or IVDR, not instead of it. The good news is that for Class IIb/III and Class C/D devices, the same notified body can cover both conformity assessments. The bad news is that the underlying evidence base is now larger and more opinionated than it was eighteen months ago.
The documentation that suddenly matters
The shift I have felt most is the reweighting of work inside the team. A year ago, model performance dominated review meetings. Now an equal share of time goes to:
- Data governance — provenance, consent basis, representativeness, bias testing, version-pinned training and evaluation sets.
- Transparency — what the system does, what it doesn't, intended purpose, known limitations, residual risks, in language a clinician will actually read.
- Human oversight — not a checkbox, but a documented protocol for when a clinician must override, how the override is logged, and how those overrides feed back into monitoring (a sketch of the override record follows this list).
- Post-market monitoring — drift, incident reporting, performance regression, retraining triggers, the whole loop (the drift check is sketched after this list).
- Conformity assessment — the technical file, the quality management system, and the paper trail that lets a notified body sign off.
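To make the human-oversight point concrete, here is a minimal sketch of what an override record can look like. Everything in it is illustrative, not our actual schema: the class name, the reason codes, and the sink are all stand-ins. The point is that an override is a structured, queryable event, not a free-text note.

```python
from dataclasses import dataclass, asdict
from typing import TextIO
import json

@dataclass(frozen=True)
class OverrideEvent:
    """One clinician override of a model recommendation (illustrative schema)."""
    case_id: str           # pseudonymous case reference
    model_version: str     # the exact model version that was overridden
    model_output: str      # what the system recommended
    clinician_action: str  # what the clinician actually did
    reason_code: str       # controlled vocabulary, e.g. "implausible_dose"
    recorded_at: str       # UTC timestamp, set by the caller at write time

def log_override(event: OverrideEvent, sink: TextIO) -> None:
    """Append the override to an append-only sink as one JSON line.

    The same records drive monitoring downstream: a rising override
    rate against one model version is a rollback or retraining signal.
    """
    sink.write(json.dumps(asdict(event)) + "\n")
```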
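For the post-market loop, the drift half is the most mechanical piece. One common choice (a convention, not something the Act prescribes) is the population stability index over a model score or key feature. A hedged sketch, assuming scores arrive as NumPy arrays:

```python
import numpy as np

def population_stability_index(reference: np.ndarray,
                               live: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a reference sample and a live sample.

    A common rule of thumb (ours to choose, not the Act's): below 0.1
    stable, 0.1 to 0.25 watch, above 0.25 investigate and consider
    retraining or rollback.
    """
    # Bin edges come from the reference distribution's quantiles.
    edges = np.quantile(reference, np.linspace(0.0, 1.0, bins + 1))
    # Clamp live values into range so out-of-range scores still count.
    live = np.clip(live, edges[0], edges[-1])
    ref_counts, _ = np.histogram(reference, bins=edges)
    live_counts, _ = np.histogram(live, bins=edges)
    # Epsilon keeps empty bins from blowing up the log term.
    ref_pct = np.clip(ref_counts / len(reference), 1e-6, None)
    live_pct = np.clip(live_counts / len(live), 1e-6, None)
    return float(np.sum((live_pct - ref_pct) * np.log(live_pct / ref_pct)))
```

The design point is what happens at the threshold: a breach should open an incident for a human to triage, not trigger a silent retrain.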
None of this is new to medical-device people. What is new is that AI-specific obligations are now welded onto the same chassis.
Living in two jurisdictions at once
Eir Tec is a UK company. MediVox is Norwegian. The UK sits outside the EU AI Act: it runs a pro-innovation, sector-led framework with the MHRA front and centre for healthcare AI, and a more comprehensive AI Bill has been signposted for 2026 but is not yet law. Norway, via the EEA, is set to adopt the AI Act.
In practice, we build to the stricter standard and document once. UK GDPR and the Data Protection Act 2018 cover the personal-data layer on the British side; GDPR covers it on the EEA side; the clinical layer sits under MDR or UK MDR 2002 as amended. Trying to maintain two parallel evidence bases is a tax nobody has time for.
What we actually changed in the platform
Some of these patterns we adopted before the Act forced our hand, because audit-by-default is just good architecture for healthcare. Others were a direct response to the regime taking shape:
- Model cards for every shipped model, regenerated on every release, version-pinned to the eval suite that produced them (sketched as a data structure after this list).
- Eval suites as first-class artefacts, with subgroup performance, calibration, and known-failure cases tracked over time.
- Per-inference audit records — input hash, model version, prompt or feature vector, output, confidence, downstream clinician action — written immutably and queryable (a minimal shape is sketched after this list).
- Explainability surfaced in the UI, not buried in a research notebook. If a clinician can't see why the system said what it said, they shouldn't be asked to trust it.
- Single-tenant data isolation in Azure UK South for the UK estate, with no cross-tenant inference paths.
- WorkOS for clinician identity and SSO, so authentication, session, and access events live in one auditable place.
- Anonymised retention by default, with identifiable data living only as long as the clinical workflow demands.
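The model-card bullet is easiest to show as a data structure. This is a hedged sketch rather than our production schema; field names like eval_suite_version and calibration_ece are illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelCard:
    """Release-time model card, pinned to the eval run that produced it."""
    model_name: str
    model_version: str
    eval_suite_version: str  # exact commit/tag of the eval suite
    intended_purpose: str
    known_limitations: list[str]
    subgroup_metrics: dict[str, dict[str, float]]  # e.g. {"age_65_plus": {"auroc": 0.87}}
    calibration_ece: float   # expected calibration error on the held-out set

def render_card(card: ModelCard) -> str:
    """Render the human-readable card. Regenerated on every release and
    never hand-edited, so the card cannot drift from the eval evidence."""
    lines = [
        f"{card.model_name} {card.model_version}",
        f"Evaluated with suite {card.eval_suite_version}",
        f"Intended purpose: {card.intended_purpose}",
        f"Calibration (ECE): {card.calibration_ece:.3f}",
    ]
    lines += [f"Limitation: {lim}" for lim in card.known_limitations]
    for group, metrics in card.subgroup_metrics.items():
        lines += [f"{group} {name}: {value:.3f}" for name, value in metrics.items()]
    return "\n".join(lines)
```

The pinning is the whole point: the card is an output of the release pipeline, keyed to the eval suite version, so it can never describe a model the suite did not actually score.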
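The per-inference audit records are the piece people most often ask about. Here is a minimal sketch of the shape, assuming an append-only JSONL file with hash chaining so silent edits are detectable; a production system would put this behind WORM or ledger storage rather than a local file:

```python
import hashlib
import json
from datetime import datetime, timezone

GENESIS_HASH = "0" * 64  # seed for the first record in a fresh log

def append_audit_record(log_path: str, record: dict, prev_hash: str) -> str:
    """Append one per-inference audit record, hash-chained to the previous.

    `record` carries the fields from the list above: input hash, model
    version, prompt or feature vector, output, confidence, and the
    downstream clinician action once it is known.
    """
    entry = {
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,
        **record,
    }
    body = json.dumps(entry, sort_keys=True)
    entry_hash = hashlib.sha256(body.encode()).hexdigest()
    with open(log_path, "a") as f:
        f.write(json.dumps({"hash": entry_hash, **entry}, sort_keys=True) + "\n")
    return entry_hash  # feed this into the next append
```

Verification is a single pass recomputing hashes; the chain detects tampering rather than preventing it, which is why the storage layer matters as much as the format.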
An honest take
The rules themselves are reasonable. Most of them describe what a serious clinical-AI team should already be doing. The volume of paperwork required to prove you are doing them is another matter. I have seen small teams, with genuinely good products, eat six to nine months of roadmap on documentation work they had not budgeted for.
The teams that designed for audit on day one are eating well right now. The teams that did not are quietly rebuilding under deadline pressure.
One year in, my view is unchanged from when the trilogue closed: the moat is no longer the model. Open weights are catching the closed frontier on most clinical sub-tasks, and the gap will keep narrowing. The moat is whether your platform can be audited on a Tuesday afternoon by someone who has never seen it before, and whether the answer is yes without a two-week scramble.
That is what we have been building toward. By August 2026, we will find out who else has.