EU AI Act and recruitment: what really changes in 2026 (complete guide for UK & EU employers)

Christophe Hébert · May 15, 2026

A recruiter who wants to use AI today to automate CV scoring, semantic matching or candidate sourcing still can under EU law, provided several conditions are met: recruitment falls squarely into the "high-risk" category. And the timeline has just shifted. Here's what you really need to know, jargon-free, including what it means for UK organisations.

The essentials in 30 seconds

  • Regulation (EU) 2024/1689 — the AI Act — classifies AI systems used for recruitment and the selection of people as high-risk (Annex III, point 4).
  • High-risk obligations were due to apply from 2 August 2026 but the political agreement on the Digital Omnibus (Council + Parliament, 7 May 2026) defers application to 2 December 2027.
  • Deferred is not cancelled — and the agreement is not yet published in the EU Official Journal. Until it is, the original 2 August 2026 date remains legally binding.
  • The software vendor is typically the provider; the recruitment agency or employer using it is the deployer. Both carry distinct obligations.
  • UK organisations are in scope if their AI outputs are used to recruit in the EU (extraterritorial reach, modelled on GDPR). UK domestic AI legislation is still in preparation.

1. What is the AI Act — and who is concerned in recruitment?

The AI Act is the EU's first horizontal AI framework. It follows a risk-based logic: prohibited practices, high-risk systems subject to strict obligations, and lighter transparency obligations for certain other systems.

It applies to providers placing AI systems on the EU market and to deployers established in the EU using them (art. 2). In an AI-assisted recruitment chain, that covers both the software vendor and the agency or HR team using it. GDPR continues to apply in parallel whenever personal data is processed (≈ always, in recruitment).

UK specifics: the EU AI Act has explicit extraterritorial reach. A UK-based recruitment platform whose outputs are used to evaluate EU candidates falls in scope, and so does a UK agency recruiting for EU subsidiaries. UK domestic AI legislation (a possible AI Bill in the 2026 King's Speech) is still being drafted; the UK currently follows a sector-led, principles-based approach under the ICO and sector regulators.

2. Why recruitment is classified "high-risk"

Annex III, point 4, explicitly covers AI systems intended:

  • for recruitment or selection of people — in particular targeted job advertising, analysing and filtering applications and evaluating candidates;
  • to make decisions on working conditions, promotion or termination, and to monitor/evaluate performance.

The rationale: such systems have a significant impact on people's career prospects and rights, and may reproduce historical biases (gender, age, origin, disability…).

3. The real timeline after the Digital Omnibus (May 2026)

| Provision | Application |
| --- | --- |
| Prohibited practices (art. 5) + AI literacy (art. 4) | In force since 2 February 2025 |
| General-purpose AI models (art. 51 et seq.) | Since 2 August 2025 |
| Annex III high-risk systems (including recruitment) | 2 August 2026 → deferred to 2 December 2027 (political agreement on Digital Omnibus, 7 May 2026; formal adoption pending) |
| High-risk linked to regulated products (Annex I) | 2 August 2028 |

In short: product obligations for high-risk recruitment software are pushed back by ~16 months, assuming the December 2027 date is confirmed by formal adoption.

What is NOT deferred

The ban on certain practices and the AI literacy duty (art. 4 & 5 — training and awareness of teams) remain in force today. The deferral is not a withdrawal or a rollback.

4. Provider or deployer: who carries what?

The AI Act distinguishes two roles (art. 3):

  • Provider — develops the AI system and places it on the market under its own name. Typically the software vendor. Carries the "product" obligations (see section 5).
  • Deployer — uses the system under its own authority. Typically the recruitment agency or HR function. Carries usage obligations: effective human oversight, transparency, informing workers and their representatives before deployment, informing affected candidates, lawful use.

When a vendor embeds a third-party AI model (e.g. a general-purpose model), it also acts as a downstream provider and must in turn rely on documentation supplied by the upstream model provider.
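The deployer's human-oversight duty can be made concrete in tooling. Here is a minimal sketch, using hypothetical names and structures (nothing below is prescribed by the Act), of a pattern where the AI output is only ever a recommendation, and the final decision, including any override of the AI, is attributed to a named recruiter and recorded:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch: the AI emits a recommendation; only a named
# recruiter can turn it into a decision, and overrides are recorded.

@dataclass
class AiRecommendation:
    candidate_id: str
    score: float     # e.g. a 0.0-1.0 matching score
    rationale: str   # human-readable explanation shown to the recruiter

@dataclass
class RecruiterDecision:
    candidate_id: str
    recruiter: str   # the accountable human, never "system"
    advance: bool
    overrode_ai: bool
    decided_at: str

def decide(rec: AiRecommendation, recruiter: str, advance: bool) -> RecruiterDecision:
    """The final decision is always the recruiter's; the AI score is advisory."""
    ai_suggests_advance = rec.score >= 0.5  # illustrative threshold
    return RecruiterDecision(
        candidate_id=rec.candidate_id,
        recruiter=recruiter,
        advance=advance,
        overrode_ai=(advance != ai_suggests_advance),
        decided_at=datetime.now(timezone.utc).isoformat(),
    )

rec = AiRecommendation("cand-42", score=0.31, rationale="few matching skills")
decision = decide(rec, recruiter="a.martin", advance=True)  # recruiter overrides
print(decision.overrode_ai)  # True: the override is explicit and traceable
```

The point of the pattern is that the system cannot reject or advance a candidate by itself, and that disagreement between recruiter and AI leaves a trace you can audit later.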

5. Seven key obligations for an AI recruitment tool

For high-risk systems, Chapter III imposes the following obligations on the provider, from the applicable date:

  1. Risk management system — continuous, documented, throughout the lifecycle (art. 9).
  2. Data governance and bias mitigation — relevant, representative training data; examination and mitigation of discriminatory bias (art. 10).
  3. Technical documentation prepared before market placement and kept up to date (art. 11, Annex IV).
  4. Automatic logging of events over the system's lifetime (art. 12).
  5. Transparency and instructions for use — clear capabilities, limitations, accuracy, human oversight measures (art. 13).
  6. Effective human oversight — AI assists, it does not replace; the human can override or ignore the output (art. 14).
  7. Accuracy, robustness and cybersecurity — including against data poisoning and adversarial attacks (art. 15).

Plus, on the provider side: a quality management system (art. 17), EU declaration of conformity and CE marking (art. 47-48), registration in the EU database (art. 49), post-market monitoring (art. 72) and serious-incident reporting (art. 73). Good news: for recruitment, conformity assessment is done through internal control (Annex VI) — no notified body required.
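To make the logging obligation concrete: below is a minimal, hypothetical sketch of an append-only event log for a candidate-scoring system, recording enough context (timestamp, model version, a hash of the inputs, the output) to reconstruct what happened later. The field names are illustrative assumptions, not terms from the Act:

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical sketch of automatic event logging: every scoring event is
# appended with a timestamp, model version, and an input hash, so the
# event is traceable without duplicating personal data in the log.

def log_scoring_event(log: list, model_version: str,
                      candidate_features: dict, score: float) -> dict:
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the inputs rather than storing them in the log itself.
        "input_hash": hashlib.sha256(
            json.dumps(candidate_features, sort_keys=True).encode()
        ).hexdigest(),
        "score": score,
    }
    log.append(event)  # in production: an append-only store with a retention policy
    return event

audit_log: list = []
log_scoring_event(audit_log, "matcher-1.4.2",
                  {"skills": ["python", "sql"], "years": 5}, 0.82)
print(len(audit_log))  # 1
```

Hashing the features (with keys sorted for determinism) lets you prove which inputs produced a given score without the log itself becoming another store of personal data.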

6. Prohibited AI practices — already in force

Regardless of the high-risk timeline, article 5 has prohibited the following since February 2025:

  • Emotion inference in a workplace or interview context (except medical/safety reasons) — this rules out emotion-analysis of video interviews or voice;
  • Biometric categorisation to infer origin, political opinions, trade union membership, beliefs, sex life or sexual orientation;
  • Social scoring and manipulative techniques.

A serious recruitment tool must be able to confirm it uses none of these practices. This is the first question to ask a vendor.

7. What should a UK or EU recruitment team do?

An actionable checklist, starting today:

  • Map the AI features used (filtering, scoring, matching, message generation).
  • Demand from the vendor: confirmation that no prohibited practice is used; instructions for use; a compliance roadmap.
  • Guarantee real human oversight: the final decision remains the recruiter's, traceable.
  • Inform your teams, employee representatives and affected candidates.
  • Coordinate with GDPR: DPIA for profiling, transparent information (art. 13/14 GDPR), retention periods, exercise of rights.
  • Train users: AI literacy is already a duty.

SME relief: the Digital Omnibus extends SME facilities (simplified documentation, modulated penalties) to organisations with fewer than 750 employees — most recruitment agencies and staffing firms qualify.

8. Marvin Recruiter's approach

At Marvin we treat recruitment AI as decision support, never a replacement for human judgement. Matching, candidate scoring and outreach automation are designed as assistants: the final decision stays with the recruiter. We do not deploy emotion recognition or biometric categorisation on applications. We've mapped the full set of our obligations under the AI Act and run an internal compliance programme against them.

We don't claim "100% compliance": the AI Act has no such formal product label. What matters is transparency, documentation you can hand to your DPO, legal and IT teams, and a product designed around human oversight.

Where this article comes from

Marvin Recruiter is an ATS that natively integrates AI and data analytics in the recruitment workflow. Building this product seriously forced us to understand the AI Act and GDPR in depth — because we're subject to them, and because we want our customers (agencies, staffing firms, in-house HR) to automate without crossing the line. This article synthesises that work: official sources, regulation, delegated acts and the Digital Omnibus.

Informative, not legal advice. It has not yet been reviewed by a specialist lawyer. If you plan to base a compliance decision on a specific point, validate it with your DPO or a law firm specialising in digital law. If you spot an inaccuracy, write to us; we keep this article updated.

FAQ

Is recruitment really a "high-risk" use of AI?

Yes. Annex III, point 4 explicitly covers systems intended for recruitment, selection, filtering of applications and evaluation of candidates.

Hasn't the deadline been deferred? Do I still need to act?

The Digital Omnibus (7 May 2026) defers high-risk obligations to 2 December 2027. But prohibited practices and AI literacy duties have been in force since February 2025. The topic is live today.

Who is responsible — our agency or the software vendor?

Both. The vendor is the provider (product obligations: documentation, risk management, oversight by design). The agency is the deployer (use obligations: effective oversight, candidate and worker information, GDPR compliance).

Do I need an audit or a notified body?

No. For recruitment (Annex III, point 4), conformity assessment is done via internal control by the provider — no notified body required.

Does this apply to UK organisations?

Yes, if AI outputs are used to recruit candidates in the EU or evaluate EU-based workers. The EU AI Act has extraterritorial reach modelled on GDPR. UK domestic AI legislation is still in preparation.

Penalties for non-compliance?

Up to €35M or 7% of worldwide turnover for prohibited practices; up to €15M or 3% for breaches of provider/deployer or transparency obligations.

Does the AI Act replace GDPR?

No. It complements GDPR. Candidate profiling also engages GDPR: lawful basis, transparency, DPIA, rights. See our GDPR & recruitment guide.


Article up to date as of 15 May 2026. Sources: Regulation (EU) 2024/1689; political agreement on the Digital Omnibus, 7 May 2026 (Council of the EU, European Parliament, European Commission); ICO guidance on recruitment data protection. Informative content, not legal advice.


Christophe Hébert

CEO and Founder

CEO and founder of Marvin. A former recruiter turned tech entrepreneur, he's building the operating system of modern recruitment.