On 2 August 2026, the quiet half of the EU AI Act wakes up. National market surveillance authorities start enforcing Article 4 — the AI literacy obligation — and the early reading of the rulebook is clear: most organisations are not ready.
A 2026 readiness analysis cited across the compliance industry found that 78% of enterprises are unprepared for their EU AI Act obligations, and the single most commonly missed duty is Article 4. Only 32% of employees say they have received any formal AI training at work. Meanwhile, Gartner research shows 68% of employees already use AI tools without IT approval.
That is the gap Article 4 is designed to close. It is also the gap most companies will walk into the enforcement window still carrying.
What Does Article 4 of the EU AI Act Require?
Article 4 of the EU AI Act requires providers and deployers of AI systems to ensure a "sufficient level of AI literacy" among staff and anyone operating AI on their behalf. It has applied since 2 February 2025 and becomes enforceable on 2 August 2026 — with penalties of up to €7.5 million or 1% of global annual turnover.
The legal text is deliberately short. Article 4 states that "providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training, and the context the AI systems are to be used in."
AI literacy itself is defined in Article 3(56) as the "skills, knowledge and understanding" that let providers, deployers, and affected persons "make an informed deployment of AI systems, as well as gain awareness about the opportunities and risks of AI and possible harm it can cause."
Two dates matter:
| Date | What happens |
|---|---|
| 2 February 2025 | Article 4 becomes applicable — the obligation is already in force |
| 2 August 2026 | National market surveillance authorities begin supervising and enforcing it |
| 2 August 2026 | High-risk AI system rules originally due on this date have been delayed to December 2027; Article 4 was explicitly not delayed |
The high-risk delay is the reason many teams assume they have time. They do not. Article 4 requires no supplementary technical guidance before it can be enforced: if staff cannot demonstrate informed, risk-aware use of the AI they operate, you are non-compliant.
Who Does the AI Literacy Obligation Apply To?
Article 4 applies to every provider and deployer of an AI system used in the EU — regardless of risk class, industry, or company size. That includes non-EU companies whose AI outputs are used inside the EU, and it explicitly extends to contractors, vendors, and anyone else operating AI on your behalf.
This is the scope that trips most organisations up. Article 4 is not limited to high-risk AI systems. As Cranium's compliance analysis puts it: even if your organisation only uses minimal-risk AI — chatbots, content generators, translation tools, scheduling assistants — Article 4 still applies.
In practice, that means the obligation covers:
- Providers — anyone who develops or places an AI system on the EU market (builders of internal AI tools count).
- Deployers — anyone using an AI system under their own authority, which includes almost every company using ChatGPT, Microsoft Copilot, Gemini, Claude, or any AI feature embedded in SaaS.
- Staff and "other persons" — employees, contractors, consultants, partners, and service providers acting on your behalf.
- Extraterritorial reach — like the GDPR, the AI Act reaches non-EU companies whose AI output is used inside the EU.
If you've ever heard a leader say "we don't do AI, we just use ChatGPT," they are describing a deployer. Deployers have Article 4 obligations too.
What Counts as "Sufficient" AI Literacy?
"Sufficient" under Article 4 is proportional, not uniform. The EU AI Office expects literacy to match each person's role, experience, and the context they use AI in — so a compliance programme needs tiered content, not a single corporate video.
The European Commission's guidance is explicit: there is no one-size-fits-all standard and no mandated curriculum. Instead, organisations have to consider four things when designing training — general AI understanding, whether the organisation develops or only uses AI, the risk level of the systems deployed, and the staff's existing technical knowledge.
That maps cleanly to a role-based competency model.
| Audience | Core AI literacy focus |
|---|---|
| Executives and board | AI strategy, regulatory landscape, governance, reputational risk, oversight duties |
| People managers | Use-case approval, exception handling, coaching staff, flagging misuse |
| Front-line users | Responsible prompting, evaluating outputs, data handling, recognising hallucinations, when to escalate |
| Technical teams (data, engineering, security) | Model behaviour, bias, evaluation, red-teaming, integration risk, logging and audit |
| Procurement and legal | Vendor due diligence, DPIAs, model cards, contractual safeguards, AI Act classification |
The US Department of Labor's 2026 AI literacy framework — released in February 2026 — reaches the same conclusion from a different direction: fundamentals, automation vs augmentation, generative AI, data ethics, algorithmic bias, and practical prompting all belong in the curriculum, but weighting depends on role.
Crucially, the Commission also flags that responsible use is a core component, not an optional module. That covers data boundaries, output accountability, and the ethical limits of letting AI act on someone's behalf — skills most workforces have never been taught.
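To make the tiering concrete in a systems sense, here is a minimal sketch of a role-to-curriculum map in Python. The role keys and module lists simply mirror the table above; they are illustrative, not a mandated taxonomy, and a real programme would source them from an HR system or LMS.

```python
# Illustrative role-tier curriculum map for an Article 4 programme.
# Role names and modules mirror the competency table above; the AI Act
# mandates no taxonomy -- "sufficient" literacy is proportional to role.

CURRICULUM: dict[str, list[str]] = {
    "executive": [
        "AI strategy and regulatory landscape",
        "Governance, oversight duties, reputational risk",
    ],
    "people_manager": [
        "Use-case approval and exception handling",
        "Coaching staff and flagging misuse",
    ],
    "front_line": [
        "Responsible prompting and data handling",
        "Evaluating outputs and recognising hallucinations",
        "When to escalate",
    ],
    "technical": [
        "Model behaviour, bias, evaluation, red-teaming",
        "Integration risk, logging and audit",
    ],
    "procurement_legal": [
        "Vendor due diligence, DPIAs, model cards",
        "Contractual safeguards and AI Act classification",
    ],
}

def modules_for(role: str) -> list[str]:
    """Look up the training modules assigned to a role tier."""
    if role not in CURRICULUM:
        raise ValueError(f"No curriculum tier defined for role: {role!r}")
    return CURRICULUM[role]

print(modules_for("front_line"))
```

Encoding the tiers this way has an audit benefit: the same structure that drives module assignment also documents which tier each person was trained against.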
Why Most Organisations Are Not Ready
The data on workforce readiness is brutal, and it's the reason Article 4 matters.
| Metric | Figure | Source |
|---|---|---|
| Enterprises unprepared for EU AI Act obligations | 78% | Savia Learning / Vision Compliance |
| Employees who have received formal AI training | 32% | Savia Learning |
| Employees using AI tools without IT approval | 68% | Gartner via SQ Magazine |
| Companies facing compliance violations from shadow AI | ~44% | SQ Magazine |
| Enterprise leaders reporting an AI skills gap in 2026 | 59% | DataCamp 2026 State of AI Literacy |
| Organisations with a mature, workforce-wide upskilling programme | 35% | DataCamp |
| Organisations with mature programmes reporting strong AI ROI | 42% (vs 21% average) | DataCamp |
The combined picture: the majority of European workforces are already using AI that IT doesn't fully see, without any formal understanding of how it works, when to question its outputs, or where the legal lines are. That is exactly the condition Article 4 is drafted to prevent — and it's also the condition that makes AI-related breaches elsewhere (data leaks, discriminatory decisions, hallucinated advice) far more likely.
Since February 2025, if an untrained employee causes harm while using an AI system — leaks client data, makes a discriminatory decision — the organisation is already on the hook. Enforcement in August 2026 just gives regulators the formal authority to act on it.
How to Build an Article 4 Programme That Holds Up
A defensible Article 4 programme is role-tiered, continuously updated, and documented. The European Commission's Living Repository of AI Literacy Practices is the reference library. Daily microlearning hits the bar more reliably than annual training: 6–10 minute sessions achieve 80% completion rates versus 20% for traditional courses.
The Commission has not mandated a training format. Instead, it has published a Living Repository of AI Literacy Practices — a continuously updated list of the programmes organisations are actually running. The pattern across them is consistent.
Five things separate a defensible Article 4 programme from a box-tick:
- Inventory your AI exposure, including shadow AI. You cannot train for AI you do not know staff are using. Run a use survey, map approved and unapproved tools, and use the results to shape role tiers.
- Tier content by role. Executives, managers, front-line users, technical teams, and procurement need different depth. A single 30-minute module is not "proportionate" in the sense Article 4 requires.
- Anchor in responsible use, not prompt hacks. The Commission specifically flags responsible use, risk awareness, and judgment. A curriculum that is 90% "write better prompts" misses the compliance point.
- Make it continuous. AI tools change monthly. A programme built on annual training is stale before enforcement starts. Build a rhythm of short, recurring sessions instead.
- Document everything. There is no obligation to test staff, but there is a strong expectation that you record what training was delivered, to whom, when, and on what topic. Regulators will ask, and a minimal record schema is sketched just after this list.
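On the documentation point, here is a minimal sketch of what a training log could look like, assuming a simple in-house record exported to CSV. The field names are illustrative rather than prescribed by the Act or the Commission; what matters is that the log can answer who was trained, on what, and when.

```python
# Minimal training-record log for Article 4 documentation.
# Field names are illustrative; the Act prescribes no format.
# The point is being able to show what training was delivered,
# to whom, when, and on what topic.

import csv
from dataclasses import dataclass, asdict, fields
from datetime import date

@dataclass
class TrainingRecord:
    employee_id: str
    role_tier: str         # e.g. "front_line", "technical"
    module: str            # topic covered
    delivered_on: date
    duration_minutes: int
    completed: bool

def export_records(records: list[TrainingRecord], path: str) -> None:
    """Write the log to CSV so it can be produced on request."""
    column_names = [f.name for f in fields(TrainingRecord)]
    with open(path, "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=column_names)
        writer.writeheader()
        for record in records:
            writer.writerow(asdict(record))

# Example: one six-minute session, logged the day it was delivered.
log = [
    TrainingRecord("emp-0042", "front_line",
                   "Recognising hallucinations", date(2026, 4, 14), 6, True),
]
export_records(log, "article4_training_log.csv")
```

A spreadsheet or LMS export serves the same purpose; the schema is the point, not the tooling.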
Research on how adults actually retain knowledge is unambiguous about pace: daily microlearning of 6–10 minutes achieves roughly 80% completion versus 20% for traditional workshops, and role-specific content drives roughly 40% better retention than generic modules. Those numbers aren't compliance fluff; they are the difference between a programme that exists on paper and one that would survive a regulator's audit.
This is also where the business case starts to stack up. Organisations with mature, workforce-wide AI upskilling programmes are twice as likely to report strong AI ROI (42% vs 21%). Article 4 compliance and AI value creation are the same problem with the same answer: build literacy into the rhythm of work.
What Changes on 2 August 2026
From 2 August 2026, national market surveillance authorities can formally enforce Article 4. A pure literacy breach is unlikely to draw a stand-alone fine by itself, but regulators have indicated that AI literacy gaps will be treated as an aggravating factor when fining other AI Act violations, effectively amplifying every other compliance risk you carry.
The enforcement mechanics matter. As GDPR Register notes, Article 4 sits in the general infringement tier, with a headline ceiling of up to €7.5 million or 1% of global annual turnover. The more practical risk, though, is multiplicative: if your organisation faces a separate AI Act breach (a badly governed high-risk system, a transparency failure, a banned practice), the absence of an Article 4 programme becomes evidence of systemic non-compliance and pushes penalties up.
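For a sense of scale, the headline ceiling is a simple maximum. A sketch, assuming the "whichever is higher" reading cited in the FAQ below:

```python
# General infringement tier ceiling: the higher of EUR 7.5 million
# or 1% of global annual turnover ("whichever is higher" reading).

FIXED_CAP_EUR = 7_500_000
TURNOVER_SHARE = 0.01  # 1% of global annual turnover

def article4_fine_ceiling(global_annual_turnover_eur: float) -> float:
    """Maximum possible fine for a given global annual turnover."""
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * global_annual_turnover_eur)

# A EUR 500m-turnover company caps at the fixed EUR 7.5m;
# a EUR 2bn-turnover company caps at 1% = EUR 20m.
print(f"{article4_fine_ceiling(500_000_000):,.0f}")    # 7,500,000
print(f"{article4_fine_ceiling(2_000_000_000):,.0f}")  # 20,000,000
```

Note that the multiplier risk described above is not in this formula; it shows up in how regulators size fines for the other breaches.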
In parallel, national regulators are already signalling that they will assess AI literacy as part of wider investigations. Germany's BNetzA and France's CNIL have both flagged literacy as an audit focus. By late 2026, AI literacy records will sit alongside DPIAs and GDPR training logs as basic "can you show your work?" evidence.
How kju Helps You Meet Article 4
kju is built to be the operating system for daily AI fluency at work — exactly the model Article 4 pushes organisations toward. Six-minute daily sessions, role-specific tracks, industry context, and an admin surface for documenting delivery across teams.
If you are mapping an Article 4 programme now, two places to start are our guide to what AI fluency really means in 2026 and our enterprise page for how teams deploy kju at scale. The why AI training fails deep-dive covers why annual workshops won't stand up to Article 4's "continuous, proportionate" bar.
The Article 4 enforcement window is less than four months away. The programmes that survive it will not be the ones spun up in July — they will be the ones already embedded in the rhythm of work.
Frequently Asked Questions
- What is Article 4 of the EU AI Act?
- Article 4 is the AI literacy obligation in the EU AI Act. It requires providers and deployers of AI systems to ensure a 'sufficient level of AI literacy' among staff and anyone else operating AI on their behalf. It entered into force on 2 February 2025 and becomes enforceable by national market surveillance authorities from 2 August 2026.
- Who has to comply with the AI literacy requirement?
- Any organisation that provides or deploys an AI system in the EU — including chatbots, copilots, generative AI tools, and automated decisioning — has to comply. Article 4 is sector-agnostic, size-agnostic, and applies regardless of risk class. It also reaches non-EU companies whose AI outputs are used inside the EU, and covers contractors and vendors acting on your behalf.
- What counts as 'sufficient' AI literacy under Article 4?
- The AI Act defines AI literacy in Article 3(56) as the 'skills, knowledge and understanding' that let staff make an informed deployment of AI systems and recognise their opportunities, risks, and possible harms. 'Sufficient' is proportional: executives need regulatory and governance fluency, front-line users need practical judgment and responsible-use skills, and technical teams need deeper knowledge of the systems they build or integrate.
- What are the penalties for non-compliance with Article 4?
- Article 4 sits in the general infringement tier with penalties of up to €7.5 million or 1% of global annual turnover, whichever is higher. Regulators have also signalled that AI literacy gaps will be treated as an aggravating factor when calculating fines for more serious breaches elsewhere in the AI Act, making it a compliance multiplier rather than an isolated line item.
- Is ChatGPT or Microsoft Copilot covered by Article 4?
- Yes. Article 4 applies to all AI systems regardless of risk class, so everyday tools like ChatGPT, Microsoft Copilot, Gemini, Claude, and in-product generative assistants fall inside the obligation. If your staff use them for work, you are a deployer and must ensure they have the literacy to use them safely and effectively.
- How should companies train for AI literacy under Article 4?
- The EU AI Office does not mandate a format, but guidance points to role-specific, context-aware, continuously updated training with documented records. Daily microlearning outperforms one-off workshops: short sessions build habit and retention, match the pace of AI tool changes, and scale across a whole workforce. The Commission's Living Repository of AI Literacy Practices is the reference library for what 'good' looks like.
