Enterprise AI

Shadow AI at Work: Why 78% of AI Users Bring Their Own Tools

Shadow AI — the use of unsanctioned AI tools at work — adds $670K to the cost of the average data breach, and roughly four in five workplace AI users bring their own tools. Here's why banning it fails and what actually works.

kju Team

AI Education Experts

5 min read

Three out of four knowledge workers are already using AI at work. Most of them are doing it without telling you.

That's the finding from a string of studies published between 2024 and 2026 on shadow AI — the use of AI tools at work without IT or security approval. Microsoft and LinkedIn's Work Trend Index found 78% of workplace AI users bring their own tools rather than wait for sanctioned software. A BlackFog survey found 60% of employees would use unsanctioned AI if it helped them meet a deadline.

Meanwhile, IBM's 2025 Cost of a Data Breach report found shadow AI is now one of the most expensive categories of enterprise security risk — adding $670,000 to the cost of an average breach.

The usual response is to panic and block. That's a mistake.

What Shadow AI Actually Means

Shadow AI is the use of AI tools at work without IT or security approval — usually employees pasting work into public chatbots or installing AI browser extensions. It's the AI version of shadow IT, driven by the same pressures: approved tools are too slow, too limited, or don't exist yet.

Think of shadow AI as a spectrum. On one end, a legal associate pastes a draft contract into ChatGPT to summarise it. In the middle, a marketing analyst feeds a quarter's worth of customer survey data into an AI research tool no one on the security team has heard of. On the other end, someone uploads source code or proprietary financial models into a consumer account to generate a slide deck.

All three are shadow AI. Only one makes the news. All three are happening in your company right now.

Why Shadow AI Is Exploding in 2026

Shadow AI is growing because three forces are pushing in the same direction: capable consumer tools, frustrated employees, and slow-moving IT procurement.

The Microsoft and LinkedIn Work Trend Index — the largest global study of AI at work — found that 75% of knowledge workers already use generative AI on the job, and nearly half of them started in the previous six months. Of those users, 78% bring their own AI tools. In small and mid-sized companies the number climbs to 80%. Every age cohort shows high BYOAI rates, from 73% for boomers to 85% for Gen Z.

Metric | Figure | Source
Knowledge workers using AI at work | 75% | Microsoft / LinkedIn 2024
AI users bringing their own tools (BYOAI) | 78% | Microsoft / LinkedIn 2024
BYOAI rate at small and mid-sized companies | 80% | Microsoft / LinkedIn 2024
AI users who hesitate to disclose AI use on critical tasks | 52% | Microsoft / LinkedIn 2024
Employees who'd use shadow AI to hit a deadline | 60% | BlackFog 2025
Workers aware of their company's AI policy | 18.5% | Survey of 12,000+ workers, 2025
Organisations with shadow AI detection or management policies | 37% | IBM 2025

The policy gap is staggering. In a survey of 12,000+ white-collar workers, only 18.5% were aware of any official AI policy at their company — even where a policy formally existed. Half of AI users hesitate to tell their managers they used AI for important work, worried it makes them look replaceable.

The average employee is more likely to use AI at work than they are to know their company has rules about it. That's not a policy problem — it's a communication and training problem.

What Shadow AI Is Actually Costing You

Shadow AI costs money in two ways you already know about and one you probably don't. The known costs are data leaks and compliance exposure. The hidden cost is quality.

IBM's Cost of a Data Breach 2025 report — based on interviews with 3,470 security experts at 600 breached organisations — quantified it for the first time:

  • 1 in 5 organisations experienced a breach tied to shadow AI in the past year
  • Shadow AI added an average of $670,000 to the breach cost versus organisations with low or no shadow AI
  • 65% of shadow AI breaches exposed personally identifiable information (vs. 53% for typical breaches)
  • 40% of shadow AI breaches exposed intellectual property (vs. 33% typical)
  • 97% of AI-related breaches happened at organisations that lacked proper AI access controls

The hidden cost is the one we wrote about in our piece on AI workslop: AI output that looks polished but is wrong, off-context, or unusable. When employees use shadow AI without training, they can't tell good output from bad. They ship it, and the next person in the chain pays the tax. IBM found that only 37% of organisations have policies to manage AI or detect shadow AI. The other 63% pay for every leak, hallucination, and compliance miss twice: once in the security exposure, and again in the rework downstream.

Why Banning Shadow AI Fails

The instinct, when security teams see these numbers, is to block everything. Firewall rules against ChatGPT. Endpoint policies that prohibit AI plugins. Acceptable use policies with teeth.

It doesn't work. Here's why.

First, you can't block what you can't see. IBM's research found 97% of AI-related breaches occurred where access controls were missing — meaning the security team didn't even know the AI was there. Bans push usage further into the shadows rather than eliminate it.

Second, employees have a personal phone in their pocket that runs every model on the market. When you block ChatGPT on the corporate network, you don't stop usage — you just move it to a device you have zero visibility into.

Third, the productivity pressure is real. Deloitte's 2026 State of AI in the Enterprise report found 86% of organisations are increasing AI budgets this year. Leadership is simultaneously demanding more AI output and more AI caution. Employees resolve the contradiction by doing what gets work shipped.

Shadow AI isn't a policy failure or a security failure. It's a capability gap. The fix is to close the gap, not criminalise the symptom.

How to Turn Shadow AI Into Sanctioned AI

The organisations we work with that are actually reducing shadow AI aren't winning on enforcement. They're winning on fluency. The pattern is consistent — we covered the full playbook in why AI training fails — but it collapses into four moves.

1. Approve fast, approve publicly

Every week a tool stays in "under review" is a week employees use the shadow version. Move procurement to days, not quarters, for the top 3-5 AI tools your workforce actually wants. Publish the approved list somewhere employees will see it.
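One way to keep that list visible is to publish it in machine-readable form alongside the human-readable page, so onboarding docs and internal tooling pull from the same source of truth. A minimal sketch in Python; every tool name, status, and data note below is a hypothetical placeholder, not vendor guidance:

```python
# Hypothetical approved-AI-tools registry. All tool names, statuses,
# and data rules below are illustrative placeholders.
APPROVED_AI_TOOLS = {
    "copilot-enterprise": {"status": "approved",
                           "data_rules": "internal data OK; no regulated data"},
    "claude-team":        {"status": "approved",
                           "data_rules": "internal data OK; no customer PII"},
    "chatgpt-consumer":   {"status": "blocked",
                           "alternative": "copilot-enterprise"},
    "ai-notetaker-x":     {"status": "under-review",
                           "review_due": "2026-02-15"},
}

def tool_status(name: str) -> str:
    """Look up a tool's governance status; unknown tools need review."""
    return APPROVED_AI_TOOLS.get(name, {}).get("status", "unknown")

print(tool_status("chatgpt-consumer"))   # blocked
print(tool_status("some-new-plugin"))    # unknown
```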

2. Train people on what can and can't go in

A one-line policy ("don't paste sensitive data into AI") doesn't survive contact with a Monday morning deadline. Employees need concrete, repeatable rules: don't paste customer PII, source code, unreleased financials, or regulated data into any AI tool. Then drill it until it's muscle memory. This is what AI fluency looks like in practice — the ability to reason about AI risk in the moment, not consult a PDF.
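To make the rules concrete, some teams put a lightweight screen in front of the paste itself. A minimal sketch, assuming a handful of regex patterns for common sensitive-data shapes; the patterns and example text are illustrative, nowhere near production-grade data loss prevention:

```python
import re

# Illustrative patterns only. Real DLP needs far broader coverage:
# names, addresses, source-code fingerprints, regulated data types.
SENSITIVE_PATTERNS = {
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key":     re.compile(r"\b(?:sk|pk|key)[-_][A-Za-z0-9]{16,}\b"),
    "ssn_like":    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def screen_before_paste(text: str) -> list[str]:
    """Return the sensitive-data types detected in the text."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

draft = "Summarise: contact jane.doe@example.com, card 4111 1111 1111 1111"
hits = screen_before_paste(draft)
if hits:
    print(f"Blocked: remove {', '.join(hits)} before sending to an AI tool.")
```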

3. Make the sanctioned path the fastest path

If Copilot with enterprise data protection takes three clicks and ChatGPT takes one, people will use ChatGPT. Reduce friction on the approved tools until they beat the shadow alternatives on speed, not just safety.

4. Measure outcomes, not policy compliance

Completion rates on AI policy training are vanity metrics. What matters is whether people use AI safely on Monday morning. Track near-misses, shadow tool signups detected at the network edge, and — if you have a learning platform — AI fluency progress across the workforce.
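For the network-edge signal, even a crude scan of proxy or DNS exports is a usable starting point. A minimal sketch, assuming a plain-text export with one requested domain per line; the domain list is illustrative and incomplete:

```python
from collections import Counter

# Illustrative, incomplete list of AI-tool domains; maintain your own.
AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "claude.ai",
              "gemini.google.com", "perplexity.ai"}

def shadow_ai_hits(log_path: str) -> Counter:
    """Count requests to known AI domains in a one-domain-per-line log."""
    hits = Counter()
    with open(log_path) as log:
        for line in log:
            domain = line.strip().lower()
            if domain in AI_DOMAINS:
                hits[domain] += 1
    return hits

# Usage: feed it a daily export from your proxy or DNS logs, then trend
# the counts week over week alongside approved-tool adoption.
```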

A 30-Day Shadow AI Response

If you're starting from zero, here's the sequence we recommend. It's what we see working at the organisations moving fastest.

Week | Focus | Output
Week 1 | Discover | Inventory AI tools actually in use (network logs, expense reports, browser extension audits; see the sketch below)
Week 2 | Decide | Approve 3-5 tools publicly; explicitly disapprove 1-2 high-risk ones with alternatives
Week 3 | Train | Roll out role-specific AI fluency basics — what to paste, what not to, how to verify output
Week 4 | Measure | Publish dashboards: approved tool adoption, training completion, near-miss count
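For the Week 1 discovery pass, expense reports are often the easiest place to start, because shadow AI subscriptions show up as small recurring charges. A minimal sketch, assuming a CSV export with a 'vendor' column; the vendor keywords are illustrative:

```python
import csv

# Illustrative vendor keywords; extend with names surfaced by your own
# network logs and browser extension audits.
AI_VENDOR_KEYWORDS = ("openai", "anthropic", "perplexity", "midjourney")

def ai_expense_rows(csv_path: str) -> list[dict]:
    """Return expense rows whose vendor field mentions an AI vendor.

    Assumes the export has a 'vendor' column; adjust to your system.
    """
    with open(csv_path, newline="") as f:
        return [row for row in csv.DictReader(f)
                if any(kw in row.get("vendor", "").lower()
                       for kw in AI_VENDOR_KEYWORDS)]

# Usage: len(ai_expense_rows("q1_expenses.csv")) is your first
# shadow AI inventory number.
```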

The goal isn't to eliminate shadow AI in 30 days. The goal is to move the curve from invisible and risky to visible and improving.

The Strategic Reframe

The companies that lose the shadow AI fight treat it as a security problem. The ones that win treat it as a capability problem.

Security problems get solved with policies, firewalls, and monitoring. Capability problems get solved with training, culture, and tooling. Shadow AI needs all of the above — but capability comes first, because policies only bind people who understand them.

That's the case for building AI fluency as a default organisational capability, not an optional L&D line item. When your workforce can reason about what AI can and can't do — and what they can and can't feed it — shadow AI becomes far less dangerous. Not because people stop using unsanctioned tools, but because they stop using them stupidly.

Start with six minutes a day. Multiply by your headcount. That's the fastest route from shadow AI to sanctioned AI we've seen.

Frequently Asked Questions

What is shadow AI?
Shadow AI is the use of AI tools at work without IT or security approval. It usually means employees pasting work into public chatbots like ChatGPT, Claude, or Gemini, or installing AI browser extensions and plugins on their own. It's the AI version of shadow IT — unsanctioned software adopted because the sanctioned path is too slow or too limited.
Why do employees use shadow AI?
Employees turn to shadow AI because approved tools are missing, slow, or less capable. Microsoft's 2024 Work Trend Index found 78% of AI users bring their own tools to work. The top driver is speed: 60% of employees say they will use unsanctioned AI if it helps them hit a deadline. Gen Z adoption is highest at 85%.
How much does shadow AI cost companies?
IBM's 2025 Cost of a Data Breach report found shadow AI adds an average of $670,000 to the cost of a breach. One in five organisations experienced a shadow-AI-related breach. Incidents involving shadow AI were more likely to expose personally identifiable information (65%) and intellectual property (40%) than typical breaches.
Can you ban shadow AI?
Blanket bans don't work. When a company blocks ChatGPT at the network level, employees switch to their phones. Bans remove visibility without removing the demand that created shadow AI in the first place. The more effective path is to approve useful tools, train people to use them well, and make the sanctioned option faster than the shadow one.
What is BYOAI?
BYOAI stands for 'Bring Your Own AI' — the workplace trend of employees using personal AI accounts and tools to do company work. Microsoft coined the term in their 2024 Work Trend Index after finding 78% of workplace AI users were doing it. BYOAI is the consumer-grade face of shadow AI and the fastest-growing category of unsanctioned technology in enterprises.
What should an AI policy cover?
A usable AI policy covers four things: which tools are approved and for what, what data can and can't be entered into any AI (especially customer data, source code, and regulated information), who owns AI-generated output, and how employees get training. Policies without training fail — only 18.5% of workers in a 2025 survey were aware of their company's AI policy even where one existed.