
85% of AI Training Doesn't Stick. The Problem Isn't the Training.

2026-04-15 · JR Intelligence

The average company spent north of $1M on generative AI last year. A significant slice of that went to training — workshops, subscriptions, internal champions, LinkedIn Learning licenses, maybe a dedicated AI upskilling program. The expected outcome: a workforce that uses AI tools to do more with less.

Here's what Docebo's April 2026 report found instead: 85% of employees who received AI training in the last year say they cannot apply it to their actual jobs.

Not "they haven't tried." Not "they need more practice." They literally cannot use what they learned — because the conditions required to apply it don't exist in their actual work environment.

If you're spending money on AI training without fixing this first, you're not investing in change. You're paying for expensive theater.

The Real Bottleneck Has a Name: Pre-AI Workload Debt

Before you can use AI to automate your workflows, someone has to have enough slack in their day to actually implement the automation. That sounds obvious. It almost never happens.

Here's the irony: the tasks AI should be eliminating — manual data entry, copy-paste reporting, chasing approvals, reformatting documents — are exactly the tasks consuming the hours your people would need to implement AI. The fix is buried under the very work it was supposed to remove. So the training sits unused. The licenses sit idle. The course completion rates look fine on the dashboard. Nothing changes.

This is pre-AI workload debt: the accumulated backlog of manual process friction that makes AI adoption structurally impossible even when the tools exist and the team is theoretically trained.

The manufacturing sector makes this concrete. Seventy percent of manufacturing firms have already adopted AI tools, yet only 19% offer formal AI training — and even among those that do, application rates are dismal. The pattern isn't ignorance or resistance. It's capacity. Frontline workers running lean shifts don't have protected hours to experiment with new tooling. If you hand them a trained skill and a broken workflow, the skill evaporates within weeks.

The executive layer isn't exempt. A Writer/Workplace Intelligence survey found that 75% of executives admit their company's AI strategy is "more for show" than functional. Strategy theater. The right language in the board deck, the AI steering committee, the published roadmap — and underneath it, the same manual processes running at the same pace they ran three years ago.

What Google's $10M Response Gets Right — And Where It Falls Short

On April 15, Google.org announced a $10M grant to train 40,000 factory workers in AI operations. It's one of the more thoughtful large-scale training initiatives you'll see from a major tech company, and it deserves credit for one specific reason: it targets frontline workers, not just knowledge workers.

Most enterprise AI training programs flow downward from leadership and stop somewhere around middle management. The people actually doing the work — the ones who would generate measurable productivity gains — rarely see it. Google's initiative corrects that. Targeting factory floor workers directly is the right instinct.

But the same risk applies here that applies everywhere else. Training 40,000 workers to understand AI operations doesn't change what happens when they return to a shift where AI-assisted output still has to be manually reconciled with a legacy ERP that doesn't accept automated inputs. Skilled people, broken workflows. The training lands in a system that can't absorb it.

Training is necessary. It is not sufficient. You cannot train your way out of a process problem.

The Companies Actually Seeing ROI Do It in the Right Order

Only 29% of companies report significant ROI from generative AI — despite 59% spending over $1M annually on it. That's a brutal gap. But the 29% who are generating real returns share a pattern that's consistent enough to be instructive.

They fixed the process before they bought the training.

Companies that audited their existing workflows and formalized what their teams were already doing informally — the "shadow AI" users, the people who'd quietly started using ChatGPT to draft their intake summaries — saw dramatic results within 90 days: 30–45% reductions in document turnaround time, 20% faster intake processing. Not because they hired different people. Because they removed the friction that was preventing people from using tools they already had access to.

The contrast is stark. Consider a team with Microsoft 365 Copilot licenses. They attended the training. They know the prompts. Now watch what happens when a document summary generated by Copilot still has to pass through 47 manual approval steps before it reaches a decision-maker. The AI speeds up one node in a broken chain. The chain doesn't care.
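To see why the chain wins, run the arithmetic. The numbers in the sketch below are hypothetical (a ten-step approval chain in which AI cuts a three-hour summarization step by 90%), but the shape of the result holds for any workflow dominated by manual handoffs.

```python
# Back-of-the-envelope: end-to-end turnaround when AI accelerates
# one step in a mostly manual approval chain. All durations are
# hypothetical, in hours.

manual_steps = [8, 24, 4, 48, 2, 24, 8, 16, 4, 12]  # waiting on humans
summary_step = 3.0   # the one step AI-assisted tooling speeds up
ai_speedup = 0.9     # assume AI cuts that step's duration by 90%

before = sum(manual_steps) + summary_step
after = sum(manual_steps) + summary_step * (1 - ai_speedup)

print(f"Turnaround before AI: {before:.1f} h")   # 153.0 h
print(f"Turnaround after AI:  {after:.1f} h")    # 150.3 h
print(f"End-to-end improvement: {(before - after) / before:.1%}")  # 1.8%
```

A 90% gain on one step yields an end-to-end improvement of under 2%. That is Amdahl's law applied to workflows: the slow manual handoffs, not the accelerated node, set the ceiling on ROI.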

ROI from AI doesn't come from faster individual tasks inside broken processes. It comes from redesigning the process so that AI sits at load-bearing points — and then training people to operate the redesigned system.

The order matters: audit the process → eliminate the friction → train on the tools.

Run it backwards and you get the 85% stat.

Three Things to Do This Week

If you're running an AI training initiative or thinking about launching one, here's the honest prioritization:

1. Map where your team actually spends manual hours. Not what the job description says. Where are they actually spending time on tasks that are structurally automatable — data entry, reformatting, status reporting, approval chasing? That map is your pre-AI workload debt inventory (a minimal sketch follows this list).

2. Fix the process before you buy the training. If a workflow has fifteen manual handoffs, adding AI to step four doesn't fix it. Identify the handoffs AI can collapse entirely, not just speed up. This is process redesign, not tool deployment.

3. Measure behavior change, not course completion. Training ROI is not "X% of employees completed the module." It's "X% of employees changed a specific behavior in a specific workflow." If you can't define the behavior change you expect, you can't measure whether training worked.
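On point 1, here is what that inventory can look like in practice: a minimal sketch assuming a simple one-week time log your team fills in (task, category, minutes, and whether the task is structurally automatable). The file name and column names are illustrative placeholders, not a prescribed format.

```python
# Minimal pre-AI workload debt inventory: aggregate a week of
# self-reported time logs into annualized automatable hours per
# category. The file and its columns ("task", "category",
# "minutes", "automatable") are hypothetical placeholders.
import csv
from collections import defaultdict

hours_by_category = defaultdict(float)

with open("time_log_week.csv", newline="") as f:
    for row in csv.DictReader(f):
        if row["automatable"].strip().lower() == "yes":
            hours_by_category[row["category"]] += float(row["minutes"]) / 60

print("Annualized automatable hours (one week x ~48 working weeks):")
for category, weekly_hours in sorted(
    hours_by_category.items(), key=lambda kv: -kv[1]
):
    print(f"  {category:<20} {weekly_hours * 48:7.0f} h/yr")
```

The top of that sorted list is the workload debt to pay down first, and the same log doubles as the behavior baseline point 3 asks for: if a category's hours don't drop after training, the training didn't land.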


This is what an AI audit actually does: it finds the gap between the capabilities your team already has (the training, the tools they can access) and what their workflows actually let them apply. Most of the time, that gap isn't a skill problem. It's a process problem with a skill-shaped symptom.

If your AI spending isn't generating returns — or you're about to launch a training program and want to know whether the underlying workflows can absorb it — that's exactly what a Deep Dive surfaces in the first week.

The 85% failure rate isn't a verdict on AI training. It's a verdict on the order in which companies are doing things. Fix the order.

AI Training · Operations · Process Design · SMB Strategy · Implementation · ROI

Ready to build?

One conversation. No pitch deck. We'll map your bottleneck and tell you honestly if AI infrastructure fits.