AI in L&D has crossed the experimentation threshold. So now what?

Something significant is happening in Learning & Development, perhaps on the scale of when online learning first proved it could be truly scalable: AI is no longer emerging. It’s embedded.
A late-2025 global study, AI in Learning & Development Report 2026, authored by Dr. Philippa Hardman in collaboration with Synthesia, surveyed 421 Learning & Development professionals, confirming what many teams already feel:
- 87% are comfortable using AI
- 57% are actively using it in learning programs
- Another 30% are piloting
- Only 1% report not using AI at all
This is not cautious experimentation, but an operational reality.
AI is now part of the production engine: scripting, quiz generation, video creation, translation, research summarisation, knowledge search. More than 65% of respondents routinely use AI in the Design and Develop stages of ADDIE.
Speed is the dominant incentive. 84% cite it as the primary reason for adoption.
AI has become infrastructure. But infrastructure maturity and organisational maturity are not the same thing.
Widespread adoption, uneven depth
Dr. Philippa Hardman, learning scientist and lead analyst behind the AI in Learning & Development Report 2026, captures the moment precisely:
“Maturity is rising — but it’s far from uniform. Many teams are still early-stage; a small group is sprinting ahead.”
This is the pattern across the data.
Usage is nearly universal, yet depth of maturity is not.
Only 9% report scaling AI across the organisation.
Only 6% describe themselves as “AI-first.”
Most sit somewhere in the middle, actively using AI in specific workflows without fully integrating it into strategy, governance, or evaluation.
That distinction matters because once AI becomes embedded in daily workflows, the risk shifts.
The question is no longer how we are using AI; it’s whether we are using it well.
The real shift isn’t production. It’s decision influence.
The strongest growth areas aren’t in video or quiz generation anymore.
They’re in:
- Adaptive assessments and simulations
- Personalised learning pathways
- Skills mapping and career pathing
- AI tutors and coaching systems
- In-flow performance support
AI is moving downstream, from asset generation into adaptive systems.
And that’s a fundamental shift.
When AI generates a script, the risk is primarily about quality.
When AI recommends a pathway, clusters skill gaps, or surfaces performance guidance in real time, the risk becomes strategic and organisational.
Now AI is influencing:
- What learning is prioritised
- How performance gaps are interpreted
- What gets recommended to whom
- How effectiveness is measured
This is no longer about faster content.
It’s about distributed judgement.
The value narrative is expanding faster than capability
Today, AI’s value in L&D looks like this:
- 88% report time saved
- 45% report cost savings
- 41% report business impact
- 40% report improved engagement
But expectations for the next two years are far more ambitious:
- 72% expect more personalised learning
- 65% expect wider internal reach
- 55% expect clearer business impact
- 54% expect easier localisation
The centre of gravity is shifting from production efficiency to ecosystem intelligence.
As Kevin Alster, Strategic Advisor at Synthesia and contributor to the AI in Learning & Development Report 2026, puts it:
“Right now, AI’s value in L&D is speed: faster production, higher-quality assets, and sharper learner experiences. A smaller group sees a future that will be defined by something else: personalised, in-the-moment learning that adapts to context and need rather than following a single standard.”
And that raises the bar significantly.
Because personalisation requires:
- Data interpretation
- Algorithmic recommendations
- Pattern clustering
- Automated decision pathways
If teams don’t share a clear understanding of how those systems work, including their limitations, biases, and failure modes, then scaling them responsibly becomes difficult.
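To make the stakes concrete, here is a deliberately minimal, hypothetical sketch of the kind of logic that can sit behind a “recommended pathway”. None of the learners, scores, or track mappings come from the report; the point is how many silent assumptions even a tiny recommender carries.

```python
# Hypothetical sketch of a naive "personalised pathway" recommender.
# All learners, scores, and track mappings are illustrative, not from the report.

# Self-assessed skill scores (0-100) per learner: the engine's only evidence.
skill_scores = {
    "ana":  {"data": 35, "facilitation": 80},
    "ben":  {"data": 40, "facilitation": 75},
    "cara": {"data": 85, "facilitation": 30},
}

# Gap -> course track mapping, hand-picked by whoever configured the system.
TRACKS = {
    "data": "Foundations of Data",
    "facilitation": "Coaching Skills",
}

def recommend(learner: str) -> str:
    """Recommend the track that targets the learner's weakest skill."""
    scores = skill_scores[learner]
    weakest = min(scores, key=scores.get)  # ties and noisy self-ratings decide silently
    return TRACKS[weakest]

for learner in skill_scores:
    # The output looks authoritative, but it inherits every assumption above:
    # self-reported inputs, a crude "weakest skill" heuristic, a fixed mapping.
    print(f"{learner} -> {recommend(learner)}")
```

If nobody on the team can articulate those embedded assumptions, the recommendation is effectively unexplainable, however polished it looks to the learner.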
Governance is not keeping pace
The report reveals something important.
Security concerns top the list of “blockers” (58%), followed by accuracy concerns (52%), integration challenges (46%), and lack of internal expertise (46%).
Culturally, experimentation is encouraged (74%).
Operationally, support is uneven. Only 45% feel IT actively enables adoption.
Most teams avoid using personal or sensitive learner data with AI (59%). Among those who do, nearly one in five say approval processes are unclear.
This is a tension.
Personalisation ambitions require deeper data use, yet governance maturity is still catching up.
As AI moves into adaptive pathways and skills intelligence, unclear accountability becomes more consequential, because you can’t scale personalisation without clarity around:
- Data responsibility
- Oversight mechanisms
- Evaluation standards
- Explainability
And those aren’t tool problems; they are capability problems.
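As one hedged illustration of what “data responsibility” and “oversight mechanisms” can mean in practice, here is a minimal sketch of a policy gate that redacts learner records before they reach any AI tool. The field names and policy lists are assumptions for illustration only, not guidance from the report.

```python
# Hypothetical sketch of a data-responsibility gate: strip and log fields
# that a team's policy has not explicitly cleared for use with AI tools.
# Field names and the policy itself are illustrative, not from the report.

ALLOWED_FIELDS = {"role", "completed_courses", "skill_scores"}  # cleared by policy
BLOCKED_FIELDS = {"name", "email", "performance_rating"}        # personal/sensitive

def redact_for_ai(record: dict) -> dict:
    """Return only the fields policy allows to be shared with an AI tool."""
    leaked = set(record) & BLOCKED_FIELDS
    if leaked:
        # Oversight mechanism: log the redaction rather than silently dropping
        # fields, so audits can see what almost left the organisation.
        print(f"policy: redacting sensitive fields {sorted(leaked)}")
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

learner = {"name": "Ana", "email": "ana@example.com",
           "role": "analyst", "completed_courses": 4}
print(redact_for_ai(learner))  # -> {'role': 'analyst', 'completed_courses': 4}
```

The design choice that matters is that the policy is explicit and inspectable; the unclear approval processes the report describes are what happens when this logic lives only in individual judgement.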
Tool stacks are expanding, but conceptual clarity isn’t guaranteed.
AI no longer lives in one place.
The LMS may remain central, or it may not. Only 47% believe it will still be the backbone in three years.
AI may sit:
- Inside LMS/LXP platforms
- Embedded in productivity tools
- In standalone AI platforms
- In cross-system agentic layers
27% say they don’t know where it will live.
Teams are assembling multi-tool, multi-model ecosystems, blending general-purpose AI, L&D-specific tools, and internal copilots.
That introduces new complexity:
- Which model generated this output?
- What data informed this recommendation?
- Where does accountability sit across systems?
- How do we audit decisions in multi-model environments?
Tool integration is accelerating; conceptual integration is not automatically keeping pace.
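A hypothetical starting point for that conceptual integration is to attach a provenance record to every AI-generated artefact, whichever tool produced it. The schema below is a sketch, not a standard or any vendor’s API; its fields simply mirror the questions above.

```python
# Hypothetical sketch: a provenance record attached to every AI-generated
# artefact, so multi-tool, multi-model ecosystems stay auditable.
# Field names are illustrative, not a standard or a vendor API.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    artefact_id: str          # the quiz, script, or recommendation produced
    model: str                # which model generated this output?
    data_sources: list[str]   # what data informed this recommendation?
    accountable_owner: str    # where does accountability sit?
    human_reviewed: bool = False
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = ProvenanceRecord(
    artefact_id="onboarding-quiz-v3",
    model="vendor-x/model-y-2025-10",
    data_sources=["lms_completion_export", "skills_survey_q3"],
    accountable_owner="l&d-ops",
)
print(record)
```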
Agentic AI changes the human role
Interest in agentic AI is high:
- 27% actively exploring
- 39% interested but cautious
- Nearly half exploring AI tutors
Agentic systems don’t just generate outputs. They take initiative, sequence actions, and respond dynamically.
This changes the human role from creator to supervisor, and so the question we should now be asking is: are our teams equipped to supervise intelligent systems, not just edit their outputs?
That requires a deeper layer of capability.
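To illustrate the shift from editing outputs to supervising behaviour, here is a minimal, hypothetical agent loop with a human approval gate. The planner, the actions, and the gate are all invented for this sketch; real agentic platforms expose different interfaces.

```python
# Hypothetical sketch: a supervised agent loop. The agent proposes the next
# action; a human gate approves or stops it before anything executes.
# Everything here is illustrative; real agentic platforms differ.

def propose_next_action(state: dict) -> str:
    """Stand-in for an agent's planner: pick the next step from state."""
    if not state["needs_analysis_done"]:
        return "run needs analysis on Q3 skills survey"
    return "draft learning pathway for identified gaps"

def human_gate(action: str) -> bool:
    """Supervision point: a human approves or rejects each proposed action."""
    answer = input(f"Agent proposes: {action!r}. Approve? [y/n] ")
    return answer.strip().lower() == "y"

state = {"needs_analysis_done": False}
for _ in range(2):                      # bounded loop: no unattended runaway
    action = propose_next_action(state)
    if not human_gate(action):
        print("Supervisor stopped the agent.")
        break
    print(f"Executing: {action}")
    state["needs_analysis_done"] = True
```

The important design choice is that the gate sits inside the loop: the human approves each action before it runs, rather than reviewing a finished artefact afterwards.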
The divide that’s coming
- 67% of respondents want AI skills and design training
- 63% want help measuring impact
- 50% want support with integration
- 44% want responsible AI guidance
So this isn’t about enthusiasm anymore; it’s about readiness.
Dr. Hardman puts it bluntly:
“Over the next 12–24 months, I expect to see a sharper divide between teams who use AI to go faster, and teams who use AI to build smarter, more personalised, more evidence-based learning ecosystems.”
That divide won’t be defined by who has the most tools; it will be defined by who has built shared capability.
The capability that now matters most
AI adoption is nearly universal, optimism is strong, and ambition is growing.
But adoption tells us that AI is being used, not how well it’s being used.
As AI moves from generating assets to shaping learning ecosystems, the differentiator will no longer be speed. It will be the quality of professional judgement surrounding that speed.
Teams that lead in this next phase will be able to:
- Evaluate AI outputs beyond surface edits
- Understand how recommendations and pathways are generated
- Maintain accountability across interconnected systems
- Align AI use with learning intent, not convenience
- Scale practice without diluting standards
Kristen Budd, Chief Learning Officer and contributor to the AI in Learning & Development Report 2026, puts it plainly:
“You can automate production, but you can’t outsource cognition.”
That is the inflection point.
AI is now infrastructure in L&D. The question is no longer whether teams can operate the tools. It’s whether they can supervise them: challenge them, justify them, and take responsibility for the outcomes they influence.
This is the capability gap the report surfaces. Not a lack of platforms or enthusiasm, but a lack of shared literacy.
As AI becomes embedded across workflows, skills systems, adaptive pathways and agentic layers, organisations need a common foundation: an understanding of how AI generates outputs, where its limits sit, and how human accountability is preserved when decisions are influenced by it.
That’s where AI literacy fits.
Not as prompt tips or feature walkthroughs, but as the shared judgement layer that allows AI to enhance learning without eroding professional standards.
If your team is already using AI (and most are), the next step isn’t acquiring more tools.
It’s strengthening the capability to govern and evaluate how those tools shape learning decisions.
Because in the next phase of L&D, speed will be assumed.
What will set teams apart is how thoughtfully they use it.
References
Hardman, P., & Synthesia. AI in Learning & Development Report 2026. https://webcdn.synthesia.io/reports/AI%20in%20Learning%20and%20Development%20Report%202026.pdf