Unless you have been living under a rock (or, more plausibly, as a monk in a cave), you will have felt the shifts underway in learning, capability, and professional standards.
Decisions about AI are being made across organisations before there is shared clarity about what responsible use actually requires.
Much of this movement is happening in operations, policy, procurement, and service delivery, driven by urgency, opportunity, or the pressure to keep pace. You may not be the one accelerating adoption, but you are often the one asked to make sense of the consequences when judgement, accountability, or standards start to feel less certain.
For people responsible for learning, capability, and professional standards, this creates a tension. Ways of working begin to change faster than shared understanding. New practices take hold before questions of judgement and responsibility are fully surfaced, yet those questions inevitably land with you.
In that environment, a familiar assumption often goes unchallenged:
If people are trained, they will be ready.
It’s an understandable assumption. It has held true for many previous digital shifts.
But AI has a way of exposing that assumption's limits, not through obvious failure but through subtle changes in how judgement is exercised over time. Researchers describe this as a shift from delegating execution to delegating judgement, a far more consequential change for accountability and professional responsibility (1)(2).
This is where a foundational misunderstanding continues to shape many AI initiatives. AI is treated as a skills upgrade, rather than as a shift in how judgement, responsibility, and professional agency operate.
This is not a technical issue.
It is a leadership and learning design issue, whether it has been named that way or not.
If you’re watching AI become part of everyday work, you may have noticed that its impact goes beyond speed or efficiency.
AI doesn’t just help people work faster; it also participates in how work gets done.
Research on human–AI interaction in the public sector and professional decision-making shows that AI systems increasingly sit upstream of decisions, shaping what options are presented, how problems are framed, and what seems “reasonable” before a human has fully weighed in (3)(8).
Delegating execution introduces efficiency risks; delegating judgement introduces responsibility risks (1).
This doesn’t mean AI shouldn’t be used. Many organisations will (and should) continue to explore where it adds value.
But it does mean we should be asking, what kind of organisation is being shaped?
Are we building AI capability, or actually reshaping how judgement operates across the people we’re responsible for supporting?
That question isn’t answered by tools or policies alone. It’s answered by what people are taught to trust, question, and take responsibility for through the learning experiences that are designed, modelled, and reinforced over time.
This equation may feel familiar:
Training delivered = capability achieved.
But when it comes to AI, this shorthand breaks down.
A large-scale review by Microsoft’s Aether team found that over-reliance on AI can lead people to perform worse than either humans or AI operating alone (6). In healthcare settings, clinicians with low AI literacy were significantly more likely to follow AI recommendations uncritically (11).
At the same time, a gap is emerging between intent and preparedness. Senior leaders consistently describe AI capability as a priority, yet far fewer report having meaningfully prepared their workforce to use AI responsibly (2)(12).
Training completion can look reassuring on a dashboard. It tells us who attended a session, but it tells us very little about what happens when time is tight, outputs sound plausible, and judgement is exercised under pressure.
Calls for “human oversight” can deepen the illusion of safety. Research on automation bias and complacency shows that oversight is most likely to degrade when systems appear reliable and workloads are high. These are precisely the environments many public-sector and regulated organisations operate in (23)(24).
AI’s most significant impact is rarely sudden breakdown. It’s gradual drift that unfolds inside systems that appear to be functioning as intended.
Studies of bias amplification and human–AI feedback loops show that when assumptions are embedded in system design, they can be reinforced over time through repeated interaction, even when no single decision appears wrong in isolation (14)(15).
As this happens, decision-making shifts slowly from active, professional judgement to passive acceptance of what the system presents (8)(10)(11).
Nothing obviously breaks: processes continue to run, outputs still look reasonable, and decisions can still appear defensible on the surface.
A clear illustration of this emerged in the UK’s use of an algorithm to award A-level grades in 2020. The system was designed to preserve consistency and prevent grade inflation by combining historical school performance data with teacher rankings when exams were cancelled during the COVID-19 pandemic. Human judgement was formally embedded, and the approach was initially considered fair and defensible (27).
Technically, the system worked as designed.
But once deployed at scale, it reshaped how judgement operated. Individual performance mattered less than cohort-level statistical patterns. Professional assessments were overridden in ways that were difficult to challenge. Oversight focused on maintaining model stability rather than interrogating how fairness was experienced by students (28)(29).
Nothing failed in a technical sense.
Yet the outcomes were widely recognised as misaligned with core education values of fairness, transparency, and individual recognition, leading to a rapid loss of trust and an eventual reversal of the approach (27)(28)(29).
This is often the most unsettling part. Not that something goes wrong all at once, but that it becomes harder to see when judgement has shifted, and who is still actively holding responsibility as that shift unfolds.
Learning systems are never neutral.
Even when they aren’t framed as “ethics training”, they teach people how to behave through what is rewarded, what is assessed, what leaders model, and what becomes normal over time.
Organisational learning research consistently shows that behaviour is shaped less by formal instruction and more by social modelling, incentives, and repeated experience (19)(20).
This is why ethics framed purely as compliance so often falls short. Policies can define boundaries, but they cannot teach people how to recognise when judgement is being nudged, narrowed, or quietly deferred.
As research in Behaviour & Information Technology makes clear, AI systems can produce more or less ethical outcomes depending on how they are designed, but they cannot be ethical decision-makers in their own right (13).
That responsibility does not sit with the system. It sits with how people are prepared to work alongside it.
AI literacy is not a problem leaders can delegate away.
In complex systems, what is described as “human oversight” can easily devolve into rubber-stamping. As mentioned before, research shows this is most likely when systems appear reliable and workloads are high (23)(24).
Leadership matters not because leaders need to be closest to the technology, but because they shape the environment in which it is used.
Leaders set the standards teams work to. They legitimise questioning, or subtly discourage it. They also signal whether professional judgement is valued, or whether efficiency and consistency are rewarded at its expense.
These signals are rarely explicit. They show up in what is prioritised, what is challenged, and what is allowed to pass without discussion.
This is not about slowing innovation, but stewarding professional responsibility in systems that increasingly shape how decisions are framed, justified, and made.
Digital literacy helped people use systems. It focused on competence, consistency, and efficiency.
AI demands something different: it requires people to interrogate systems.
Research shows that increased AI use is associated with cognitive offloading and declining critical thinking over time (7)(9). At the same time, AI systems often present outputs with unwarranted confidence, and users consistently struggle to detect when that confidence is misplaced (16)(17).
For people responsible for learning, capability, and standards, this combination is concerning. It means that even capable, well-intentioned professionals can begin to defer judgement not because they lack skill, but because the system quietly changes how thinking work is distributed.
Treating AI literacy as a simple skills upgrade misses the point.
What’s being reshaped is not just task execution, but judgement, accountability, and professional agency.
This distinction is explored in more depth in an earlier piece by Oppida’s Founder and CEO, Bianca Raby: AI literacy is not just advanced digital literacy.
For organisations responsible for public trust and professional standards, adoption is the wrong starting point.
Before tools, organisations need shared language about what AI is and is not. They need explicit norms around judgement and accountability. And they need learning experiences that build discernment, not dependency.
Safety-critical industries offer a useful parallel. In aviation and healthcare, systems are designed on the assumption that humans and technology will interact under pressure. Responsibility for meeting requirements rests with system designers, not with the technology itself (16)(18).
These principles translate directly to learning design. If judgement is being shaped by AI-enabled systems, learning must be deliberately designed to protect, exercise, and strengthen that judgement rather than simply to accelerate adoption.
It is easy, and often expected, to ask how quickly AI can be adopted, but the more consequential question is:
How well are we helping our people judge it?
Judgement determines whether AI supports professional responsibility or undermines it, because AI doesn’t just change what people do; it changes how they decide.
And that makes AI literacy a leadership and learning design responsibility, whether it is named that way or not.
If your organisation is grappling with AI literacy as more than a tools or compliance issue, you’re not alone. Many organisations are realising that what’s missing isn’t another platform or policy, but shared understanding, ethical judgement, and space for meaningful conversation.
Oppida’s AI Literacy for Everyone is a suite of modules designed for exactly this context. It supports organisations to build a common language about AI, strengthen professional judgement, and provide learning experiences that enable thoughtful, responsible use in AI-enabled environments.
Not tool training but learning design, done properly.