Your instinct is probably right if you're feeling the pressure to "do something" about AI whilst knowing that you and your team need time to actually learn this stuff properly.
My research across UK knowledge workers revealed that everyone feels the race, yet no one really knows what they're running towards. And it shows. BCG report that whilst 78% of organisations now use generative AI in at least one business function, only 26% successfully scale beyond pilots 1. McKinsey found that while 92% of employees want to use AI effectively, only 30% report direct CEO sponsorship 2. I found a similar pattern in my research, and most concerning was the complete disconnect between leadership and the rest of the organisation. My research suggested that a key reason for the lack of C-suite support is that business leaders often simply don't understand AI, particularly how it interacts with frontline operations, because they don't get enough time to practice with it themselves. Too often, AI strategy is outsourced to external "experts" or, worse, to tool suppliers who, by definition, lack the knowledge and experience of how the business actually runs internally.

Middle managers, HR and L&D teams are the primary enablers who can bridge the gap between strategy and execution. BCG's research suggests that organisations that succeed in AI pursue half as many opportunities as laggards, yet achieve 2x the ROI and scale 2x as many products and services 1.
Four ways of being with AI, and why some lead to capability loss
Through my research interviewing UK knowledge workers about their lived experience with AI, four distinct patterns emerged in how people relate to these systems. I call them archetypes—not personality types, but situational relationships that shift based on context, pressure, and organisational culture. Today, we'll focus on the instrumental relationships:
The Tool: AI as a colleague that enhances your capability. You maintain oversight, make strategic decisions, and use AI to handle information processing while you focus on judgment and context. The relationship is instrumental but not dependent—you're getting better at your job, not outsourcing your thinking.
The Trap: AI as a dependency that erodes your capability. What starts as efficiency gradually becomes an inability to function without the system. Skills atrophy, contextual knowledge fades, and you've traded short-term productivity for long-term vulnerability.
The Teacher and The Sparring Partner represent more relational dynamics—AI as a coach helping you develop expertise, or as a creative collaborator challenging your thinking. We'll explore these in the next post on individual learning relationships with AI.
Whether we end up using AI as an empowering tool or fall into the trap isn't an individual choice. It's largely determined by how your organisation designs AI adoption.
By now, you might have seen dozens of posts mentioning the MIT research "Your Brain on ChatGPT"3. The researchers tracked brain activity in students writing essays over four months and found that ChatGPT users showed the lowest cognitive engagement on every measure compared with those who didn't use AI. ChatGPT users relied more and more on the tool, couldn't remember what they had written about, felt reduced ownership of their work, and showed lower brain activity and fewer neural connections. Lead researcher Nataliya Kosmyna warns: "There is no cognitive credit card. You cannot pay this debt off."
The skills atrophy research extends across professions. A study of 469 preservice mathematics teachers found that AI dependency explained 91.4% of the variance in problem-solving ability, 93.4% in critical thinking, and 89.0% in creative thinking4. Research with 666 professional workers found that frequent AI usage correlates negatively with critical thinking, with cognitive offloading mediating this relationship5. The most critical finding may be that AI literacy has a positive relationship with AI dependency (β = 0.505): counterintuitively, increasing AI literacy through traditional training increases dependency rather than reducing it4.
The 70-20-10 principle: why AI adoption mirrors leadership development from 40 years ago
AI adoption, like leadership development, is fundamentally about organisational learning. Back in the 1980s, the Center for Creative Leadership researched how executives actually learn and grow 6. Their findings are now a rule of thumb: 70% of learning comes from challenging experiences and assignments, 20% from developmental relationships, and 10% from coursework and training. Whilst the exact ratios vary across studies, the principle holds: structured experiential learning and social learning are key to learning transfer; formal training alone is extremely ineffective7.
Fast forward to 2024, and BCG's research finds that organisations leading in AI adoption allocate resources the same way: 70% to people and processes, 20% to technology and data, 10% to algorithms 1. Laggards invert this, obsessing over models whilst neglecting change management, and pay the price in failed deployments.
McKinsey's 2025 research on AI and organisational learning found that experiential and social approaches achieve 65% skill retention, compared with 10% for formal training alone8. Yet most organisations still default to vendor certifications and classroom instruction whilst wondering why their £500k AI training programme yields minimal behaviour change. The organisations achieving 2x ROI aren't buying fancier algorithms—they're creating the conditions for teams to learn by doing.
Your team already knows this intuitively. When they say they need time to practice, space to discuss, and permission to learn together—that's not resistance to change, it's hunger to grow.
Why your middle managers matter more than your CIO
When managers show high agency and optimism about AI, their direct reports are nearly 3x more likely to develop the same qualities themselves. BetterUp's 2024 research tracking 12,000 workers calls this the manager multiplier effect 9.
Managers who have what BetterUp calls a pilot mindset (high agency, optimistic) are 3.6x more productive than passengers (low agency, pessimistic) and 3.1x more likely to stay at their organisation 9. This helps explain the apparent contradiction between BCG and McKinsey's research. BCG suggests leaders are well ahead of employees in AI adoption 1. McKinsey's data shows 92% of employees want to use AI effectively, whilst only 30% report CEO sponsorship 2. My research suggests the disconnect isn't about readiness—it's about which leaders we're measuring.
C-suite executives setting AI strategy are often enthusiastic but lack hands-on AI experience. Middle managers, on the other hand, are often carrying extreme workloads, under-supported, and caught between executive expectations and frontline realities. Meanwhile, frontline workers are often already experimenting with AI in their own time and are actually quite ready, provided someone creates the conditions for learning.
McKinsey found that 62% of millennials report high AI expertise, versus 22% of boomers 2. They bring native digital fluency. Enabling millennial managers to lean into shared learning, by creating space for teams to experiment safely, discussing failures openly, documenting insights, and celebrating capability development, not only helps them grow as leaders but also builds capability across the team and the wider organisation.
Organisations that develop uniquely human management capabilities, coaching over controlling and enabling over monitoring, see 34% better team performance, 21% more innovation, and 15% higher productivity 10. AI adoption strategy needs to invest in middle management capability development and employee voice before it invests in another enterprise licence or vendor partnership.
Communities of practice: where the 20% happens that makes the 70% work
Communities of practice are the 20% that training budgets consistently underinvest in: creating spaces for people to learn from each other whilst solving real problems. Not "lunch and learn" sessions. Not SharePoint repositories of best practices. Actual communities where people work together on challenges, discuss what works and what doesn't, and build shared understanding through practice.
The US federal government's AI Community of Practice includes over 12,000 members from 100+ agencies. It provides monthly training, specialised working groups, and applied challenges. The 2024 healthcare AI challenge received 140+ applications, with teams learning through real problem-solving rather than abstract instruction 11. Similar CoPs have been set up successfully in academia 12 and in commercial organisations.
What makes communities of practice effective is the contextualised social learning. University of Ottawa researchers found that undergraduate students with no programming experience created functional AI projects through experiential learning in authentic contexts 13. The key was situating problems in real-world stakeholder scenarios: students working on actual cases, such as financial intelligence officers matching names across bank reports, developed both technical skills and crucial judgment about when and how to apply AI.
In my coaching practice, I've found that simple interventions create outsized impact. Team leaders are navigating the pressure to adopt AI while dealing with understaffed teams and operational challenges. Coaching gives them a place to say "I don't know" and to leave with clarity and actionable AI homework. Simple weekly experimentation challenges, combined with dedicated space for regular reflection sessions, have led teams to become the unofficial AI champions of their whole organisation. All it took was documented, shared experimentation, where failures became learning opportunities rather than performance issues.
What gets encapsulated: the tacit knowledge that AI can't see
As AI systems learn and expand their functions, something subtle happens. Work processes that once required human coordination, contextual judgment, and accumulated expertise gradually become invisible inside automated systems. Researchers call this encapsulation—the progressive expansion of the "black box" around the relations and functions AI performs.
Here's why this matters for managers: the expertise your team has built over years doesn't always look like expertise. Through deeply embedded tacit knowledge, seasoned workers achieve results more efficiently than standard operating procedures would suggest. They know which customers need a courtesy call before the reminder letter. They spot the subtle pattern that flags a claim for review. They understand the context that makes this month's numbers meaningful.
When you automate processes without understanding these invisible practices, three things happen:
Edge cases become disasters. Sometimes bugs are funny: try asking ChatGPT 5 whether there is a seahorse emoji. But when these edge cases and flukes hit a business-critical process, the consequences stop being amusing. A Chevrolet dealership's chatbot was caught offering cars for $1 14. New York City's Microsoft-powered MyCity chatbot advised entrepreneurs that they could legally take workers' tips 15. Air Canada paid damages after its virtual assistant gave incorrect bereavement fare information 16.
Invisible work becomes visible problems. Hospital research on logistics robots revealed substantial coordination work that was never accounted for: staff clearing pathways, adjusting schedules, and providing manual assistance. This invisible work didn't disappear when robots arrived. It intensified, performed by people whose roles hadn't been redesigned to accommodate it 17.
Skills atrophy without anyone noticing. A Polish multicentre study found that endoscopists' adenoma detection rate in standard colonoscopies fell from 28.4% to 22.4% after routine exposure to AI-assisted systems 18. Accountants' reliance on automation rendered them unable to function independently when the systems failed 19. Legal profession leaders worry about how to develop future professionals when junior work is automated, because that work is the training ground where expertise is built 20.
This doesn't have to be a one-way street toward capability loss. It can be empowering and capability-building if we make the invisible visible and focus on organisational learning.
Psychological safety enables everything else
My research suggests that organisations with high organisational silence—where people don't feel safe raising concerns or admitting uncertainty—struggle with AI adoption regardless of investment. The silence manifests in several ways: people don't disclose when AI makes errors, they don't ask questions that might reveal knowledge gaps, and they don't share failed experiments that could help others avoid mistakes. Rules without trust create shadow IT—people hiding AI use, working around restrictions, or giving up entirely.
The World Economic Forum's 2024 governance playbook emphasises enabling frameworks over compliance structures. Organisations implementing enabling governance—embedded in development lifecycles, distributed accountability, adaptive rather than rigid—scale 2.6x more successfully than those with traditional compliance approaches 21.
Before you write AI usage policies, before you implement monitoring systems, before you roll out vendor contracts, invest in building the team climate where people feel safe learning in public 22. This means:
Creating explicit permission to experiment. Not just saying "innovation is important" but actually protecting time and resources for structured experimentation with clear learning objectives rather than success metrics.
Normalising failure as information. When something doesn't work, the question isn't "who's accountable?" but "what did we learn?" Document insights, share them widely, and celebrate the learning even when the outcome disappoints.
Modelling uncertainty from leadership. When managers admit what they don't know, ask questions without having answers, and visibly learn alongside their teams, it creates permission for everyone else to do the same.
This is why governance initiatives that skip the psychological safety work consistently fail—they're building on sand. The governance conversation matters, particularly in regulated industries like insurance and legal services. But governance that enables rather than constrains requires the psychological safety foundation first. You can't policy your way to a learning culture.
Breaking the silos: AI-augmented organisations are multi-dimensional
MIT's 2024 Enterprise AI Maturity Model tracked organisations across four stages 23.
- Stage 1 (28% of enterprises): preparing and experimenting, with profitability below the industry average.
- Stage 2 (≈25%): active piloting with a formal strategy, but still below-average financial performance.
- Stage 3 (≈30%): operational maturity through coordinated implementation, finally seeing above-average returns.
- Stage 4 (≈17%): transformational, with AI embedded in business models, achieving 1.5x higher revenue growth, 1.6x greater shareholder returns, and 1.4x higher return on invested capital.
Organisations that successfully moved through the stages paid coordinated attention to four core dimensions:
Technology dimension: Architecture for reusability rather than use-case-specific solutions.
People dimension: Role redesign, career pathways, cultural transformation, communities of practice—not just training.
Governance dimension: Embedding responsible AI into development lifecycles rather than creating approval bottlenecks 24.
Measurement dimension: Tracking leading indicators, not just lagging outcomes (see the sketch below). Fewer than 1 in 5 organisations systematically track well-defined KPIs for their GenAI solutions, yet KPI tracking has the most impact on the bottom line 25.
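To make the leading-versus-lagging split concrete, here is a minimal sketch in Python. The KPI names, grouping, and review cadences are hypothetical illustrations, not a prescribed framework or anything drawn from the research cited above.

```python
# Minimal sketch: separating leading (capability) indicators from lagging
# (outcome) indicators for a GenAI use case. All KPI names and cadences are
# illustrative placeholders.

kpis = [
    {"name": "weekly AI experiments run per team", "type": "leading"},
    {"name": "staff passing an independent (no-AI) task spot check", "type": "leading"},
    {"name": "lessons documented and shared in the community of practice", "type": "leading"},
    {"name": "average task cycle time", "type": "lagging"},
    {"name": "cost savings attributed to AI", "type": "lagging"},
]

# Group KPIs by type so each group gets an appropriate review cadence.
leading = [k["name"] for k in kpis if k["type"] == "leading"]
lagging = [k["name"] for k in kpis if k["type"] == "lagging"]

print("Review monthly (leading, capability-building):")
for name in leading:
    print(f"  - {name}")

print("Review quarterly (lagging, outcomes):")
for name in lagging:
    print(f"  - {name}")
```

The point of the split is cadence: leading indicators tell you early whether capability is actually being built, while lagging outcomes only confirm it after the fact.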
Organisational silos become a key limiting factor. Technology teams optimise algorithms, L&D runs training programmes, risk functions write governance policies, and finance tracks ROI. Nobody's coordinating across all four dimensions, so you get technically sound solutions that people don't adopt, governance that constrains rather than enables, and measurement that misses what actually matters.
The collaborative approach that creates Tool rather than Trap requires breaking these silos. Not through reorganisation, but through creating forums where technology, people, governance, and measurement conversations happen together. Start with one monthly cross-functional AI capability review where technology, L&D, risk, and finance discuss progress on the same use cases.
What to do now?
So what do you actually do with this?
If you're an executive:
Audit your AI spending against the 70-20-10 principle. If more than 30% of your budget goes to technology and algorithms, you've inverted the resource allocation that successful organisations use. Redirect investment from vendor contracts and model optimisation toward manager capability development, community of practice infrastructure, and workflow redesign support.
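If it helps to make that audit concrete, here is a minimal sketch in Python, assuming you can export AI-related spend tagged by category. The category names, figures, and the 30% threshold are illustrative assumptions, not a prescribed taxonomy.

```python
# Minimal sketch: check an AI budget against the 70-20-10 heuristic.
# Category names and figures are illustrative placeholders.

budget = {
    "people_and_processes": 180_000,  # manager coaching, CoPs, workflow redesign
    "technology_and_data": 240_000,   # platforms, data pipelines, licences
    "algorithms": 80_000,             # model development and tuning
}

total = sum(budget.values())
shares = {category: amount / total for category, amount in budget.items()}

for category, share in shares.items():
    print(f"{category}: {share:.0%}")

# Successful adopters keep technology and algorithms to roughly 30% combined.
tech_and_algo = shares["technology_and_data"] + shares["algorithms"]
if tech_and_algo > 0.30:
    print(f"Warning: {tech_and_algo:.0%} on technology and algorithms suggests an inverted allocation.")
```

In this illustrative example, technology and algorithms absorb 64% of the spend, which is exactly the inversion the research above warns against.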
Review your governance approach. If it functions as an approval bottleneck rather than advisory support, you're constraining adoption. Enable experimentation within clear boundaries rather than requiring permission for every use case.
If you're in L&D or HR:
Identify three millennial managers (ages 35-44) who show high AI readiness. Invest in developing them as internal champions—not through generic training but through scaffolded leadership development that emphasises coaching over controlling, enabling over monitoring. Their impact will cascade at 3x through the manager multiplier effect.
Launch a small community of practice focused on a real workflow challenge. Not a lunch-and-learn. Not a SharePoint site. An actual working group that meets regularly, experiments with AI on authentic problems, documents what works and doesn't, and builds shared capability through practice. Start with 5-10 people, commit to 8 weeks, measure learning, not just productivity.
Stop buying generic AI training. Redirect that budget toward manager coaching, protected experimentation time, and documentation infrastructure.
If you're a manager:
Create explicit permission for your team to experiment. This doesn't mean "innovation time" where people work on pet projects. It means structured experimentation on real work challenges with clear learning objectives. Protect time weekly for people to try AI approaches, discuss what worked, and document insights.
Model uncertainty yourself. The next time you encounter an AI-related decision you're unsure about, say so publicly. Ask your team what they think. Learn alongside them. This creates the psychological safety that enables everything else.
Establish "think first" practices. Before people turn to AI for drafting, analysis, or problem-solving, they practice the skill independently. This prevents the skills-atrophy cascade and, paradoxically, improves AI outputs—people who think through problems independently write more effective prompts.
If you're not sure: Give me a call. I'd love to talk about how we might empower your organisation on its path to AI adoption.
Next article: measuring what actually matters
You can't build capability unless you're measuring it. But most AI metrics track efficiency gains or usage numbers. In the next article, we'll explore what happens when we measure what actually matters: not just faster outputs, but whether people maintain independent problem-solving capability. Not just reduced task time, but whether teams develop better judgment. Not just cost savings, but whether organisations build sustainable competitive advantage through human-AI collaboration.
The organisations winning at AI aren't measuring productivity. They're measuring capability. And that changes everything about how you design adoption.
This is the fourth post in a six-part series exploring AI's transformation from hype to practical implementation. Next week: "Measuring What Matters: Beyond Efficiency Metrics."
Footnotes
1. Boston Consulting Group. (2024). AI adoption in 2024: 74% of companies struggle to achieve and scale value. https://www.bcg.com/press/24october2024-ai-adoption-in-2024-74-of-companies-struggle-to-achieve-and-scale-value
2. McKinsey & Company. (2025). AI in the workplace: A report for 2025. https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work
3. Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. arXiv preprint arXiv:2506.08872. https://arxiv.org/abs/2506.08872
4. Zhang, Y., et al. (2025). Exploring the relationship between AI literacy, AI trust, AI dependency, and 21st century skills in preservice mathematics teachers. Scientific Reports, 15. https://doi.org/10.1038/s41598-025-99127-0
5. Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), 6. https://doi.org/10.3390/soc15010006
6. McCall, M., Lombardo, M., & Morrison, A. (1988). The lessons of experience: How successful executives develop on the job. Lexington Books.
7. Johnson, S. J., Blackman, D. A., & Buick, F. (2018). The 70:20:10 framework and the transfer of learning. Human Resource Development Quarterly, 29(4), 383–402. https://doi.org/10.1002/hrdq.21330
8. McKinsey & Company. (2025). The learning organization: How to accelerate AI adoption. https://www.mckinsey.com/capabilities/strategy-and-corporate-finance/our-insights/the-learning-organization-how-to-accelerate-ai-adoption
9. BetterUp. (2024). Why leadership is your secret weapon for AI adoption. https://www.betterup.com/blog/leadership-secret-weapon-ai-adoption
10. Intel. (2024). Successful AI adoption: Managers as change leaders. https://www.intel.com/content/www/us/en/it-management/intel-it-best-practices/ai-for-managers.html
11. U.S. General Services Administration. (2024). Artificial intelligence community of practice. https://www.gsa.gov/technology/government-it-initiatives/artificial-intelligence/ai-community-of-practice
12. Columbia University. (2024). AI: Community of practice. https://etc.cuit.columbia.edu/AI-community-of-practice
13. Telfer School of Management. (2025). AI meets experiential learning: Turning the impossible into reality. https://telfer.uottawa.ca/en/telfer-knowledge-hub/experiential-learning/ai-meets-experiential-learning/
14. Car Dealership Disturbed When Its AI Is Caught Offering Chevys for $1 Each.
15. Lecher, C., Honan, K., & Puertas, M. (2024, April 2). Malfunctioning NYC AI chatbot still active despite widespread evidence it's encouraging illegal behavior. The Markup. https://themarkup.org/news/2024/04/02/malfunctioning-nyc-ai-chatbot-still-active-despite-widespread-evidence-its-encouraging-illegal-behavior
16. Proctor, J. (2024, February 16). Air Canada found liable for chatbot's bad advice on bereavement rates. CBC News. https://www.cbc.ca/news/canada/british-columbia/air-canada-chatbot-lawsuit-1.7116416
17. Tornbjerg, K., Kanstrup, A. M., Skov, M. B., & Rehm, M. (2021). Investigating human-robot cooperation in a hospital environment: Scrutinising visions and actual realisation of mobile robots in service work. In Proceedings of the 2021 ACM Designing Interactive Systems Conference (pp. 381–391). Association for Computing Machinery. https://doi.org/10.1145/3461778.3462101
18. Budzyń, K., Romańczyk, M., Kitala, D., Kołodziej, P., Bugajski, M., Adami, H. O., Blom, J., Buszkiewicz, M., Halvorsen, N., Hassan, C., Romańczyk, T., Holme, Ø., Jarus, K., Fielding, S., Kunar, M., Pellise, M., Pilonis, N., Kamiński, M. F., Kalager, M., Bretthauer, M., … Mori, Y. (2025). Endoscopist deskilling risk after exposure to artificial intelligence in colonoscopy: A multicentre, observational study. The Lancet Gastroenterology & Hepatology, 10(10), 896–903. https://doi.org/10.1016/S2468-1253(25)00133-5
19. Rinta-Kahila, T., Penttinen, E., Salovaara, A., Soliman, W., & Ruissalo, J. (2023). The vicious circles of skill erosion: A case study of cognitive automation. Journal of the Association for Information Systems, 24(5), 1378–1412. https://doi.org/10.17705/1jais.00829
20. AI: Everything will change – including training. Law Gazette.
21. World Economic Forum. (2024). Research finds 9 essential plays to govern AI responsibly. https://www.weforum.org/stories/2025/09/responsible-ai-governance-innovations/
22. Prosci. (2024). AI adoption: Driving change with a people-first approach. https://www.prosci.com/blog/ai-adoption
23. MIT Sloan. (2024). What's your company's AI maturity level? https://mitsloan.mit.edu/ideas-made-to-matter/whats-your-companys-ai-maturity-level
24. AIMultiple. (2024). AI center of excellence (AI CoE): Meaning & setup. https://research.aimultiple.com/ai-center-of-excellence/
25. McKinsey & Company. (2024). Moving past gen AI's honeymoon phase: Seven hard truths for CIOs to get from pilot to scale. https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/moving-past-gen-ais-honeymoon-phase-seven-hard-truths-for-cios-to-get-from-pilot-to-scale