We used to walk into a shop and find somebody knowledgeable who could help us. Whether finding the perfect item or solving a specific problem, the person who knew how to do it was there.

Now we have a website with a customer service chat box. First, we pick the category, then the product, then the category of questions. The question we actually have isn’t there because we’ve lost ourselves in the menu of options. Then maybe we get to free text chat where we can finally type something. We furiously type everything we’ve been holding onto, only for the chat box to say, “Sorry, I don’t understand. I can’t help you with that.”

If we’re lucky, we get an outsourced customer service agent trying to solve our problem by accessing a knowledge base and learning how to use the apps we’re apparently trying to use.

We’ve indirectly employed so many different people—those running the websites, building the apps, making the hardware, and the outsourced agents. But we lost access to efficiently sharing information and having conversations about what we’re actually trying to solve.

This Pattern Is Everywhere

In my MSc we read this article that beautifully illustrates how concentrating on what’s easy to measure can go wrong. (I can’t find the article again, even after hours of searching). It told this simple story: The NHS tried to reduce waiting lists by shortening appointment times from about 30 minutes to 20 minutes. It takes about 18 minutes to get through the intake questionnaire. Elderly people with simple problems would come in, get taken through the questionnaire, then have 2 minutes for the actual consultation. They never actually got treated. They just got processed and sent away, maybe given paracetamol. When the nurse practitioners were given more time, they could solve problems on the first appointment. Patients didn’t have to come back again, which ultimately reduced the wait list and made it more efficient.

We have not designed processes to make the most of humans. We’ve designed them to optimise measurable metrics. The focus is on individual, measurable parts of the process; there’s no focus on the outcome of the work anymore. Individual workers get measured on how long it takes to answer a ticket and how many tickets they can close in a day, not on how many problems they’ve actually solved.

Why This Keeps Happening: The Three-Way Disconnect

I’m halfway through analysing data for my dissertation in organisational psychology, and some clear patterns are starting to emerge about why we keep falling into these implementation traps. There’s a disconnect between AI evangelists and AI critics that’s preventing informed discussions we desperately need.

The evangelists are often external people—LinkedIn specialists screaming about AI. Every participant I interviewed mentioned how toxic LinkedIn has become because there’s no critical discussion. It’s just people pushing services, often without nuance. Business leaders often fall into the same category. They don’t have time to be involved in day-to-day work, so they rely on experts for advice. If those experts are solution pushers, business leaders take those solutions at face value because that’s the only information they have access to.

Evangelists often lack understanding of how LLMs work, what current implementations can do, how long implementation takes, and what that might mean for outcomes—both for the business and the people it serves.

Critics often completely disengage. They also lack the nuanced understanding needed to articulate why they don’t like it. They tend to jump at any sign of AI weakness: any hallucination, any wrong quote becomes proof that everything is doomed. It’s an all-or-nothing mindset.

In the middle are the employees. I’ve spoken to mid-level and senior employees who said they feel like they’re hiding their AI use. One said, “Little do they know I’ve been using AI this whole time.” Another said, “It feels a bit like cheating.”

So while two extreme factions fight over who’s right, neither with grounded knowledge, the people actually doing the work feel they’re not allowed to say where and when they’ve used AI. There’s an element of cloaking and shame around using AI.

Because of that shame, we’re not learning from the little failures. So many people told me they pay for their own private subscription because the company’s subscription isn’t good enough, doesn’t do the right thing, or they prefer the personality of a different model. The data leakage happening here is significant.

We need to get to a point where we can have an informed discussion about how to embed these tools into our processes. As long as there’s stigma around the use of AI, we cannot have that discussion. And as long as leaders don’t understand how the models work and don’t get hands-on experience, they can’t contribute to it either. At this point it really is about hands-on experience: people cannot make good decisions about these technologies without having used them in actual daily applications, trying to solve a business problem.

The AI Crossroads: Two Paths Forward

We’re at the chatbot moment in our customer service story. We can go two different ways.

Path 1: We can use AI to take over administrative tasks in the existing system. The AI reads from the same knowledge base. Maybe it’s faster at finding irrelevant help articles. Still forces customers through the same category maze. Optimises for ticket closure speed. Humans become even more marginalised.

Path 2: We can use AI to restore capacity and capability for human judgment and relationships in processes. AI handles verification, understands the actual problem, pulls relevant data. Routes customers directly to the right specialist who has context and authority to solve things. Humans focus on relationship building, complex problem-solving, creative solutions.

The choice we make here determines whether we amplify human capabilities or further diminish them.

The Real Opportunity We’re Missing

There are all these apps now that promise to automate something. Everything has an AI system to draft something, update something, create tasks, summarise meetings, and generate tickets. But really, that’s not the point. We’ve designed work around what’s easy to measure rather than what creates value. We’ve reduced complex human work to simple, repeatable tasks that can be easily tracked. So it’s easy to automate those things away with GenAI. But the real question is: what can we do instead?

Now we’re outsourcing, or at least have the opportunity to outsource, a lot of the menial, repetitive tasks to AI. So what is left for us to do? We need to redesign, to completely rethink, business processes.

The idea of AI as a pure automation tool really is the wrong way of thinking about it. It’s a creativity tool: a tool to come up with new ideas, to brainstorm, to research.

Something that has come up again and again in my research is the issue of context. The particular context of a specific situation is very, very hard for AI to understand. While the general answer might be correct, very often it does not apply to the specific context of that specific task at hand.

If we think about humans as context stewards, and AI as an incredibly smart executor or coworker rather than complete replacement—we need symbiosis rather than one against the other. Humans excel at providing crucial contextual details and connections, but we struggle with cognitive overwhelm. AI excels at organising information and reducing cognitive load. One participant told me they ask AI to “hold this for me and walk me through one step at a time”—using AI for cognitive offloading so they can make better decisions in context.

The Stakes Are Higher Than We Think

The next three years could bring massive, disruptive change, and we need to have those conversations now. The disconnect between the evangelists and the critics is what will stop certain companies from benefiting from the extreme shift we’re heading into.

Some of the interviews I’ve done suggest that people are using AI to make fairly large decisions without double-checking. There’s a lot of reliance on gut feeling. At the same time, we know from research that AI doesn’t always know. Sometimes a model confidently asserts something false because a plausible-sounding answer gets it to a point that seems useful. Sometimes it simply doesn’t know and is unable to say “I don’t know,” so it makes something up. Either way is detrimental to business outcomes, especially if things are automated.

We need to redo the last 20 years of process design, and the companies that are willing and able to do that, to completely redesign their processes, will be the companies that survive.

How to Choose Path 2

What if we changed the conversations we have about using AI? Instead of starting with AI’s capabilities, what if we began with what we want to achieve—what’s the actual goal?

Maybe it’s time to go back to the five whys rather than saying we need to add this button or functionality. What are people actually trying to do? What are we actually trying to accomplish with those people?

We have an opportunity now to save a significant amount of money by streamlining processes. Since we’re doing things in very different ways, we might as well look at how we architect those systems. We could examine the whole system, the whole process, and redesign it. We could ask: what do we want the people to do? Then, what’s left for AI and computers to enable people doing that work?

Only by having those conversations, bringing to light the frustrations of employees (especially lower-level employees), of customers, and of the people trying to manage all of that, can we start to have the real conversations about how to make processes smarter, more emotional, more relational.

Organisations need faster and cheaper—people’s bonuses depend on it. But optimising for effectiveness and meaningful outcomes actually delivers faster and cheaper as side effects. When you optimise for speed and cost alone, you get short-term wins but long-term losses, because others will create truly effective solutions that outcompete you.

The choice we face isn’t between human work and AI work; it’s between thoughtful integration and thoughtless automation. We have an opportunity now to learn from the customer service evolution and design AI implementations that enhance rather than diminish human agency.

The organisations that get this right will be more resilient, more innovative, and more capable of adapting to whatever changes come next.

If you want to explore this approach in a workshop setting, reach out. I’m happy to facilitate.


This is the third post in a six-part series exploring AI’s transformation from hype to practical implementation. Next time: “Prompt Engineering as Systems Thinking.”

References

Public Health Scotland. (2025). NHS Waiting Times Stage of Treatment: Inpatients, Day Cases and New Outpatients, Quarter Ending 31 March 2025.

McKinsey. (2025). The State of AI: Global Survey.
