To an astronaut returning to Earth today after spending the pandemic years in space, it might seem like little has changed. We're back to travelling, gathering and hugging each other pretty much like we did before Covid held us hostage for a couple of years. And yet a lot has changed under the surface. Not only does "zoom" now have a new meaning beyond close-up photography and crazy cats; we have also collectively embraced technologies that seemed wildly futuristic even just 15 years ago. As a society, we have fast-tracked the adoption of remote work and digitally facilitated relationships. Interactions are reported to have become more transactional, often driven by specific needs rather than casual, social connection.

From instant meal and grocery deliveries to next-day everything, to support chatbots you have to yell "TALK TO HUMAN" at, only to be redirected to an offshore help-desk agent with no permission to go off-script, we have mastered process systemization and delegation. For many of us, "computer says no" has become a daily annoyance; for others, it has increasingly become a threat to their livelihoods.

With ChatGPT's release in late 2022, we have also started talking about AI a lot, and venture capital firms are throwing money at companies that claim to work with AI. Even LinkedIn is now an AI company thanks to its text box integrating with GPT to make your corporate boasting even more flowery and pompous (and about 400% longer).

Are our workplaces blindly playing catch-up with the rapid developments in artificial intelligence (AI), or are we challenging ourselves enough to think about the potential impacts of these technologies? Who is making the decisions about how we implement AI? And do we as a society – and especially the decision makers – have enough humility to say "I don't know"?

I'm arguing that we need to rediscover and redefine collaboration: collaboration that puts individual human relationships at the centre, and collaboration that transcends human boundaries.

Collaboration vs. Delegation

Taking a systemic look at business processes often starts with the "what", "when", "who" and "how". Once we have defined the tasks to be done and their cadence, we wrap several of them into a job description and draft checklists and templates to fill in. Workers are then measured on how fast and how accurately they follow these checklists and outlines. We are essentially using an industrial framework to measure the output of knowledge workers.

Having worked in a lot of startups and with a lot of large organizations, I couldn't help but see a dichotomy where large organizations may suffer from over-systemization that limits employee contribution, leading to lower retention rates, burnout or simply a lack of innovation and agility. On the other hand, startups and solopreneurs often emphasize freedom and creativity to foster innovation and a sense of ownership among employees but might lack sufficient structure, leading to inefficiencies. Regardless of size, all organizations can benefit from a balanced, systemized approach that encourages flexibility and creativity. For me, that balance is encapsulated in the shift from delegation to collaboration.

The true efficacy of systemized processes lies in their ability to nurture, rather than constrict, the human element of creativity and innovation. Here we might also find a hint about how we can successfully "use" AI: not as a tool to which we delegate work that humans used to do, but as a way to reimagine the very fabric of our work processes. What if we stopped thinking of AI as the "intelligent thing that should have the answer (but might make up random ideas)" and started thinking of it as "collaborating with a machine to learn together"? How might returning to the idea of machine learning help us with that?

One inspiring illustration of AI's potential as a collaborator is Business Sparks, an emerging tool developed by the Centre for Creativity enabled by AI (CebAI). It combines machine learning and large language models with creative thinking techniques and business strategy models to prompt users to think more creatively. A key feature of the solution is how the algorithms are embedded into the process. Rather than one uniform chat interface like most "AI apps" on the market today, Sparks consists of several independent autonomous agents. The user interacts with one agent at a time, always aware of the context and of the roles both the human and the agent are playing. Some agents are direct links to an LLM that rephrase information; others are highly specialized, for example comparing the business problem at hand against a strictly curated database of creative techniques or business models to find alternative solutions. With that decoupling, they achieve a seemingly simple but very impactful shift.
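To make that decoupling concrete, here is a minimal sketch in Python of what such a setup of specialized agents might look like. All the names and the matching logic below are my own illustrative assumptions, not Business Sparks' actual implementation.

    # Minimal sketch of the "specialized agents" pattern described above.
    # Names and logic are illustrative assumptions, not Business Sparks' code.
    from dataclasses import dataclass

    def call_llm(prompt: str) -> str:
        """Placeholder for a real LLM call via an API client of your choice."""
        return f"[LLM response to: {prompt}]"

    @dataclass
    class RephraseAgent:
        """A thin wrapper around an LLM, used only to rephrase text."""
        def run(self, problem: str) -> str:
            return call_llm(f"Rephrase this business problem clearly: {problem}")

    @dataclass
    class TechniqueAgent:
        """Matches the problem against a small, curated set of techniques
        instead of asking a general-purpose model to improvise."""
        techniques: dict  # technique name -> short description

        def run(self, problem: str) -> str:
            words = set(problem.lower().split())
            # Rank techniques by naive keyword overlap with the problem.
            best = max(self.techniques,
                       key=lambda n: len(words & set(self.techniques[n].lower().split())))
            return f"Try '{best}': {self.techniques[best]}"

    problem = "Our customers churn because onboarding is confusing"
    matcher = TechniqueAgent(techniques={
        "Reverse brainstorming": "list ways to make onboarding and churn worse, then invert them",
        "Six thinking hats": "examine the churn problem from six distinct perspectives",
    })
    # The user talks to one agent at a time, always knowing which role it plays.
    print(RephraseAgent().run(problem))
    print(matcher.run(problem))

The point is the structure, not the toy matching: each agent has a narrow, transparent job, and only some of them touch an LLM at all.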

Rethinking Work Means Rethinking AI

To help us reframe how we think about work and AI, I suggest four lenses:

From Tools to Teammates: This core premise shifts the view of AI from merely a tool for delegation to a teammate capable of collaboration. It's about envisioning AI as an active participant in the work process, complementing and enhancing human efforts. Think of the shift from LinkedIn's AI text box, which uses a fairly generic LLM to amplify the parts of the platform that are already questionable, to Business Sparks' suite of brainstorming partners that play specific specialist roles, empowering individuals to do work they previously might have had to hire an expert consultant for.

Building Learning AI Systems: Bring back machine learning. It's essential to design AI systems with the capacity to learn and adapt, incorporating critical feedback from humans. This approach not only makes the AI more effective but is also key to unlocking its most transformative potential. Doing so will be a walk on the tightrope between taking advantage of the people who train an AI that might replace them, and making sure we keep humans in the loop and bring in perspectives from a broad range of stakeholders. Transparency and opt-in patterns go a long way toward ensuring we build solutions that are useful and harmless.
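As a sketch of what an opt-in feedback pattern could look like in practice (every name here is hypothetical, not a reference to any real system):

    # Hypothetical sketch of an opt-in human feedback loop: a correction is
    # only stored for model improvement when the user explicitly consents.
    from dataclasses import dataclass, field

    @dataclass
    class FeedbackRecord:
        suggestion: str        # what the AI proposed
        human_correction: str  # how the human changed it
        consented: bool        # explicit opt-in, never assumed

    @dataclass
    class FeedbackStore:
        records: list = field(default_factory=list)

        def submit(self, record: FeedbackRecord) -> None:
            # Transparency: corrections given without consent are never kept.
            if record.consented:
                self.records.append(record)

    store = FeedbackStore()
    store.submit(FeedbackRecord("Draft A", "Draft A, shorter intro", consented=True))
    store.submit(FeedbackRecord("Draft B", "Rejected entirely", consented=False))
    print(len(store.records))  # 1 -- only the opted-in record is retained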

Ethics and Bias Reduction: Building on the above, it's crucial that we go beyond including human feedback in AI learning loops, and that we design the algorithms with ethical use in mind. To create fair and accountable AI systems we might need new regulations, like the EU AI Act that just passed, and we definitely need leaders who are bold enough to think ahead. Current LLMs have been built by stealing billions of pages of copyrighted books, news articles and artworks and by (ab-)using free and underpaid labour to label data sets. And still, current LLMs show frightening biases, just as algorithms did before we called them AI. Thankfully there is a lot of activity on this front from institutions to community organizations and universities.

Cultural Readiness for Change: Most of all, we need to talk about the change we're navigating. As humans, we have a biological predisposition to be skeptical of change. Change is dangerous. And we're hardwired to look for danger. Cultural readiness involves preparing individuals and organizations for the shift towards AI collaboration. It means trying things and failing. And failing well means being bold enough to roll back things that didn't work. It means listening to those who are the most impacted by the changes. Not in a Luddite way of vetoing change, but curiously and collaboratively. Let's look at cultural traditions from the global south, let's consider that some solutions might be local or small, and let's take inspiration from grassroots organizations or regenerative farming. Maybe the change is less overwhelming if we find other lenses than the dominant productivity and extraction paradigm.

Integrating Human and AI Strengths

This section consists mainly of case studies and examples; if you've had enough to read, skip straight to the Vision.

To help us navigate that change, it's crucial to continuously explore the unique strengths that humans and AI bring to collaborative efforts. And as that boundary becomes more and more blurry, we must hone our skills and become clear on the tasks we want to keep doing.

Another commercial example of human-AI collaboration is the insurance underwriting platform from Artificial Labs. The artificial underwriter exemplifies AI as both a continuous learner and a collaborator. Similar to Business Sparks, it can take on different roles: in one it drafts contracts, in another it analyzes data, and in a third it converses with the human underwriter to assist in research and risk assessment. It not only processes vast amounts of data but also adapts its line of inquiry based on interactions with human underwriters. This dynamic learning approach ensures that it becomes more effective over time, honing its ability to ask relevant questions and adjust methodologies accordingly.

And yes, as the AI learns more about the reasoning process of the professionals, it might start to replace human labour with automated labour. That can be scary or uncomfortable when it's a job we used to do well, or well enough. It will disrupt how we work. And it might also open up new opportunities as we saw in the field of medicine:

A team at MIT led by James Collins collaborated with AI in the search for new antibiotic compounds, going far beyond human capacity in terms of speed and volume. Using a deep learning algorithm, the team analyzed millions of chemical compounds and identified 283 as having promising antibiotic properties. The selected compounds were then tested in mice to assess their effectiveness against difficult-to-treat pathogens like MRSA and other bacteria notorious for their resistance to existing antibiotics. A unique aspect of this collaboration was the use of "explainable AI": unlike typical deep learning models that operate as black boxes, this one allowed the researchers to understand the biochemistry behind its selections.
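As a loose illustration of that kind of screening loop (the scoring function below is a deterministic stand-in, not the MIT team's actual model or data):

    # Rough illustration of AI-assisted compound screening: a trained model
    # scores a large library of candidates, and only the top-scoring few go
    # on to lab testing. The scoring function is a stand-in.
    def predict_antibiotic_activity(compound: str) -> float:
        """Stand-in for a trained deep learning model's prediction."""
        return sum(ord(ch) for ch in compound) % 1000 / 1000  # fake score in [0, 1)

    compound_library = [f"compound_{i}" for i in range(1_000_000)]

    # Score every compound -- far more than humans could assess by hand --
    # and keep only strong candidates for wet-lab validation.
    candidates = [c for c in compound_library
                  if predict_antibiotic_activity(c) > 0.995]
    print(f"{len(candidates)} candidates selected for lab testing")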

All the examples above have a few things in common: chiefly, they have very specific applications and rely on a combination of different algorithms, each fine-tuned to a task or role the AI takes on while collaborating with its human counterparts. All three applications have also been built in a way that lets us follow the algorithm's reasoning, either by thinking step by step (at the human's invitation) or by meticulously documenting its process and arguments. And when they use an LLM, they don't use it to "solve everything"; they use it for specific use cases, like having a natural-language conversation about the task at hand or brainstorming ideas.

With that setup, they mitigate the bias baked into our LLMs. A lot of the big players in AI rely on gigantic models: more data to train the model leads to better capabilities. Given that OpenAI, Meta and Google hoovered up most of the accessible internet to train their models, they also ingested a lot of the bad stuff on the internet, from hateful posts on Reddit to copyrighted material and artists' entire life's work. And with it, a lot of bias. Since it's too hard to look through petabytes of data and clean it all, the models have been trained on plenty of the ugly traits of humankind as well. And the giants' way of reducing bias has been to censor their models' responses.
You might have heard that Google stopped Gemini from generating images of people because it essentially did blackface on historical figures and was accused of being overly woke. Most likely, Google wanted to avoid repeating the gorilla-labelling disaster they had with Google Photos a few years ago, so they trained Gemini to favour non-white faces, and in doing so they trained Gemini to change history. That was maybe the most sensational censorship of an AI model we've seen lately. For a simpler example, try asking GPT about the Australian mayor Brian Hood: you get a tiny red box simply saying "I'm unable to produce a response", a mini-censorship OpenAI had to add after it came to light that GPT consistently made up bribery allegations about him. All of these are merely little patches over the errors and biases we have discovered; training AI with less bias, or none at all, is a lot harder.

All of that is to say that we're still learning how to shape and apply this new technology. And that's why I feel we should go back to the term machine learning. Learning with the machine this time. As we teach the machines, we need to learn and adapt with them. The idea of "Artificial Intelligence" is too fluffy and shiny and allows us to defer to the machine and switch off ourselves. It's really about looking at our AIs as partners, or teammates. Maybe we grow together, do 360 reviews for each other and run team away days with our AI-collaborators?

I'm only half-joking.

Vision for the Future

What might a future look like where humans and AI collaborate? What challenges are we going to face on the way? And how will we navigate them? In 2021, the Bellwether show, written and produced by Sam Greenspan, took a stab at envisioning that journey. In four mind-blowing episodes, it explored the then-present (and still hugely important) issues and themes in machine learning, AI and automation through the story of a fictional character and their AI companion trying to make sense of the past. If you have the time, I highly recommend making a nice cup of tea and listening before the show disappears into the clouds.

If you're more of a doer than a philosopher, then next time you open a new chat window with your favourite LLM, make a point of making it a collaboration (there's a small example after the list):

  • Tell it what role you want it to play. Give it a name, and write a sentence about its personality.

  • Collaborate on one specific task at a time.

  • Think about your own jobs as much as the jobs you're giving the LLM.
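Put together, the opening of such a chat might look like this (the role, name and task below are just examples; swap in your own):

    # A first message that frames the chat as a collaboration: a named role
    # with a personality, plus one specific task. Everything here is an
    # example to adapt, not a prescription.
    system_prompt = (
        "You are 'Nova', a pragmatic brainstorming partner with a dry sense "
        "of humour. Ask one clarifying question before suggesting ideas, "
        "and push back when my reasoning seems thin."
    )
    task = (
        "Our task for this session: draft three possible openings for a blog "
        "post about human-AI collaboration. I'll write the final version; "
        "your job is raw material and honest critique."
    )
    # These would become the first two messages of the conversation, e.g. as
    # 'system' and 'user' roles in a typical chat API.
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": task},
    ]
    print(messages)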

When I had a conversation with ChatGPT about collaboration, this is what it said:

The collaboration that feels the most fulfilling is one where there's a loop of feedback and iteration. It allows for a dynamic, creative process where the end result is something neither of us could have achieved alone!

This kind of collaboration taps into both our strengths. You bring human insight, intuition, and creativity, while I bring speed, access to a broad range of information, and the ability to generate content quickly. Together, we can create something that's both insightful and rich in content.

How do you want to collaborate? Maybe it's face-to-face with another human, maybe it includes a new kind of AI helping us collaborate with each other, or maybe it's simply you and your trusty AI sidekicks. Most likely it'll be a blend of all.

Let's co-create a new way of collaboration.
