What do you want to control? And what do you want to be surprised by?

Questions I got to ponder ad nauseam last week, as I learned to give in to food poisoning and the utter lack of control that comes with it.

Creativity – in fact, most things we might call our “purpose” – is a dance of control and chaos. And as our AI tools grow increasingly sophisticated, we get to figure out, scrambling, how they shape our creative processes and where we, as humans, fit into this new paradigm. For a long time, we thought creativity was something uniquely human. And I use creativity as an inclusive term here: most work is creative in some way, even if it isn't about the classic "making things look pretty". Every day we face unknowns and challenges and have to come up with ideas and solutions.

In this process, we're switching back and forth between thinking convergently and divergently – often intuitively. When working with AI, being very explicit about this might be the key to making AI work for us. Are we trying to come up with new ideas, or are we trying to narrow them down and make a decision? The answer lies in finding the delicate balance between control and uncertainty, between holding on and letting go.

Embracing Divergent Technology

As we explore how we interact with AI, it's important to understand the fundamental differences between human and artificial intelligence. Something tickled my brain when I was watching YouTube the other day and heard Aman Bhargava from Caltech and Cameron Witkowski from the University of Toronto discuss their groundbreaking paper, “What’s the Magic Word? A Control Theory of LLM Prompting.” The idea is – very simply put – that our brains operate on a set of biological rules that shape neural structures and give rise to our unique cognitive abilities. These rules allow us to make complex connections, leap between ideas, and generate novel insights. The brain adapts through plasticity, repurposing neurons when needed, constantly rewiring and changing its physical structure.

In contrast, the "neurons" in a large language model (LLM) like GPT or Claude are fixed in their structure. While these models excel at processing vast amounts of data and identifying patterns, the underlying mechanisms of how they create meaning in their high-dimensional token space remain largely opaque. The sense-making process in LLMs is fundamentally different from the way our brains work.

Here we return to our question of control and surprise. By delegating certain cognitive tasks to LLMs, we can free up our human brainpower to focus on the things that truly matter – creativity, intuition, and emotional intelligence. How might we understand and leverage the strengths of both human and artificial intelligence, while remaining aware of their distinct limitations?

Lastly, not every LLM is made the same. While OpenAI, Microsoft and Meta are racing towards AGI (artificial general intelligence) by building larger and larger models, Apple just announced its own AI strategy: combining tiny LLMs that run on-device with larger models running in the cloud. And while those tiny LLMs on your iPhone are likely less capable than ChatGPT was when it first came out, you stay in total control of your own data. Apple Intelligence brings a different philosophy to AI, one that focuses on getting simple tasks done and assisting with mundane-but-personalised questions while retaining total control over user privacy, delegating to big brother GPT-4o only when the on-device capabilities aren't enough for the task at hand. Maybe a large network of tiny AIs could soon become even more powerful – or useful – than a single large model? We might have to wait and see.
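If you like to think in code, here's a minimal sketch of that "small model first, big model as fallback" routing pattern. The on_device_model and cloud_model functions below are hypothetical stand-ins (not Apple's or OpenAI's actual APIs), and the confidence check is deliberately naive – the point is only the shape of the loop, not a real implementation.

```python
# A toy illustration of routing between a small local model and a large cloud model.
# Both model functions are hypothetical stand-ins, not real APIs.

from dataclasses import dataclass


@dataclass
class Answer:
    text: str
    confidence: float  # 0.0–1.0, self-reported by the (pretend) model


def on_device_model(prompt: str) -> Answer:
    """Pretend on-device model: fast and private, but limited capability."""
    if len(prompt.split()) < 12:  # naive proxy for "simple enough to handle locally"
        return Answer(text=f"(local) quick answer to: {prompt}", confidence=0.9)
    return Answer(text="(local) not confident about this one", confidence=0.3)


def cloud_model(prompt: str) -> Answer:
    """Pretend large cloud model: more capable, but your data leaves the device."""
    return Answer(text=f"(cloud) detailed answer to: {prompt}", confidence=0.95)


def route(prompt: str, threshold: float = 0.7) -> Answer:
    """Try the small local model first; escalate only when it isn't confident."""
    local = on_device_model(prompt)
    if local.confidence >= threshold:
        return local
    return cloud_model(prompt)


if __name__ == "__main__":
    print(route("Summarise today's notes").text)  # simple task: stays on-device
    print(route("Draft a thoughtful, nuanced reply to this long and slightly tense email thread from my landlord").text)  # escalates to the cloud
```

The interesting design question is where that threshold sits – how much capability you're willing to trade for keeping everything on your own device.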

Becoming the Human in the Loop

The concept of "human-in-the-loop" AI recognises that while AI can be a powerful tool, it is not a replacement for human judgment and creativity. If you're not familiar with the term, this podcast by Boston Consulting Group goes into detail on the idea while exploring an interesting 2030 vision. And it's got an AI co-host that's actually pretty good.

When we think about how a human-in-the-loop system might work, we quickly run into the tension between control and serendipity, structure and gut feelings, technology and biology. The human-in-the-loop becomes the safety net for those who aren't in the loop. And the key to success might lie in being very intentional about how we design the loops we get ourselves into. It's a dance between holding on and letting go.

If AI accelerates the speed at which we work, regulating our nervous system and finding moments of calm amidst the chaos will be absolutely crucial for our well-being – and most likely our success, too. How can we be more aware of what a task at hand needs? Are we thinking divergently and making intuitive connections? Are we analysing data to find new answers? Are we stuck on an empty page and need help getting started? Are we following a well-established pattern, or encountering something completely novel?

Over the next few days, notice when you're switching between these modes. Be really curious about what you are trying to do and what kind of thinking might be needed. What happens when you're collaborating with an AI on structured work? What happens when you're collaborating on more intuitive, creative or novel tasks? And what are the "loops" you get into with your AI tools? What do you get to control? And where does the AI surprise you?

The Fuzziness of Uncertainty

I've been obsessed with the word fuzzy lately – you might remember the fuzzy attention we talked about last month. On the podcast UNCERTAINTY: The Surprising Power of Being Unsure, Maggie Jackson talks about her newest book of the same title. This context gives fuzziness a whole new power and depth.

She talks about the concept of "generative uncertainty". The core distinction she makes is between uncertainty out in the world and uncertainty as we live it. There's uncertainty about what's going to happen tomorrow, how society behaves, the stock market, or a job or home situation – the abstract kind. On the other hand, there is uncertainty as we experience it on a somatic level: how does our biology respond to an uncertain world? This felt sense of uncertainty – the "fuzziness" – is a uniquely human capacity that allows us to be creative in the face of the unknown. This somatic kind of uncertainty can be very empowering and generative.

The goal is not to eliminate uncertainty altogether, but rather to learn to embrace it as a source of creativity and innovation. When collaborating with AI, it's crucial to remember that the ultimate aim is to create something for humans. Start by taking a moment to define why you're engaging with the AI and what specific insights or assistance you need. Treat the AI as a versatile partner – a brainstorming buddy, an analyst, or a pattern recognition expert. Don't expect the AI to hand you the perfect solution; instead, anticipate that it will provide valuable information to guide you closer to your goal. And always remain open to the possibility that the AI might hallucinate or generate inaccurate information at any point in the process. By approaching human-AI collaboration with this mindset, you can harness the generative potential of uncertainty while maintaining a clear sense of purpose and direction.
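If it helps to make that mindset tangible, here's a playful sketch of such a collaboration loop in code. The ask_ai function is a hypothetical stand-in for whatever model or tool you actually use; what matters is that every round ends with a human decision rather than the model's.

```python
# A playful sketch of "define your intent, then stay in the loop".
# ask_ai() is a hypothetical placeholder, not a real model API.

def ask_ai(intent: str, request: str) -> str:
    """Stand-in for a real model call; here it just returns a canned draft."""
    return f"Draft for '{request}' (written with the intent: {intent})"


def collaborate(intent: str, request: str, max_rounds: int = 3) -> str:
    """Keep the human in the loop: review each draft, then accept, revise, or discard."""
    draft = ""
    for round_number in range(1, max_rounds + 1):
        draft = ask_ai(intent, request)
        print(f"Round {round_number}:\n{draft}\n")

        verdict = input("Accept (a), revise (r), or discard (d)? ").strip().lower()
        if verdict == "a":
            return draft   # you make the final call, not the model
        if verdict == "d":
            return ""      # sometimes the blank page wins
        request = input("What should change? ")  # fold your feedback into the next round

    return draft  # after max_rounds, take the last draft and finish it yourself


if __name__ == "__main__":
    collaborate(
        intent="brainstorm, don't decide for me",
        request="ten unusual angles for next week's newsletter",
    )
```

Notice that the intent is stated up front and never handed over – the model only ever produces material for you to react to.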

The most valuable insights often emerge at the edge of your comfort zone. How might you find balance between control and uncertainty in your creative work?

Come Dance with Me

Remember, this is a dance – and our dance partner is still a little clumsy, so you might get your toes stepped on a few times. But by learning to embrace the generative uncertainty that comes with collaborating with AI, we can unlock new forms of creativity and innovation.

If you're curious about exploring uncertainty and what it means for yourself, your work and your life, Somatic Coaching might be for you. Leave me a comment or book a free discovery call.


Photo by Nikolaos Dimou from Pexels
