So it's 2025, and many of us are slowly easing into the new year. We've wrapped up the holidays, maybe even had an extra day off before returning to work, and might want to think about what lies ahead. Around this time, a lot of us set big resolutions and then promptly break them - the year starts with a failure right away. Instead, I think of the turn of the year as an opportunity to reflect: looking back and looking forward. If you haven't yet taken the time to reflect on the last year, I can highly recommend setting aside an hour or two for yourself and completing the Year Compass, a fantastic free resource.
I'm halfway through an exciting new venture. Last September I started an MSc in Organisational Psychology to explore how current technological changes are reshaping how we live and work with each other. I'm excited to bring more of these insights to you and your organisations. You'll notice I might cite more scholarly articles in the future. And I'll need your help - more on that later.
Let's take a quick look back. 2023 was our collective OMG moment with ChatGPT - that instant when everyone, from CEOs to schoolchildren, realized AI wasn't just science fiction anymore. We crossed a point of no return, and the hype train left the station at full speed. You might remember all the ads and "experts" trying to sell the "Best 1000 Prompts for ChatGPT". 2024 became the year of practical experimentation: chatbots popped up everywhere, marketing teams churned out auto-generated content, and almost every app suddenly had an AI assistant that would help you rewrite copy. Looking at you, LinkedIn. These clumsy and hurried uses of AI reflect how little we understand about the potential of GenAI.

Last year, we also got familiar with that uncanny feeling of AI-generated images. You know the ones - they look almost right, but there's always something a bit off about them, a distinctive style you can't quite put your finger on. I had the pleasure of teaching people from all walks of work and life how to be creative with AI, and still, a lot of it came back to using AI for brainstorming, because it just isn't that creative itself (yet?). Meanwhile, video avatars quietly crossed the threshold from "obviously fake" to "good enough" for corporate training and presentations. I got to do a research project for a client comparing dozens of video avatar platforms, and the spread was shocking. From non-existent privacy protection to simply unusable video, only two platforms made the cut, thanks to respecting privacy and data security while delivering good-quality videos: Synthesia and close runner-up Elai. Undoubtedly there will be more and better platforms coming up. Deepfakes became something anyone could create with a few clicks, and they have already had their prime time in the US election.
Now, as we enter 2025, we're moving beyond these individual tools toward something more profound: AI that acts on our behalf, making decisions and taking actions across our organisations. It's no longer about experimenting with isolated tools - it's about implementing AI throughout our entire workflow, transforming how we operate at a fundamental level. Google demoed Gemini 2.0 as a fully multi-modal model that can reason by itself, make plans and execute them step by step, even take action on your computer screen and check its own work before it replies. Suddenly, AI can do things like editing images step by step while retaining parts of the previous image, or executing tasks in real time on your behalf - something that was almost impossible for a long time. Even Panasonic is entering the picture by partnering with Anthropic (Claude) and investing millions in both a platform and individual offerings that focus on AI-powered wellbeing, health and care solutions. Meanwhile, Microsoft launched Copilot Agents last autumn. Now, specialised chatbots grounded in the specific data of your team, department or company are available to every business with an Office 365 subscription. They can pop up on your website, or collaborate in teams just like a human would. None of these examples is necessarily technology that didn't exist before, but the ease of access is a huge shift. This isn't just experimental anymore. The last months of 2024 have started to show us where this is all heading: from isolated tools to seamless collaboration between specialised AI agents that can actually get things done.
To help us with this shift, we might want to return to a question Martin Heidegger asked in the mid-20th century: What is the essence of technology? Not the mechanics of how it works, but the ideas it brings with it - the ways of seeing the world. Heidegger traced this back to a fundamental shift in how we relate to nature. Where once we experienced the world as a place of wonder, working with and attuning to it, we gradually moved toward seeing everything as measurable, predictable, and ultimately - extractable.
He used the example of a river. Before technology, a river was just a river - an untamed force of nature, many things in one. Once we place a hydropower plant there, we transform its whole being into a power resource, something to be harnessed and extracted. This mindset of extraction, Heidegger warned, wouldn't stop at nature. Everything would become what he called "Bestand" - resources waiting to be extracted, including human labour.
Looking at the world today, we might see what Heidegger foresaw everywhere, from the gig economy to subscriptions for everything to fast fashion. Human labour is extracted for "value" just as we extract resources out of the ground. Here's where Heidegger's warning becomes particularly relevant. For decades, we've trained people to work like robots - closing tickets fast, responding to notifications, and thinking in isolated processes. We've gotten so good at systemising our processes that we've limited our ability to think in broader contexts. We've turned humans into extractable resources, valuable only for their specific outputs. For example, when I did my undergrad, I studied to be a "Designer" - someone who could look at a problem and solve it with whatever tool or solution was necessary. Nowadays we split design into dozens of micro-specialisations: the UI Designer who makes buttons pretty, the UX Designer who decides where the buttons should go, or the UX Researcher who talks to people to learn what buttons are needed.
Now AI is becoming better at these menial, repeatable tasks than the average human. And for the first time in modern history that includes – even focuses on – white-collar work. We see it in customer service, in content creation, in data analysis. OpenAI just announced they are now confident they know how to build AGI, and Elon Musk and others are even building humanoid robots to handle the manual tasks we've relegated ourselves to in the name of efficiency. The more specialised we've become, the more replaceable we are in an AI-driven world.
But Heidegger suggested that within this danger lies hope. The Greeks, he reminded us, thought of technology as "techne" - art, the revealing of something new. Not just aesthetics, but the art of movement, poetry, condensing meaning in ways that haven't existed before. Art in the Greek sense of techne is the mastering of the creation of meaning that hasn't been seen before, an unveiling or revealing of a natural truth. As AI takes over the extractable parts of our work, we might find space to return to this original sense of art.
While AI can generate images and text with impressive speed, it operates within the paradigm of extraction and probability - creating average, repeatable outputs based on existing patterns. It cannot, in the Greek sense, create art that reveals new meanings, that moves us in unprecedented ways, that makes us think about things differently. After all, it has been trained on the material we produced in the name of extraction and efficiency. It has read more marketing material and commercial output than it has read philosophy, and it simply can't – yet – get bored and play around aimlessly to come up with new ideas.
This might be the real opportunity of 2025. As AI handles more and more systematic, extractable tasks, we might find ourselves free to explore what it truly means to be human. Not in the realm of efficiency and extraction, but in the space of creation, failure, and exploration - doing things because they matter, not just because they're valuable.
So, will 2025 bring us the first wave of hiring freezes and layoffs of white-collar workers? Will it make our work more interesting by taking care of the boring tasks? Will it make us face what it means to live in a post-truth world? Only time will tell.
What questions are you holding as you enter 2025? I've added a short three-question survey - I'd love to hear your thoughts. Your answers will directly influence my MSc research project, so thank you in advance for sharing. And if you'd like to explore what these developments might mean for you and your team, reach out. I'm designing new talks and workshops not just about working with AI, but about rediscovering what makes us human in a world where efficiency alone no longer defines our worth.
Share your ideas in this 3-question survey
Here's to an artistic, connected 2025.
Some links above are affiliate links, which might get me a kickback at no cost to you, should you sign up for a paid plan.
Update: 8 Jan 2025, added three more links to current developments and fixed typos.