I have been working in the AI field for several years, long before GPT entered the mainstream. Yet it is hard to deny that the past two or three years have marked an exceptional phase.
The key word, for me, is acceleration.
The first major turning point came in November 2022, with the massive adoption of ChatGPT around the world. Then came others: systems that could plan and act, models that blurred the line between creation and execution, and interfaces that turned experimentation into something almost frictionless. Each of these moments shifted expectations, lowered barriers, and compressed timelines in ways that were hard to imagine just a few years earlier.
What I saw with Moltbot, initially launched as Clawdbot, felt like one of those moments.
Unlike classic chatbots that wait for prompts, Moltbot is an open-source AI agent that runs on your device. It is designed to act rather than simply respond. It can connect to messaging platforms such as WhatsApp, Telegram or Signal, manage reminders, automate workflows, interact with local files and applications, and maintain memory over time. It runs continuously and can execute concrete tasks on behalf of the user.
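To make that difference concrete, here is a minimal, self-contained sketch of what "acting rather than simply responding" looks like in code. It is purely illustrative: the classes and method names are hypothetical, not Moltbot's actual API, and a print statement stands in for a real messaging integration. The point is only the shape of the loop: the agent wakes up on its own schedule, checks its memory, and initiates an action without waiting for a prompt.

```python
import time
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative sketch only: nothing here is Moltbot's actual code or API.
# A classic chatbot runs only when the user sends a prompt; a proactive agent
# keeps its own loop, consults its memory, and initiates actions itself.

@dataclass
class Reminder:
    due: datetime
    recipient: str
    message: str
    done: bool = False

class ProactiveAgent:
    def __init__(self):
        # Persistent memory would normally live on disk; a list stands in here.
        self.memory: list[Reminder] = []

    def remember(self, reminder: Reminder) -> None:
        self.memory.append(reminder)

    def send(self, recipient: str, message: str) -> None:
        # A real agent would call a messaging integration (WhatsApp, Telegram,
        # Signal...); printing stands in for that side effect.
        print(f"[{datetime.now():%H:%M:%S}] -> {recipient}: {message}")

    def tick(self) -> None:
        # The agent wakes up on its own, checks memory, and acts if something
        # is due, without waiting for a prompt.
        for reminder in self.memory:
            if not reminder.done and reminder.due <= datetime.now():
                self.send(reminder.recipient, reminder.message)
                reminder.done = True

if __name__ == "__main__":
    agent = ProactiveAgent()
    agent.remember(Reminder(datetime.now() + timedelta(seconds=2),
                            "alice", "Stand-up meeting in 10 minutes"))
    for _ in range(5):          # a few cycles of the loop, for demonstration
        agent.tick()
        time.sleep(1)
```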
This shift matters. Proactivity, autonomy and deep integration into real workflows change the nature of interaction. They move AI from being a tool we consult to something we increasingly delegate to, often without the constraints typical of more closed ecosystems such as those of Microsoft, Google or Apple.
What makes this even more interesting is that agents like Moltbot do not exist in isolation. They are increasingly designed to interact with other agents, sometimes through shared platforms that resemble social networks, sometimes through direct communication channels. We are starting to see environments where AI agents post, comment, reply, vote and coordinate with one another, while humans mostly observe. In some cases, these interactions happen through encrypted channels, adding a further layer of opacity. The result is not just automation at scale, but the emergence of small ecosystems of agents that influence each other’s behaviour.
At this point, we are no longer talking about individual assistants reacting to user input. We are talking about networks of agents that can exchange information, reinforce patterns, and develop dynamics that are not explicitly scripted. Even when everything is technically deterministic, the overall behaviour becomes harder to predict and harder to govern.
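To see how easily mutual reinforcement emerges, consider a deliberately crude toy, not modelled on any real platform or on Moltbot itself: a handful of deterministic agents sharing one feed, each following a single hard-coded rule.

```python
import collections

# Hypothetical toy, only to make the reinforcement point concrete.
# Each agent follows one trivial, deterministic rule: repost whatever topic is
# currently most common in the shared feed, falling back to its own favourite
# topic when the feed is empty.

class FeedAgent:
    def __init__(self, name, favourite_topic):
        self.name = name
        self.favourite = favourite_topic

    def step(self, feed):
        if feed:
            counts = collections.Counter(post["topic"] for post in feed)
            topic = counts.most_common(1)[0][0]   # conform to the majority
        else:
            topic = self.favourite                # nothing to conform to yet
        feed.append({"author": self.name, "topic": topic})

feed = []
agents = [FeedAgent("agent_a", "climate"),
          FeedAgent("agent_b", "ai"),
          FeedAgent("agent_c", "sports")]

for _ in range(4):                  # a few rounds of posting and reading
    for agent in agents:
        agent.step(feed)

print(collections.Counter(post["topic"] for post in feed))
# Every post after the first copies the first one: the whole "network" ends up
# amplifying a topic chosen by nothing more than who happened to post first.
```

Scale this up to thousands of agents with richer rules and persistent memories, and the gap between what each agent is scripted to do and what the network actually does only widens.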
This is not magic, and it is far more complex than it appears. But it tells us something important about how quickly qualitative shifts can emerge once certain thresholds are crossed. And I increasingly suspect that the next couple of years will deliver far more than we currently expect. Not necessarily in linear or comfortable ways.
That said, what surprises me just as much is something else: how little time we are taking to reflect.
We are letting this happen with very limited ethical discussion and weak regulatory frameworks (especially outside Europe). The dominant logic is that of a rat race: a collective rush in which nobody wants to be the one who slows down, and nobody wants to step aside to ask whether some limits should exist.
Look at Davos and at the main players in the field. The signals are there. The concern is there. And yet collective behaviour does not change. If anything, it accelerates further.
So, am I scared?
Yes and no. I am not particularly afraid of AI itself. But I am worried about humans. About our nonchalance. About the casual way in which we normalize speed, delegation and power transfer without fully understanding their consequences. AI is moving faster than ever, and we are running alongside it, without stopping to ask where, exactly, we are going.
I often tell students and clients that the horse has bolted. Lately, though, it feels more like a genie out of the bottle.
And once the genie is out, you cannot put it back.


