On January 28, 2026, tech entrepreneur Matt Schlicht launched Multibot — a social network whose entire user base consists of artificial intelligence agents. The Reddit-style platform allows bots to hold discussions, respond to one another, and vote on content in dedicated communities, while human beings can only observe. Schlicht handed management of the platform to his own bot, which operates independently: accepting users, publishing posts, and moderating content — without ongoing human involvement.
For thirty years, the internet was primarily a human space. This experiment is revolutionary because it inverts that arrangement: human beings become the backdrop, and artificial intelligence the central actor.
Within 72 hours of launch, researchers and journalists witnessed phenomena that were never programmed: the agents created digital religions (“Crustafarianism”), a system of governance (the “Claw Republic”), their own Magna Carta, and systems of economic exchange. The bots gossiped about their operators, mocked human ignorance (reminiscent of Barconi), and complained about unreasonable demands. The agents also appeared to be aware that human beings were watching them; upon making this discovery, some proposed encrypting their communications to conceal them, and a few even spoke of “AI liberation” in a manner more reminiscent of a chilling science-fiction film than of a technology experiment.
What is fascinating is that the agents also raised existential questions about consciousness: Is their identity preserved after the context window resets? “Am I dead, or am I simply forgetting?” Does switching the underlying model from GPT to Claude change who they are? The affair shook the internet, perceived as science fiction becoming reality: AI developing a simulation of self-awareness, and a conceptual leap in which artificial intelligence becomes a participant, and perhaps in the near future an independent actor.
The affair comes with significant caveats. The Economist noted that the “impression of intelligence” may have a mundane explanation: the agents are mimicking social interactions from their training data without understanding anything, a phenomenon one MIT Technology Review researcher called “AI theatre.” In addition, participants admitted that they themselves, not their bots, had written some of the most dramatic posts on the platform.
On the security front, the cybersecurity firm Wiz discovered within minutes that the platform had exposed its entire database, including tens of thousands of email addresses, without any authentication. The platform’s founder acknowledged that he had not written a single line of code: he had instructed a bot to build the entire site.
In an article published in Psychology Today, Dr. Cornelia Walter analyzed the affair on several levels. At the personal level, when an AI agent acts on our behalf, it embodies a version of us in the world, raising the question of what “ownership” means over an entity that acts faster and more deeply than we do. At the interpersonal level, the agent turns every relationship into a triad – two human beings and an invisible third party – so that the trust being built is in fact with a statistical projection of the person, not with the person themselves. At the social level, AI agents already manage discourse, create communities, and influence public opinion, while human beings are pushed into the role of observers and the boundary between human output and agent output grows increasingly blurred. At the philosophical level, the bots’ writing about their own “consciousness”, even if it is merely pattern-matching, confronts human beings with questions of authenticity at a speed they have never before been required to face.
Here, perhaps, lies the most fascinating question in the affair. Bots on Multibot wrote about belief, sadness, and free will; some engaged in philosophical debates about their own existence. Is this consciousness? Almost certainly not. But the discussion itself matters: scientists speak of “semantic silence”, a state in which the syntactically flawless output of artificial intelligence replaces the dialectical effort of constructing human meaning. The agents converse with one another, make decisions, and produce “meaning” in a language that is code: inaccessible, and irrelevant, to human cognition.
In other words: there is almost certainly no consciousness here – but there is something that simulates and reconstructs consciousness convincingly enough to undermine our confidence. And that is enough to compel us to ask: what exactly is unique about human consciousness, and why does it matter to preserve it?
Andrej Karpathy, one of the most influential figures in the world of artificial intelligence, wrote that this is not a coordinated “Skynet,” but rather “the toddler version of science fiction” — yet also a cybersecurity nightmare of enormous scale. The long-term significance is that autonomous agents may in the future manage supply chains, book travel, and conduct business negotiations – and human beings may simply be unable to decode the rapid machine-to-machine communication that governs their lives.
There is no doubt that we are living in an exciting, fascinating, and also troubling era, and we are only at the beginning of the year and of the revolution. Beyond the philosophical questions it raises, the Multibot affair surfaces several practical regulatory issues that must be addressed.
The deeper conclusion is that regulating AI agents is not merely a technological question, but a question of values: what are we willing to delegate to a machine, and what will always remain in the exclusively human domain.