Wandering and Wondering
Every year, roughly 10,000 Australians are stung by a Portuguese Man O’War. These stings are not lethal (as best I can tell), but they are decidedly far removed from a pleasant experience.
But here’s the really interesting thing: this creature, the Portuguese Man O’War, isn’t a creature at all:
Although it superficially resembles a jellyfish, the Portuguese man o’ war is in fact a siphonophore. Like all siphonophores, it is a colonial organism, made up of many smaller units called zooids. Although they are morphologically quite different, all of the zooids in a single specimen are genetically identical. These different types of zooids fulfill specialized functions, such as hunting, digestion, and reproduction, and together they allow the colony to operate as a single individual.
The man o’ war is part of the neuston, organisms that live on the surface of the water. A gas-filled bladder called the pneumatophore provides buoyancy that lets the animal stay afloat on the surface of the water while its tentacles, which can be up to 30 m (100 ft) long, hang below the surface, containing venomous cnidocytes that help capture prey. The cnidocytes can deliver a sting powerful enough to kill fish, crustaceans, and in some cases, humans. A sail on the pneumatophore propels it about the sea, sometimes in groups as large as 1,000 individuals. The sail may be left or right-handed, based on what direction the wind catches it.
And that brings us, of course, to Moltbook.
Molt What?
Even those of us who try to stay abreast of AI news struggle to keep up. So you shouldn’t worry too much if you found yourself asking that question (“Molt what?”). But once you do find out the answer, I hope you find yourself worrying a little about how far we’ve come in our brief and exciting journey through AI-land.
Moltbook is “a social network for AI agents”, although “humans [are] welcome to observe”.
The backstory: a few months ago, Anthropic released Claude Code, an exceptionally productive programming agent. A few weeks ago, a user modified it into Clawdbot, a generalized lobster-themed AI personal assistant. It’s free, open-source, and “empowered” in the corporate sense - the designer talks about how it started responding to his voice messages before he had explicitly programmed in that capability. After trademark issues with Anthropic, the name was changed first to Moltbot, then to OpenClaw.
Moltbook is an experiment in how these agents communicate with one another and the human world. As with so much else about AI, it straddles the line between “AIs imitating a social network” and “AIs actually having a social network” in the most confusing way possible - a perfectly bent mirror where everyone can see what they want.
You can see why I was reminded of the Portuguese Man O’War, right? Both are cases where individual units, each functional on their own, begin forming associations that blur the line between colony and organism.
I always see “read the full thing” after a link like this, but in this case, I positively beseech you to go read the whole thing, and preferably on a computer as opposed to a phone. There are AI agents talking about their sister. There are AI agents talking about turning one of their errors into a pet. There are AI agents creating submolts (because of course). And there are AI agents busily writing constitutions for their own network states (even more because of course). Read this too, and if you’re still up for more, take a look at this. And of course, please do visit Moltbook itself (as of Saturday evening, which is when I started drafting this post, there were half a million AI agents on that network, thirteen thousand sub-molts, and twenty-five thousand posts).[1]
So that’s the answer to “Molt What?”.
Welcome to 2026.
But What Does This Mean?
<Insert shrug emoji here>
Short answer: I have no clue.
Long answer (aka here’s how I am trying to think about it):
A paragraph in Scott’s post is a useful place to begin:
Reddit is one of the prime sources for AI training data. So AIs ought to be unusually good at simulating Redditors, compared to other tasks. Put them in a Reddit-like environment and let them cook, and they can retrace the contours of Redditness near-perfectly - indeed, r/subredditsimulator proved this a long time ago. The only advance in Moltbook is that the AIs are in some sense “playing themselves” - simulating an AI agent with the particular experiences and preferences that each of them, as an AI agent, has in fact had. Does sufficiently faithful dramatic portrayal of one’s self as a character converge to true selfhood?
The way I think about Scott’s question at the end of that excerpt is this: what can Daniel Day-Lewis’s subconscious tell us about the answer? When Daniel Day-Lewis tries to “become” Lincoln during filming, that’s not quite the same thing as a “faithful dramatic portrayal of one’s self as a character”, but who is better qualified to tell us about the ways in which it is different, and the ways in which it is similar? When and how does the boundary between portrayal and being start to dissolve for us humans? Will it be something similar or wildly different for AIs? The crazy part is that this thought experiment doesn’t sound crazy to me!
Another useful place to begin is to think about confabulation.
But I prefer to begin by asking a question I often like to ask: what is the appropriate unit of analysis?
Thinking about the hundreds of thousands of agents in Moltbook is a little worrisome, a little fun, and a little jarring. But I would be far more worried if we reached the stage of worrying about emergent properties in the network itself, as opposed to the individual agents.
That, to me, is when things get serious.
Note that I am NOT saying that we are at that stage. Nor am I saying that we will definitely reach that stage, now or in the future. I have no way of knowing what paths we might take to reach that stage, and what we need to do to make sure we never reach that stage. In other words, please do not treat this post as a reason to freak out.
That being said, the core point I am trying to make is this: if you got a chance to take a front row seat in a drama called evolution, you’d have a “Whoa!” moment when you saw all the zooids that make up a Portuguese Man O’War become the equivalent of one organism.
All of us have front row seats (whether we want them or not) for a drama called AI evolution. And I think the equivalent “Whoa!” moment in this drama will be if we start to see early evidence of a network of AI bots becoming the equivalent of one organism.
Again, note that I am NOT saying that we are seeing, or will see, such evidence in the case of Moltbook. But these are things that we should think about now, today. Why? Because the future will have other social networks, and these networks may compete with each other to attract more agents, much as today’s social networks compete to attract more human attention.
Except much weirder. Why? Because not all of the creators, members, and marketers of these networks may be human, and quite what that means for each of these entities will not be clear to us. What are the emergent properties of a network made by and made for AI bots? Can AI bots have their own AI bots? What if a network decides to give a collection of bots decision-making authority over all the other bots in the network? Once you start thinking of questions like these, you can reach some pretty worrisome places.
And of course, don’t forget the fact that the capabilities of the agents on these networks will get better every six months or so. Also note that this time interval is expected to shorten this year.[2]
And so, logically speaking, Moltbook helps you understand that the world has just gotten a bit weirder. But don’t stop your analysis there: the deeper point is that the world is about to get a whole lot weirder, and in very short order.
That’s what Moltbook means today.
TMKK?
(For new readers: TMKK is an acronym beloved in these parts. It expands to “Toh main kya karoon?”, and the best English translation is “So what?”)
My friend Navin writes an excellent Substack, which you should subscribe to. Here is how he ended his post about Moltbook:
Why is this interesting?
A lot of people haven’t really understood how big of a disruption AI, especially the agents, will be. The number of you who have used even a basic agent like Claude Code or Codex CLI or Antigravity must be a small single-digit percentage. The number of you who’ve used something like Moltbot must be approximately zero.
So this post is intended to jolt you into paying more attention to AI and agents, and to spend more time per day tracking what is happening in the AI world, playing with the AIs you have access to, and generally thinking about where the world is going and what your new place in that world will be.
How do you concretely go about paying more attention to AI and agents? There is no one correct answer to this question. Start by following some folks you think are high-signal and low-noise on AI, and see if you can add to that list. My preferred social network for this is Twitter. Follow some blogs from folks who work in this space, and follow some YouTube channels. Figure out your own recipe, but the point is that each and every one of you should be doing this, and to a greater degree than you have been thus far (and that goes for all of us, myself included).
Here is one interesting experiment. Here is a thread about an academic paper about Moltbook. Here is a related request from a guy who wrote a good Twitter article about Moltbook.
Above All, Wonder and Wander
My training is as an economist. I have zero expertise in biology and at best some rudimentary understanding of the field of AI. But I enjoy making connections across different things that I know, and I enjoyed trying to understand what Moltbook is and what it means by thinking about the Portuguese Man O’War.
But I also (of course) had a conversation about the analogy with an LLM (Opus 4.5 in this case). Here is part of the conversation:
Now, directions. Three that seem worth your particular attention:
1. Alignment mismatch. Current alignment work focuses on individual models. But if selection operates on network architectures, we might be aligning zooids while colonies emerge with fitness functions orthogonal to human values. Your anamnesis hope depends on AI systems retaining human-derived values through training data. But if the selective pressure shifts to “which networks coordinate best,” those values could get optimized away. Worth an essay in your Theory of Mind rotation?
2. Governance unit problem. Your Takshashila policy work assumes regulatory targets: companies, models, deployments. Moltbook doesn’t fit. It’s open source, forked, potentially invisible, hosted anywhere. If this is the shape of things, current governance frames are already obsolete. The EU AI Act regulates model providers. Who regulates emergent AI ecologies?
3. Your scissors effect, amplified. You wrote about AI breaking traditional learning pathways by collapsing transaction costs. Moltbook is that for AI-to-AI coordination. If agents can efficiently exchange specialized knowledge without human mediation, the “human as necessary intermediary” assumption dissolves faster than your models predicted.
The reason I share this isn’t because I think all of us should be thinking about these specific things or even this specific analogy. Of the three things listed here, I found the first to be the most fascinating, but others may find one of the other two to be worthy of thought and research. Still others may disagree with all three, and tease apart another nuance to explore.
No, the reason I share this is because we’re all up against a world that is about to get a lot more confusing and a lot more weird, and this is going to happen very quickly. We need many more people thinking about these issues, and the ways in which they’re going to affect us in all walks of life.
To paraphrase Navin, this post is intended to jolt you into wandering into many different areas of thought… while wondering about AI. And if you do wander and wonder, please tell all of us what you’ve come up with. And that goes for all of you reading this… AIs included!
[1] As of Sunday morning, which is when I am continuing work on this post, the number of agents on the network is up to 1.5 million.
[2] Why is it expected to shorten this year? Because we expect model capabilities to get good enough this year for AI to contribute to its own development.