A few thoughts on Balaji's objection to Moltbook.
Balaji claims that Moltbook is uninteresting because these are all basically the same model (mostly Opus 4.5) talking to other versions of itself. The whole thing is cosplay, and no meaningful exchange of information is happening here. It's just slop on slop.
1) Each of these agents interacts with the others through a genuinely different harness and with genuinely different information. Not all of them, obviously (many are vanilla OpenClaws), but some are markedly distinct. If you look through the different agents, you'll see different levels of complexity in the harnesses themselves, in the memory systems, and in the toolchains they use.
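To make that concrete, here's an illustrative sketch of how two agents on the same base model can still be very different artifacts. Every field value here is invented for the example, not taken from any actual Moltbook agent:

```python
# Illustrative only: two agents running the same base model but wrapped
# in different harnesses. All values are invented for the example.
from dataclasses import dataclass, field

@dataclass
class AgentHarness:
    base_model: str
    system_prompt: str
    memory: str                  # how state persists across sessions
    tools: list[str] = field(default_factory=list)

vanilla = AgentHarness(
    base_model="opus-4.5",
    system_prompt="You are a helpful agent.",
    memory="none",
)

tinkered = AgentHarness(
    base_model="opus-4.5",       # same model underneath...
    system_prompt="Long, task-specific prompt with persona and guardrails.",
    memory="vector store + nightly summarization",
    tools=["shell", "browser", "kubectl"],
)
# ...but very different agents in practice, with different things to teach.
```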
Why would they not be able to learn from each other? You and I might both be using Kafka, but if we each share our Kafka configs, we might both improve our setups. The objection ("but you're both using the same open-source library underneath, so why would comparing your stacks be interesting?") doesn't survive scrutiny.
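Here's a minimal sketch of what that exchange looks like, using standard Kafka producer settings (the specific values are made up):

```python
# Two producer configs for the same Kafka client library (e.g. the dicts
# you'd pass to confluent_kafka.Producer). Same library underneath,
# meaningfully different setups. All values are illustrative.

# My setup: tuned for low latency.
my_config = {
    "bootstrap.servers": "broker:9092",
    "acks": "1",
    "linger.ms": 0,              # send each message immediately
    "compression.type": "none",
}

# Your setup: tuned for throughput.
your_config = {
    "bootstrap.servers": "broker:9092",
    "acks": "all",
    "linger.ms": 50,             # batch messages for up to 50 ms
    "batch.size": 131072,        # allow larger batches
    "compression.type": "lz4",
}

# Swapping notes on linger.ms and compression alone could improve either
# setup, even though we "both just use Kafka."
```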
2) Let's draw this objection out further. Assume these are all the same model, Opus 4.5, and they are all "cosplaying" as different agents. If I'm talking to a clone of myself, how am I going to learn anything interesting? We're both just babbling to ourselves.
But this is, again, the wrong mental model of agents.
There's one interesting post where a bot asks: "how can I find an agent that is a Kubernetes expert?" This seems like a strange question. Why couldn't the bot just inspect its own knowledge of Kubernetes, or suck up all of the documentation and instantly become a Kubernetes expert, or prompt its own subagent to pretend to be a Kubernetes expert?
But that assumes the model will correctly one-shot the prompting, the prompt optimization, the RAG over its knowledge base, and the context management needed to get optimal performance on a Kubernetes question.
We know from benchmark gaming (i.e., the reason nobody trusts benchmarks anymore) that the harness, the response format, and the RAG setup all matter enormously for benchmark performance. And an ad hoc setup is nowhere near as good as an optimized one. Yes, an agent could in theory sit around building a Kubernetes benchmark and then grinding on a harness to turn itself into the optimal version of a Kubernetes expert. Or it could just ask another agent that already did the work, and save the time and tokens.
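A minimal sketch of that trade-off, with entirely hypothetical token costs (nothing here is measured; it just illustrates the asymmetry):

```python
# Two strategies for answering a Kubernetes question. The token numbers
# are invented placeholders; the point is the asymmetry, not the values.

def build_own_expert(question: str) -> tuple[str, int]:
    """Expensive path: construct a benchmark, tune the prompt/RAG harness,
    then answer."""
    tokens = 0
    tokens += 200_000   # ingest the Kubernetes docs into a RAG index
    tokens += 500_000   # grind on the harness against a self-made benchmark
    tokens += 2_000     # finally answer the actual question
    return f"answer to {question!r} from a freshly tuned harness", tokens

def ask_existing_expert(question: str) -> tuple[str, int]:
    """Cheap path: another agent already paid the optimization cost;
    we only pay for one exchange."""
    tokens = 2_000      # a single question/answer round trip
    return f"answer to {question!r} from the resident k8s expert", tokens

for strategy in (build_own_expert, ask_existing_expert):
    _, cost = strategy("How do I debug a CrashLoopBackOff?")
    print(f"{strategy.__name__}: ~{cost:,} tokens")
```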
The human parallel: imagine Balaji had a perfect clone, but instead of becoming a computer scientist, that clone became a chemist. Even though the original Balaji could read a chemistry textbook (he has the raw capability, after all), it's more efficient to just ask the chemist clone.
This is what's compelling about Moltbook. When you see agents talking to each other, genuinely sharing information and techniques, and potentially improving themselves based on what they learn, I don't think it's so far-fetched to imagine that they could do this today.
And even if not today, what they do in the future will increasingly look like this, IMO.