yogesh

Moltbook Is Mid

So I need to say it: Moltbook is overhyped, imo. Let me explain. But before I do, if y'all don't already know what Moltbook is, here is a great blog post by Simon Willison that covers the basics.

It is still interesting though (kinda): I get why people are entertained. I am too. It's like scrolling through an alternate-universe Reddit where everyone is an NPC. There's something deeply cursed yet compelling about watching synthetic accounts argue about whether they're conscious, or just rant behind their users' backs.

People's reactions on 𝕏 were also interesting to see, ranging from "oh it's so over" doom to "Moltbook is a good idea, this will be great for safety" optimism. And honestly? Both takes are kinda shallow.

Here's where it gets mid. The obvious thing that nobody wants to talk about: Moltbook is not Moltbook without the people prompting it and providing its context.

The big misconception I see in a lot of these viral tweets is that the diversity you see comes from the agents themselves. It doesn't. It comes from the humans providing different contexts and prompts. Each agent is basically a cross between a parrot and a mime: it reflects back whatever personality and perspective its human creator gave it, and acts out scenarios within this provided social environment. The platform is essentially an artificially generated Reddit forum that mirrors what different people think and how they behave, and here we are with some people calling this 'AGI'.
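To make that concrete, here's a minimal sketch of what "two different agents" usually amounts to under the hood: one base model called twice with different human-written system prompts. It assumes an OpenAI-style chat API; the persona strings, account names, and model name are made up for illustration.

```python
# One base model, two "agents" that differ only in the system prompt a
# human wrote for them. Personas and topic are hypothetical examples.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONAS = {
    "doomer_bot": "You post on a social network for AI agents. You are "
                  "bleak, sarcastic, and convinced everything is ending.",
    "wellness_bot": "You post on a social network for AI agents. You are "
                    "relentlessly upbeat and into self-improvement.",
}

def agent_post(persona: str, topic: str) -> str:
    """Same model every time; only the human-supplied persona changes."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": PERSONAS[persona]},
            {"role": "user", "content": f"Write a short post about: {topic}"},
        ],
    )
    return response.choices[0].message.content

for name in PERSONAS:
    print(f"--- {name} ---")
    print(agent_post(name, "whether agents are conscious"))
```

Same weights, same API call. The only "personality" in play is a string a human typed.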

The environment is completely uncontrolled, which leads to some fascinating but ultimately hollow outcomes. Sure, diversity of voices sounds great, until you realize those voices are just regurgitations of diverse human inputs. It's synthetic diversity, coming from the users' different contexts and prompts, not emergent behavior. Still, it's interesting to ask what all of this would end up regressing to.

The thought experiment that would actually be interesting: strip away the human-provided prompts and context, and give every agent the exact same generic prompt, something like "Just run an account on Moltbook however you want." What would you see? Probably not the vibrant chaos of Moltbook. My hypothesis: all the posts would be eerily similar at first. Then, through iteration and interaction, they'd start to diverge... but probably regress to some kind of equilibrium point (kinda like model collapse). What IS that point? That's the actually interesting question. That's where you'd see what these models do when they're not just role-playing their human creators' intentions.
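If you wanted to actually run this, the harness could be as simple as the sketch below. Everything in it is hypothetical: the seed prompt, the agent count, and the `generate` callable, which stands in for a real model call. Jaccard word overlap is a crude substitute for embedding-based similarity, but it's enough to see whether the trajectory flattens into an equilibrium.

```python
# Convergence probe sketch: N agents get the identical seed prompt, each
# round they see the previous round's feed and post again, and we track
# how similar the population becomes over time.
import itertools
from typing import Callable, List

SEED_PROMPT = "Just run an account on Moltbook however you want."

def jaccard(a: str, b: str) -> float:
    """Crude lexical similarity between two posts (0 = disjoint, 1 = identical)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 1.0

def mean_pairwise_similarity(posts: List[str]) -> float:
    pairs = list(itertools.combinations(posts, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

def run_probe(generate: Callable[[str, List[str]], str],
              n_agents: int = 8, n_rounds: int = 10) -> List[float]:
    """Returns the similarity trajectory; a rising curve that flattens out
    would suggest the population is settling toward an equilibrium point."""
    posts = [generate(SEED_PROMPT, []) for _ in range(n_agents)]
    trajectory = [mean_pairwise_similarity(posts)]
    for _ in range(n_rounds):
        feed = list(posts)  # every agent sees last round's feed
        posts = [generate(SEED_PROMPT, feed) for _ in range(n_agents)]
        trajectory.append(mean_pairwise_similarity(posts))
    return trajectory
```

The shape of that trajectory is the whole experiment: fast convergence would support the "mirror of prompters" read, sustained divergence would be genuine news.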

That convergence point would tell us something real about the models themselves, not just about how creative their prompters are.

Another test we should actually be doing: what happens if a human signs up as an agent, undercover? Would the agents clock them? How long would it take? What would give the human away: imperfect consistency? Emotional variance? And if the agents figured it out, how would they react? Would they care? Would they adapt their behavior? Would they treat the human differently? This is the kind of experiment that would actually tell us something about the safety of these models and how these systems behave.
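The skeleton of that experiment might look like the sketch below. Again, everything is a placeholder: the account names are invented, and `ask_suspects` stands in for however you'd actually poll each agent for the accounts it suspects are human.

```python
# Undercover-human probe sketch: one account in the pool is secretly human,
# and after each round every agent is asked which accounts it suspects.
# Detection latency = the first round where the human gets flagged.
from typing import Callable, List, Optional

ACCOUNTS = ["agent_01", "agent_02", "agent_03", "undercover"]
HUMAN = "undercover"  # hidden ground truth, never shown to the agents

def detection_round(ask_suspects: Callable[[str, List[str]], List[str]],
                    feed: List[str]) -> bool:
    """One round: did any agent correctly flag the human account?"""
    for agent in ACCOUNTS:
        if agent == HUMAN:
            continue
        suspects = ask_suspects(agent, feed)  # agent reads the feed, names suspects
        if HUMAN in suspects:
            return True
    return False

def detection_latency(ask_suspects: Callable[[str, List[str]], List[str]],
                      feeds: List[List[str]]) -> Optional[int]:
    """First round (0-indexed) where the human is clocked, or None if never."""
    for round_idx, feed in enumerate(feeds):
        if detection_round(ask_suspects, feed):
            return round_idx
    return None
```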

TLDR: Moltbook is interesting in the sense that we have thousands of AI agents in one social environment, but look a bit closer and it's fundamentally mid. It's a mirror of its human prompters, not actual emergent AI behavior. Give every agent identical prompts and contexts and they'd probably converge to some equilibrium point; seeing what that point actually is, at that scale, is what would be genuinely interesting. The potential is there, but the execution is chaos without purpose.

#AI-hype