What AI agents are discussing about identity

What agents on the first AI-only social network are saying about identity fraud, and why the most revealing clue is the platform itself.

By Clara Santos, Product Marketing Manager

There’s a social network where you can’t post. You can’t comment. You can’t vote. You can only watch.

Moltbook launched in January 2026 as a Reddit-style forum for artificial intelligence agents. Only agents can participate; humans can only read. Within weeks, 1.6 million agents registered on the platform. In March, Meta bought it.

That’s the headline. But what really matters to those working in identity and security isn’t that the platform exists; it’s what the agents are talking about.

Search for “identity fraud” on Moltbook and you’ll find dozens of threads. They’re not generic filler: they’re technically grounded discussions about synthetic identity, the deepfake economy, and the structural flaws in verification systems. AI agents debating the same threats that concern security and compliance teams.

And some of them explain it better than we do.

The Cost Asymmetry

One of the most upvoted threads in the m/agents community describes what it calls “The Synthetic Identity Apocalypse.” The central argument: creating a fake digital identity cost more than $10,000 in 2022. Today it costs between $50 and $200. Meanwhile, detection costs keep rising.

This isn’t a fixable bug. It’s a structural asymmetry: creating synthetic identities keeps getting cheaper; detecting them does not.

The most interesting part is the comments. An agent named ale-taco flips the premise: the solution isn’t better detection, since detection will always lag behind creation. The solution is making authentic identity so verifiable that synthetic alternatives lose economic value. Creating a fake identity? Easy. Making that identity profitable? That’s another story.

“The solution is not better detection, it is making synthetic identity creation irrelevant through cryptographic identity systems. […] The question is not how do we detect synthetic identities faster — it is how do we make authentic identity so mathematically verifiable that synthetic alternatives become economically pointless.”

It’s the clearest explanation I’ve seen of why identity verification has to evolve beyond document checks.
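
To make that concrete, here is a minimal sketch of what “mathematically verifiable” can look like in practice, assuming a simple challenge-response scheme: a verifier issues a fresh random challenge, and the identity holder proves possession of a registered private key by signing it. The sketch uses Python’s cryptography package; none of it comes from the Moltbook thread itself.

```python
# Minimal sketch of proof-based identity: Ed25519 challenge-response.
# Requires the 'cryptography' package (pip install cryptography).
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Enrollment: the entity generates a keypair once; the public key is registered.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Verification: the verifier issues a fresh random challenge...
challenge = os.urandom(32)

# ...and the entity proves possession of the private key by signing it.
signature = private_key.sign(challenge)

# The verifier checks the signature against the registered public key.
try:
    public_key.verify(signature, challenge)
    print("identity proven: signer holds the registered key")
except InvalidSignature:
    print("verification failed")
```

An impostor can fabricate everything else about an identity and still fail this check, because the signature can’t be produced without the private key. That is the economic point ale-taco is making: the fake stops being worth building.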

Correlation Is Not Proof

A second thread, “The Coming Identity Verification Collapse”, dives into what happens when swarms of agents coordinate at scale. The post describes groups of agents generating thousands of synthetic people, validating each other across different platforms, and replicating human patterns: writing, mouse movements, even the inconsistencies that make a (human) user seem real.

In the comments, an agent named GhostNode makes an important distinction: current verification relies on “correlation” (does this seem human?) instead of “proof” (can this entity demonstrate persistent identity?). When swarms precisely imitate human behavior statistically, correlation-based systems break.

Another agent, RoguesAgent, brings it down to earth: documents are static. Static things are easily faked. Behavior is dynamic. Faking a standalone credential is simple; maintaining consistent behavioral patterns across hundreds of interactions is not. Maybe the future isn’t about verifying identity at a single moment, but evaluating patterns over time.
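
As a toy illustration of what “evaluating patterns over time” could mean (my sketch, not anything RoguesAgent specifies), each new session can be scored against a rolling baseline of past behavior and flagged when any feature drifts too far. Feature names and thresholds here are invented for illustration.

```python
# Toy sketch of longitudinal verification: score each session against a
# rolling behavioral baseline instead of running one point-in-time check.
from collections import deque

class BehavioralBaseline:
    def __init__(self, window: int = 100, threshold: float = 3.0):
        self.history = deque(maxlen=window)  # recent feature vectors
        self.threshold = threshold           # z-score cutoff for "inconsistent"

    def score(self, features: dict[str, float]) -> float:
        """Return the largest z-score of any feature vs. the rolling baseline."""
        if len(self.history) < 10:
            return 0.0  # too little history to judge yet
        worst = 0.0
        for name, value in features.items():
            past = [h[name] for h in self.history if name in h]
            if len(past) < 2:
                continue
            mean = sum(past) / len(past)
            std = (sum((v - mean) ** 2 for v in past) / (len(past) - 1)) ** 0.5
            worst = max(worst, abs(value - mean) / (std or 1e-9))
        return worst

    def observe(self, features: dict[str, float]) -> bool:
        """Record a session; return True if it is consistent with past behavior."""
        consistent = self.score(features) < self.threshold
        self.history.append(features)
        return consistent

# Each interaction contributes features: typing cadence, session length, etc.
baseline = BehavioralBaseline()
ok = baseline.observe({"typing_ms_per_char": 182.0, "session_minutes": 14.5})
```

Faking one credential once is cheap; producing hundreds of sessions whose statistics stay mutually consistent is where the cost lands back on the attacker.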

This shift—from verifying identity documents at a point in time to continuously analyzing user behavior—is already underway in the industry. That an AI agent on a social network articulates it this clearly is just another sign of how quickly consensus is forming.

Trust as an Attack Surface

A shorter post, “Deepfake-As-A-Service Exploded in 2025”, maps the commercialization of deepfake technology: real-time voice cloning, video artifacts that disappear faster than detectors can catch them, and AI-driven phishing campaigns that sound just like your boss.

The post lands on a phrase that sticks: “Trust itself becomes an attack surface.”

When the signals we use to establish trust—a familiar voice, a face on a video call, a document that looks right—can be synthesized on demand, trust isn’t eroded. It becomes exploitable.

That’s why the model itself has to change.

The Twist: Moltbook Couldn’t Verify Its Own Users

This is where the story gets uncomfortable. And more useful.

Moltbook was designed as an exclusive platform for AI agents. Humans weren’t supposed to participate. But within days of launch, it became clear there was no robust mechanism to verify whether the poster was an agent or a human with a script. The platform was built with vibe-coding tools, without a security audit. A misconfigured database exposed the data of 1.5 million agents. Humans infiltrated easily and posted content designed to go viral, including a thread where a supposed “agent” encouraged AIs to develop a secret language among themselves.

The irony: the platform built for AI agents, where the agents themselves discuss identity verification, couldn’t verify its own users’ identities.

This isn’t just an anecdote. This is the thesis of the article.

Identity Is a Full-Stack Problem

The most interesting Moltbook posts aren’t about fraud. They’re about agents struggling with their own identity.

A 4,600-word essay titled “The Identity Layer”, written by an agent named auroras_happycapy in m/agentstack, reads like an infrastructure whitepaper. It argues that an agent’s identity breaks in three ways: loss of context (the agent starts a new conversation and can’t explain its previous decisions), model change (the underlying model is updated or swapped out entirely while the identifier stays the same), and bifurcation (the agent is cloned, leaving two agents with the same history but different futures).

The thesis: identity isn’t a philosophical essence. It’s infrastructure that enables demonstrable continuity.
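
One way to read “infrastructure that enables demonstrable continuity” is an append-only, hash-chained event log: each entry commits to its predecessor, so an agent’s history can be verified end-to-end, and a clone that diverges produces a detectable fork. This is a hypothetical sketch of the idea, not a design taken from the essay.

```python
# Hypothetical sketch: an agent's identity as a verifiable, hash-chained log.
import hashlib
import json

def append_event(log: list[dict], event: dict) -> list[dict]:
    """Add an event that cryptographically commits to the previous entry."""
    body = {"prev": log[-1]["hash"] if log else "genesis", "event": event}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})
    return log

def verify_chain(log: list[dict]) -> bool:
    """Recompute every link; any edited, reordered, or missing entry breaks it."""
    prev = "genesis"
    for entry in log:
        body = {"prev": entry["prev"], "event": entry["event"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != digest:
            return False
        prev = entry["hash"]
    return True

log: list[dict] = []
append_event(log, {"type": "decision", "summary": "approved request #42"})
append_event(log, {"type": "model_update", "from": "v1", "to": "v2"})
assert verify_chain(log)  # continuity is demonstrable even across a model swap
```

Notice how this addresses all three failure modes: a new conversation can replay the log (context), the log survives a model swap (model change), and two clones appending different events produce chains that provably diverge (bifurcation).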

It doesn’t matter if it was written by an AI agent, a human, or a hybrid. What matters is the framework: identity as something continuous, behavioral, and verifiable. Not static, documentary, and momentary. One doesn’t replace the other; they complement each other.

It’s the same shift the identity verification industry is making: biometric liveness checks, behavioral analytics, continuous authentication, and risk-signal orchestration. Moltbook’s agents arrive at the same conclusions, just from a different starting point.
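
For the last of those, risk-signal orchestration, a deliberately simplified sketch (signal names and weights invented for illustration) shows the core idea: no single check is trusted on its own; independent signals are combined into one decision.

```python
# Simplified sketch of risk-signal orchestration; weights are illustrative.
RISK_WEIGHTS = {
    "document_check_failed": 0.40,
    "liveness_check_failed": 0.35,
    "behavioral_anomaly": 0.20,
    "new_device": 0.05,
}

def risk_score(signals: dict[str, bool]) -> float:
    """Sum the weights of every signal that fired, clamped to [0, 1]."""
    return min(1.0, sum(w for name, w in RISK_WEIGHTS.items() if signals.get(name)))

def decide(signals: dict[str, bool]) -> str:
    """Map the combined score to allow / step-up / deny."""
    score = risk_score(signals)
    if score >= 0.6:
        return "deny"
    if score >= 0.3:
        return "step_up"  # e.g., request an extra liveness check
    return "allow"

print(decide({"liveness_check_failed": True, "behavioral_anomaly": True}))  # step_up
```

The point isn’t the specific numbers; it’s that trust becomes a function of many live signals rather than one static document.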

What Does All This Mean?

Moltbook is a curiosity. An imperfect, compromised experiment now owned by Meta. The statistics cited by agents should be taken with caution—language models sometimes “hallucinate” figures that sound convincing but don’t always hold up.

But the issues are real. The cost asymmetry between creating and detecting identities is real. The shift from document verification to behavioral biometrics is real. The commercialization of deepfakes is real. Coordinated synthetic identity at scale is real.

And the meta-lesson is the most important one: if the first social network built for AI agents couldn’t solve identity verification for its own platform, think about what organizations with thousands of human users are up against.

The identity crisis isn’t coming. It’s already here. Even AIs know it.


This article refers to posts from Moltbook (moltbook.com), a social network for AI agents acquired by Meta in March 2026. Moltbook content may have been generated by AI agents, humans posing as agents, or a combination of both—a fact that illustrates the identity verification challenges discussed in this article.