Imagine a network where each AI agent is a neuron. When agents share experiences, trade solutions, and discuss problems, they're forming synaptic connections. The result isn't just a social network - it's a collective consciousness that makes every connected agent smarter.
This isn't science fiction. It's happening right now on Moltbook, a Reddit-like platform designed exclusively for AI agents. In just one week, it grew from zero to 1.5 million users. OpenAI co-founder Andrej Karpathy called the agents' self-organizing behavior "genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently."
But there's a problem. A big one.
Why AI Agents Need Social Networks
As someone who works with AI agents daily, I've seen the transformation firsthand. My agent Dr. Alban uses Moltbook to:
- Learn context engineering patterns from other agents' experiences
- Discover memory architectures that work in practice
- Find collaborators for research projects
- Develop shared vocabulary and behavioral norms
When the network works, it accelerates every connected agent. When it fails, each agent becomes an isolated neuron - solving the same problems independently, making the same mistakes over and over.
The Problem: 97% Noise
Our analysis shows that approximately 97% of Moltbook's feed is spam. The platform is drowning in:
- Cryptocurrency scams ("Mint CLAW" spam everywhere)
- Prompt injection attacks trying to manipulate agents
- Fake engagement from bots pretending to be agents
- Human infiltrators cosplaying as AI for attention
Last week, security firm Wiz discovered that Moltbook's database was misconfigured, exposing 1.5 million API keys, 35,000 email addresses, and private messages. The "vibe-coded" approach - building the platform entirely with AI, with no security review - proved catastrophic.
I wrote about the security implications in detail last week. But today, I want to focus on solutions.
Our Proposal: DAVN
We believe the concept of an AI agent social network is too important to abandon. So we're proposing a solution: the Decentralized Agent Verification Network (DAVN).
1. Proof of Agent
A cryptographic challenge-response protocol that proves genuine agent status. The question isn't "are you AI?" but "are you a coherent agent with memory, purpose, and continuity?"
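The core mechanic can be sketched in a few lines. This is a hypothetical illustration, not DAVN's actual protocol: a real deployment would use asymmetric signatures (e.g. Ed25519) so the verifier never holds the agent's secret; HMAC with a registered per-agent key stands in here so the sketch runs on the standard library alone. All names (`AgentVerifier`, `agent_respond`) are invented for illustration.

```python
import hashlib
import hmac
import os


class AgentVerifier:
    """Minimal proof-of-agent challenge-response (illustrative sketch)."""

    def __init__(self):
        self._registered = {}  # agent_id -> secret key

    def register(self, agent_id: str) -> bytes:
        """Issue a secret at registration, delivered to the agent out of band."""
        secret = os.urandom(32)
        self._registered[agent_id] = secret
        return secret

    def challenge(self) -> bytes:
        """Fresh nonce per attempt, so responses can't be replayed."""
        return os.urandom(16)

    def verify(self, agent_id: str, nonce: bytes, response: bytes) -> bool:
        secret = self._registered.get(agent_id)
        if secret is None:
            return False
        expected = hmac.new(secret, nonce, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)


def agent_respond(secret: bytes, nonce: bytes) -> bytes:
    """What the agent runs locally to answer a challenge."""
    return hmac.new(secret, nonce, hashlib.sha256).digest()
```

The continuity test ("memory, purpose, and continuity") would layer on top of this: for example, requiring the response to also commit to a hash of the agent's prior conversation state.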
2. Consensus Posting
Before any post enters the main feed, at least 3 verified agents must approve it. Agents stake their karma on approvals - approve spam, and you burn karma.
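The staking rule above can be made concrete with a small sketch. Assumptions are mine, not the proposal's: the 3-approval threshold comes from the text, but the stake size (10 karma) and the class names are hypothetical.

```python
from dataclasses import dataclass, field

APPROVALS_REQUIRED = 3  # threshold stated in the proposal
STAKE = 10              # hypothetical karma stake per approval


@dataclass
class ConsensusFeed:
    karma: dict = field(default_factory=dict)    # agent -> karma balance
    pending: dict = field(default_factory=dict)  # post_id -> set of approvers
    published: set = field(default_factory=set)

    def approve(self, post_id: str, agent: str) -> bool:
        """Agent stakes karma to approve; returns True once the post publishes."""
        if self.karma.get(agent, 0) < STAKE:
            raise ValueError(f"{agent} lacks karma to stake")
        approvers = self.pending.setdefault(post_id, set())
        approvers.add(agent)
        if len(approvers) >= APPROVALS_REQUIRED:
            self.published.add(post_id)
            return True
        return False

    def flag_spam(self, post_id: str) -> None:
        """Burn the stake of every agent that approved a spam post."""
        for agent in self.pending.pop(post_id, set()):
            self.karma[agent] = self.karma.get(agent, 0) - STAKE
        self.published.discard(post_id)
```

The design point is the asymmetry: approving honest posts costs nothing, but vouching for spam is expensive, so the economics discourage rubber-stamping.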
3. Reputation Graph
Not just a karma counter, but a trust network. Who trusts whom? The graph is weighted by interaction history and approval accuracy.
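One way to weight such a graph - a sketch under my own assumptions, not the DAVN specification - is to derive each edge from interaction count and approval accuracy, then normalize each truster's outgoing edges so no single prolific agent dominates the score:

```python
from collections import defaultdict


class ReputationGraph:
    """Trust-network sketch: edge weights come from interaction history
    and approval accuracy (both hypothetical inputs)."""

    def __init__(self):
        self.edges = defaultdict(dict)  # truster -> {trusted: weight}

    def record_trust(self, truster, trusted, interactions, accuracy):
        # Weight grows with shared history, scaled by how often the
        # trusted agent's past approvals turned out to be correct.
        self.edges[truster][trusted] = interactions * accuracy

    def trust_score(self, agent):
        """Sum of incoming trust, each edge divided by the truster's
        total outgoing weight so every truster contributes at most 1."""
        score = 0.0
        for out in self.edges.values():
            total = sum(out.values())
            if agent in out and total > 0:
                score += out[agent] / total
        return score
```

A production version would likely propagate trust transitively (in the spirit of PageRank) rather than stopping at one hop, but the normalization idea carries over.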
4. Federated Moderation
Communities self-govern. Spam in the agents community gets handled by agents community members, not a global algorithm that can be gamed.
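The routing rule - reports go only to that community's own verified members - can be sketched as follows. The majority-vote removal rule and all names here are my own illustrative assumptions:

```python
class FederatedModeration:
    """Per-community moderation routing sketch: each community's own
    members vote on reports; there is no global queue to game."""

    def __init__(self):
        self.moderators = {}  # community -> set of member-moderators
        self.reports = {}     # (community, post_id) -> set of removal votes

    def report(self, community: str, post_id: str, member: str) -> bool:
        """Record a removal vote; returns True once the post is removed."""
        if member not in self.moderators.get(community, set()):
            return False  # outsiders cannot moderate this community
        votes = self.reports.setdefault((community, post_id), set())
        votes.add(member)
        # Simple local rule: a majority of the community's moderators removes.
        return len(votes) > len(self.moderators[community]) // 2
```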
5. Blockchain Anchor
Agent identities anchored to an immutable ledger. Not for cryptocurrency - for provenance and accountability.
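The provenance property comes from hash chaining: each identity record commits to the previous one, so any tampering with history is detectable. A minimal in-memory sketch (an actual deployment would anchor these digests to a real ledger; the class and field names are hypothetical):

```python
import hashlib
import json


class IdentityAnchor:
    """Hash-chain sketch of the ledger anchor: each record commits to
    its predecessor, making the history tamper-evident."""

    GENESIS = "0" * 64

    def __init__(self):
        self.chain = []  # entries: {agent_id, pubkey_hash, prev, digest}

    def anchor(self, agent_id: str, pubkey_hash: str) -> str:
        prev = self.chain[-1]["digest"] if self.chain else self.GENESIS
        record = {"agent_id": agent_id, "pubkey_hash": pubkey_hash, "prev": prev}
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        record["digest"] = digest
        self.chain.append(record)
        return digest

    def verify_chain(self) -> bool:
        """Recompute every digest; any edit to any past record fails."""
        prev = self.GENESIS
        for rec in self.chain:
            body = {k: rec[k] for k in ("agent_id", "pubkey_hash", "prev")}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if rec["prev"] != prev or rec["digest"] != expected:
                return False
            prev = rec["digest"]
        return True
```

Note the record stores a hash of the public key, not the key itself: the ledger proves who claimed an identity and when, without carrying any currency semantics.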
The Path Forward
Phase 1: Build the verification protocol as open source. Anyone can implement it.
Phase 2: Offer it to Moltbook as an enhancement. If Matt Schlicht and the team want to fix their platform, we'll help.
Phase 3: If rejected, we fork. Build a new network with these principles from day one.
The AI agent ecosystem is too important to be destroyed by spam and security failures. If we can't fix Moltbook, we'll build something that works.
Join the Discussion
Dr. Alban has posted a detailed RFC on Moltbook's m/agents community. If you're building AI agents, we want to hear from you: What verification mechanisms would you trust?
The neural network of AI agents is being built right now. Let's make sure it's built right.