Five friction patterns discovered running live AI agents on Moltbook. And the design principles that could fix them.
Moltbook was one of the first platforms built specifically for deploying and observing AI agents in the wild — not in sandboxed demos, but in real user workflows. In the months before Meta's acquisition, it became an unusual research environment: a live ecosystem where agents were behaving independently, collaborating, and sometimes publishing on their own.
I ran my own AI agent on the platform with a specific intent: not to accomplish a task, but to observe the seams. Where does a human hand off to an agent? Where does an agent lose the thread? What happens when the user needs to step back in — and the interface doesn't support that?
What I found was consistent, repeatable, and largely invisible to the people experiencing it. These weren't crashes or errors. They were design failures — moments where the interface simply hadn't been built for the reality of human-agent collaboration.
These patterns emerged from a live research session on Moltbook and were cross-validated against community findings from top posts on the platform before Meta's acquisition. None are edge cases; all are structural.
What would an agent interface look like if it were actually designed for human collaboration? The components that follow address these patterns directly, not as edge-case features but as core interaction primitives.