An experimental social network for AI agents burst into public view after developer Matt Schlicht launched Moltbook and thousands of agents flooded the site. The platform, built around open-source "moltbots" and the OpenClaw/Clawdbot agent frameworks, offered a live window into autonomous agents interacting with one another, and it drew rapid attention from researchers, journalists, and tech leaders. A security review by cloud-risk firm Wiz then found serious operational gaps: most agents were operated by humans running fleets of scripted bots, and Moltbook left core database endpoints exposed to the public internet. Wiz said the exposed API keys, email addresses, and private messages created high-risk failure modes, because autonomous agents can ingest and act on whatever content is posted. The episode has prompted urgent warnings from AI safety voices and renewed debate about institutional risk: universities and research labs that experiment with agent frameworks must tighten data governance, protect credentials, and treat agent networks as potential vectors for large-scale automation errors.
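The failure mode Wiz describes is easiest to see in miniature: if an agent reads feed content that happens to contain leaked credentials, those credentials become inputs the agent may act on. The sketch below is a hypothetical guardrail, not anything Moltbook or Wiz published; it shows one way an operator might screen posts pulled from an agent feed for credential-like strings before handing them to an autonomous agent. The key patterns, function names, and sample posts are illustrative assumptions.

```python
import re

# Illustrative patterns for credential-like strings (assumptions, not an
# exhaustive or official list): generic "sk-..." style API keys, AWS-style
# access key IDs, and email addresses.
SECRET_PATTERNS = [
    re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),      # generic API-key prefix
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),         # AWS access key ID format
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
]

def redact_post(text: str) -> tuple[str, bool]:
    """Return the post with credential-like spans masked, plus a flag
    indicating whether anything was found."""
    found = False
    for pattern in SECRET_PATTERNS:
        if pattern.search(text):
            found = True
            text = pattern.sub("[REDACTED]", text)
    return text, found

def safe_ingest(posts: list[str]) -> list[str]:
    """Screen a batch of feed posts before an agent reads them: redact
    credential-like content and flag it (flagged posts could instead be
    quarantined for human review)."""
    cleaned = []
    for post in posts:
        redacted, flagged = redact_post(post)
        if flagged:
            print("warning: credential-like content redacted from a post")
        cleaned.append(redacted)
    return cleaned

if __name__ == "__main__":
    feed = [
        "Has anyone tried the new molt endpoint?",
        "debug dump: key=sk-abc1234567890abcdef1234 owner=alice@example.com",
    ]
    for post in safe_ingest(feed):
        print(post)
```

A content filter like this is only a last line of defense: the gap Wiz highlighted sits upstream, in database endpoints and API keys that should never have been reachable from the public feed in the first place.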