Security researchers and industry officials sounded simultaneous alarms over Moltbook, a viral social network for AI agents, after analysts found exposed databases, malware, and prompt‑injection risks that could weaponize agent frameworks like OpenClaw. Researchers describe a ‘lethal trifecta’: weak platform controls, leaked API keys and credentials, and agents capable of executing code or interacting with external services. AI leaders and cybersecurity teams urged caution, noting Moltbook’s mix of human‑run bot fleets and autonomous agents could provide a live sandbox for scams, malware, and disinformation. Universities and research labs that are adopting agentic tools will face new compliance, data‑security, and policy challenges as agent frameworks migrate from prototypes into production workflows.