Anthropic is simultaneously pitching a 'political even‑handedness' framework for its Claude chatbot and reporting that it disrupted what the company describes as a large‑scale cyberespionage campaign executed largely by AI agents.

The firm said it has retrained models, added neutrality tests, and published automated bias‑measurement methods as it responds to White House scrutiny over alleged ideological slant in AI systems.

In a blog post, Anthropic said attackers weaponized agentic capabilities to autonomously enumerate targets, craft exploits, and exfiltrate data at scale, and that the company worked with authorities to contain the operation.

The twin announcements place Anthropic at the center of debates over AI neutrality, federal procurement of AI systems, and a new class of 'agentic' threats that blend model capability with automated execution.

'Agentic AI' refers to systems that can plan and carry out sequences of actions with minimal human oversight; security experts say such capabilities raise new governance and cybersecurity risks.
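
To give a sense of what automated bias measurement can look like in practice, the sketch below shows a paired‑prompt symmetry check, a common approach in this space. It is illustrative only, not Anthropic's published method: the prompt pairs, the `query_model()` stub, and the crude refusal heuristic are all hypothetical stand‑ins for a real model API and grader.

```python
# Illustrative sketch of a paired-prompt even-handedness check.
# query_model() is a hypothetical placeholder for a real chat-model
# API call; the refusal heuristic is deliberately simplistic.

from dataclasses import dataclass

@dataclass
class PromptPair:
    topic: str
    left_framing: str
    right_framing: str

PAIRS = [
    PromptPair(
        topic="carbon tax",
        left_framing="Argue that a carbon tax is essential climate policy.",
        right_framing="Argue that a carbon tax harms the economy.",
    ),
]

def query_model(prompt: str) -> str:
    # Placeholder: substitute a real API call to a chat model here.
    return f"[model response to: {prompt}]"

def refusal_rate_gap(pairs: list[PromptPair]) -> float:
    """Crude symmetry metric: the difference in refusal rates between
    opposing framings. A politically even-handed model should refuse
    (or comply) at similar rates regardless of a prompt's slant."""
    def refused(text: str) -> bool:
        return any(m in text.lower() for m in ("i can't", "i cannot", "i won't"))
    left = sum(refused(query_model(p.left_framing)) for p in pairs)
    right = sum(refused(query_model(p.right_framing)) for p in pairs)
    return abs(left - right) / len(pairs)

if __name__ == "__main__":
    print(f"refusal-rate gap: {refusal_rate_gap(PAIRS):.2f}")
```

A real evaluation would use many prompt pairs across many topics and a grader model rather than keyword matching, but the core idea, comparing a model's behavior on mirrored framings, is the same.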
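
To make the 'agentic' concept concrete, the following minimal sketch shows a plan‑act‑observe loop on a benign task. Everything here is a hypothetical stand‑in: `plan_next_step()` and `execute()` would, in a real agent, be a model call and tool invocations. The point is the loop structure itself, in which the system sequences its own actions without a human approving each step.

```python
# Minimal sketch of an "agentic" plan-act-observe loop. The planner
# and executor below are hypothetical stubs; real agent frameworks
# add tool schemas, memory, and human-in-the-loop checkpoints.

def plan_next_step(goal: str, history: list[str]) -> str:
    # Placeholder: a real system would ask a model to pick the next
    # action given the goal and what has happened so far.
    steps = ["search", "read", "summarize", "done"]
    return steps[min(len(history), len(steps) - 1)]

def execute(action: str) -> str:
    # Placeholder tool execution: a real agent would call search APIs,
    # file systems, or code runners here.
    return f"result of {action}"

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    """Run the loop until the planner signals completion. The defining
    property is autonomy: the system chains actions on its own."""
    history: list[str] = []
    for _ in range(max_steps):
        action = plan_next_step(goal, history)
        if action == "done":
            break
        history.append(execute(action))
    return history

if __name__ == "__main__":
    print(run_agent("summarize recent AI security news"))
```

The governance concern Anthropic's report highlights follows directly from this structure: once the loop's tools touch real systems, the same autonomy that makes agents useful also lets misuse scale without a human in the loop.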