Meredith Whittaker, president of Signal, warned that AI agents — software that performs tasks by accessing a user’s data — pose an 'existential' threat to secure messaging. She cautioned that agent architectures create new attack surfaces and could undermine end‑to‑end encryption used by journalists and campus researchers. Signal’s critique targets operating‑system level integrations that would allow agents to read and act on messages, credentials and sensitive content. The company argues such access nullifies app‑level privacy guarantees and invites prompt‑injection and data‑exfiltration attacks. University CISOs, research offices and ethics boards should reassess policies governing AI agents, platform integrations, and data governance for sensitive research. Clarification: an AI agent is a software assistant that automates tasks across apps and services on behalf of a user.