Jensen Huang sharply criticized AI doomsaying, arguing that companies and policymakers should communicate factually rather than lean on job-destruction narratives. He warned that overstated "AI apocalypse" messaging can backfire, particularly by discouraging young workers from pursuing software engineering at a time when he sees demand for those roles rising.

In parallel, new AI security governance efforts have sharpened attention on the risks posed by agentic systems that can autonomously execute multi-step attacks. That concern is mirrored in growing security programs that aim to identify and patch vulnerabilities before advanced models are broadly released. Anthropic's Project Glasswing, built around restricted access for U.S. cybersecurity partners and corporate stakeholders, reflects a governance approach that treats agentic capabilities as a system-level risk requiring oversight, rather than assuming traditional privacy and IP frameworks are sufficient.