Deepfake quality has reached a level that will reliably fool ordinary viewers in 2026, researchers say, raising urgent risks for campuses, campus elections and academic integrity. A computer scientist tracking synthetic media reported that production-quality video and voice cloning surged in 2025 and estimates that roughly 8 million deepfakes now circulate online. Congressional and campus policymakers face a growing proof problem: how to verify audiovisual evidence used in admissions, harassment claims and research dissemination.

Former Palantir engineer turned lawmaker Alex Bores told Bloomberg's Odd Lots podcast that an established cryptographic approach, the Coalition for Content Provenance and Authenticity (C2PA) standard, could attach tamper-evident provenance to images, audio and video, making legitimate media verifiable by default. C2PA works like HTTPS for media: just as HTTPS lets a browser verify a site's identity, C2PA embeds signed metadata that records a file's origin and edit history, so any tampering breaks the signature.

Implementing it at scale requires creators and platforms to adopt the standard as the default, Bores said; without that uptake, campuses and researchers remain exposed to convincingly faked evidence.
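To make the tamper-evident idea concrete, here is a minimal sketch of signed provenance in Python. It is not the actual C2PA format: real C2PA manifests are bound to media with X.509 certificate signatures rather than a shared key, and all names here (the key, the camera ID, the edit labels) are illustrative. The sketch only shows the core property the article describes: a signed manifest records origin and edits, and any change to the media or the manifest breaks verification.

```python
import hashlib
import hmac
import json

# Hypothetical shared key for illustration only; C2PA uses
# certificate-based signatures, not a symmetric secret.
SIGNING_KEY = b"demo-signing-key"

def attach_provenance(media: bytes, origin: str, edits: list) -> dict:
    """Bundle media with a signed manifest recording origin and edit history."""
    manifest = {
        "origin": origin,
        "edits": edits,
        "media_sha256": hashlib.sha256(media).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"manifest": manifest, "signature": signature}

def verify_provenance(media: bytes, bundle: dict) -> bool:
    """Tamper-evident check: altering the media or the manifest fails verification."""
    manifest = bundle["manifest"]
    if hashlib.sha256(media).hexdigest() != manifest["media_sha256"]:
        return False  # media bytes no longer match the signed hash
    payload = json.dumps(manifest, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, bundle["signature"])

video = b"raw frame data..."
bundle = attach_provenance(video, origin="CampusNewsCam-01", edits=["crop", "color-balance"])
print(verify_provenance(video, bundle))              # True: untouched media verifies
print(verify_provenance(video + b"fake", bundle))    # False: edited bytes break the signature
```

The "verifiable by default" goal in the article corresponds to cameras and editing tools calling the equivalent of `attach_provenance` automatically at capture and on every edit, while platforms run the equivalent of `verify_provenance` before displaying a provenance badge.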