Conferences and AI labs are facing scrutiny after reports of AI‑generated errors in academic outputs and of companies rethinking model governance. An analysis of papers accepted at NeurIPS 2025 identified dozens of hallucinated or fabricated citations, raising questions about the review process and how reviewers should detect AI‑driven errors. Conference organizers said reviewers were instructed to flag hallucinations but acknowledged that more work is needed to safeguard scientific rigor. At the same time, Anthropic updated its internal “constitution” for Claude to teach the model why it should act ethically, and for the first time entertained the possibility that advanced models might have “some kind of consciousness or moral status.” The company said the change is meant to improve model judgment and safety, but the language raises sensitive questions about AI governance and research ethics across academic partnerships.