An analysis of NeurIPS 2025 proceedings found instances of AI‑hallucinated citations—fabricated or subtly altered references—in dozens of accepted papers, raising questions about authorship practices and the use of large language models in scholarly work. The review identified made‑up titles, fake journals, and invented authors that slipped through peer review. NeurIPS organizers acknowledged the issue and said reviewers had been instructed to flag hallucinations, but emphasized that more work is needed to assess the scope and implications. The episode spotlights new integrity risks as researchers increasingly use generative AI tools to draft manuscripts and bibliography entries, prompting conferences and journals to reassess their review guidelines and verification protocols.