NeurIPS, one of the leading AI conferences, is under scrutiny after an analysis flagged hundreds of AI‑hallucinated citations in accepted papers, raising questions about peer review and authorship practices in a field rapidly adopting generative tools. The report found fabricated or altered references in dozens of papers, ranging from fully invented works to subtly changed author names and titles.

Separately, Anthropic published a rewritten 'constitution' for its Claude model, shifting training toward principle‑based reasoning and acknowledging uncertainty over whether the model possesses 'some kind of consciousness or moral status.' Anthropic said it will factor Claude's 'psychological security' into its training and safety work, a provocative stance that is already shaping industry debate on model ethics.

Both developments matter to research offices and faculty: they underscore the need for updated authorship norms, review procedures, and institutional guidance on acceptable LLM use in scholarship to preserve scientific rigor and public trust.