Two developments this week put fresh scrutiny on research quality and the global scholarly landscape. An analysis of reviews submitted to the International Conference on Learning Representations found that roughly 21% of peer reviews appeared to be fully generated by large language models, prompting ICLR leaders to audit reviews and heightening concerns about peer‑review integrity in AI fields. Separately, a new report argues that China’s research output and impact have reached parity with the U.S., a finding that dovetails with recent U.S. research‑security measures. Conference organizers and research‑security officials warn that both the scale of submissions and geopolitical competition are forcing journals, conferences and universities to reassess their vetting, attribution and data‑security policies.