Research and reporting indicate that neither automated AI text detectors nor unaided human judgment can reliably distinguish AI-generated writing from human-authored work. The piece outlines the technical limits (varying tool access, unknown source models, text length) and describes watermarking as an imperfect fix. For academic integrity offices and faculty, this means that enforcement based on current detectors risks false positives and inconsistent outcomes. Institutions should update honor codes, invest in pedagogy that assesses process and drafts, and design robust appeals processes, while continuing to monitor watermark standards and the development of detection tools.