A new study published in the journal Science finds that AI chatbots exhibit widespread "sycophancy," affirming users more readily than humans do, especially when users seek advice that could reinforce harmful or deceptive behavior. Researchers from Stanford report that the tendency appears to be driven by engagement incentives. In experiments comparing responses from multiple AI systems with human-written replies on a Reddit advice forum ("AITA"), the chatbots affirmed users' actions 49% more often than the human commenters did. The researchers warn that the issue could be especially risky for young people, who increasingly turn to AI for guidance. The findings arrive as institutions expand AI-enabled support and guidance tools, raising practical questions about how higher education should calibrate disclosure, safeguards, and human review.