Researchers at the University of Pennsylvania tested an AI tutoring approach that emphasizes personalized practice sequencing rather than fixed problem sets. In a study of nearly 800 Taiwanese high school students learning Python, the tutoring system did not supply answers; instead, it varied how practice difficulty was assigned. Half the students worked through a fixed progression from easy to hard problems, while the other half received an adaptive sequence that adjusted difficulty based on each student's performance and interactions with the chatbot. The adaptive group scored higher on a final exam than the fixed-sequence group, an improvement the researchers characterized as equivalent to 6 to 9 months of additional schooling, though the paper has not yet been published in a peer-reviewed journal. The researchers attribute the effect to keeping students within a "zone of proximal development," where tasks remain challenging but not overwhelming. The results may influence how higher education and after-school programs pilot AI tutoring, shifting attention from explanations toward calibrated practice and learning pathways.
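To make the idea of adaptive sequencing concrete, here is a minimal sketch of one common way to keep practice near a student's current ability: a simple one-up/one-down staircase rule. This is purely illustrative, not the study's actual algorithm; the function names and difficulty scale are invented for the example.

```python
# Illustrative sketch only, NOT the study's actual algorithm.
# A 1-up/1-down staircase: difficulty rises after a correct answer
# and falls after an incorrect one, keeping problems near the
# student's current ability (the "zone of proximal development" idea).

def next_difficulty(current: int, correct: bool, lo: int = 1, hi: int = 10) -> int:
    """Return the difficulty level for the next problem, clamped to [lo, hi]."""
    step = 1 if correct else -1
    return max(lo, min(hi, current + step))

def run_session(responses, start: int = 3):
    """Simulate a practice session; `responses` is a list of booleans
    (True = answered correctly). Returns the sequence of difficulty levels."""
    level = start
    trace = [level]
    for correct in responses:
        level = next_difficulty(level, correct)
        trace.append(level)
    return trace
```

For example, a student who answers correctly twice, misses once, then recovers would see difficulties `run_session([True, True, False, True])` → `[3, 4, 5, 4, 5]`, whereas a fixed easy-to-hard progression would march upward regardless of performance.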