AI‑security researchers warned that inserting a few hundred malicious documents into training data can create backdoor vulnerabilities in large language models. An Anthropic study, conducted with the UK AI Security Institute and the Alan Turing Institute, found that as few as 250 poisoned documents could implant a hidden, trigger-activated behavior, and that the number of documents required stayed roughly constant as model and dataset size grew, undermining the assumption that scale alone protects models from data attacks. At the same time, open‑source efforts are lowering the cost of building chatbots: Andrej Karpathy released 'nanochat,' a compact end-to-end codebase showing how a small ChatGPT-style model can be trained for roughly $100 of rented GPU compute. Together, easier model creation and heightened poisoning risk raise questions for campus deployments, research data governance, and the integrity of AI tools used in instruction and assessment.
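
To make the data-governance concern concrete, the sketch below shows one way a team might screen a pretraining corpus for a suspected backdoor trigger string. It is a minimal illustration under stated assumptions: the file name, JSONL layout, and trigger token are hypothetical, not details from the Anthropic study, whose point is that even a small absolute count of such documents can matter.

```python
"""Minimal corpus screen for a suspected backdoor trigger string.

Hypothetical sketch: the file path, JSONL layout, and trigger token below
are illustrative assumptions, not details taken from the Anthropic study.
"""
import json

SUSPECT_TRIGGER = "<TRIGGER-PHRASE>"  # placeholder; a real audit would use vetted indicators


def count_trigger_docs(corpus_path: str, trigger: str = SUSPECT_TRIGGER) -> int:
    """Count JSONL documents whose 'text' field contains the trigger string."""
    hits = 0
    with open(corpus_path, "r", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            doc = json.loads(line)
            if trigger in doc.get("text", ""):
                hits += 1
    return hits


if __name__ == "__main__":
    # "pretraining_corpus.jsonl" is an assumed file name for this example.
    n = count_trigger_docs("pretraining_corpus.jsonl")
    # Per the study's headline result, on the order of 250 poisoned documents
    # sufficed to implant a backdoor regardless of model size, so even a tiny
    # absolute count of flagged documents warrants review.
    print(f"documents containing suspected trigger: {n}")
```

A screen like this is only a first pass: it assumes the trigger string is already known, whereas the study's broader implication is that provenance tracking and curation of training data matter because such triggers generally are not known in advance.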