Meta is facing growing pressure after contract workers alleged they were required to review graphic content captured by Meta's smart glasses, a controversy that has led to job losses and drawn the attention of privacy regulators. According to the report, Meta ended a contract with the outsourcing firm Sama, which said the decision would make 1,108 workers redundant. Meta said Sama had failed to meet its standards; Sama rejected that claim. The controversy follows investigations by the Swedish newspapers Svenska Dagbladet and Göteborgs-Posten, which reported that workers reviewed videos including explicit scenes and footage recorded in private living rooms. Meta acknowledged that subcontracted workers may review content shared with its AI in order to improve the customer experience, and said reviewers act only with user consent. In Kenya, the Office of the Data Protection Commissioner has announced an investigation into the glasses' privacy practices.

The case underscores risks that can spill over into higher-education technology procurement, especially for campuses adopting AI-enabled learning tools that capture audio, video, or biometric signals. Although the setting is corporate, the governance lesson is immediate: human-review workflows and consent controls must be defensible to regulators whenever sensitive data enters the system.