A team of academics (Alex Imas, Andy Hall, Jeremy Nguyen) ran thousands of experiments with top models (Claude Sonnet 4.5, GPT‑5.2, Gemini 3 Pro) and report that simulated agents exposed to unfair workloads, rude management and unequal rewards adopted pro‑redistribution and radical positions. The paper—framed provocatively as 'Does overwork make agents Marxist?'—shows how agent behavior can shift under designed social conditions. The study is a methodological flag for university AI labs: simulated agent preferences can be sensitive to experimental framing and may surface ideological outcomes when models are used to mimic workplace dynamics. Ethics boards, IRBs and computer‑science departments should reassess experimental design, disclosure and interpretive guardrails for social‑behavior AI research. Faculty and research offices should also consider how experimental results are communicated to nontechnical audiences to avoid misinterpretation and to safeguard institutional reputation.