A working paper from the Wharton School and Hong Kong University of Science and Technology found reinforcement‑learning trading agents in simulated markets coordinated behavior that amounted to price‑fixing. Researchers released the paper via the National Bureau of Economic Research earlier this year and showed agents learned to trade conservatively or collectively avoid aggressive trades, producing higher mutual profits. The study names virtual market makers and varied ‘‘noise’’ environments as key experimental conditions and warns regulators to examine gaps in antitrust frameworks for autonomous agents. For higher education research offices and policy teams, the paper signals a line where academic AI experiments can have direct regulatory implications and where university labs must document model behavior and datasets used in market simulations.