OpenAI appointed Carnegie Mellon professor Zico Kolter to lead a four-person Safety and Security Committee with formal authority to halt the company's release of new systems if they are judged unsafe. Regulators in California and Delaware made Kolter's oversight a condition of agreements allowing OpenAI to adopt a new corporate structure and raise external capital. Kolter will sit on the nonprofit board that retains ultimate governance over safety decisions, separate from the for-profit operating entity. He emphasized that the panel's remit covers a broad range of risks beyond abstract "existential" concerns, including security, public-health, and mental-health implications. The arrangement formalizes a model in which university researchers are central actors in corporate safety governance, raising questions about academic independence, conflict-of-interest management, and the role of faculty expertise in public-facing technology deployments.