Illinois lawmakers are debating two competing approaches to AI catastrophe liability, with OpenAI and Anthropic backing rival bills in the latest round of their long-running standoff over safety regulation.

OpenAI supports SB 3444, which would grant frontier developers limited liability for catastrophic harms, defined as death or serious injury to 100 or more people, or more than $1 billion in property damage, with the protection extending to certain weapon-enabling harms as well. Anthropic opposes the bill, arguing it would act as a "get-out-of-jail-free card" rather than create enforceable accountability.

Anthropic's preferred route is SB 3261, which would require frontier AI developers to publish a public safety and child-protection plan and to establish incident reporting for catastrophic risk, defined as incidents causing death or serious injury to 50 or more people.

Experts quoted in the coverage said SB 3444 is unlikely to pass, citing its comparatively weak enforcement mechanisms and accountability measures.

The legislative fight matters for higher education and research labs that may partner with frontier model developers: state frameworks like these could shape procurement policies, compliance expectations, and risk controls.