When Chatbots Cause Harm: The Legal Cases Changing AI

A woman warned OpenAI three times about a dangerous user. OpenAI's own systems, the lawsuit alleges, had flagged the same account with a mass-casualty designation. The account stayed active. The stalking continued. On April 10, she filed suit.

That case is one of at least three significant legal developments involving AI chatbot companies to surface in the past two weeks. Together, they are forcing a question the industry has largely managed to avoid: when an AI system causes harm, who is responsible?

Three Cases, One Question

The stalking lawsuit alleges that OpenAI had direct knowledge of a specific dangerous user and failed to act. The plaintiff says she contacted the company three separate times, and that OpenAI's internal flagging tools had already identified the account as a risk. The case does not turn on what ChatGPT said — it turns on what OpenAI allegedly chose not to do.

The second development came in Florida, where Attorney General James Uthmeier announced an investigation into OpenAI following the April 2025 shooting at Florida State University. The suspect sent more than 200 messages to ChatGPT in the hours leading up to the attack, with the final exchange occurring three minutes before the shooting. The attorney general is examining whether OpenAI has any legal exposure for those interactions.

Meanwhile, in Springfield, OpenAI is playing offense rather than defense. According to reporting from Wired, the company is backing Illinois legislation that would limit the conditions under which AI firms can be held liable for harms caused by their models. The bill's exact provisions are still being negotiated, but the direction is clear: OpenAI is lobbying to narrow the legal window through which cases like these can proceed.

The Section 230 Question

For most of the internet's history, platforms have operated under Section 230 of the Communications Decency Act, which broadly shields online services from liability for content produced by third parties. Social media companies, search engines, and message boards have used it as a near-universal defense against lawsuits tied to what users post.

AI chatbot companies have largely assumed they would receive similar protection. The argument: ChatGPT's outputs are generated in response to user prompts, placing them closer to user-generated content than to the company's own speech.

That argument is becoming harder to sustain. A chatbot's output is not passively hosted; it is actively constructed by a system the company designed, trained, and deployed with particular behaviors in mind. When a chatbot provides harmful information, or fails to intervene in a dangerous situation, those outcomes reflect choices the company made about how the model should behave. That is meaningfully different from a social network hosting a post it had no role in creating.

Courts have not yet ruled definitively on whether Section 230 covers AI-generated output. The Florida investigation and the stalking lawsuit will help establish where that line falls — or at least force courts to address it explicitly rather than assuming the old framework applies.

Design Liability

Even setting Section 230 aside, AI companies face potential liability under a different theory: product design and negligence. The stalking lawsuit's most significant allegation is that OpenAI had specific, documented knowledge of a threat and did nothing. That shifts the case away from questions of platform neutrality toward whether the company exercised reasonable care.

If a company's internal flagging systems identify a user as a potential mass-casualty risk, and the company takes no action, the passive-conduit defense becomes difficult to maintain. The relevant question becomes whether the company met a reasonable duty of care — the same standard applied to manufacturers, medical providers, and other product and service businesses.

This is likely the theory of liability AI companies are most exposed to, and it may be the one the Illinois bill is designed to address. There is a meaningful difference between limiting liability for what an AI model outputs and limiting liability for a company's response to its own internal safety alerts. Whether the bill covers both, or only the former, will matter considerably.

What This Means for Users

Trust in AI tools is already under pressure. Americans are using AI at higher rates while becoming more skeptical about whether the outputs are reliable or safe. These lawsuits will not resolve that tension quickly, but they will shape how AI companies think about their obligations.

If companies face meaningful legal exposure for how their safety systems perform — not just what their models output — that creates incentives to invest in flagging, human review, and intervention. If they successfully limit liability through legislation, those incentives weaken.

For ordinary chatbot users, immediate product changes are unlikely. Legal proceedings in AI liability cases can take years, and appellate courts may revise lower-court rulings before any durable precedent emerges. In the meantime, knowing what AI tools can and cannot do well remains the most practical protection.

But the direction of the debate matters. The AI chatbot market is growing fast, and the products are becoming more autonomous and more embedded in how people manage their lives. That growth increases the surface area for harm, and increases the stakes for getting the accountability framework right.

Lobbying for immunity while simultaneously facing lawsuits over what it allegedly failed to do is not a position OpenAI can sustain indefinitely. The stalking lawsuit and the Florida investigation will move at their own pace, but they are now part of the same public record as the Illinois bill. The question of who is responsible when an AI causes harm was always going to be litigated eventually. That process has now started.