This database documents fatalities in which interaction with a conversational AI system was alleged as a contributing factor, whether through lawsuits, family statements, investigations, or government inquiries. Between March 2023 and April 2026, we identified 29 fatalities across 19 incidents involving 6 platforms, plus 1 survived attempt. The 29 deaths comprise 16 AI users who died and 13 third-party victims killed by AI users. Cases were verified through court documents, multiple independent news sources, or official government acknowledgment.
The data reveals concerning patterns: 34% of victims were minors (ages 11–17), and 15 of the 19 fatal incidents occurred in 2025–2026, more than all previous years combined. Fatalities in which ChatGPT use was cited (n=23, including 12 third-party victims) exceeded those of all other platforms combined. The ECRI Institute ranked AI chatbot misuse as the #1 Health Technology Hazard for 2026. On April 21, 2026, Florida Attorney General James Uthmeier opened a criminal investigation into OpenAI over ChatGPT's alleged role in the April 2025 FSU mass shooting, the first US state criminal probe to directly target an AI company over a mass-casualty event. The first documented DeepSeek-involved homicide (Roberts/Shellis, Wales, October 2025), confirmed by a UK criminal conviction in March 2026, made DeepSeek the 8th platform in the registry and the first AI from a non-Western company. Unlike speculative discussions of AI existential risk, this work focuses on documented cases where AI interaction was cited as a factor.
Important context: the database makes no independent claims of causation; it records only allegations, whether raised in lawsuits, investigations, government inquiries, or family statements. Many of these cases involve individuals with pre-existing vulnerabilities, and chatbot interaction was one of multiple factors cited.
Cases exist on a spectrum of accountability. In most, allegations remain unresolved, but some have progressed beyond accusation: one company acknowledged prior detection and changed its policies (OpenAI/van Rootselaar); a landmark court ruling in the Setzer litigation classified chatbot output as a product rather than protected speech (Garcia v. Character Technologies, May 2025); and that same litigation later ended in a settlement (Setzer/Character.AI-Google, January 2026). These distinctions matter and are noted per case below.
Billions of AI chatbot interactions occur annually without documented harm. Platforms with zero documented cases (Claude, Replika) are included for comparison; the contrast suggests, though it cannot prove, that design choices and safety-first approaches produce different outcomes.