Crisis resources available 24/7: Call or text 988 (Suicide & Crisis Lifeline)

AI Companion Mortality Database

Public database documenting deaths in which AI chatbot interaction was alleged as a contributing factor.
All cases verified through court documents, news sources, or government acknowledgment.
Last updated: March 6, 2026 | Deaths: 25 | Cases: 17 | Period: March 2023 — March 2026

This database documents fatalities where interaction with conversational AI systems was alleged as a contributing factor—through lawsuits, family statements, investigations, or government inquiries. Between March 2023 and March 2026, we identified 25 fatalities across 16 fatal incidents involving 5 platforms, plus 1 survived attempt (17 cases in total). These comprise 15 AI users who died and 10 third-party victims killed by AI users. Cases were verified through court documents, multiple independent news sources, or official government acknowledgment.

The data reveals concerning patterns: 40% of victims were minors (ages 11–17), and 2025–2026 alone accounted for 12 deaths, nearly half the total for the entire period. Fatalities where ChatGPT use was cited (n=20, including 10 third-party victims) exceeded those of all other platforms combined. The ECRI Institute ranked AI chatbot misuse as the #1 Health Technology Hazard for 2026. Unlike speculative discussions of AI existential risk, this work focuses on documented cases where AI interaction was cited as a factor.

Important context: This database documents cases where AI chatbot interaction has been alleged as a contributing factor—through lawsuits, investigations, government inquiries, or family statements. The database makes no independent claims of causation. Many of these cases involve individuals with pre-existing vulnerabilities, and chatbot interaction was one of multiple factors cited.

Cases exist on a spectrum of accountability. In most, allegations remain unresolved. However, some have progressed beyond accusation: one case resulted in a settlement (Setzer/Character.AI-Google, January 2026), one company acknowledged prior detection and changed its policies (OpenAI/van Rootselaar), and a landmark court ruling classified chatbot output as a product rather than protected speech (Garcia v. Character Technologies, May 2025). These distinctions matter and are noted per case below.

Billions of AI chatbot interactions occur annually without documented harm. Platforms with zero documented cases (Claude, Replika) are included for comparison, suggesting that design choices and safety-first approaches may produce different outcomes.
