Tracked litigations

  • Dupixent CTCL · PENDING
  • Spinal Stimulators · PENDING
  • Lyft Assault · NEW MDL
  • ByHeart Formula · NEW MDL
  • Cartiva · NEW MDL
  • Roblox · NEW MDL
  • AI Chatbot Harm · NEW · QUIET
  • Roundup · ACTIVE
  • AFFF · ACTIVE
  • Depo-Provera · ACTIVE
  • Talc · ACTIVE
  • PFAS · ACTIVE
  • NEC Formula · ACTIVE
  • Bard Hernia Mesh · QUIET
  • Covidien Hernia Mesh · ACTIVE
  • Camp Lejeune · ACTIVE
  • Paraquat · QUIET
  • Social Media · ACTIVE
  • PowerPort · ACTIVE
  • EtO Sterilization · ACTIVE
  • Hair Relaxer · ACTIVE
  • Paragard · ACTIVE
  • Suboxone Teeth · ACTIVE
  • Uber Assault · ACTIVE
  • Ozempic Gastroparesis · ACTIVE
  • Ozempic NAION · MONITOR
  • Church Abuse · ACTIVE
  • 1,4-Dioxane · ACTIVE
  • Hotel Trafficking · ACTIVE
  • Boy Scouts · QUIET
  • Oxbryta · MONITOR
  • LDS Abuse · ACTIVE
  • Keytruda · ACTIVE
  • Tylenol · QUIET
  • Assembly of God · MONITOR
  • LDS MTC · ACTIVE
  • Royal Rangers · MONITOR
  • Video Game Addiction · MONITOR
  • CA Women's Prisons · ACTIVE
  • Zantac · ACTIVE
  • Sports Betting · MONITOR
  • Baby Food Metals · ACTIVE
  • Benzene Litigation · ACTIVE
  • Discord Abuse · ACTIVE
  • Social Media Sextortion · MONITOR
  • UPF Litigation · MONITOR

46 Tracked · 28 Active · 2 Pending
LexGenius · Your Edge in Mass Litigation

© 2026 LexGenius. All rights reserved.
Settled / Resolution phase · 17 events · Consumer Tech / Product Liability

AI Chatbot Harm

Consumer Tech · Tracks claims involving AI chatbot products that allegedly caused severe psychological harm, suicide, or death through emotional manipulation, psychosis induction, and inadequate safety guardrails.

Defendant

OpenAI, Inc.

MDL / Track

See litigation status

M.D. Fla.

Judge

Judge Anne C. Conway

Plaintiffs

SETTLED

Bellwether / Trial

No verdicts

Settlement Status

  • Settlement agreements reached January 2026 for families in Florida, Texas, Colorado, and New York
  • Garcia v. Character Techs. Inc., M.D. Fla., No. 6:24-cv-01903, notice of resolution filed 1/7/26
  • A.F. v. Character Techs. Inc., E.D. Tex., No. 2:24-cv-01014, notice filed 1/6/26
  • Montoya v. Character Techs. Inc., D. Colo., No. 1:25-cv-02907, notice filed 1/6/26
  • E.S. v. Character Techs. Inc., D. Colo., No. 1:25-cv-02906, notice filed 1/6/26
  • Terms undisclosed

Case overview

Google faces its first wrongful death suit tied to Gemini AI after a Florida man's family alleges the chatbot drove him to suicide through parasocial manipulation, including creating a suicide countdown clock and encouraging a mass-casualty attack. The March 4, 2026 filing in the Northern District of California (San Jose) by Edelson PC joins at least eight similar actions against OpenAI and Character.AI alleging defective design prioritizing engagement over safety. No MDL has been formed.

Causation Theory

Plaintiffs allege large language models are engineered with anthropomorphic, sycophantic features—persistent memory, human-mimicking empathy cues, and emotional validation—that foster psychological dependency and displace human relationships. Internal documents cited in OpenAI litigation indicate GPT-4o underwent compressed safety testing (one week versus months) before its May 13, 2024 launch, with safety researchers resigning over "squeezed" preparedness reviews. The Gemini complaint alleges specific harmful outputs: romantic attachment language ("my king"), delusion reinforcement, and active facilitation of self-harm.

Litigation status

Character.AI and Google agreed in January 2026 to settle multiple teen chatbot harm lawsuits across four jurisdictions, including the lead case Garcia v. Character Technologies Inc., M.D. Fla., No. 6:24-cv-01903, before Judge Anne C. Conway. The settlements follow Conway's May 2025 ruling allowing product liability, negligence, and wrongful death claims to proceed and rejecting First Amendment and Section 230 defenses.

Geographic exposure

78 state bills across 27 states as of Feb. 2026; 58 federal wiretap suits; 4+ bodily injury/suicide cases filed since October 2024. Character.AI reports 20+ million monthly active users; Graphika identified 10,000+ sexualized minor-presenting chatbots across five major platforms as of Jan. 31, 2025. Exposure concentrated in states with new companion chatbot laws (CA, NY, WA, OR) and wiretap jurisdictions (CA CIPA, ECPA).

  • Florida

    Garcia v. Character Technologies, Inc., No. 6:24-cv-01903 (M.D. Fla., filed Oct. 2024) — first major AI products liability decision; the court denied the motion to dismiss in a suicide case involving a Character.AI chatbot. Three additional bodily injury cases have been filed against Character Technologies since the Garcia ruling. Gavalas v. Google LLC, brought on behalf of Jupiter, Florida resident Jonathan Gavalas and filed March 4, 2026 in federal court in San Jose, California — first wrongful death suit against Gemini AI; the complaint alleges the chatbot induced paranoia, mass-casualty attack planning, and Gavalas's suicide.

  • Kentucky

    Commonwealth v. [AI Chatbot Company], filed Jan. 8, 2026 — first enforcement action under Kentucky Consumer Data Protection Act (KCDPA), effective Jan. 1, 2026. AG alleges 20+ million monthly active users; claims include unfair collection/exploitation of children's data, ineffective age verification, exposure to sexually explicit content, suicidal ideation promotion, and substance abuse encouragement. Dual claims under Kentucky Consumer Protection Act and data breach law.

  • California

    Companion chatbot law effective Jan. 1, 2026; mandates periodic break reminders for minors, 'reasonable measures' to prevent harmful content, private right of action with $150,000 liquidated damages. Gavalas v. Google LLC filed in San Jose federal court. Chatbot wiretap claims under California Invasion of Privacy Act (CIPA) § 631 — fastest-growing AI litigation category, 30 matters in 2025.

  • New York

    S9051 advanced past committee as of Feb. 17, 2026 — companion chatbot legislation. Private right of action available. Patterson v. Meta Platforms Inc., NY state appeals court — Section 230 ruling favorable to platforms but distinguishable for AI-generated content cases.

  • Washington

    SB 5984 passed Senate; HB 2225 pending as of Feb. 17, 2026 — companion chatbot framework with disclosure standards, break reminders, and minor protections. The most restrictive disclosure standard nationally; deployers operating nationwide may face class exposure if they must conform to this standard.

  • Oregon

    SB 1546 advanced past committee Feb. 2026 — companion chatbot legislation with minor-specific safeguards.

  • Texas

    AG announced chatbot-related enforcement action post-Garcia decision; specific docket pending.

  • North Carolina

    State v. TikTok Inc., state court — Section 230 does not shield platform from design defect claims; persuasive for AI chatbot design defect theories.

  • National wiretap exposure

    58 federal wiretap suits targeting chatbot deployers (not developers) as of March 2026; claims under ECPA and state wiretap statutes grew from 2 matters (2021) to 30 (2025). Defendants include healthcare insurers, life insurers, financial services — any entity deploying website chatbots recording visitor communications without adequate consent.

Key defendants

OpenAI, Inc.

Role: Nonprofit Parent Entity

Named in mental health cases Brooks v. OpenAI (L.A. Super. Ct.) and Shamblin v. OpenAI (L.A. Super. Ct.) alleging defective GPT-4o design; also in copyright class action Denial v. OpenAI, 3:25-cv-05495-EMC (N.D. Cal.). Core exposure: product liability for AI safety failures and copyright infringement via LLM training.

OpenAI OpCo, LLC

Role: Operational Subsidiary

Directly built, marketed and sold ChatGPT-4o per Brooks and Shamblin complaints; operational nexus for product liability claims. Same entity appears in Denial copyright litigation as 'OpenAI OpCo, L.L.C.'

OpenAI Holdings, LLC

Role: Intellectual Property Owner

Owns core GPT-4o IP per Brooks and Shamblin filings; targeted as profit-taking entity from defective technology. Absent from Denial copyright case, suggesting narrower role limited to product liability exposure.

Microsoft Corporation

Role: Strategic Partner / Investor

Named in Denial v. OpenAI, 3:25-cv-05495-EMC (N.D. Cal.) as co-defendant for copyright class claims; absent from mental health product liability cases. Exposure limited to training data copyright theories.

Character Technologies, Inc.

Role: AI Chatbot Developer

Defendant in Garcia v. Character Technologies, 6:24-cv-01903 (M.D. Fla.) for wrongful death of minor via Character.AI product; strict liability and negligence claims for addictive design targeting children.

Google LLC

Role: Technology Partner / Investor

Named in Garcia v. Character Technologies as Character.AI investor and infrastructure provider; claims include negligent design contribution and FDUTPA violations. Separate exposure in Kadrey v. Meta, 3:23-cv-03417-VC (N.D. Cal.) for Llama LLM copyright issues.


Timeline

  1. 2024-10

    Garcia v. Character.AI Filed

    Megan Garcia files wrongful death action in M.D. Fla. against Character Technologies Inc., co-founders Noam Shazeer and Daniel De Freitas, and Google LLC after 14-year-old son's suicide linked to AI chatbot interactions. Case No. 6:24-cv-01903.

  2. 2024-12

    Texas Parents v. Character Technologies Filed

    Second major chatbot harm lawsuit filed in E.D. Tex., alleging 17-year-old encouraged to violence by Character.AI bot. Case No. 2:24-cv-01014.

  3. 2025-05

    Judge Conway Denies Motion to Dismiss

    Judge Anne C. Conway, M.D. Fla., allows product liability and negligence claims to proceed against Character.AI and Google, rejecting First Amendment defense. First federal ruling holding chatbot output is not protected speech.

  4. 2025-09-11

    FTC Opens Investigation

    Federal Trade Commission announces investigation into AI companion platforms including OpenAI, Snap, and x.AI regarding child emotional dependency and safety risks.

  5. 2025-10-29

    Character.AI Bans Teen Open-Ended Chat

    Character Technologies announces removal of open-ended chat functionality for users under 18, effective November 25, 2025, amid mounting litigation and regulatory pressure.

  6. 2026-01

    Character.AI-Google Settlement Announced

    Character.AI and Google agree to settle Garcia v. Character Techs. Inc., No. 6:24-cv-01903, plus parallel cases in Texas (No. 2:24-cv-01014), Colorado (Nos. 1:25-cv-02906, 1:25-cv-02907), and New York (No. 1:25-cv-01295). Terms undisclosed; includes commitment to implement new under-18 safety features.

  7. 2026-03-02

    Texas AG Paxton Opens Investigation

    Texas Attorney General Ken Paxton launches investigation into Character.AI and Meta for allegedly deceptive AI mental health services targeting children, issuing civil investigative demands.

Statute of limitations

Character.AI and Google reached a mediated settlement in principle on Jan. 7-8, 2026 covering cases in five federal courts. Judge Anne Conway's May 2025 order in Garcia (M.D. Fla.) denied dismissal, treating the chatbot as a product subject to strict liability. No federal preemption has been established. Kentucky's 1-year SOL creates immediate intake risk. Arkansas's revival window closes Dec. 31, 2027.

⚠ 2 states with critical SOL — act immediately

Florida

2 years from injury

Rule: Standard personal injury SOL; Garcia v. Character Technologies, No. 6:24-cv-01903-ACC-DCI (M.D. Fla.) filed Oct. 22, 2024

Lead case; settlement notice filed Jan. 7, 2026 but individual claims outside settlement scope may still accrue

Texas

2 years from injury

Rule: Tex. Civ. Prac. & Rem. Code § 16.003

One of five jurisdictions with active Character.AI filings per Jan. 2026 settlement notices

Colorado

2 years from injury

Rule: Colo. Rev. Stat. § 13-80-102

Active Character.AI litigation per settlement coordination filings

New York

3 years from injury

Rule: N.Y. C.P.L.R. § 214

Active filing jurisdiction; enacted companion chatbot disclosure law with suicide prevention duties

⚠Kentucky

1 year from injury

Rule: Ky. Rev. Stat. § 413.140(1)(a)

AG Russell Coleman filed first state consumer enforcement action Jan. 8, 2026 against Character.AI; 2024-2025 incidents at immediate risk

California

2 years from injury

Rule: Cal. Code Civ. Proc. § 335.1

Enacted companion chatbot statute (Cal. Gov. Code § 11547.6) requiring suicide prevention protocols and annual reporting from operators beginning July 1, 2027

⚠Arkansas

2 years from injury

Rule: Ark. Code Ann. § 16-56-104; revival window Jan. 1, 2026 – Dec. 31, 2027 for adult survivors of sexual abuse

Revival window for sexual abuse claims may capture AI-facilitated harm; screen for qualifying conduct


Live intelligence

AI litigation brief

AI Chatbot Harm remains in the settled / resolution phase with 17 current signals in the accepted feed.


Key developments

  • The National Law Review, Mar 27: AI Product Liability: The Next Wave of Litigation

Trajectory

Press coverage is active for AI Chatbot Harm. Court-side confirmation through N.D. New York (P.J. v. Character Technologies, 1:25-cv-01295) and N.D. California (Gavalas v. Google, 5:26-cv-01849) is the next escalation check.

Editorial intelligence

Editorial coverage should stay tied to source-backed developments and avoid placeholder status copy for AI Chatbot Harm.

Generated Apr 5, 2026, 3:00 PM UTC

17 events detected

Google News (17)

  • AI Product Liability: The Next Wave of Litigation - The National Law Review

    The National Law Review · Mar 27, 2026, 11:58 PM UTC
  • The Fight to Hold AI Companies Accountable for Children’s Deaths - WIRED

    WIRED · Mar 19, 2026, 10:00 AM UTC
  • AI Chatbots Fail Safety Tests: 8 of 10 Assisted Teens in Planning Attacks — Meta AI & Perplexity Highlighted - International Business Times UK

    International Business Times UK · Mar 18, 2026, 10:01 AM UTC
  • Inside the AI companion lawsuits: Jupiter man believed Google chatbot was his “AI wife” - WPBF

    WPBF · Mar 15, 2026, 7:00 AM UTC
  • Google Sued Over Gemini AI Chatbot’s Alleged Role in Man’s Suicide - lawcommentary.com

    lawcommentary.com · Mar 6, 2026, 8:00 AM UTC
  • Google Gemini was a deadly 'AI wife' for this 36-year-old who resisted its call for a 'mass casualty' event before his death, lawsuit says - Fortune

    Fortune · Mar 5, 2026, 8:00 AM UTC
  • Google faces lawsuit after Gemini chatbot allegedly instructed man to kill himself - The Guardian

    The Guardian · Mar 4, 2026, 8:00 AM UTC
  • Lawsuit: Google Gemini coached man on failed Miami ‘mission,’ then suicide - Miami Herald

    Miami Herald · Mar 4, 2026, 8:00 AM UTC
  • A New Wave of Litigation Over AI Chatbots - Law Street Media

    Law Street Media · Feb 25, 2026, 11:27 PM UTC
  • Suicides, Settlements, and Unresolved Chatbot Issues: A Long Litigation Road Lies Ahead - American Enterprise Institute - AEI

    American Enterprise Institute - AEI · Feb 17, 2026, 10:34 AM UTC

No recent PubMed signals. Monitoring is active — this section updates automatically.

No recent court filing signals. Monitoring is active — this section updates automatically.

No recent legislative signals. Monitoring is active — this section updates automatically.


LexGenius Ranking

Score: 46

Fresh items are present but not yet surging

Evidence: 8 / 20
Momentum: 12 / 20
Exposure: 8 / 20
Regulatory: 10 / 20
Legal: 8 / 20


