
AI Incident Observatory

Learning from real-world failures to improve AI governance

This observatory curates and analyzes publicly reported AI incidents to help organizations understand how and why AI systems fail in practice, and what governance controls could have prevented those failures.

The goal is not attribution or blame, but institutional learning.


Scope & Methodology

Important for credibility

The incidents presented here are compiled from:

  • Public regulatory filings
  • News reporting and court records
  • Academic and industry safety disclosures
  • Voluntary submissions

They represent documented failures, not exhaustive global coverage.

Incident Overview (2025)

Documented Incidents (Publicly Reported)

  • 847 total incidents reviewed in 2025 (+23% year-over-year)
  • 127 incidents reviewed this month (+12% vs. previous month)
  • ~89% assessed as potentially preventable, based on governance and control gaps
  • $4.2B in estimated reported or alleged damages (legal, operational, and reputational)
These figures reflect reported outcomes, not audited financial loss.

Recent Incident Summaries

Illustrative examples from public reporting


Critical · Healthcare

Clinical AI Misses Rare Condition, Delays Treatment

Major U.S. Hospital Network

Domain: Healthcare
Failure Mode: Edge-case misclassification
Risk Area: Clinical decision support
Severity: Critical
Governance Signal: Lack of uncertainty escalation and documented human override.

High · Privacy

LLM Chatbot Discloses Confidential Customer Information

Fortune 500 Retail Organization

Domain: E-commerce
Failure Mode: Training data leakage
Risk Area: Privacy & confidentiality
Severity: High
Governance Signal: Insufficient output validation and access controls.

Medium · Finance

Automated Trading System Triggers Market Disruption

Cryptocurrency Exchange

Domain: Finance
Failure Mode: Autonomous action
Risk Area: Market stability
Severity: Medium
Governance Signal: Absence of human authority thresholds and circuit breakers.

Medium · Employment

Resume Screening System Exhibits Systemic Bias

HR Technology Provider

Domain: Hiring
Failure Mode: Bias amplification
Risk Area: Fairness & discrimination
Severity: Medium
Governance Signal: Inadequate bias evaluation and documented limitations.

Additional incident summaries are available; all are anonymized and based on public sources.
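Several of the governance signals above (missing uncertainty escalation, absent human override, no human authority thresholds) describe the same control pattern: route low-confidence model outputs to a human instead of acting on them automatically. A minimal sketch of that gate, with the threshold and all names purely illustrative:

```python
from dataclasses import dataclass

# Hypothetical confidence floor below which a prediction must be
# escalated to a human reviewer rather than acted on automatically.
ESCALATION_THRESHOLD = 0.85

@dataclass
class Decision:
    action: str        # "auto" or "escalate"
    label: str
    confidence: float

def route_prediction(label: str, confidence: float) -> Decision:
    """Apply the human-override gate: act only on high-confidence outputs."""
    if confidence < ESCALATION_THRESHOLD:
        # Documented escalation path: a human reviews the case.
        return Decision("escalate", label, confidence)
    return Decision("auto", label, confidence)

# A rare-condition case scored with low confidence is escalated, not auto-filed.
print(route_prediction("no-finding", 0.62).action)  # escalate
print(route_prediction("no-finding", 0.97).action)  # auto
```

The design point is that the escalation path exists and is documented, so "who was accountable" has an answer before the failure, not after.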

Incident Categories (2025 Review)

  • Bias & Fairness: 296
  • Hallucinations & Overconfidence: 237
  • Security Exploits: 152
  • Privacy Violations: 102
  • Safety & Physical Harm: 60

These categories reflect failure modes, not intent.
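The five category counts account for the full 2025 total (296 + 237 + 152 + 102 + 60 = 847), which can be checked directly:

```python
# Category counts from the 2025 review.
categories = {
    "Bias & Fairness": 296,
    "Hallucinations & Overconfidence": 237,
    "Security Exploits": 152,
    "Privacy Violations": 102,
    "Safety & Physical Harm": 60,
}

total = sum(categories.values())
print(total)  # 847, matching the total incidents reviewed

# Share of each category, useful for the kind of breakdown shown above.
shares = {name: round(100 * n / total, 1) for name, n in categories.items()}
print(shares["Bias & Fairness"])  # 34.9
```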


Governance Insights

Cross-Incident Analysis

Based on comparative review of reported incidents:

Pre-deployment red-teaming could have mitigated ~89% of failures

Most incidents showed known failure patterns that were not tested.

Human-in-the-loop controls correlate with fewer severe outcomes

Systems with documented human authority showed ~73% fewer critical failures.

Output validation significantly reduces exposure

Independent checks intercepted ~94% of hallucination-type failures before user impact.

These findings are analytical observations, not guarantees.
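The output-validation finding describes an independent check that runs between the model and the user, intercepting bad outputs before they cause impact. A minimal sketch for the confidential-disclosure case above; the patterns here are toy examples, and a real deployment would rely on a maintained data-loss-prevention rule set:

```python
import re

# Illustrative leak patterns only; not a production DLP rule set.
LEAK_PATTERNS = [
    re.compile(r"\b\d{16}\b"),                # card-number-like digit runs
    re.compile(r"\b[\w.+-]+@[\w-]+\.\w+\b"),  # email addresses
]

def passes_output_validation(text: str) -> bool:
    """Independent check run between the model and the user."""
    return not any(p.search(text) for p in LEAK_PATTERNS)

def guarded_reply(draft: str) -> str:
    if not passes_output_validation(draft):
        # Intercept before user impact; fall back to a safe response.
        return "I can't share that. Let me connect you with a human agent."
    return draft

print(passes_output_validation("Your order shipped yesterday."))   # True
print(passes_output_validation("Card on file: 4111111111111111"))  # False
```

The check is deliberately independent of the model: it does not trust the generator to police itself, which is what "independent checks intercepted failures before user impact" implies.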

Why This Matters for Leaders

Boards, regulators, and risk teams increasingly ask:

"Could this have been prevented?"
"Who was accountable when it failed?"
"What evidence exists that controls were in place?"

Incident analysis provides early warning signals for governance gaps.

Build Safer Systems

Learn from others' failures — before yours becomes a case study.

Download the AI Governance Toolkit

  • Pre-deployment risk review
  • Human authority design
  • Uncertainty and escalation controls
  • Audit-ready documentation
Get the Free Toolkit

Transparency Notice

This observatory:

  • Does not provide real-time surveillance
  • Does not assign legal fault
  • Does not replace independent audits
It exists to support learning, prevention, and governance maturity.