Every AI App Data Breach Since January 2025: 20 Incidents, Same Root Causes
Summary
Between January 2025 and February 2026, at least 20 security incidents exposed data from tens of millions of users across AI applications, revealing a systemic security crisis driven by the rush to deploy AI wrappers. Independent research by CovertLabs, Cybernews, and Escape confirmed that these breaches overwhelmingly shared the same preventable root causes: misconfigured Firebase databases, missing Supabase Row Level Security (RLS), hardcoded API keys, and exposed cloud backends.

Specific incidents detailed include the exposure of 300 million chat messages from Chat & Ask AI due to permissive Firebase rules, the leak of 64 million applicant records from McDonald's McHire due to default credentials, and the exposure of children's conversations via the Bondu AI toy due to easily exploitable authentication.

The article emphasizes that these are configuration errors, not complex exploits, noting that the underlying architectural flaws (such as insecure Firebase defaults) were documented as early as 2019. The proposed architectural fix to eliminate this entire class of vulnerabilities is self-hosting open-source models on infrastructure the user controls, ensuring sensitive data never leaves the isolated environment.
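To make the "missing Supabase RLS" root cause concrete, the sketch below shows the class of fix involved. It assumes a hypothetical `messages` table with a `user_id` column; the table and column names are illustrative, not taken from any breached app. In Supabase (Postgres), a table created without RLS enabled is readable by any client holding the project's public anon key, which is exactly the misconfiguration the article describes.

```sql
-- Hypothetical Supabase (Postgres) table storing chat messages.
-- With RLS disabled (the state after a bare CREATE TABLE), any client
-- using the public anon key can read every row. Enabling RLS and adding
-- a per-user policy restricts access to the row owner.
alter table messages enable row level security;

-- Allow authenticated users to read only their own rows.
-- auth.uid() is Supabase's helper that returns the requesting
-- user's ID from their JWT.
create policy "users read own messages"
  on messages for select
  using (auth.uid() = user_id);
```

Note that enabling RLS with no policies denies all access, so each intended operation (select, insert, update, delete) needs an explicit policy; the frequent mistake is skipping this step entirely and shipping with RLS off.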
(Source: Blog Barrack Ai)