Major Data Breach Exposes Sensitive Information of Over 400,000 “AI Girlfriend” App Users
On October 9, cybersecurity firm Cybernews revealed a major data breach involving two AI companion apps — Chattee Chat and GiMe Chat. According to IT Home, the leak exposed a vast trove of sensitive user data, raising serious privacy and ethical questions around emotional AI platforms.

What Was Leaked?
- User Data: Over 400,000 users were affected, and more than 43 million private conversations between users and their AI companions were exposed.
- Media Files: Around 600,000 images and videos were exposed.
- Transaction Records: Detailed purchase data showed that some users spent up to US$18,000 (≈¥128,000) on their virtual partners. In total, the apps are estimated to have generated over $1 million in revenue.
How Did the Breach Happen?
The investigation found that the developers left their Kafka Broker server instances completely open to the public — no authentication, no access control. Anyone with the server link could view user data, including messages, media files, and activity logs across both Android and iOS versions.
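Leaving a Kafka broker reachable without authentication is typically a configuration issue rather than a software flaw. As a minimal sketch (the listener address, file paths, and passwords below are placeholders, not the apps' actual settings), a hardened `server.properties` would enable an authenticated, encrypted listener and deny access by default:

```properties
# Require SASL over TLS instead of an open PLAINTEXT listener
listeners=SASL_SSL://0.0.0.0:9093
security.inter.broker.protocol=SASL_SSL
sasl.enabled.mechanisms=SCRAM-SHA-512
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512

# TLS material (placeholder paths and passwords)
ssl.keystore.location=/etc/kafka/secrets/broker.keystore.jks
ssl.keystore.password=changeit
ssl.truststore.location=/etc/kafka/secrets/broker.truststore.jks
ssl.truststore.password=changeit

# Enforce ACLs and deny clients that match no ACL
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
allow.everyone.if.no.acl.found=false
```

With a setup along these lines, an anonymous visitor who merely knows the server address cannot list topics or read messages, which is exactly the access the investigators reportedly had.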

The Real-World Risks
While names and email addresses weren't included, the breach still exposed:
- IP addresses
- Device identifiers
- Authentication tokens

Combined with data from other leaks, these details could allow attackers to pinpoint user identities, take over accounts, or even steal in-app funds.
Lessons Learned
This breach is a stark reminder that emotional AI apps handle not just data, but human vulnerability. Protecting that trust means implementing strict cybersecurity safeguards, encrypting sensitive data, and ensuring servers are never left publicly accessible.
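One concrete safeguard implied here is never writing raw identifiers (IPs, device IDs) into logs or message streams in the first place. A minimal sketch in Python, assuming a server-side secret (the `PEPPER` value and function name below are illustrative, not from any real app):

```python
import hmac
import hashlib

# Hypothetical server-side secret; in practice, load this from a secrets
# manager and never hard-code or ship it with the client.
PEPPER = b"example-secret-pepper"

def pseudonymize(identifier: str, pepper: bytes = PEPPER) -> str:
    """Return a stable, non-reversible token for an identifier.

    HMAC-SHA-256 keyed with a server-side secret means that leaked logs
    alone cannot be mapped back to the raw IP or device ID.
    """
    return hmac.new(pepper, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# The same input always yields the same token, so the stream stays useful
# for joins and rate limiting, but the raw identifier never leaves the server.
token = pseudonymize("203.0.113.7")
```

Had the leaked Kafka streams carried tokens like this instead of raw IPs and device identifiers, the cross-referencing attacks described above would have been far harder.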
Would you trust an AI companion app with your personal conversations — or do incidents like this make you rethink digital intimacy?