Millions of private AI chats were just leaked to the public

Why your private AI conversations may no longer be private

Millions of AI chats exposed—your private conversations may not be as secure as you think. | ©Image Credit: Mohamed Nohassi / Unsplash

Millions of people have poured their secrets, questions, and unfiltered thoughts into AI chat apps, often assuming those conversations vanish the moment the chat window closes. But a newly uncovered data leak suggests otherwise. From confidential work projects to deeply personal admissions, the scale of this exposure is a wake-up call for anyone who has ever trusted a chatbot with their data. Read on to discover how this breach happened, whether your history is part of the leak, and the vital steps you must take before your private thoughts become the internet’s next headline.

Massive data breach hits Chat & Ask AI

An independent security researcher named Harry discovered a catastrophic data breach within Chat & Ask AI, a leading mobile application that has amassed over 50 million downloads across Google Play and the Apple App Store. The vulnerability, rooted in an unsecured database, allowed the researcher to access roughly 300 million messages belonging to more than 25 million users. The leaked logs reportedly contain highly sensitive content, including discussions of illegal acts and desperate pleas for mental health assistance.

Behind its sleek interface, Chat & Ask AI operates as a “wrapper” service, a popular category of apps that provide a mobile-friendly gateway to high-end Large Language Models (LLMs). This specific app allows users to toggle between industry leaders like OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini. The app is owned by Codeway (also operating under the name Deep Flow Software Services), an Istanbul-based tech powerhouse founded in 2020. Codeway is a dominant player in the mobile market, boasting a portfolio of over 60 products, including the viral Wonder AI Art Generator and Cleanup, which have collectively reached over 400 million users worldwide.

The breach is particularly alarming because the exposed files contained not just chat histories and model preferences, but also data linked to users of other apps within the Codeway ecosystem. This incident highlights a growing danger: while the core AI models from Google or OpenAI may be secure, the third-party apps we use to access them can often be the “back door” through which our most private thoughts are leaked to the public.

How a simple Firebase flaw exposed millions of private messages

The root cause of this massive exposure wasn’t a sophisticated cyberattack, but a surprisingly common development oversight: a Firebase misconfiguration. Firebase, a Google-owned “Backend-as-a-Service” (BaaS) platform, is a staple for developers looking to scale apps quickly. However, it remains a frequent source of preventable disasters when security protocols are ignored.

The vulnerability stems from Firebase Security Rules being left in a “public” state. In this wide-open configuration, anyone who discovers the project’s URL can view, change, or even wipe the entire database without needing a single password or token. Essentially, the digital front door was left unlocked and standing wide open.
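To make the failure concrete, here is a minimal sketch, written in Python with the requests library (both my choice, not anything the researcher disclosed), of the kind of unauthenticated read a publicly readable Firebase Realtime Database will serve over its standard REST API. The project URL is hypothetical, and this is essentially the probe an automated scanner can run at scale; only point it at databases you own or are authorized to test.

```python
import requests

# Hypothetical project URL -- substitute a database you own or are authorized to test.
DB_URL = "https://example-project-default-rtdb.firebaseio.com"

# If the Realtime Database rules were left in the wide-open "public" state
# (roughly: {"rules": {".read": true, ".write": true}}), the REST API will
# answer this request with no password or token at all.
resp = requests.get(f"{DB_URL}/.json", params={"shallow": "true"}, timeout=10)

if resp.ok and resp.json() is not None:
    print("Publicly readable. Top-level keys:", list(resp.json()))
else:
    # A correctly locked-down database rejects the request with "Permission denied".
    print("Read blocked:", resp.status_code, resp.text.strip())
```

The fix is equally simple on paper: rules that require authentication (for example, granting read and write access only when a signed-in user is present) turn the same request into a denied one, which is why researchers describe breaches like this as preventable.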

Alarmed by how often this mistake occurs, Harry developed an automated tool to scan apps on both Google Play and the Apple App Store for this exact vulnerability. The results were eye-opening: of the 200 iOS apps analyzed, more than half (103 apps) were found to be affected, collectively exposing tens of millions of files.

To pressure companies into action, Harry launched a public website listing apps currently suffering from this exposure. While Codeway’s suite of apps was initially featured, they have since been removed. Following a “responsible disclosure” (a practice where researchers alert companies privately before going public), Codeway reportedly patched the flaw across its entire ecosystem within hours.

Smart ways to protect your privacy when using AI chatbots

While security researchers work to patch the leaks, the responsibility for protecting your digital footprint ultimately sits with you. Beyond consulting Harry’s Firehound registry to see if your favorite apps are listed, you can adopt a “zero-trust” mindset when interacting with artificial intelligence.

Here is how to build a digital firewall around your private conversations:

1. Choose privacy-first platforms

Seek out AI services that explicitly offer “incognito” modes or guarantee that your inputs are not harvested to train future models. If a service doesn’t clearly state how your data is used, assume it’s being stored and analyzed.

2. Guard your true identity

When diving into sensitive topics, keep it anonymous. Never use your real name, address, or employer details in a prompt. Think of the chatbot as a stranger on a bus. You can talk, but they don’t need to see your ID.

3. Sanitize your inputs

Treat every message as if it might one day be public (a minimal scrubbing sketch follows the list below).

  • No sensitive documents: Avoid uploading PDFs or images containing financial data or health records.
  • Anonymize others: Use pseudonyms when discussing friends, family, or colleagues.
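As a rough illustration of that "sanitize before you send" habit, here is a small Python sketch that swaps obvious identifiers for placeholders on your own machine before a prompt is pasted into any chatbot. The patterns and labels are illustrative assumptions, not a complete PII detector.

```python
import re

# Illustrative patterns only -- a real PII scrubber needs far more than three regexes.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def scrub(prompt: str) -> str:
    """Replace obvious identifiers with placeholders before the prompt leaves your device."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(scrub("Reach Jane at jane.doe@example.com or +1 (555) 010-9999 about SSN 123-45-6789."))
```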

4. Decouple your social media

Big Tech AI (like Meta AI, Grok, or Gemini) often tries to bridge the gap between your prompts and your social profile. To prevent your deepest queries from being tethered to your public persona, log out of your social media accounts or use a dedicated, “clean” browser window for AI interactions.

5. Practice ‘decision skepticism’

Remember: AI is a statistical engine, not a soul. It lacks the empathy and lived experience required for major life choices. Relying on a chatbot for medical, legal, or crisis advice isn’t just a privacy risk. It’s a safety risk.

Source: Malwarebytes