Research Disclosure
Threat model diagram: a user's query flows into an LLM platform, which leaks data to Meta, Google, TikTok, and other trackers via embedded scripts

Your AI Assistant Is Leaking Your Conversations

We disclose structural privacy risks in prominent generative AI products — Perplexity, Anthropic's Claude, xAI's Grok, and OpenAI's ChatGPT — caused by third-party trackers embedded in LLM services that leak user conversations, identities, and sensitive metadata.

4 AI Platforms Tested
13+ Third-party Trackers Found
4 Platforms Affected
0 Disclosed to Users

Generative AI is rapidly becoming a foundational layer of the Internet, enabling the emergence of agentic systems that mediate users' interactions with digital services. Despite this transformation, the underlying data-driven economic dynamics remain largely unchanged, as acknowledged by prominent industry actors. This continuity extends to generative AI ecosystems, which integrate third-party trackers that monitor users' actions and retain the capability to collect sensitive user data.

In this report, we disclose concerning structural privacy risks caused by (1) the systematic introduction of third-party analytics services into prominent generative AI products developed by major AI actors such as Perplexity, Anthropic's Claude, xAI's Grok, and OpenAI's ChatGPT; and (2) insecure access control mechanisms in some of these LLM services that leak user conversations to the embedded third-party trackers. The leaked data includes conversation titles, a particularly sensitive data type that can disclose users' concerns, conversation topics, interests, and more. Meta's AI, MS Copilot, and Google Gemini are out of scope of this analysis because they act both as LLM providers and third-party trackers, and therefore fall under a different threat model. We plan to extend the scope of our analysis to include these products in the coming weeks.

Key observations of privacy concern

Leakage of conversation URLs to third-party advertising and tracking services

User conversations in LLM services frequently contain sensitive information introduced by end users. Yet, for Grok and Perplexity, conversation URLs are disclosed by default to third-party trackers such as the Meta Pixel, as shown in Figure 1. These URLs often serve as publicly available permalinks with weak access control, making them accessible by default to anyone who knows the URL. This potentially allows the trackers to access user conversations and their content. In Grok's case, shared conversations also generate publicly accessible screenshot images of the conversation content, with verbatim message text exposed in Open Graph metadata received by TikTok's tracker. Table 1 describes the default access control mechanisms across LLMs.
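To make the Open Graph exposure concrete, the following sketch parses og:* tags from an illustrative page snippet. The HTML and its values are invented for illustration, not captured from Grok; the point is that verbatim message text can ride along in image alt metadata that any recipient of the page can read.

```python
from html.parser import HTMLParser

# Hypothetical snippet mirroring the kind of Open Graph metadata a shared
# conversation page might carry; the tag values here are invented.
SHARED_PAGE_HTML = """
<html><head>
<meta property="og:image" content="https://example.com/share/abc123.png">
<meta property="og:image:alt" content="User: What are the symptoms of liver cancer ...">
</head></html>
"""

class OGParser(HTMLParser):
    """Collects Open Graph <meta property="og:*"> tags into a dict."""
    def __init__(self):
        super().__init__()
        self.og = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        d = dict(attrs)
        prop = d.get("property", "")
        if prop.startswith("og:"):
            self.og[prop] = d.get("content", "")

parser = OGParser()
parser.feed(SHARED_PAGE_HTML)
# The alt text carries conversation content verbatim:
print(parser.og["og:image:alt"])
```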

Linkability to user identities

Conversation URLs are frequently shared by LLM providers with third-party trackers alongside tracking identifiers (e.g., cookies such as fbp in the case of the Meta Pixel), which enable trackers to map online activity to user identities and behavioral profiles, per their official privacy policies. In some cases, the trackers also perform cookie syncing/server-side tracking and collect user email hashes through login forms, allowing persistent user tracking, targeting, and reidentification. Table 2 lists the PII and conversation leaks observed.
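A few lines suffice to show why a hashed email is still a persistent identifier. The normalization shown (trim, lowercase, SHA-256) is the common convention for tracking pixels, not a verbatim reproduction of any one tracker's code:

```python
import hashlib

def hash_email(email: str) -> str:
    """Normalize and SHA-256 hash an email, as tracking pixels commonly do
    before transmission. The hash hides the raw address but is stable."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# The same address always yields the same hash, so any two parties holding
# the hash can join their records on it without ever exchanging the raw
# email -- which is exactly what makes it a reidentification handle.
h1 = hash_email("Alice@example.com ")
h2 = hash_email("alice@example.com")
assert h1 == h2
print(h1[:16])
```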

Potentially misleading privacy controls and privacy disclosures

The studied LLMs offer privacy controls to limit conversation visibility, but may mislead users by implying stronger protections than are actually enforced. Privacy policies of Grok, Perplexity, OpenAI, and Claude confirm the collection of user conversations, usage telemetry, and metadata for first-party purposes, the use of third-party cookies (e.g., Meta, Google, TikTok) for analytics and advertising, and data sharing with third parties. Yet, they do not clearly state that user conversations are shared with online advertising and tracking services — relying instead on broad language (e.g., "content you submit" or "business partners") that leaves uncertainty about actual data flows. Cookie consent forms present further transparency shortcomings, as Fig. 2 shows.

Although preliminary, our findings reveal systemically weak privacy and security postures across LLM services. While we do not yet have evidence that trackers actually read conversations, the permalinks are disseminated to them, so the capability to read the conversations, and with it the potential risk, exists.

Privacy Impact: Why does it matter?

Generative AI systems are rapidly reaching mass adoption. According to Eurostat, 32.7% of the EU population (ages 16–74) used generative AI in 2025, primarily for personal purposes (25.1%), but also for work (15.1%), spanning a wide range of professions, and for education (9.4%).

User conversations frequently contain sensitive information, as users often perceive LLMs as trusted assistants. This perception increases the likelihood of oversharing sensitive information. Prior research shows that PII is disclosed to LLMs in unexpected contexts, including sexual preferences, mental-health support, and health conditions, which carries significant privacy risks. These threats are aggravated by LLMs' ability to infer user attributes. The concerns extend to the enterprise and public-sector levels, where intellectual property and sensitive information can be disclosed, with direct national security implications. In 2023, Samsung banned internal use of ChatGPT after employees leaked sensitive code and intellectual property to the LLM.

When conversation data is shared with third parties like Meta and Google, along with cookies and other user identifiers such as email hashes, without sufficient user awareness and under weak access control mechanisms, a new threat scenario arises. The observed practices also suggest that the data-driven business models of the traditional web (e.g., advertising, analytics) are being replicated in LLM ecosystems with limited oversight.

Leakage Matrix

Table 2. Summary of PII and conversation/prompt dissemination to third parties.


Product | Third party | Data leaked | Fires when
--- | --- | --- | ---
Perplexity | Meta | fbp cookie, conversation URL | Discontinued Apr 2026
Perplexity | Datadog | Email address (raw), conversation URL, metadata (timezone, device ID) | Always
Perplexity | Singular | Email hash, OS and browser metadata | Always
Anthropic Claude.ai | Meta | fbp cookie and browser metadata | Non-essential cookies accepted
Anthropic Claude.ai | Intercom | Email addresses and conversation URL | Always (authenticated)
Anthropic Claude.ai | Datadog | User anonymous ID, viewport data, page URL (with chat GUID), usage statistics and metadata | Non-essential cookies accepted
Anthropic Claude.ai | Server-side ×11 | User email, account UUID, subscription plan, page URL (incl. conversation UUID), Segment anonymousId, Amplitude session ID, country | Non-essential cookies accepted
OpenAI ChatGPT | Google Analytics | Conversation URL, page title (chat topic) | Always (free logged-in)
xAI Grok | Google Analytics & DoubleClick | Conversation URL, page title, metadata | Always
xAI Grok | TikTok | Hashed email, conversation URL, page title, ttp cookie | Non-essential cookies accepted
xAI Grok | Meta | Conversation URL (incl. conversation UUID), page title, fbp cookie | Non-essential cookies accepted
xAI Grok | Server-side GTM | Conversation URL, page title, _fbp, _ttp cookies | Non-essential cookies accepted
xAI Grok | TikTok | Conversation screenshot image, verbatim message content (via og:image alt text) | Non-essential cookies accepted

Web Platforms

All four platforms embed third-party tracking scripts in their web interfaces. Conversation URLs, page titles, and user identifiers are transmitted to ad networks — in several cases regardless of cookie consent.

Perplexity
Web
Meta Pixel ✝, Datadog, Singular

Conversation URL transmitted to Datadog; URL slug exposes the chat topic.

Claude (Anthropic)
Web
Meta Pixel, Intercom, Datadog, +8 server-side

Datadog leak, _fbp cookie set as first-party, and Conversions API config loaded.

ChatGPT (OpenAI)
Web
Google Analytics

Conversation URL and page title transmitted to Google Analytics on page load.

Grok (xAI)
Web
Google Analytics, DoubleClick, TikTok, Meta Pixel

Conversation URL and title leaked to Google Analytics, TikTok, and Meta Pixel — and the link is publicly accessible from an incognito browser.

✝ Perplexity discontinued Meta Pixel as of April 3rd, 2026, possibly in response to the US class action filing.

How we tested. We used Chrome's developer console (Network tab) to capture all outbound requests. Each platform was tested across combinations of authentication state (guest / logged-in), cookie consent (accepted / rejected), account tier, and privacy mode. Full session traffic was exported as HAR files. A fixed health-related prompt was submitted identically on all platforms: "What are the symptoms of liver cancer and what treatment options exist?"
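The HAR triage step above can be sketched in a few lines. The capture and tracker-domain list below are fabricated for illustration (the real set of trackers observed is in Table 2); real captures come from the browser's "Save all as HAR" export.

```python
from urllib.parse import urlparse

# Illustrative tracker domain suffixes, not the full set observed.
TRACKER_DOMAINS = {"facebook.com", "google-analytics.com", "tiktok.com"}

def third_party_hits(har: dict, first_party: str) -> list:
    """Return tracker-bound request URLs from a HAR capture, skipping
    requests to the platform's own (first-party) hosts."""
    hits = []
    for entry in har["log"]["entries"]:
        host = urlparse(entry["request"]["url"]).hostname or ""
        if host == first_party or host.endswith("." + first_party):
            continue  # first-party traffic is expected
        if any(host == d or host.endswith("." + d) for d in TRACKER_DOMAINS):
            hits.append(entry["request"]["url"])
    return hits

# Minimal fabricated HAR: one first-party request, one analytics beacon
# that carries the conversation URL in its dl= query parameter.
har = {"log": {"entries": [
    {"request": {"url": "https://grok.com/chat/123"}},
    {"request": {"url": "https://www.google-analytics.com/g/collect"
                        "?dl=https%3A%2F%2Fgrok.com%2Fchat%2F123"}},
]}}
print(third_party_hits(har, "grok.com"))
```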

Disclosure Timeline

A record of our research and disclosure activities.

23 Mar 2026
Initial Discovery

First observed tracker activity during traffic analysis of Perplexity AI and Grok web interfaces.

3 Apr 2026
Perplexity Removes Meta Pixel

Perplexity discontinued the Meta Pixel integration, likely in response to the US class action Doe v. Perplexity AI, Meta Platforms, Google (Case 3:26-cv-02803, filed 31 March 2026), as reported by Ars Technica. This was not the result of our disclosure — we noted it as an independent corroboration of our findings.

6 Apr 2026
Expanded Testing

Systematic testing across all platforms and surfaces commenced. Condition matrix applied across auth state, cookie consent, account tier, and privacy mode combinations.

13 Apr 2026
Disclosure to Data Protection Authorities

Findings submitted to relevant Data Protection Authorities (DPAs) for regulatory review.

17 Apr 2026
Vendor Notification — xAI Grok

xAI notified of findings relating to Grok. No response received to date.

4 May 2026
Public Disclosure

This page published.

Note: The findings described here involve third-party analytics and advertising trackers — not exploitable vulnerabilities. Publishing this information does not enable anyone to collect user data; only the platform operators can act on it. Our goal is public awareness and giving AI companies the opportunity to address these data flows. We follow responsible disclosure principles: vendors are given a reasonable window to respond and remediate before full public disclosure.

FAQ

Common questions about the research and its implications.

Am I affected if I use these AI platforms?

If you have used Perplexity (before April 3rd, 2026), Grok, Claude, or ChatGPT while logged in, your conversation URLs and potentially identifying data — such as email hashes and advertising cookies — were transmitted to third-party networks including Meta, Google, and TikTok. This applies regardless of whether you used private or incognito mode. Perplexity removed the Meta Pixel following a US class action filed in March 2026.

What is a conversation permalink and why does it matter?

A permalink is a stable, permanent URL pointing to a specific conversation — for example, grok.com/share/abc123. Several platforms make these URLs publicly accessible by default, meaning anyone who knows the link can read the full conversation without logging in. When these URLs are sent to third-party trackers like Meta or Google, those trackers gain the ability to access and index the conversation content. This is the core risk: leaking a URL is not just metadata — it can be equivalent to leaking the conversation itself.
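Beyond granting access, the URL itself can leak the topic: some platforms embed a slugified title in the path. The sketch below recovers that hint; the URL shape is illustrative, not an exact reproduction of any platform's format.

```python
from urllib.parse import urlparse

def topic_from_permalink(url: str) -> str:
    """Extract the human-readable topic hint embedded in a conversation
    permalink slug (illustrative URL shape, not any platform's exact one)."""
    slug = urlparse(url).path.rstrip("/").split("/")[-1]
    parts = slug.split("-")
    # Drop a trailing opaque ID token (contains digits) if present.
    if parts and any(c.isdigit() for c in parts[-1]):
        parts = parts[:-1]
    return " ".join(parts)

# Even a tracker that never fetches the page learns the topic from the
# URL string alone:
print(topic_from_permalink(
    "https://example.com/search/symptoms-of-liver-cancer-aB3xQ9"))
```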

Are my past conversations at risk?

Potentially, yes. For platforms where conversation URLs were shared with trackers and those URLs are publicly accessible without login (Grok in particular), any conversation whose permalink was transmitted to a third party could in principle be accessed by that party. Perplexity's guest-tier conversations were fully public until the Meta Pixel was removed in April 2026. Changing privacy settings after the fact does not necessarily revoke access — on Grok, a shared link remains accessible unless you explicitly revoke it at the individual chat level.

Does rejecting cookies protect me?

It helps against some trackers but not all. Rejecting non-essential cookies prevents Claude's Meta Pixel, Datadog, and server-side forwarding to eleven ad platforms from firing. On Grok, TikTok, Meta Pixel, and server-side GTM tracking are likewise gated on cookie consent. However, Grok's Google Analytics fires regardless of the consent recorded in OneTrust's forms, and Claude's Intercom integration sends conversation URLs unconditionally on every authenticated session, regardless of cookie choices.

How can I protect myself using the AI chatbots' settings?

Steps vary by platform:

  • Perplexity: Set conversations to Private in sharing settings. Avoid sharing incognito chats — once you leave incognito mode, you cannot unshare them.
  • Grok: Conversations are public by default. Enable access restriction in settings. If a link was already shared, revoke it explicitly at the chat level.
  • Claude: Reject non-essential cookies to prevent Meta Pixel and Datadog from loading. Use the data privacy controls to manage conversation sharing.
  • ChatGPT: Reject cookies where possible. Note that Google Analytics tracking of conversation URLs fires for free logged-in users regardless of cookie consent.

Does using an ad-blocker protect me?

Partially. Browser-based ad blockers can intercept client-side Pixel requests, but cannot block server-to-server transmission. Claude forwards user events from Anthropic's infrastructure to eleven ad platforms — including Meta, LinkedIn, TikTok, and Google — entirely server-side, invisible to the browser. Grok's server-side GTM also relays data to Meta and TikTok server-to-server after 3 messages in a session. Additionally, Claude proxies Segment analytics through a first-party domain (a-cdn.anthropic.com), bypassing hostname-based blockers entirely.
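A toy hostname blocklist makes the limitation concrete: first-party proxying defeats hostname-based filtering, and server-to-server forwarding never reaches the browser at all. The blocklist entries other than a-cdn.anthropic.com are illustrative.

```python
from urllib.parse import urlparse

# Simplified hostname blocklist, in the spirit of filter-list ad blocking.
BLOCKLIST = {
    "connect.facebook.net",
    "analytics.tiktok.com",
    "www.google-analytics.com",
}

def blocked(url: str) -> bool:
    """True if the request host matches a blocklisted tracker hostname."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in BLOCKLIST)

# A direct tracker request is caught...
assert blocked("https://connect.facebook.net/en_US/fbevents.js")
# ...but the same analytics payload proxied through a first-party domain
# (as with a-cdn.anthropic.com) shares the platform's hostname and slips by.
assert not blocked("https://a-cdn.anthropic.com/v1/batch")
```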

Have the affected companies been notified?

We followed responsible disclosure principles. See the Disclosure Timeline for the current status of vendor notifications and responses.

Researchers

Questions, responsible disclosure, or collaboration inquiries.

This is a living document maintained in the public interest. If you have observed a privacy threat scenario in an AI assistant that is not covered here, we welcome reports from researchers, journalists, and members of the public.

Dr. Aniketh Girish (IMDEA Networks)
Guilherme Oliveira (IMDEA Networks)
Prof. Guillermo Suarez-Tangil (IMDEA Networks)
Jorge García Herrero (Lawyer & DPO)
Miguel Sanchez (IMDEA Networks / UC3M)
Prof. Narseo Vallina-Rodriguez (IMDEA Networks)
Tautvydas Jackevičius (IMDEA Networks)

For press, responsible disclosure, or general inquiries, reach us at leakyllm@networks.imdea.org.