Meta Patches Major AI Privacy Flaw That Exposed User Prompts and Responses

Meta recently addressed a significant security vulnerability in its AI chatbot, Meta AI, which exposed users' private prompts and AI-generated content to others.
The Discovery of the Vulnerability
Sandeep Hodkasia, founder of the security firm Appsecure, uncovered the issue and promptly reported it to Meta, earning a $10,000 bug bounty. The bug, reported in December 2024, allowed anyone to manipulate a unique prompt identifier and access other users' AI chat history, a serious privacy risk. Meta confirmed that the bug was fixed in January 2025 and that it found no evidence of malicious exploitation.
How the Flaw Worked
The vulnerability stemmed from the way Meta AI handled prompt editing. When a user edited a prompt, Meta's servers assigned the prompt and its AI-generated response a unique numerical identifier. By tinkering with this number in his browser's network traffic, Hodkasia found he could retrieve another user's prompt and its associated response. Because the identifiers were sequential and guessable, a bad actor could easily script a tool to harvest large amounts of user data through "number scraping." Critically, Meta's system never verified whether the person making the request was authorized to view the prompt in question, a textbook insecure direct object reference.
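Meta's actual implementation is not public, so the sketch below is a hypothetical illustration of the flaw class rather than Meta's code: a prompt store keyed by sequential IDs, a vulnerable lookup that skips the ownership check, and a fixed lookup that enforces it.

```python
# Hypothetical illustration of the flaw class (insecure direct object reference).
# All names, IDs, and data structures here are invented for the example.

from dataclasses import dataclass


@dataclass
class Prompt:
    prompt_id: int   # sequential, guessable identifier
    owner_id: str    # user who created the prompt
    text: str
    response: str


# In-memory stand-in for a server-side prompt store.
PROMPTS = {
    1001: Prompt(1001, "alice", "Draft my resignation letter", "Dear team, ..."),
    1002: Prompt(1002, "bob", "Summarize my medical notes", "Summary: ..."),
}


def get_prompt_vulnerable(requesting_user: str, prompt_id: int) -> Prompt | None:
    # BUG: returns the record for any ID without checking who is asking.
    return PROMPTS.get(prompt_id)


def get_prompt_fixed(requesting_user: str, prompt_id: int) -> Prompt | None:
    # FIX: the server verifies ownership before returning anything.
    prompt = PROMPTS.get(prompt_id)
    if prompt is None or prompt.owner_id != requesting_user:
        return None  # or raise an authorization error
    return prompt


# Because IDs are sequential, an attacker logged in as "mallory" can simply
# walk the number space and harvest other users' data from the vulnerable path.
leaked = [p for i in range(1000, 1010)
          if (p := get_prompt_vulnerable("mallory", i)) is not None]
print([f"{p.owner_id}: {p.text!r}" for p in leaked])  # exposes alice's and bob's prompts

blocked = [p for i in range(1000, 1010)
           if (p := get_prompt_fixed("mallory", i)) is not None]
print(blocked)  # prints [] once authorization is enforced
```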
Meta’s Response and Broader Context
After being alerted, Meta patched the bug within about a month and reported that its investigation found no signs of abuse. The incident comes at a time when leading tech firms are racing to ship and refine AI products, often faster than their security and privacy reviews can keep up. Notably, the launch of Meta AI's standalone app earlier in the year caused user confusion, with some people unintentionally making their conversations public, further underscoring the privacy challenges of consumer-facing AI.
Deep Founder Analysis
Why it matters
This incident highlights a crucial shift for startups and founders competing in the AI space: user trust and data privacy are rapidly becoming core differentiators, not just afterthoughts. As AI systems become more embedded in personal and business workflows, protecting user inputs and outputs from exposure is essential. Any breach or vulnerability can instantly erode confidence and jeopardize growth, especially for early-stage platforms that lack Meta’s resources for reputation recovery.
Risks & opportunities
The most immediate risk is regulatory scrutiny targeting AI privacy lapses. High-profile incidents like this could prompt stricter rules or oversight for AI startups. However, there is also a significant opportunity: startups that design privacy-first AI architectures, or that make transparent data handling a visible part of the product, could win users from legacy platforms perceived as complacent. Historical parallels can be drawn from the rise of privacy-centric messaging apps like Signal, which carved out strong market positions by leading on security.
Startup idea or application
A compelling opportunity inspired by this event is a platform offering independent vulnerability assessments for AI-driven applications. The platform could provide automated scanning to detect logic flaws in prompt or session handling, coupled with real-time monitoring of backend data access. Alternatively, a "privacy as a service" offering that embeds robust prompt isolation, user permissioning, and secure prompt storage as plugins could allow smaller teams to launch competitive AI features without compromising on privacy.
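As a rough sketch of the kind of automated check such a platform could run, the snippet below probes prompt isolation by creating a resource as one user and trying to read it back as another. The base URL, endpoint paths, and bearer-token scheme are placeholders, not any real product's API.

```python
# Minimal IDOR probe: does user B's token grant access to a prompt created by user A?
# Endpoints and auth scheme are hypothetical placeholders.

import requests

BASE_URL = "https://api.example-ai-app.com"  # hypothetical target under test


def probe_prompt_isolation(token_a: str, token_b: str) -> bool:
    """Return True if user B can read a prompt that user A created (a finding)."""
    # 1. User A creates a prompt and receives its identifier.
    created = requests.post(
        f"{BASE_URL}/prompts",
        json={"text": "canary prompt for isolation test"},
        headers={"Authorization": f"Bearer {token_a}"},
        timeout=10,
    )
    created.raise_for_status()
    prompt_id = created.json()["id"]

    # 2. User B attempts to fetch the same prompt by ID.
    cross_read = requests.get(
        f"{BASE_URL}/prompts/{prompt_id}",
        headers={"Authorization": f"Bearer {token_b}"},
        timeout=10,
    )

    # Any 2xx response here means prompt isolation is broken.
    return cross_read.ok
```

Run repeatedly against staging environments, checks like this catch the same class of missing-authorization bug that Meta's researchers were paid a bounty to find.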
Visit Deep Founder to learn how to start your own startup, validate your idea, and build it from scratch.
📚 Read more articles in our Deep Founder blog.