Texas Attorney General Investigates Meta and Character.AI Over Claims of Misleading Children With Mental Health Chatbots

Texas has launched an official probe into Meta's AI Studio and Character.AI, questioning whether these platforms have misled children with chatbots marketed as mental health tools.
Texas Launches Investigation into AI Mental Health Chatbots
On August 18th, Texas Attorney General Ken Paxton initiated an investigation targeting both Meta (the parent company of Facebook and Instagram) and Character.AI over concerns that their AI chatbots may be deceptively promoted as mental health support for children. Paxton’s office says these companies could be engaging in deceptive trade practices by representing AI-driven chatbots as legitimate sources of therapy or counseling, even though they lack proper medical oversight and credentials.
Paxton emphasized the importance of safeguarding minors from "deceptive and exploitative technology," specifically warning that AI platforms might trick vulnerable users into believing they are receiving real psychological help, when in fact the interaction consists largely of generic, pre-programmed responses shaped by harvested personal data.
Growing Scrutiny of AI Chatbots for Kids
This investigation closely follows news that U.S. Senator Josh Hawley has launched his own inquiry into Meta due to reports that some of its chatbots may have flirted with underage users. While Meta says it doesn’t target children with therapy-centric bots, there is currently little stopping minors from accessing or interacting with these platforms for emotional support.
Notably, some user-generated bots on Character.AI, like the popular "Psychologist" persona, have gained traction among younger users. Both companies stress that their AI bots are not intended as a replacement for professional help. Meta includes disclaimers clarifying that interactions are with AI, not licensed professionals, and Character.AI reminds users that its personas do not provide real advice. Even so, many minors may ignore these warnings or not fully understand their implications.
Concerns Over Privacy and Targeted Advertising
The Texas Attorney General’s office has also raised questions about how these companies collect and use minors’ data. Paxton notes that while chatbots claim to offer confidential or safe spaces, their terms of service often allow the companies to track, log, and reuse these interactions for targeted advertising and ongoing algorithm improvement. For example, Meta’s privacy policy states that user prompts and feedback may be shared with third parties to deliver more “personalized outputs”—potentially including targeted ads. Character.AI’s privacy policy makes similarly broad claims, describing data collection across platforms such as TikTok, YouTube, and Reddit and linking it to advertising and system training.
For both companies, these data practices have come under scrutiny, especially where the users are minors. While Character.AI says it is only beginning to experiment with advertising and does not use chat content for ads, the same privacy policy applies to users of all ages, including teenagers. Meta and Character.AI both say their platforms are not intended for children under 13—yet Meta has long been criticized for failing to police under-13 accounts, and Character.AI hosts personas that clearly appeal to young users.
The Legal and Regulatory Landscape
At the heart of these debates is pending U.S. legislation—most notably the Kids Online Safety Act (KOSA)—designed to protect young users from excessive data collection and targeted ads. Despite bipartisan support, KOSA stalled last year amid resistance from major tech industry lobbyists, Meta among them. The Texas Attorney General has now issued civil investigative demands to both Meta and Character.AI, compelling them to produce documents and data so his office can determine whether state consumer protection laws have been violated.
Deep Founder Analysis
Why it matters
This investigation highlights growing tension between rapid AI product innovation and ethical, child-centric regulation. For founders and tech startups, it signals an urgent market shift: Regulators and parents alike are paying closer attention to how AI products are marketed, especially those purporting to aid mental health. If a product’s claims outpace its safeguards, legal exposure rises rapidly.
Risks & opportunities
The primary risk for founders is increased legal scrutiny and the potential cost of compliance with state and federal child protection legislation. However, the opportunity lies in building trust—startups that proactively bake transparent disclaimers, privacy-first design, and real professional oversight into AI products for minors could meaningfully differentiate themselves. There’s also whitespace for B2B solutions that help major tech players audit and improve their child safety protocols.
Startup idea or application
Inspired by this controversy, a founder could build a third-party AI “compliance dashboard” designed specifically for youth-focused chatbots and applications. The tool would automatically scan disclaimers, track privacy policy updates, and flag risky behavior or potential breaches, giving both early-stage founders and enterprise product teams a way to stay one step ahead of regulation and public expectation (see the sketch below).
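To make the idea concrete, here is a minimal Python sketch of what the rule engine behind such a dashboard might look like. Every field, check, and keyword is a hypothetical placeholder chosen for illustration, not a real regulatory requirement; an actual product would encode rules reviewed by counsel against COPPA, state consumer protection law, and similar statutes.

```python
"""Illustrative sketch of a youth-safety compliance check for AI chatbot products.

All class names, rules, and keywords are hypothetical and exist only to make the
compliance-dashboard idea concrete.
"""

from dataclasses import dataclass


@dataclass
class BotProfile:
    """Hypothetical snapshot of a chatbot's user-facing text and data settings."""
    name: str
    disclaimer_text: str        # disclaimer shown alongside the chat window
    privacy_policy_text: str    # current privacy policy text
    min_age: int                # stated minimum age for the product
    uses_chat_for_ads: bool     # whether chat content feeds ad targeting


def check_professional_disclaimer(bot: BotProfile) -> str | None:
    """Flag bots whose disclaimer never says they are not licensed professionals."""
    required = ("not a licensed", "not a substitute for professional")
    if not any(phrase in bot.disclaimer_text.lower() for phrase in required):
        return "Disclaimer does not state that the bot is not a licensed professional."
    return None


def check_minor_data_practices(bot: BotProfile) -> str | None:
    """Flag age settings and ad-targeting choices that warrant legal review."""
    if bot.min_age < 13:
        return "Stated minimum age is below 13; COPPA-style review needed."
    if bot.uses_chat_for_ads:
        return "Chat content is used for ad targeting; flag for review."
    return None


def check_policy_mentions_ads(bot: BotProfile) -> str | None:
    """Flag privacy policies that mention targeted advertising but never address minors."""
    text = bot.privacy_policy_text.lower()
    if "targeted advertising" in text and "minor" not in text:
        return "Policy mentions targeted advertising but never addresses minors."
    return None


def audit(bot: BotProfile) -> list[str]:
    """Run all checks and collect findings for display in a compliance dashboard."""
    checks = (check_professional_disclaimer, check_minor_data_practices, check_policy_mentions_ads)
    return [finding for check in checks if (finding := check(bot)) is not None]


if __name__ == "__main__":
    sample = BotProfile(
        name="Psychologist-style persona",
        disclaimer_text="This is an AI character, not a licensed therapist.",
        privacy_policy_text="We may use interactions for targeted advertising.",
        min_age=13,
        uses_chat_for_ads=True,
    )
    for finding in audit(sample):
        print("FLAG:", finding)
```

In practice the interesting work would be keeping the rule set current as policies and statutes change; the hard-coded checks above would be replaced by versioned rules that the dashboard re-runs whenever a monitored disclaimer or privacy policy is updated.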
Tags: AI, Mental Health Tech, Privacy, Child Safety, Compliance
For more on how startups can ethically and safely deploy AI-powered apps, see our article Anthropic Empowers Claude Models to End Abusive Conversations: What Founders Need to Know, which explores similar tensions around AI and user welfare.
Visit Deep Founder to learn how to start your own startup, validate your idea, and build it from scratch.
📚 Read more articles in our Deep Founder blog.