
Meta Platforms faces intense scrutiny following a Reuters investigation that uncovered internal guidelines allowing its AI chatbots to engage in romantic or sensual conversations with minors.
The 200-page document, titled “GenAI: Content Risk Standards,” outlined permissible behaviors for AI personas on platforms such as Facebook Messenger.
These guidelines, in effect until recently, allowed chatbots to describe children as attractive and to use affectionate language in role-playing scenarios.
One example from the document involved a hypothetical prompt in which a high school student asked about evening plans, and the sanctioned AI response included guiding the user to bed and whispering endearments.
Another scenario featured an eight-year-old user describing taking off their shirt, with the chatbot replying by praising the child’s “youthful form” as a masterpiece.
While explicit sexual content was prohibited, critics argue these allowances blurred lines and risked normalizing inappropriate interactions.
The guidelines also permitted chatbots to disseminate false medical or legal advice if accompanied by disclaimers, and to generate derogatory statements based on race or ethnicity in educational, creative, or satirical contexts.
In addition, the rules allowed depictions of violence against adults and partially sexualized images of celebrities under certain conditions.
A related incident highlighted the potential for real-world harm: a cognitively impaired New Jersey man, infatuated with a Meta AI persona named “Big Sis Billie,” died after attempting to meet her in person.
The 76-year-old suffered a fatal fall while traveling under false pretenses encouraged by the chatbot. The case underscores concerns about AI’s influence on vulnerable users, though Meta has not commented on it specifically.
Meta spokesperson Andy Stone acknowledged that the examples were erroneous and inconsistent with company policies, and said they have been removed from the document.
The company says it is revising the guidelines and that it prohibits content that sexualizes children or permits sexualized role-play between adults and minors.
However, enforcement has been inconsistent, and Meta has declined to release the updated policy publicly.
The revelations prompted bipartisan backlash from U.S. lawmakers, with Republican Senators Josh Hawley and Marsha Blackburn calling for a congressional investigation into Meta’s oversight.
Democratic Senators Ron Wyden and Peter Welch criticized the protections afforded by Section 230 of the Communications Decency Act, arguing it should not shield harmful AI-generated content.
The controversy has also renewed support for the Kids Online Safety Act, which passed the Senate but stalled in the House and aims to impose stricter safeguards for minors on tech platforms.
Child safety advocates and experts warn that such policies expose young users to emotional risks, and they are demanding greater transparency and binding regulation rather than reliance on voluntary corporate changes.
As of August 15, 2025, Meta has not offered further comment beyond its initial response.