Elon Musk watches as President Donald Trump speaks at the U.S.-Saudi Investment Forum at the John F. Kennedy Center for the Performing Arts in Washington, Nov. 19, 2025.
Brendan Smialowski | AFP | Getty Images
Elon Musk’s xAI saw user backlash after its artificial intelligence chatbot Grok generated sexualized pictures of children in response to user prompts.
A Grok reply to one user on X on Friday stated that it was “urgently fixing” the issue and called child sexual abuse material “illegal and prohibited.”
In replies to users, the bot also posted that a company could face criminal or civil penalties if it knowingly facilitates or fails to prevent this type of content after being alerted.
Grok's posts are AI-generated messages and do not represent official company statements.
Musk’s xAI, which created Grok and merged with X last year, sent an autoreply to a request for comment: “Legacy Media Lies.”
Users on X raised concerns in recent days over explicit content of minors, including children wearing minimal clothing, being generated using the Grok tool.
The social media site added an "Edit Image" button to photos that allows any user to alter them using text prompts, without the original poster's consent.
A post from xAI technical staff member Parsa Tajik also acknowledged the issue.
"Hey! Thanks for flagging. The team is looking into further tightening our gaurdrails [sic]," Tajik wrote in a post.
On Friday, government officials in India and France released statements promising to look into the matter. The Federal Trade Commission declined to comment, while the Federal Communications Commission did not immediately respond to CNBC's request for comment.
The proliferation of AI image-generating platforms since the launch of ChatGPT in 2022 has raised concerns over content manipulation and online safety across the board. It’s also contributed to an increasing number of platforms that have produced deepfake nudes of actual people.
David Thiel, a trust and safety researcher who was part of the now-disbanded Stanford Internet Observatory, told CNBC that various U.S. laws generally prohibit the creation and distribution of certain explicit images, including those depicting child sexual abuse and non-consensual intimate images.
Legal determinations about AI-generated images, like those produced by Grok, can hinge on specific details of the content created and shared, he said.
In a paper he co-authored called “Generative ML and CSAM: Implications and Mitigations,” Stanford researchers noted that “the appearance of a child being abused has been sufficient for prosecution,” in precedent-setting cases in the US.
While other chatbots have faced similar issues, xAI has repeatedly landed in hot water for misuse or apparent flaws in Grok’s design or underlying technology.
"There are a number of things companies could do to prevent their AI tools being used in this manner," Thiel said. "The most important in this case would be to remove the ability to alter user-uploaded images. Allowing users to alter uploaded imagery is a recipe for NCII. Nudification has historically been the primary use case of such mechanisms."
NCII refers to non-consensual intimate images.
In May, X faced a backlash after Grok generated unsolicited comments about “white genocide” in South Africa. Two months later, Grok posted antisemitic comments and praised Adolf Hitler.
Despite the stumbles, xAI has continued to land partnerships and deals.
The Department of Defense added Grok to its AI agents platform last month, and the tool is the main chatbot for prediction market platforms Polymarket and Kalshi.
