Musk denies awareness of Grok sexual underage images as California AG launches probe
- by TechCrunch
- Jan 14, 2026
“Overall, these behaviors suggest X is experimenting with multiple mechanisms to reduce or control problematic image generation, though inconsistencies remain,” Kozen said.
Neither xAI nor Musk has publicly addressed the problem head on. A few days after the instances began, Musk appeared to make light of the issue by asking Grok to generate an image of himself in a bikini. On January 3, X’s safety account said the company takes “action against illegal content on X, including [CSAM],” without specifically addressing Grok’s apparent lack of safeguards or the creation of sexualized manipulated imagery involving women.
The positioning mirrors what Musk posted today, emphasizing illegality and user behavior.
Musk wrote he was “not aware of any naked underage images generated by Grok. Literally zero.” That statement doesn’t deny the existence of bikini pics or sexualized edits more broadly.
Michael Goodyear, an associate professor at New York Law School and former litigator, told TechCrunch that Musk likely narrowly focused on CSAM because the penalties for creating or distributing synthetic sexualized imagery of children are greater.
“For example, in the United States, the distributor or threatened distributor of CSAM can face up to three years imprisonment under the Take It Down Act, compared to two for nonconsensual adult sexual imagery,” Goodyear said.
He added that the “bigger point” is Musk’s attempt to draw attention to problematic user content.
“Obviously, Grok does not spontaneously generate images. It does so only according to user request,” Musk wrote in his post. “When asked to generate images, it will refuse to produce anything illegal, as the operating principle for Grok is to obey the laws of any given country or state. There may be times when adversarial hacking of Grok prompts does something unexpected. If that happens, we fix the bug immediately.”
Taken together, the post characterizes these incidents as uncommon, attributes them to user requests or adversarial prompting, and presents them as technical issues that can be solved through fixes. It stops short of acknowledging any shortcomings in Grok’s underlying safety design.
“Regulators may consider, with attention to free speech protections, requiring proactive measures by AI developers to prevent such content,” Goodyear said.
TechCrunch has reached out to xAI to ask how many instances it has caught of nonconsensual sexually manipulated images of women and children, which guardrails specifically changed, and whether the company notified regulators of the issue. TechCrunch will update this article if the company responds.
The California AG isn’t the only regulator to try to hold xAI accountable for the issue. Indonesia and Malaysia have both temporarily blocked access to Grok; India has demanded that X make immediate technical and procedural changes to Grok; the European Commission ordered xAI to retain all documents related to its Grok chatbot, a precursor to opening a new investigation; and the U.K.’s online safety watchdog Ofcom opened a formal investigation under the U.K.’s Online Safety Act.
xAI has come under fire for Grok’s sexualized imagery before. As AG Bonta pointed out in a statement, Grok includes a “spicy mode” for generating explicit content. In October, an update made it even easier to jailbreak what few safety guidelines there were, and many users went on to create hardcore pornography with Grok, as well as graphic and violent sexual images.
Many of the more pornographic images Grok has produced depict AI-generated people — something many might still find ethically dubious, but perhaps less harmful than imagery manipulating the likenesses of real individuals.
“When AI systems allow the manipulation of real people’s images without clear consent, the impact can be immediate and deeply personal,” Copyleaks co-founder and CEO Alon Yamin said in a statement emailed to TechCrunch. “From Sora to Grok, we are seeing a rapid rise in AI capabilities for manipulated media. To that end, detection and governance are needed now more than ever to help prevent misuse.”