Women and girls are taking Grok to court over sexualized AI deepfakes – The 19th News


Technology
A new lawsuit filed Monday joins two others centered around nonconsensual explicit images allegedly made by the AI chatbot.
Jasmine Mithani
Technology Reporter
On Monday, three girls filed a class-action lawsuit against xAI alleging that the company’s Grok AI tool was used to generate child sexual abuse material from their photos.  
This new civil case joins at least two others filed against xAI, the company founded by billionaire Elon Musk, over nonconsensual deepfakes. The earlier cases center on nonconsensual deepfakes posted on X, the social media platform also owned by Musk, while the new complaint involves a third-party app that relied on Grok AI to make images.
Grok image generation debuted on X in December, and users immediately found ways to generate sexually explicit images despite a ban on nudity. Grok generated over 4.4 million images over nine days, per a review by The New York Times, and 1.8 million of those were sexualized depictions of women. Researchers at the nonprofit Center for Countering Digital Hate estimated Grok made 23,000 sexualized images of children over 11 days.
While some users created images of themselves, many of the images were of people who had no idea they were being digitally undressed on social media.
So far the three lawsuits are the only path forward for justice for the thousands, if not millions, of people who have been victimized in this way by Grok. Federal prosecutors have yet to pursue a criminal case under the Take It Down Act, which bans the publication of nonconsensual intimate imagery. (In fact, even as Grok’s deepfakes drew international investigations, the Pentagon announced it would integrate Grok in January, after securing a $200 million contract the year prior.) All three cases are civil actions seeking damages.
“We have no mechanisms for holding accountable, for demanding transparency, for demanding information from these companies, and they are incredibly resistant to taking responsibility when their platforms cause harm,” said Imran Ahmed, the founder and CEO of nonprofit Center for Countering Digital Hate. 
“Ensuring that your platform isn’t an industrial-scale machine for sexual abuse of women and children would seem like a no-brainer for someone seeking to launch a consumer platform,” Ahmed said.
xAI did not reply to a request for comment. 
All of the cases allege negligence on the part of xAI in releasing Grok, claiming the company did not undertake industry-standard testing or implement common guardrails to prevent the generation of nonconsensual explicit images or child sexual abuse material.
xAI debuted the Grok chatbot in 2023, and Musk advertised it as an antidote to other chatbots, which he said were infected with the “woke-mind virus.” From the start, developers said Grok would reply to “spicy” questions that other apps would refuse to answer. This assertion has come back to haunt the company in these lawsuits, as it is being used to demonstrate negligence.
Grok exists as a standalone app and is accessible through the social media platform X, formerly known as Twitter. On December 20, 2025, Musk announced that Grok could be prompted to edit and generate images on X. Deepfake abuse exploded on the platform, with many politicians and civil society watchdogs raising the alarm. After weeks of little action, the social media giant said image generation would be limited to paid X accounts — essentially monetizing nonconsensual deepfakes, critics argued. (The change also did not completely stop free X accounts from making images with Grok.)
On January 14, Musk posted on X that he was “not aware of any naked underage images generated by Grok,” but said “adversarial hacking” could lead to unexpected results that would be immediately fixed. Two of the three lawsuits allege Grok created sexually explicit, if not fully nude, images of kids before this date.
Two of the current cases stem from the rollout of Grok’s image generator on X. Ashley St. Clair, a political influencer and mother of one of Musk’s 14 publicly acknowledged children, sued xAI on January 15 after users prompted Grok to make sexually explicit images of her. St. Clair says some of the images modify a photo of her at 14, creating AI-generated child sexual abuse material.
Jane Doe, the plaintiff in the class-action lawsuit filed January 23, is a woman in South Carolina who says the Grok account posted an AI-generated image of her in a revealing bikini without her consent. She said X refused to take the image down after she originally reported it, and she was only able to get it removed after reporting it many times over three days. She said she had to take unpaid time off work and lives in fear that the image will resurface and cost her professional opportunities. 
Unlike the prior two cases, the class-action suit filed Monday focuses on child sexual abuse material allegedly made on an app that licensed the Grok Imagine API. Three students in Tennessee, two minors and one whose deepfakes were sourced from images of her when she was under 18, discovered that someone created AI-generated child sexual abuse material from images they posted on social media. The accused allegedly distributed these images alongside the first names of the victims and the name of their school, heightening the risk of physical harm. The accused was arrested in December, and the complaint says the plaintiffs, known as Jane Does 1, 2 and 3, have suffered severe anxiety, particularly at school.
But Riana Pfefferkorn, a policy fellow at the Stanford Institute for Human-Centered AI and expert on laws concerning AI-generated child sexual abuse material, said this lawsuit seems like “suing xAI on hard mode” because the complaint doesn’t directly tie deepfakes of two of the plaintiffs to Grok.
These cases against xAI face a few other challenges. Legally, Pfefferkorn said, xAI has indicated it will try to get the cases dismissed under Section 230 of the Communications Decency Act, which says platforms are not liable for what their users post unless it is related to sex trafficking. (The case filed Monday alleges xAI violated the Trafficking Victims Protection Act.) Whether Section 230 applies is an open legal question: humans presumably prompted Grok to generate the images, but the @grok account itself posted the deepfakes.
These cases could be an opportunity to clarify that AI platforms should be held liable for speech they generate, Ahmed said. “Our legal framework needs to change to deal with the new realities,” he said, advocating for Section 230 reform.
The federal DEFIANCE Act, if passed, could provide another pathway for monetary compensation for victims of deepfakes. The bill has cleared the Senate, and advocates are rallying for a House floor vote.
Monday’s class-action suit was filed in the Northern District of California, like the other class action. St. Clair filed her case in the Southern District of New York, where she lives. The two earlier cases are tied up in requests to be litigated in Texas. xAI claims that users of Grok agreed to terms of service that required all cases to be filed in the Northern District of Texas, home to a conservative judiciary that on at least one occasion has ruled in Musk’s favor.