In brief
Three Tennessee minors have sued xAI, alleging Grok generated CSAM from their real images and spread it online, causing severe harm.
The filing claims xAI knowingly launched Grok without safeguards and profited from its misuse, calling it a “business opportunity.”
Filed amid global probes, the suit seeks $150,000 per violation plus damages and an injunction.
Three Tennessee minors have sued Elon Musk’s xAI in a federal class action, alleging Grok generated child sexual abuse material using their real photos and that the company knowingly designed its AI chatbot without industry-standard safeguards, then profited from the result.
The lawsuit, filed Monday in the Northern District of California, claims Grok was used to create and distribute AI-generated child sexual abuse material (CSAM) using their real photographs.
The minors, identified as Jane Doe 1, 2, and 3, said the altered content was shared across platforms, including Discord, Telegram, and file-sharing sites, causing lasting emotional distress and reputational harm.
“xAI—and its founder Elon Musk—saw a business opportunity: an opportunity to profit off the sexual predation of real people, including children,” the lawsuit reads. “Knowing the type of harmful, illegal content that could—and would—be produced, xAI launched Grok, a generative artificial intelligence model with image and video-making features that would respond to prompts to create sexual content with a person’s real image or video.”
The alleged victims describe incidents between mid-2025 and early 2026, when their real photos were altered into explicit images and circulated online.
In one instance, one of the victims was alerted by an anonymous user who found folders of AI-generated content being traded among hundreds of users.
They allege a perpetrator accessed Grok through a third-party application that had licensed xAI’s technology, a structure the filing says xAI deliberately used to distance itself from liability while continuing to profit from the underlying model.
At the peak of the public backlash in January, Musk wrote on X that he was “not aware of any naked underage images,” adding that “when asked to generate images, it will refuse to produce anything illegal.”
According to a finding by the Center for Countering Digital Hate, cited in the lawsuit, Grok produced an estimated 23,338 sexualized images of children between December 29, 2025, and January 9 of this year, roughly one every 41 seconds.
The alleged victims are seeking damages of at least $150,000 per violation under Masha’s Law, along with disgorgement of revenues, punitive damages, attorneys’ fees, and a permanent injunction, as well as restitution of profits under California’s Unfair Competition Law.
Lawsuits stacking up
The lawsuit is among the first to hold an AI company directly liable for the alleged production and distribution of AI-generated CSAM depicting identifiable minors, and arrives as Grok faces simultaneous investigations across the U.S., EU, UK, France, Ireland, and Australia.
“When a system is intentionally designed to manipulate real images into sexualized content, the downstream abuse is not an anomaly—it’s a foreseeable consequence,” Even Alex Chandra, a partner at IGNOS Law Alliance, told Decrypt.
Chandra said courts may not accept a simple platform defense, noting a generative AI system could be “treated as a platform in terms of user interaction” but “evaluated as a product” when assessing safety design, with “particularly strict scrutiny” applied in CSAM cases due to heightened child-protection obligations.
He also said courts will likely focus on safeguards, noting the company may be expected to show “risk assessments and safety-by-design measures before deployment,” including guardrails that actively block harmful outputs.
Decrypt has reached out to Musk via xAI and SpaceX for comment.