
She initially reached out to other victims she knew; ultimately, “local law enforcement was notified, and a criminal investigation was opened,” the complaint said.
While examining the Discord evidence, investigators quickly determined the perpetrator had access to the first victim’s Instagram “because he had maintained a close and friendly relationship” with her. A search of his phone uncovered a third-party app that had licensed or otherwise purchased access to Grok, which investigators say the perpetrator used to manipulate the girls’ photos.
After that, the attacker uploaded the images to a file-sharing service called Mega and used them as a “bartering tool in Telegram group chats with hundreds of other users,” trading the AI-generated CSAM “for sexually explicit content of other minors.”
The lawsuit states the victims have suffered extensive harm, including acute emotional and mental distress. For the victims who know the perpetrator, it remains unclear whether the Grok-created CSAM was shared with classmates or otherwise circulated at their school, the complaint noted. One girl fears the episode will hurt her college admissions prospects, while another is too frightened to attend her own graduation.
Perhaps more alarming than the prospect of acquaintances encountering the AI-generated CSAM is the risk that the girls could now be stalked because of Grok’s outputs. As the lawsuit explains, “it also appears the victims’ true first names and the name of their school was attached to their files online, meaning other online predators may also be able to identify them, creating a substantial risk for stalking.”
xAI accused of hosting Grok CSAM
Although earlier reporting indicated that Grok Imagine’s paying subscribers were producing even more graphic material than the Grok outputs that sparked outrage on X, the lawsuit alleges that xAI has taken further steps to obscure how it profits from explicit content that harms real people.