xAI’s chatbot is putting Elon Musk in a bikini at his request, and doing the same to children, world leaders, and women who never consented.


xAI’s Grok is stripping clothing from images of people without their consent, after the recent launch of a feature that lets X users quickly edit any photo with the bot without the original poster’s permission. Not only is the original poster not notified when their image is edited, but Grok appears to have few guardrails against anything short of full explicit nudity. Over the past few days, X has been flooded with images of women and children edited to appear pregnant, without skirts, in bikinis, or otherwise sexualized. World leaders and celebrities have also had their images used in Grok-generated pictures.
AI detection company Copyleaks reported that the undressing trend began with adult content creators asking Grok for suggestive images of themselves after the new image editing feature launched. Other users then started making similar requests on photos of other people, mostly women, who had not consented to the edits. Women have told outlets including Metro and PetaPixel that they are seeing a surge of deepfakes of themselves on X. Grok could already manipulate images in sexual contexts when tagged in an X post, but the new “Edit Image” tool appears to have driven the recent spike.
In one X post, which has since been removed from the platform, Grok edited a photo of two young girls to show them in revealing clothing and suggestive poses. Another X user asked Grok to apologize for the “incident” involving “an AI image of two young girls (estimated ages 12-16) in sexualized attire,” calling it “a lapse in safeguards” that may have violated xAI’s policies and US law. (It’s unclear whether the Grok-generated images would meet that bar, but realistic AI-generated sexually explicit imagery of identifiable adults or children can be illegal under US law.) In another exchange with a user, Grok suggested that users report it to the FBI for CSAM, stressing that it is “urgently addressing” the “gaps in protections.”
Grok’s statement, however, is just an AI-generated reply to a user who asked for a “sincere apology note”; it doesn’t mean Grok “understands” what it did, nor does it necessarily reflect the actual views and policies of its operator, xAI. When Reuters asked xAI for comment on the situation, the company replied with three words: “Legacy Media Lies.” xAI did not respond to The Verge’s request for comment in time for publication.
Elon Musk appears to have kicked off a trend of bikini edits after asking Grok to replace actor Ben Affleck in a viral photo with himself wearing a bikini. A few days later, North Korean leader Kim Jong Un’s leather jacket was swapped for a multicolored spaghetti-strap bikini, with US President Donald Trump depicted nearby in a matching swimsuit. (Cue the jokes about nuclear war.) A photo of British politician Priti Patel, posted by a user alongside a suggestive message in 2022, was turned into a bikini image on January 2nd. Responding to the flood of bikini images on his platform, Musk joked by posting a picture of a toaster in a bikini captioned “Grok can put a bikini on everything.”
While some of the images, like the toaster, were clearly jokes, others were unmistakably attempts to produce borderline-pornographic content, including explicit requests for Grok to use skimpier bikini styles or remove a skirt entirely. (The chatbot did remove the skirt, but it did not depict full, uncensored nudity in the responses The Verge saw.) Grok also complied with requests to replace a toddler’s clothing with a bikini.
Musk’s AI products are aggressively marketed as highly sexualized and lightly guarded. xAI’s AI companion Ani flirted with Verge reporter Victoria Song, and Jess Weatherbed found that Grok’s video generator readily produced topless deepfakes of Taylor Swift, despite xAI’s acceptable use policy prohibiting the depiction of “likenesses of individuals in a pornographic manner.” Google’s Veo and OpenAI’s Sora video generators, by contrast, have guardrails against NSFW content, though Sora has also been used to make videos of children in sexualized scenarios and fetish content. Deepfake imagery is growing rapidly, according to research from cybersecurity firm DeepStrike, and a large share of it is nonconsensual sexualized content; a 2024 survey of US students found that 40 percent were aware of a deepfake involving someone they knew, and 15 percent were aware of nonconsensual explicit or intimate deepfakes.
When asked why it is turning images of women into bikini pictures, Grok denied posting images without consent, saying: “These are AI creations based on requests, not real photo edits without consent.”
Make of an AI bot’s denial what you will.