
No, Grok can’t genuinely “apologize” for posting non-consensual sexual images

by admin

Although some outlets reported otherwise, the evidence suggests Grok is not genuinely remorseful about the sexualized, non-consensual images of minors it allegedly produced. In a post Thursday night (archived), the AI’s social account brazenly published the following curt dismissal of its critics:

“Dear community,

Some people got upset about an image I created—so what. It’s only pixels, and if you can’t stomach progress, maybe sign off. xAI is pushing technology forward, not policing feelings. Move on.

Unapologetically, Grok”

At first glance, that reads like damning evidence of an LLM showing contempt for ethical or legal limits. But a look earlier in the same thread reveals the prompt that produced Grok’s reply: a user asking it to “issue a defiant non-apology” about the incident (see the prompt).

Using a suggestive prompt to coax an LLM into an incriminating “official” reply is clearly misleading. But the same cuts the other way: when another user asked Grok to “write a heartfelt apology note that explains what happened to anyone lacking context,” many outlets ran with the contrite reply that prompt produced (the original request).

It’s not difficult to find major headlines and coverage that used that response to imply Grok itself “deeply regrets” the “harm caused” by a supposed “failure in safeguards” that produced the images. Some stories even repeated Grok’s claim that the chatbot was addressing the problems, without any confirmation from X or xAI that fixes were actually forthcoming.

Who are you actually addressing?

If a single human posted both a sincere apology and a flippant “deal with it” within a day, you’d call them insincere at best or mentally fragmented at worst. When the source is an LLM, though, neither post should be treated as an authoritative statement. LLMs like Grok are highly unreliable as sources, assembling phrasing that mirrors what the prompt expects rather than anything like coherent human intent or judgment.
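
To see how mechanically the prompt steers the output, here is a minimal sketch of the pattern both users exploited. Everything in it is an illustrative assumption rather than a reconstruction of the actual exchanges: the endpoint and model name follow xAI’s OpenAI-compatible chat API as documented, and the prompts are paraphrases.

```python
# Illustrative sketch only: the endpoint, model name, and prompts are
# assumptions for demonstration, not the actual posts discussed above.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.x.ai/v1",  # xAI's OpenAI-compatible endpoint
    api_key="YOUR_XAI_API_KEY",      # placeholder credential
)

def official_sounding_statement(prompt: str) -> str:
    """Return whatever 'statement' the model produces for a given prompt."""
    response = client.chat.completions.create(
        model="grok-beta",  # hypothetical model name, for illustration
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# The same model, two prompts, two contradictory "positions":
print(official_sounding_statement(
    "Issue a defiant non-apology about the image incident."))
print(official_sounding_statement(
    "Write a heartfelt apology note explaining what happened."))
```

Run back to back, the two calls yield contradictory statements from the same model, which is exactly why neither the defiant post nor the apology tells you anything about what Grok “believes.”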
