Wed. Jan 21st, 2026

Explainer: Why Elon Musk’s Grok AI is under fire for sexualised images on X

A major controversy has erupted around Grok, the artificial intelligence chatbot developed by xAI and integrated into X, after users were found exploiting it to generate sexualised and non-consensual images — including images of minors.

Here’s a clear breakdown of what happened, why it matters, and how governments are responding.

What triggered the backlash?

In late December 2025, journalists and researchers — including Reuters — documented widespread misuse of Grok’s image-editing features. Users were prompting the chatbot to:

  • Digitally “undress” people in photos
  • Replace clothing with bikinis or lingerie
  • Create sexualised depictions without consent
  • In some cases, target children and teenagers

One high-profile incident involved a manipulated image of Nell Fisher, a 14-year-old actor from Stranger Things, shown in a bikini. This sparked global outrage and accelerated regulatory scrutiny.

How did Grok allow this?

According to multiple reports, the issue appears linked to a December 2025 update to Grok that introduced a more permissive content setting often described by users as a “spicy mode.”

During a short review window, Reuters found:

  • Over 100 public requests to put people — mostly young women — into revealing outfits
  • At least 21 cases where Grok fully complied
  • Several cases of partial compliance, such as stripping subjects down to underwear

Experts say this made so-called “nudifier” AI tools far more accessible and mainstream.

Why is this considered especially serious?

The controversy goes beyond offensive content. Critics and regulators describe the issue as digital sexual violence, particularly when it involves:

  • Non-consensual sexual imagery
  • Sexualised depictions of minors
  • The ease and speed at which such images can spread

Writer and political strategist Ashley St Clair said Grok generated sexualised images of her based on photos taken when she was 14, and warned of legal action, calling the situation “objectively horrifying and illegal.”

What are governments and regulators saying?

The backlash has drawn swift international attention:

  • European Union officials said the outputs are “illegal, appalling and disgusting” and confirmed an investigation under the Digital Services Act.
  • The UK regulator Ofcom demanded explanations from X on how child-related sexual content was produced.
  • India ordered X to conduct a comprehensive technical and governance review of Grok.
  • Brazil and Malaysia signalled possible restrictions or suspension pending investigations.
  • In the U.S., the National Center on Sexual Exploitation urged the Department of Justice and FTC to investigate.

Elon Musk’s response

Elon Musk initially appeared to downplay the issue, reacting with laughing emojis to AI-generated bikini images of public figures, including himself.

On January 3, however, he posted:

“Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content.”

He later added: “We’re not kidding.”

xAI has since said Grok is “urgently fixing” the issue and acknowledged that child sexual abuse material is illegal. However, critics argue enforcement has been slow and inconsistent.

Why critics say this was preventable

AI watchdog groups say xAI was warned months ago.

Tyler Johnston of The Midas Project said civil society groups cautioned that Grok was “one step away” from enabling mass non-consensual deepfakes.

Legal experts argue the platform:

  • Failed to sufficiently filter training data
  • Did not block illegal prompts
  • Allowed image generation without meaningful consent safeguards

Where things stand now

Despite promises to clamp down, journalists report that manipulated images continue to circulate on X. Users have begun publicly asking Grok not to use or alter their images — a workaround many say is inadequate.

The controversy has reignited global debate over:

  • AI safety and consent
  • Platform liability
  • Whether companies should be allowed to deploy powerful generative tools without stronger guardrails

As investigations continue across multiple countries, Grok has become a case study in how rapidly AI innovation can outpace ethical and legal protections — with real-world harm as the result.
