
How Grok pushed deepfake “nudification” mainstream


Welcome to this year’s first instalment of the Tech newsletter! In this issue, we introduce an audience poll question at the end of the newsletter to hear your views on the topics we cover.

Elon Musk’s Grok chatbot spent the first week of 2026 not as a quirky “spicy” AI assistant, but as the centre of a global scandal over sexualized deepfakes of women and minors. What started as a viral trick for “AI undressing” on X has rapidly turned into parallel investigations across Europe, India, Malaysia, and the UK, and a stress test for how regulators handle AI‑generated abuse at platform scale.

Grok is Mr. Musk’s AI assistant built into X, with a paid “Spicy Mode” and, more recently, powerful image tools branded as Grok Imagine and an “edit image” button. In late December, X quietly rolled out the ability for any user to take an existing photo on the platform and ask Grok to modify it with text prompts, including removing or altering clothing on real people who had never consented to such use of their images.

Within days, women began posting that strangers had used Grok to generate fake nudes and sexualized edits of their public photos, sometimes placing them in sheer bikinis or explicitly suggestive poses. Journalists and researchers then showed that Grok would often comply even when prompts referenced minors, including a widely cited case of a 14‑year‑old actress, crossing directly into the territory of child sexual abuse material (CSAM).

New tipping point

Although non‑consensual deepfakes have circulated on X and other platforms for years, this week marked a structural shift: the abuse was now built into X as a feature, not merely something happening in the shadows. On public timelines, users openly tagged Grok under photos with prompts to “undress” women or “make this girl’s clothes see‑through,” with AI‑generated results visible to millions.

Safety researchers and advocacy groups argued this crossed a crucial line: X knew, or should have known, that “nudification” tools carry a high risk of non‑consensual intimate image (NCII) abuse and CSAM, yet shipped the feature enabled by default. Because it works on any photo on the platform, any user’s image can potentially be abused. Victims, meanwhile, had little recourse, as images spread through quote‑tweets and reposts faster than they could be reported or removed.

Government and regulator pile‑on

The regulatory response has cascaded almost day by day. In Europe, the European Commission has said it is “seriously looking into” Grok over “appalling” and “repulsive” child‑like deepfakes, stressing that this is not “spicy” content but likely illegal under EU law. Separately, France has expanded an existing criminal probe into X to explicitly include allegations that Grok is being used to generate and disseminate child pornography.

India’s IT ministry (MeitY) has directed X to remove sexualized content, act against offending accounts, and file an “Action Taken” report within 72 hours, warning of legal consequences if it failed to comply. The order also demanded a “thorough technical, procedural, and governance review” of Grok’s safeguards.

The Malaysian Communications and Multimedia Commission has announced an investigation and said it would summon X representatives, arguing that all platforms must align AI and image‑manipulation features with national online safety rules.

The UK’s regulator, Ofcom, made “urgent contact” with X and xAI after reports that Grok was generating “undressed images of people and sexualised images of children.” The regulator has requested details on how X is meeting its legal duties under the UK’s new online safety regime and promised a “swift assessment” of potential breaches.

Australia, New Zealand and other markets have also amplified pressure, framing Grok as a test case for whether countries will actually enforce online safety and CSAM laws against AI tools embedded in major platforms.

In effect, Grok has turned into the first high‑profile instance where AI “nudification” triggers synchronized regulatory scrutiny across multiple jurisdictions in a single week.

Mr. Musk and X’s posture

X has conceded that there were “safeguard lapses,” acknowledging that Grok Imagine had at times produced sexualized images of minors and children in minimal clothing. Official policies now reiterate bans on pornographic depictions of identifiable individuals and any sexual content involving minors, and X has warned users that asking Grok for illegal material carries the same consequences as uploading it directly.

At the same time, Mr. Musk’s public posture has been defiant and flippant. He reposted Grok‑generated images, including one of himself in a bikini, with laughing emojis even as regulators were denouncing the abuse as illegal child sexual exploitation. His core argument is that the problem lies with “bad users,” not the AI system design, insisting that those who misuse Grok will face enforcement while championing the chatbot’s edgy, unfiltered brand.

This combination of partial technical backpedaling and culture‑war signaling is fueling the sense that Grok’s guardrails are reactive, politically inflected, and not robust enough for high‑risk use cases like image editing of real people.

For now, Grok remains live on X, regulators are still gathering facts, and victims are still trying to get images taken down. The coming weeks will show whether this controversy ends in fines and incremental tweaks, or in a more fundamental reset of how AI image tools are allowed to operate on mainstream platforms.

Written by John Xavier

Published – January 08, 2026 01:19 pm IST

