Even Cory Doctorow — the man who coined the term “Enshittification” — probably wouldn’t have imagined a future where social media platforms would casually normalise AI-generated porn and deepfakes. Yet here we are.
A pattern has started to emerge across major platforms like Meta and X, as both aggressively roll out generative AI features designed to let users create AI-generated videos frictionlessly. They tout these features as promoting creativity and accessibility, but they risk accelerating the internet’s slide into something far darker — including deepfake porn and an overwhelming flood of synthetic content.
In “Enshittification 2.0,” social media has sunk to a new low. After degrading their services to exploit users for profit, these platforms are becoming a cesspool of AI-generated slop and pornographic content.
Meta’s video-generation feature, Vibes, lets users quickly remix AI-generated clips and push them across Instagram and Facebook. It doesn’t require much creativity — just a few prompts to generate a short video that can be shared instantly.
Recently, this AI feature has reportedly been used to create sexually explicit content involving AI-generated depictions of children and Bollywood actors. An AI-generated video allegedly showing a prominent (now deceased) political personality groping a woman’s breast has also circulated on the Meta AI app.
Generative AI video tools allow a single user to produce hundreds of videos per day. And Meta’s algorithms already privilege high engagement and high posting frequency.
Combine the two, and you have a feed that can quickly become dominated by synthetic media rather than human-created content.
Elon Musk–owned X, on the other hand, presents an even more troubling case. The platform’s AI assistant, Grok, recently added features allowing users to animate images or convert them into short videos.
Users quickly discovered that the tool could be used to create non-consensual sexualised deepfakes, including videos depicting women — and sometimes minors — in explicit or suggestive scenarios.
Critics on X have pointed out that many of these videos are generated instantly and publicly, meaning they can spread before moderation even notices them.
And if this weren’t enough, Mr. Musk recently posted on X: “If it’s allowed in an R-rated movie, it’s allowed in @Grok Imagine.”
This suggests that the platform’s AI chatbot is capable of generating adult material, including intense violence, strong language, sexual content and drug abuse — content that in films typically requires viewers under 17 to be accompanied by a parent or guardian.
Deepfake porn has existed for years. But generative AI tools integrated directly into massive social platforms dramatically lower the barrier to entry. What once required specialised software now requires little more than a prompt and a few seconds.
And moderation on these platforms has increasingly appeared ineffective. It often reacts only after harmful content spreads, and rarely prevents such material from going viral in the first place.
For example, AI videos are often labelled only after detection, not at the moment of creation. Deepfake safeguards depend heavily on reporting rather than automatic blocking. Policies struggle to keep up with how quickly generative tools evolve.
In other words, the moderation model still assumes human content creation at human speed. AI-generated content operates far faster.
Beyond the immediate harms — harassment, misinformation, and deepfake porn — there’s a more structural concern: trust.
Social media feeds were already struggling with trust. Now any video could be synthetic, any person could be animated into a fake scenario, and the platform itself may actively encourage AI content production.
This risks eroding the authenticity of visual evidence itself. When everything might be fake, verification becomes difficult and trust begins to disappear.
Doctorow’s original theory of Enshittification described how platforms decay over time. First they serve users. Then they serve business customers. Finally they extract value primarily for themselves.
Now, with generative AI, we may be descending into another phase — one where platforms mass-produce synthetic content and push users toward increasingly low-quality engagement.
This could lead to a broader cultural problem that must be addressed early. Otherwise, the social layer of the internet could begin to rot.
Before it gets to that stage, these platforms should be required to watermark AI-generated videos, visibly and permanently. They should also block the creation of AI-generated content depicting identifiable real people without consent. Finally, they should build systems that run safety and moderation checks at the moment content is created — not after it spreads.
Published – March 12, 2026 03:00 pm IST