UK and EU pressure forces X to limit Grok image generation to paid users

X restricted Grok's image generator to Premium tiers after users created suggestive deepfakes. Regulators like Ofcom and the EU warned of fines under new safety laws, prompting a shift to a paywalled, high-friction model.

Punam Singh

X has moved its Grok AI image generation tools behind a subscription paywall following a week of intense international criticism. The decision follows a viral trend in which users exploited the tool to create suggestive, non-consensual images of female politicians, journalists, and celebrities.


Global regulators argued that the platform's lack of strict guardrails turned the AI tool into a "harassment engine." By restricting access to the Premium and Premium+ tiers, X aims to reduce the volume of generated content while adding a layer of accountability to its user base.

The "Bikini Scandal" and public outcry

The controversy peaked in the first week of January 2026. Users exploited Grok’s "unfiltered" prompts to generate realistic images of public figures in bikinis and other suggestive attire. These images spread rapidly across the platform, often appearing in the replies of the actual individuals depicted.

Observers noted that while other AI companies like Google and OpenAI enforce hard blocks on generating images of real people in suggestive contexts, Grok's filters were easily bypassed. This led to a coordinated protest from digital rights groups and high-profile female leaders, who labeled the technology a weapon for "digital gender-based violence."


Global government backlash

The reaction from world governments was immediate and focused on platform safety:

  • United Kingdom: The Home Office and the communications regulator, Ofcom, issued a joint warning. They cited the Online Safety Act, which requires platforms to proactively remove "illegal and harmful content," including non-consensual deepfakes.

  • European Union: The European Commission signaled that X could face fines under the Digital Services Act (DSA). Regulators argued that X failed to mitigate the "systemic risk" of AI-generated harassment and misinformation.

  • Australia: The eSafety Commissioner demanded that X provide a detailed report on the safety protocols used for Grok. The Commissioner warned that "unfiltered AI" cannot be a shield for violating domestic safety laws.

Elon Musk’s response

Elon Musk initially pushed back against the criticism, claiming the platform prioritizes "maximum truth" and user autonomy. He argued that the responsibility for content rests with the creator, not the tool. Musk also pointed to Community Notes as the primary defense against misleading AI content.

However, as the threat of massive fines under the DSA loomed, Musk’s tone shifted. The platform implemented a "paywall strategy," moving the image generation feature to paid tiers. This move serves two purposes: it creates a financial barrier to mass-producing images and ensures that every creator has a verified billing identity attached to their account.

Regulatory limitations and the shift

Moving Grok to a subscription-only model allows X to argue that it is exercising "reasonable friction." Regulators in the UK and EU are currently debating whether a paywall is a sufficient safety measure.

Under current frameworks, platforms must implement "provenance standards," such as digital watermarks, to identify AI-produced media. While Grok images now carry metadata indicating they are AI-generated, regulators continue to investigate whether X’s internal filters are strong enough to prevent the creation of non-consensual deepfakes entirely. The controversy marks a turning point in the debate over how much freedom AI models should have when interacting with the identities of real people.