Big tech companies cannot be trusted. It is not enough that they remove harm when they find it – the law must make their systems prevent harm

On X, a woman posts a photo in a sari, and within minutes users are tagging Grok beneath the post, asking it to strip her down to a bikini. It is a shocking violation of privacy, but now a familiar one. Between June 2025 and January 2026, I documented 565 instances of users asking Grok to create nonconsensual intimate imagery. Of these, 389 were requested in a single day.

Last Friday, after a backlash against the platform’s ability to create such nonconsensual sexual images, X announced that Grok’s AI image-generation feature would be available only to subscribers. Reports suggest that the bot no longer responds to prompts to generate images of women in bikinis (although it will apparently still do so for requests about men).

But as the technology secretary, Liz Kendall, rightly states, this action “does not go anywhere near far enough”. Kendall has announced that creating nonconsensual intimate images will become a criminal offence this week, and that she will criminalise the supply of nudification apps. This is appropriate, given the weakness of X’s response. Placing the feature behind a paywall means the platform can profit more directly from the online dehumanisation and sexual harassment of women and minors. And stopping the “bikini” responses only after public censure and the threat of legislation is the least X could do – the bigger question is why it was possible in the first place.