A flood of non-consensual deepfake bikini shots on X is putting the UK’s Online Safety Act to the test

The unleashing on X (formerly Twitter) of a torrent of AI-generated images of women and children wearing bikinis, some in sexualised poses or with injuries, has rightly prompted a strong reaction from UK politicians and regulators. Monday's announcement that X is being investigated was Ofcom's most combative move since key provisions in the Online Safety Act came into force. None of the other businesses it has challenged or fined have anything like the global reach or political clout of Elon Musk's social media giant. Whatever happens next, this is a defining moment. What is being defined is the extent to which some of the wealthiest companies on the planet are under democratic control.

But the announcement is only a first step. Ofcom has given no indication of how long its investigation will take. On Friday, Downing Street described as insulting X's decision to limit use of the image‑making Grok AI chatbot to paying subscribers. The government said that this amounted to turning the creation of abusive deepfakes into a "premium service".

Such robust language was welcome. So was the announcement by the technology secretary, Liz Kendall, that a promised ban on the creation of non‑consensual intimate images will come into force this week, and nudification apps will be outlawed quickly. At the weekend David Lammy claimed that JD Vance shares the UK government’s objection to tools that enable users to undress children in photographs. Clearly, ministers do not want a fight with Donald Trump and would prefer US politicians to get on board with a challenge to big tech over image-based abuse. But Mr Musk’s aggressive opposition to regulation may make a public battle inevitable. He wants Grok to be competitive with OpenAI’s ChatGPT. And sex sells.