Sample of roughly 500 posts shows how frequently people are using Elon Musk’s AI chatbot to create sexualized images
New research sampling X users’ prompts to Elon Musk’s AI chatbot Grok shows how frequently people are using it to create sexualized images. Nearly three-quarters of the posts collected and analyzed by a PhD researcher at Dublin’s Trinity College were requests for nonconsensual images of real women or minors with items of clothing removed or added.
The posts offer a new level of detail on how the images are generated and shared on X, with users coaching one another on prompts; suggesting iterations on Grok’s depictions of women in lingerie or swimsuits, or with areas of their bodies covered in semen; and asking Grok to remove outer clothing in replies to posts containing self-portraits posted by female users.
Among hundreds of posts identified by Nana Nwachukwu as direct, nonconsensual requests for Grok to remove or replace clothing, dozens reviewed by the Guardian show users posting pictures of women including celebrities, models, women in stock photos, and women who are not public figures posing in casual snapshots.
Several posts in the trove reviewed by the Guardian received tens of thousands of impressions and came from premium, “blue check” accounts, including some with tens of thousands of followers. Under X’s eligibility rules, premium accounts with more than 500 followers and 5m impressions over three months qualify for revenue-sharing.