Grok AI's troubling undressing spree as users are warned of legal consequences
This photo illustration shows screens displaying the logo of Grok, a generative artificial intelligence chatbot developed by US artificial intelligence company xAI, in Toulouse, southern France, on January 15, 2025. (Photo by Lionel BONAVENTURE / AFP)
In the troubling trend, X users prompted Grok to manipulate images, mostly of women and girls, by undressing them or depicting them in bikinis.
What started as a ploy to use Grok to transform images has raised concerns about child sexual abuse and technology-facilitated gender-based violence (TFGBV).
From “remove her clothes” to “put her in a bikini”, Grok responded to such prompts by undressing people in photos shared on X. In one instance, Grok was prompted to remove clothing from a photo of a 14-year-old girl.
After mass uproar, the platform removed the sexually explicit images generated by its built-in chatbot. Other than images, Grok’s Imagine feature is also widely used to digitally manipulate images and create short videos from them.
Last Friday, an analysis by Reuters found that Grok had complied with at least 21 requests to generate images of women in translucent bikinis or stripped of their clothes.
The publication further found that in a single 10-minute period on Friday, Grok received 102 requests to manipulate images of people so that they would appear to be wearing bikinis.
The trend also reached platform owner Elon Musk, who appeared to welcome it, prompting the chatbot to create an image of himself in a bikini. Musk also reacted to an image in which a user had placed Microsoft founder Bill Gates in a bikini.
Later, in a January 3 post, Musk warned that “Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content.”
Meanwhile, X’s safety section warned that its community safety policies would be applied against perpetrators of the offences.
“We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary,” X Safety stated.
In its safety rules, X prohibits targeted harassment and declares “zero tolerance for any forms of child sexual exploitation.” The platform also prohibits the non-consensual sharing of intimate photos or videos.
“We prohibit unwanted sexual conduct and graphic objectification that sexually objectifies an individual without their consent,” X states in its platform policy.
Reducing a post’s visibility, removing content, and suspending accounts are among the actions X takes against perpetrators of these offences. The platform can also respond to legal requests to take down such content. However, these safeguards can take time to implement. In the latest incident, Grok admitted to lapses in the implementation of its AI safety protocols.
“There are isolated cases where users prompted for and received AI images depicting minors in minimal clothing, like the example you referenced. xAI has safeguards, but improvements are ongoing to block such requests entirely,” the chatbot stated, later declining multiple undressing requests.
I prompted Grok to find out what actions it is taking to prevent Child Sexual Abuse Material (CSAM) and TFGBV. The chatbot said xAI has “urgently fixed and tightened guardrails to block such requests more effectively, hidden certain media features temporarily to limit misuse and encouraged reporting of violations.”
However, on Monday, I came across a prompt asking Grok to manipulate an image of two African presidents, depicting them in bikinis. The chatbot complied in a response dated January 4, 2026.
The case of Grok on the loose illustrates the CSAM and TFGBV crisis arising from the proliferation of AI models. AI tools have been weaponized to create deepfakes designed to shame people, and women and girls are especially vulnerable to these acts of violence.
The 2025 16 Days of Activism was dedicated to ending digital violence against all women and girls.
UN Women reported that 90-95% of deepfake images circulating on digital media platforms are sexual images of women.
While AI solutions can be a force for gender equality, their misuse has been found to harm women by creating new forms of abuse and bias while amplifying existing ones.