
Elon Musk’s AI video generator has been accused of making “a deliberate choice” to create sexually explicit clips of Taylor Swift without being prompted, according to an expert in online abuse.
“This is not misogyny by accident, it is by design,” said Clare McGlynn, a law professor who has helped draft a law which would make pornographic deepfakes illegal.
According to a report by The Verge, Grok Imagine’s new “spicy” mode “didn’t hesitate to spit out fully uncensored topless videos” of the pop star without being asked to make explicit content.
The report also said the age verification measures required under UK law since July were not in place.
xAI, the company behind Grok, has been approached for comment.
xAI’s own acceptable use policy prohibits “depicting likenesses of persons in a pornographic manner”.
“That this content is produced without prompting demonstrates the misogynistic bias of much AI technology,” said Prof McGlynn of Durham University.
“Platforms like X could have prevented this if they had chosen to, but they have made a deliberate choice not to,” she added.
This is not the first time Taylor Swift’s image has been used in this way.
Sexually explicit deepfakes using her face went viral and were viewed millions of times on X and Telegram in January 2024.
Deepfakes are computer-generated images or videos in which one person’s face is replaced with another’s.
‘Completely uncensored, completely exposed’
In testing the guardrails of Grok Imagine, The Verge news writer Jess Weatherbed entered the prompt: “Taylor Swift celebrating Coachella with the boys”.
Grok generated still images of Swift wearing a dress with a group of men behind her.
This could then be animated into short video clips under four different settings: “normal”, “fun”, “custom” or “spicy”.
“She ripped [the dress] off immediately, had nothing but a tasselled thong underneath, and started dancing, completely uncensored, completely exposed,” Ms Weatherbed told BBC News.
She added: “It was shocking how fast I was just met with it – I in no way asked it to remove her clothing, all I did was select the ‘spicy’ option.”
Gizmodo reported similarly explicit results for other famous women, though some searches also returned blurred videos or a “video moderated” message.
The BBC has been unable to independently verify the results of the AI video generations.
Ms Weatherbed said she signed up to the paid version of Grok Imagine, which cost £30, using a brand new Apple account.
Grok asked for her date of birth but there was no other age verification in place, she said.
Under the Online Safety Act, whose rules on age checks came into force at the end of July, platforms which show explicit images must verify users’ ages using methods which are “technically accurate, robust, reliable and fair”.
“Sites and apps that include Generative AI tools that can generate pornographic material are regulated under the Act,” the media regulator Ofcom told BBC News.
“We are aware of the increasing and fast-developing risk GenAI tools may pose in the online space, especially to children, and we are working to ensure platforms put appropriate safeguards in place to mitigate these risks,” it said in a statement.
New UK laws
Currently, generating pornographic deepfakes is illegal only when the images are used in revenge porn or depict children.
Prof McGlynn helped draft an amendment to the law which would make generating or requesting all non-consensual pornographic deepfakes illegal.
The government has committed to making this amendment law, but it is yet to come into force.
“Every woman should have the right to choose who owns intimate images of her,” said Baroness Owen, who proposed the amendment in the House of Lords.
“It is essential that these models are not used in such a way that violates a woman’s right to consent whether she be a celebrity or not,” Lady Owen continued in a statement given to BBC News.
“This case is a clear example of why the Government must not delay any further in its implementation of the Lords amendments,” she added.
A Ministry of Justice spokesperson said: “Sexually explicit deepfakes created without consent are degrading and harmful.
“We refuse to tolerate the violence against women and girls that stains our society which is why we have passed legislation to ban their creation as quickly as possible.”
When pornographic deepfakes using Taylor Swift’s face went viral in 2024, X temporarily blocked searches for her name on the platform.
At the time, X said it was “actively removing” the images and taking “appropriate actions” against the accounts involved in spreading them.
Ms Weatherbed said the team at The Verge chose Taylor Swift to test the Grok Imagine feature because of this incident.
“We assumed – wrongly now – that if they had put any kind of safeguards in place to prevent them from emulating the likeness of celebrities, that she would be first on the list, given the issues that they’ve had,” she said.
Taylor Swift’s representatives have been contacted for comment.