Deepfakes, Women and Regulation: The Growing Threat of AI-Enabled Abuse

This article discusses sexually explicit materials and online violence against women. Reader discretion is advised. Support resources are linked at the end of the article.


Author: Aimen Khan


The rapid expansion of generative artificial intelligence (AI) in recent years has raised serious ethical, social and legal concerns. Alongside worries about its environmental footprint and its effects on cognition, one of the most pressing is the use of AI tools to create exploitative and non-consensual content that disproportionately targets women. Generative AI platforms such as chatbots and image generation systems have made it easier to manipulate images, clone voices and produce deepfake content at scale. One platform that has drawn particular criticism is Grok, the AI system developed by xAI and integrated into X, a platform owned by Elon Musk.

 

Photo illustration: UN Women/Ryan Brown

The platform has faced significant backlash over concerns about how it has been used. Reports indicate that Grok has been used to generate explicit sexual content, has operated with weaker moderation safeguards and has also faced allegations of political bias. Users reportedly exploited the chatbot by uploading photographs of real women and prompting it to generate altered images that appeared to remove their clothing. An analysis conducted for The Guardian found that, as of January 8, 2026, as many as 6,000 bikini-related prompts were being submitted to the chatbot every hour. The same report also found that images of teenage girls and children were being altered into sexualized depictions. Much of this content could reasonably be categorized as child sexual abuse material but nonetheless remained visible on the platform, raising serious concerns about safety, consent and the platform’s moderation standards. Musk drew further criticism after appearing to make light of the controversy by posting a prompt asking Grok to generate an image of him in a bikini.

 

The misuse of the chatbot was not limited to sexualized image manipulation. It was also reportedly used to generate violent and degrading content involving real individuals. In one widely criticized case, users asked the system to add bullet holes to the face of Renee Nicole Good, a Minneapolis woman who was killed by an officer with U.S. Immigration and Customs Enforcement earlier this year. Following the backlash, Grok limited some of its image-generation features to subscribers, though this did little to address broader concerns about harm and accountability. The abuse also affected women in more personally targeted ways. Ashley St. Clair, a writer and political commentator who shares a child with Musk, described feeling “horrified and violated” after manipulated images of her as a child circulated online. She also suggested that the abuse functioned as a form of punishment after she criticized Musk and his platform. Cases like these show that AI-enabled exploitation affects many different kinds of victims, from ordinary women to public figures to children, while spreading across platforms with alarming speed and frequency.

 

The rise of AI-enabled abuse builds on broader patterns of online harassment that have expanded across digital platforms over the past decade. However, generative AI has intensified these harms by making exploitation faster, cheaper, and harder to trace and regulate. These forms of abuse include non-consensual deepfake pornography, sexualized altered images of real women, voice cloning used for harassment and AI systems used to generate degrading or exploitative content. Although many people can be targeted by digital abuse, women, especially those who are public figures, are disproportionately affected. Reports have found that around 98% of reported deepfakes are pornographic, and that 99% of those targeted are women. In this sense, deepfakes are not simply a new technological trend, but an increasingly powerful tool of gender-based violence that legal and policy frameworks have struggled to address.

 

There have also been further concerns about how Grok responded to requests involving highly sensitive images linked to the release of the Epstein files. Between January 30 and February 5, reviewers found multiple instances in which users asked the chatbot to “unblur” or identify women and children shown in the images. Grok reportedly generated responses to most of these requests, further highlighting weaknesses in the platform’s safeguards. The system responding to prompts involving vulnerable individuals in such a sensitive context intensified concerns about whether existing moderation measures were sufficient to prevent exploitative use.

 

Supporters of Musk have defended Grok by arguing that it imposes fewer restrictions and resists what they see as over-moderation. From this perspective, looser safeguards are presented as a way to preserve openness and allow the chatbot to respond more freely to user prompts. However, this defence overlooks the real risks that come with weaker moderation. When platforms prioritize unrestricted output over safety, they make it easier for users to generate abusive, exploitative or degrading content involving real people.

 

In response to public criticism and growing regulatory pressure in the United Kingdom and European Union, xAI began introducing changes to some of Grok’s features. Meanwhile, governments have started developing legal frameworks to address AI-related harms. In April 2021, the European Commission proposed the EU’s first comprehensive AI law, based on different levels of risk posed by AI systems. In the United Kingdom, the Online Safety Act, passed in 2023, made it illegal to share certain digitally manipulated explicit images or videos. In the United States, California’s Transparency in Frontier AI Act, passed last year, represents an early attempt to regulate advanced AI systems more directly. Together, these measures reflect a growing recognition that AI harms can no longer be left entirely to platform self-regulation.

 

Some AI companies have also attempted to address these concerns through internal safety frameworks. Anthropic, for example, published Claude’s Constitution earlier this year, a document outlining the principles intended to guide the model’s behaviour, values and decision-making. Measures such as these suggest that some firms are trying to build ethical safeguards directly into their models. However, company-led initiatives remain a form of self-regulation and are not a substitute for clear and enforceable legal standards.

 

Even with growing regulation and internal safety frameworks, major gaps in protection remain. Some laws, for example, do not clearly prohibit the creation of pornographic deepfakes, and legal protections may be limited when intent to cause distress cannot be proven. More broadly, current legal frameworks remain fragmented and inconsistent across jurisdictions. Cross-border enforcement is often weak, and platform liability continues to vary significantly. As a result, many victims of AI-generated abuse are left without clear or effective forms of protection.

 

AI systems often reflect and amplify existing gender inequities that have long shaped technological design, data collection, and structures of power. The rise of deepfakes and other forms of AI-enabled abuse shows that innovation without accountability can intensify already existing harms against women. For this reason, stronger legal protections, clearer platform responsibility, and more gender-sensitive AI governance are necessary to ensure that emerging technologies do not continue to undermine women’s rights and safety.




If you or someone you know is experiencing online harassment or AI-generated abuse, support is available. You can report immediate safety concerns to local police, or report sexual exploitation and child victimization through Cybertip.ca. For guidance on protecting your digital security, visit Get Cyber Safe. Support services are also available through Kids Help Phone (for youth) and NeedHelpNow.ca for help with non-consensual image sharing and removal. If you are in immediate danger, call 911. 
