How artificial intelligence is fuelling a disturbing new surge in violence against women and girls

Deepfake abuse – the creation and dissemination of non-consensual sexually explicit media using artificial intelligence (AI) technologies – is on the rise. These advanced AI tools enable perpetrators to produce hyper-realistic images of a sexual or intimate nature by superimposing one person’s face onto another’s body, making victims appear to say and do things without their knowledge or consent.

The widespread availability and accessibility of AI has fuelled an unprecedented surge in sexually explicit synthetic media. Between 2019 and 2023, the volume of pornographic deepfakes online grew by 1,780%, with views increasing by 3,042% in that time. This phenomenon is overwhelmingly gendered, with 99% of pornographic deepfakes targeting girls and women.

The rise of abusive synthetic media reflects a broader epidemic of technology-facilitated gender-based violence. As Rebecca Hitchen, head of policy & campaigns at the End Violence Against Women Coalition (EVAW), tells me, these technologies foster environments that are conducive to harm, reinforcing and exacerbating patterns of abuse that exist in ‘offline’ contexts.

“Technology and the internet are not ‘neutral’,” she says. “In an attention economy, tech companies know that more polarising content can generate more views, and therefore more data and, ultimately, greater profits. This means that algorithms drive particular types of content to young men and boys, influencing their views and actions.”

Algorithmic ‘radicalisation’ – reflecting how platform architectures fuel and amplify extremist ideologies – presents a growing and urgent threat. A 2024 study of algorithmic recommendations on TikTok and YouTube Shorts, from Dublin City University, found that, on average, the platforms begin promoting toxic content featuring male supremacist and anti-feminist beliefs within the first 23–26 minutes of use. After just two to three hours of viewing, this content makes up the vast majority (76–78%) of recommended material. This content is actively shaping the attitudes of young people, with a 2025 King’s College London survey revealing that 57% of Gen Z men think that women’s equality has gone “too far”.

AI-generated sexually explicit images are intended to humiliate and demean women and girls. As Rebecca explains, they follow a pattern of coercive control underpinned by male power and entitlement, serving to keep women ‘in their place’ and restrict their freedom of expression.

Deepfake abuse is a profound violation of survivors’ privacy and autonomy, with devastating consequences for those targeted. According to a 2024 analysis by campaign group #MyImageMyChoice, survivors – who often do not know who the perpetrator is – experience a severe erosion of trust in those around them. This distrust can be hugely isolating, causing them to withdraw from social life and to leave school or their careers due to fears of further abuse and victimisation.

Despite the known prevalence and impact of image-based abuse, incidents have received little media attention. This reflects how girls’ and women’s safety continues to be deprioritised – a systemic oversight that abusers rely upon and exploit.

Deepfakes in a landscape of internet misogyny

The upsurge in abusive deepfakes goes hand in hand with an online climate where misogyny is increasingly normalised, reflected in the expansion and mainstreaming of the ‘manosphere’ – an ecosystem of internet communities united by expressions of hostile sexism and toxic masculinity.

In this climate, deepfake abuse has become a lucrative business, reflected in the recent explosion of easy-to-use ‘nudify’ apps and websites that use AI to ‘undress’ photos of women. An analysis by WIRED magazine showed that many of these apps rely on sign-in systems from tech companies such as Google, Apple, and Discord, providing more convenient routes to the creation and distribution of non-consensual intimate images.

This is just one of many examples of how tech giants are propping up abusive activities. According to #MyImageMyChoice, Google drives 68% of traffic to the top 40 websites dedicated to deepfake abuse. Research by Graphika also found a 2,000% increase in links to deepfake websites on platforms such as Reddit and X in 2023. Now, with X owner Elon Musk recently sharing a sexist theory about ‘high-status males’, and Meta rolling back fact-checking and content moderation, platforms are sending a clear message that girls’ and women’s safety does not matter.

A YouGov poll carried out on behalf of EVAW in February found that 52% of people believe the internet has become more dangerous for women and girls in the past 12 months, a figure that rises to 58% when polling women. This reality is reflected in skyrocketing cases of technology-facilitated abuse and harassment, with 77% of girls and young women aged seven to 21 having experienced online harm in the last year – a 100% increase since 2021.

While women in the public eye are targeted on a massive scale, ordinary women are also vulnerable, and often have fewer resources and support available when seeking justice.

What protections exist for survivors of deepfake abuse?

Alongside survivor-led campaign group #NotYourPorn, Professor Clare McGlynn, a legal expert at Durham University, survivor and campaigner Jodie*, and Glamour UK, EVAW are campaigning for an image-based abuse law to address non-consensual sexually explicit deepfakes.

The campaign received a significant boost in January this year, when the UK government announced plans to criminalise the creation of non-consensual sexually explicit deepfakes. The coalition also welcomed a government U-turn to make the offence consent-based, rather than requiring proof of malicious intent.

“Consent is the only relevant factor in sexual offending,” Rebecca Hitchen tells me. “Requiring evidence of intent puts an unreasonable burden on survivors to prove motive, and provides a loophole for those who claim they did not mean to cause harm with their actions.”

The coalition is now calling on the government to create civil routes to justice, allowing survivors to obtain a court order requiring the removal of images from a platform or a perpetrator’s device.

In addition, the group wants to see platforms held accountable for hosting and profiting from image-based abuse through stronger regulation of tech companies, and the appointment of an online commissioner to advocate for survivors’ interests.


💭
Where can I find help?

If you, or someone you know, have been a target of deepfake abuse, help is available.

Revenge Porn Helpline (revengepornhelpline.org.uk) is a free service that helps survivors get non-consensual images removed from platforms. It also offers guidance on reporting crimes and accessing legal support.

If the images shared include someone under the age of 18, Take It Down (takeitdown.ncmec.org) offers support and resources to help protect young people from further harm.

For broader support, Refuge Tech Safety (refugetechsafety.org) provides guidance and resources on digital security and protecting yourself online.

*Name changed to protect survivor’s identity