The Internet Watch Foundation has sounded the alarm over a dramatic increase in AI-generated videos depicting child sexual abuse, with the majority falling into the worst category of abuse.
The Guardian reports that the proliferation of AI-generated child sexual abuse material (CSAM), commonly known as child pornography, has reached unprecedented levels, according to the latest findings from the Internet Watch Foundation (IWF). The UK-based internet safety watchdog reported a staggering surge in AI-made videos featuring child sexual abuse, with 1,286 such videos verified in the first half of 2025 alone, compared to just two in the same period last year.
The IWF expressed grave concern over the increasing sophistication of these AI-generated videos, noting that they have “crossed the threshold” of being nearly indistinguishable from real abuse imagery. Shockingly, just over 1,000 of the verified videos depicted category A abuse, the most severe classification of such material.
Analysts at the IWF attribute this alarming trend to the multibillion-dollar investments pouring into the AI industry, resulting in widely available video-generation models that are being exploited by pedophiles. The ease of access to these tools and the rapid advancements in AI technology have created a fertile ground for perpetrators to create and disseminate CSAM.
The IWF’s findings reveal that pedophiles are actively discussing and sharing techniques to manipulate AI models for their nefarious purposes on dark web forums. By fine-tuning freely available AI models with a small number of real CSAM videos, these individuals are able to produce disturbingly realistic abuse videos.
Derek Ray-Hill, the interim chief executive of the IWF, warned of the “incredible risk” posed by AI-generated CSAM, expressing fears that it could lead to an “absolute explosion” of such content on the clear web. He emphasized that the growth in AI-made child porn could fuel criminal activities related to child trafficking, sexual abuse, and modern slavery.
The UK government has responded to this growing threat by introducing new legislation to combat AI-generated CSAM. Under the proposed laws, individuals found to possess, create, or distribute AI tools designed to create abuse content could face up to five years in prison. Additionally, possessing manuals that teach how to use AI tools to make abusive imagery or abuse children will carry a potential sentence of up to three years.
In the United States, 38 states have enacted laws against AI-generated child porn, while 12 states and Washington, DC, have not yet taken action.
Read more at the Guardian here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.