
AI-Generated Child Sexual Abuse Material Surges, Internet Watch Foundation Warns

Artificial intelligence-generated imagery depicting the sexual abuse of children increased by 14 percent in 2025, raising alarms over a new and growing threat, according to a report from the Internet Watch Foundation (IWF). The British non-profit, which works to remove child sexual abuse material (CSAM) from the internet, identified more than 8,000 AI-generated images and videos in the past year through user reports.

The IWF classifies content as AI-generated if there are obvious errors in the images, if the victim claims it is AI-based, or if the original creator confirms AI use. While still a small share of overall child sexual abuse material online, AI-generated content is rapidly expanding.

Of the AI-generated material, over 3,400 pieces were “full-motion” videos: hyper-realistic footage that can depict multiple people interacting. More than 65 percent of these videos depicted the most severe forms of abuse, including rape, sexual torture, and bestiality, classified under British law as the highest category of child sexual content. By comparison, only 43 percent of non-AI-generated material fell into these extreme categories, suggesting that perpetrators are using AI to produce more explicit, complex, and severe content than was previously possible.

“We now face a technological landscape that can generate infinite violations with unprecedented ease,” Kerry Smith, IWF CEO, said.

The report also highlighted how offenders are actively developing and sharing AI tools. Discussions on the dark web reveal collaboration on custom AI models and databases for creating abusive material. In one case, an advertisement offered “custom courses” teaching users to generate images of teenagers. Many AI models now require only a single reference image to produce abusive content, lowering barriers for perpetrators who lack technical skills.

While most AI-generated material is relatively simple, a small number of skilled creators are producing more sophisticated, longer content. One individual, for example, was thanked over 3,000 times for producing a 30-minute AI-generated video of sexual abuse. The IWF noted that the material it has collected represents only a partial view of the problem, as analysts are limited by encrypted platforms and paywalls.

Smith urged the European Union to consider a bloc-wide ban on AI-generated child sexual abuse content and the tools used to create it. Such a ban would criminalize even the private generation of personalized content. The IWF also recommends amending the EU AI Act to classify AI systems capable of generating such content as “high risk,” requiring rigorous testing before release.

The report follows European legislators’ temporary extension of the ePrivacy Directive, which allows platforms to detect CSAM, giving lawmakers time to develop a permanent framework. Legislators emphasized that future measures should remain proportionate, focusing only on content already flagged as potential child sexual abuse material.

Smith stressed, “Advances in technology should never come at the expense of a child’s safety and well-being,” highlighting the urgent need for stronger safeguards against AI-enabled abuse.
