The proliferation of child sexual abuse images on the internet could grow far worse unless action is taken to regulate the artificial intelligence tools that generate deepfake photos, a watchdog agency warned on Tuesday.
In a written report, the UK-based Internet Watch Foundation urges governments and technology providers to act quickly, before a flood of AI-generated child sexual abuse images overwhelms law enforcement investigators and vastly expands the pool of potential victims.
“This is not a hypothetical harm; this is a current crisis demanding immediate attention,” says Dan Sexton, the watchdog group’s chief technology officer.
In a groundbreaking case in South Korea, a man was sentenced in September to 2½ years in prison for using artificial intelligence to create 360 virtual child abuse images, according to the Busan District Court in the country’s southeast. Troublingly, minors have also used these tools on one another: at a school in southwestern Spain, authorities are investigating allegations that teenagers used a mobile app to make their fully clothed classmates appear nude in photos.
The report exposes a dark side of the race to build generative AI systems that let users describe, in words, what they want to produce, from emails to original artwork or videos, and have the system generate it. If left unchecked, the flood of deepfake child sexual abuse images could bog down investigators trying to rescue children who turn out to be virtual characters rather than real ones. Perpetrators could also use the images to groom and coerce new victims.
Sexton says IWF analysts discovered the faces of famous children online, along with a significant demand for the creation of more images of children who were abused in the past, possibly years ago. “They are taking existing authentic content as a basis to create new imagery of these victims. The extent of this is profoundly distressing,” he says.
Sexton’s charitable organization, which is focused on combating online child sexual abuse, began receiving reports about abusive AI-generated imagery earlier this year. That prompted it to investigate forums on the so-called dark web, a hidden part of the internet accessible only through tools that provide anonymity.
There, IWF analysts found abusers sharing tips and marveling at how easy it is to turn their home computers into hubs for producing sexually explicit images of children of all ages. Some are also trading and attempting to profit from such images, which are becoming increasingly lifelike. “We are witnessing an explosion of this content,” Sexton says.
The IWF report is meant to flag a growing problem rather than offer a comprehensive solution, but it urges governments to strengthen laws to make it easier to combat AI-generated abuse. It particularly targets the European Union, where there is debate over surveillance measures that could automatically scan messaging apps for suspected images of child sexual abuse, even if those images are not previously known to law enforcement.
A major focus of the group’s efforts is to prevent prior victims of sexual abuse from being re-victimized through the distribution of their images.
The report also says technology providers could do more to make it harder for their products to be misused in this way, though that is complicated by the fact that some of the tools, once released, are difficult to rein in.
A crop of new AI image generators introduced last year captivated the public with their ability to produce whimsical or photorealistic images on demand. Most of them, however, are not favored by producers of child sexual abuse material because they include mechanisms to block such content.
By contrast, the tools preferred by producers of child sexual abuse material are harder to control, such as the open-source Stable Diffusion, developed by the London-based startup Stability AI. When it surfaced in the summer of 2022, a subset of users quickly learned how to use it to generate nudity and pornography, mostly depicting adults, often in nonconsensual scenarios such as celebrity-inspired nude pictures.
Stability AI later rolled out new filters that block unsafe and inappropriate content, and its software license explicitly prohibits illegal uses. In a statement, the company said it strictly prohibits misuse of its products for illegal or immoral purposes and strongly supports law enforcement action against those who do misuse them.
However, older, unfiltered versions of Stable Diffusion remain accessible, and these are overwhelmingly the tools of choice for people creating explicit content involving children, according to David Thiel, chief technologist of the Stanford Internet Observatory, another watchdog group studying the problem.
“You can’t regulate what people are doing on their computers, in their bedrooms. It’s not possible,” Sexton says. “So how do you get to the point where they can’t use openly available software to create harmful content like this?”
Many countries, including the U.S. and the U.K., have enacted laws banning the production and possession of such images, but it remains to be seen how effectively those laws will be enforced.
The IWF’s report comes ahead of a global AI safety conference hosted by the British government, which will feature prominent figures such as U.S. Vice President Kamala Harris and tech leaders.
IWF CEO Susie Hargreaves says in a prepared statement: “While this report paints a bleak picture, I am optimistic. It is essential to communicate the realities of the problem to a wide audience because we need to have discussions about the darker side of this amazing technology.”