The Stanford Internet Observatory found more than 3,200 images of suspected child sexual abuse in the giant AI database LAION. Photograph: Dado Ruvić/Reuters

AI image generators trained on pictures of child sexual abuse, study finds


Images might have helped AI systems produce realistic sexual imagery of fake children; the database was taken down in response

Hidden inside the foundation of popular artificial intelligence (AI) image generators are thousands of images of child sexual abuse, according to new research published on Wednesday. The operators of some of the largest and most widely used image sets for training AI shut off access to them in response to the study.

The Stanford Internet Observatory found more than 3,200 images of suspected child sexual abuse in the giant AI database LAION, an index of online images and captions that’s been used to train leading AI image-makers such as Stable Diffusion. The watchdog group based at Stanford University worked with the Canadian Centre for Child Protection and other anti-abuse charities to identify the illegal material and report the original photo links to law enforcement. More than 1,000 of the suspected images were confirmed as child sexual abuse material.

“We find that having possession of a LAION‐5B dataset populated even in late 2023 implies the possession of thousands of illegal images,” researchers wrote.

The response was immediate. On the eve of the Wednesday release of the Stanford Internet Observatory’s report, LAION said it was temporarily removing its datasets. LAION, which stands for the non-profit Large-scale Artificial Intelligence Open Network, said in a statement that it “has a zero tolerance policy for illegal content and in an abundance of caution, we have taken down the LAION datasets to ensure they are safe before republishing them”.

While the images account for just a fraction of LAION’s index of about 5.8bn images, the Stanford group says they are probably influencing the ability of AI tools to generate harmful outputs and reinforcing the prior abuse of real victims who appear multiple times.

Those same images have made it easier for AI systems to produce realistic and explicit imagery of fake children as well as transform social media photos of fully clothed real teens into nudes, much to the alarm of schools and law enforcement around the world, the researchers say. Until recently, anti-abuse researchers thought the only way that some unchecked AI tools produced abusive imagery of children was by essentially combining what they have learned from two separate categories of online images – adult pornography and benign photos of kids.

Trying to clean up the data retroactively is difficult, so the Stanford Internet Observatory is calling for more drastic measures. One is for anyone who has built training sets based on LAION‐5B – named for the more than 5bn image-text pairs it contains – to “delete them or work with intermediaries to clean the material”. Another is to in effect make an older version of Stable Diffusion disappear from all but the darkest corners of the internet.

“Legitimate platforms can stop offering versions of it for download,” particularly if they are frequently used to generate abusive images and have no safeguards to block them, said the Stanford Internet Observatory’s chief technologist, David Thiel, who authored the report.

It’s not an easy problem to fix, and traces back to many generative AI projects being “effectively rushed to market” and made widely accessible because the field is so competitive, said Thiel.

“Taking an entire internet-wide scrape and making that dataset to train models is something that should have been confined to a research operation, if anything, and is not something that should have been open-sourced without a lot more rigorous attention,” Thiel said in an interview.

A prominent LAION user that helped shape the dataset’s development is London-based startup Stability AI, maker of the Stable Diffusion text-to-image models. New versions of Stable Diffusion have made it much harder to create harmful content, but an older version introduced last year – which Stability AI says it didn’t release – is still baked into other applications and tools and remains “the most popular model for generating explicit imagery”, according to the Stanford report.

“We can’t take that back. That model is in the hands of many people on their local machines,” said Lloyd Richardson, director of information technology at the Canadian Centre for Child Protection, which runs Canada’s hotline for reporting online sexual exploitation.

Stability AI on Wednesday said it only hosts filtered versions of Stable Diffusion and that “since taking over the exclusive development of Stable Diffusion, Stability AI has taken proactive steps to mitigate the risk of misuse”.


“Those filters remove unsafe content from reaching the models,” the company said in a prepared statement. “By removing that content before it ever reaches the model, we can help to prevent the model from generating unsafe content.”

LAION said this week it developed “rigorous filters” to detect and remove illegal content before releasing its datasets and was still working to improve those filters. The Stanford report acknowledged that LAION’s developers made some attempts to filter out “underage” explicit content but might have done a better job had they consulted earlier with child safety experts.

Much of LAION’s data comes from another source, Common Crawl, a repository of data constantly trawled from the open internet, but Common Crawl’s executive director, Rich Skrenta, said it was “incumbent on” LAION to scan and filter what it took before making use of it.

Many text-to-image generators are derived in some way from the LAION database, though it’s not always clear which ones. OpenAI, maker of Dall-E and ChatGPT, said it didn’t use LAION and had fine-tuned its models to refuse requests for sexual content involving minors.

Google built its text-to-image Imagen model based on a LAION dataset but decided against making it public in 2022 after an audit of the database “uncovered a wide range of inappropriate content including pornographic imagery, racist slurs, and harmful social stereotypes”.

LAION was the brainchild of a German researcher and teacher, Christoph Schuhmann, who said earlier this year that part of the reason to make such a huge visual database publicly accessible was to ensure that the future of AI development isn’t controlled by a handful of powerful companies.
