AI-Generated Child-Sexual-Abuse Images Are Flooding the Web

For years now, generative AI has been used to conjure all sorts of realities—dazzling paintings and startling animations of worlds and people, both real and imagined. This power has brought with it a tremendous dark side that many experts are only now beginning to contend with: AI is being used to create nonconsensual, sexually explicit images and videos of children. And not just in a handful of cases—perhaps millions of kids nationwide have been affected in some way by the emergence of this technology, either directly victimized themselves or made aware of other students who have been.

This morning, the Center for Democracy and Technology, a nonprofit that advocates for digital rights and privacy, released a report on the alarming prevalence of nonconsensual intimate imagery (or NCII) in American schools. In the past school year, the center’s polling found, 15 percent of high schoolers reported hearing about a “deepfake”—or AI-generated image—that depicted someone associated with their school in a sexually explicit or intimate manner. Generative-AI tools have “increased the surface area for students to become victims and for students to become perpetrators,” Elizabeth Laird, a co-author of the report and the director of equity in civic technology at CDT, told me. In other words, whatever else generative AI is good for—streamlining rote tasks, discovering new drugs, supplanting human art, attracting hundreds of billions of dollars in investments—the technology has made violating children much easier.

Today’s report joins several others documenting the alarming prevalence of AI-generated NCII. In August, Thorn, a nonprofit that monitors and combats the spread of child-sexual-abuse material (CSAM), released a report finding that 11 percent of American children ages 9 to 17 know of a peer who has used AI to generate nude images of other kids. A United Nations institute for international crime recently co-authored a report noting the use of AI-generated CSAM to groom minors and finding that, in a recent global survey of law enforcement, more than 50 percent of respondents had encountered AI-generated CSAM.

Although the number of official reports related to AI-generated CSAM is relatively small—roughly 5,000 tips in 2023 to the National Center for Missing & Exploited Children, compared with tens of millions of reports about other abusive images involving children that same year—those figures are likely an undercount and have been growing. It’s now likely that “there are thousands of new [CSAM] images being generated a day,” David Thiel, who studies AI-generated CSAM at Stanford, told me. This summer, the U.K.-based Internet Watch Foundation found that in a one-month span in the spring, more than 3,500 examples of AI-generated CSAM were uploaded to a single dark-web forum—an increase from the 2,978 uploaded during the previous September.

Overall reports involving or suspecting CSAM have been rising for years. AI tools have arrived amid a “perfect storm,” Sophie Maddocks, who studies image-based sexual abuse and is the director of research and outreach at the Center for Media at Risk at the University of Pennsylvania, told me. The rise of social-media platforms, encrypted-messaging apps, and accessible AI image and video generators has made it easier to create and circulate explicit, nonconsensual material on an internet that is permissive, and even encouraging, of such behavior. The result is a “general kind of extreme, exponential explosion” of AI-generated sexual-abuse imagery, Maddocks said.

Policing all of this is a major challenge. Most people use social- and encrypted-messaging apps—including iMessage on the iPhone and WhatsApp—for completely unremarkable reasons. Similarly, AI tools such as face-swapping apps may have legitimate entertainment and creative value, even if they can also be abused. Meanwhile, open-source generative-AI programs, some of which may have sexually explicit images and even CSAM in their training data, are easy to download and use. Generating a fake, sexually explicit image of almost anybody is “cheaper and easier than ever before,” Alexandra Givens, the president and CEO of CDT, told me. Among U.S. schoolchildren, at least, the victims tend to be female, according to CDT’s survey.

Tech companies do have ways of detecting and stopping the spread of conventional CSAM, but those defenses are easily circumvented by AI. One of the main ways that law enforcement and tech companies such as Meta are able to detect and remove CSAM is by using a database of digital codes, a sort of visual fingerprint, that correspond to every image of abuse that researchers are aware of on the web, Rebecca Portnoff, the head of data science at Thorn, told me. These codes, known as “hashes,” are automatically created and cross-referenced so that humans don’t have to review every potentially abusive image. This has worked so far because much conventional CSAM consists of recirculated images, Thiel said. But the ease with which people can now generate slightly altered, or wholly fabricated, abusive images could quickly outpace this approach: Even if law-enforcement agencies could add 5,000 instances of AI-generated CSAM to the list each day, Thiel said, 5,000 new ones would exist the next.
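The lookup pattern Portnoff describes can be illustrated with a minimal sketch. Production systems rely on perceptual hashes (PhotoDNA is the best-known example) that tolerate resizing and re-encoding; the toy version below uses an ordinary cryptographic hash purely to show the matching logic, and the hash database and file names are hypothetical.

```python
# Minimal sketch of hash-based matching against a database of known abusive
# images. Real deployments use perceptual hashes that survive small edits;
# SHA-256 here only matches exact byte-for-byte copies, which is why newly
# generated images slip past this kind of check.
import hashlib
from pathlib import Path

# Hypothetical set of fingerprints supplied by a clearinghouse.
KNOWN_HASHES: set[str] = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}


def fingerprint(image_path: Path) -> str:
    """Return a hex digest of the file's raw bytes."""
    return hashlib.sha256(image_path.read_bytes()).hexdigest()


def is_known_abuse_image(image_path: Path) -> bool:
    """Flag a file if its fingerprint appears in the shared database."""
    return fingerprint(image_path) in KNOWN_HASHES


if __name__ == "__main__":
    # Hypothetical upload being screened before publication.
    upload = Path("incoming_upload.jpg")
    if upload.exists() and is_known_abuse_image(upload):
        print("Match found: route to human review and report.")
    else:
        print("No match: a novel or altered image passes this check.")
```

The final branch is the weakness the experts describe: an AI-generated image has no entry in any hash database, so matching against known fingerprints never fires no matter how quickly new hashes are added.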

In theory, AI could offer its own kind of solution to this problem. Models could be trained to detect explicit or abusive imagery, for example. Thorn has developed machine-learning models that can detect unknown CSAM. But designing such programs is difficult because of the sensitive training data required. “In the case of intimate images, it’s complicated,” Givens said. “For images involving children, it is illegal.” Training a model to classify CSAM involves acquiring CSAM, which is a crime, or working with an organization that is legally authorized to store and handle such images.

“There are no silver bullets in this space,” Portnoff said, “and to be effective, you are really going to need to have layered interventions across the entire life cycle of AI.” That will likely require significant, coordinated action from AI companies, cloud-computing platforms, social-media giants, researchers, law-enforcement officials, schools, and more, which could be slow to come about. Even then, somebody who has already downloaded an open-source AI model could theoretically generate endless CSAM, and use those synthetic images to train new, abusive AI programs.

Still, the experts I spoke with weren’t fatalistic. “I do still see that window of opportunity” to stop the worst from happening, Portnoff said. “But we have to grab it before we miss it.” There is a growing awareness of and commitment to preventing the spread of synthetic CSAM. After Thiel found CSAM in one of the largest publicly available image data sets used to train AI models, the data set was taken down; it was recently reuploaded without any abusive content. In May, the White House issued a call to action for combatting CSAM to tech companies and civil society, and this summer, major AI companies including OpenAI, Google, Meta, and Microsoft agreed to a set of voluntary design principles that Thorn developed to prevent their products from generating CSAM. Two weeks ago, the White House announced another set of voluntary commitments to fight synthetic CSAM from several major tech companies. Portnoff told me that, while she always thinks “we can be moving faster,” these sorts of commitments are “encouraging for progress.”

Tech companies, of course, are only one part of the equation. Schools also have a responsibility as the frequent sites of harm, although Laird told me that, according to CDT’s survey results, they are woefully underprepared for this crisis. In CDT’s survey, less than 20 percent of high-school students said their school had explained what deepfake NCII is, and even fewer said the school had explained how sharing such images is harmful or where to report them. A majority of parents surveyed said that their child’s school had provided no guidance relating to authentic or AI-generated NCII. Among teachers who had heard of a sexually abusive deepfake incident, less than 40 percent reported that their school had updated its sexual-harassment policies to include synthetic images. What procedures do exist tend to focus on punishing students without necessarily accounting for the fact that many adolescents may not fully understand that they are harming someone when they create or share such material. “This cuts to the core of what schools are intended to do,” Laird said, “which is to create a safe place for all students to learn and thrive.”

Synthetic sexually abusive images are a new problem, but one that governments, media outlets, companies, and civil-society groups should have begun considering, and working to prevent, years ago, when the deepfake panic began in the late 2010s. Back then, many pundits were focused on something else entirely: AI-generated political disinformation, the fear of which bred government warnings and hearings and bills and entire industries that churn to this day.

All the while, the technology had the potential to transform the creation and nature of sexually abusive images. As early as 2019, online monitoring found that 96 percent of deepfake videos were nonconsensual pornography. Advocates pointed this out, but were drowned out by fears of nationally and geopolitically devastating AI-disinformation campaigns that have yet to materialize. Political deepfakes threatened to make it impossible to believe what you see, Maddocks told me. But for victims of sexual assault and harassment, “people don’t believe what they see, anyway,” she said. “How many rape victims does it take to come forward before people believe what the rapist did?” This deepfake crisis has always been real and tangible, and is now impossible to ignore. Hopefully, it’s not too late to do something about it.
