AI Is Triggering a Child-Sex-Abuse Crisis


By Matteo Wong


Disaster is brewing on dark-web forums and in schools.

Photograph of a shadow of hands holding a smartphone
Justin Smith / Getty

This is Atlantic Intelligence, a newsletter in which our writers help you wrap your mind around artificial intelligence and a new machine age. Sign up here.

A disaster is brewing on dark-web forums, in messaging apps, and in schools around the world: Generative AI is being used to create sexually explicit images and videos of children, likely thousands a day. “Perhaps millions of kids nationwide have been affected in some way by the emergence of this technology,” I reported this week, “either directly victimized themselves or made aware of other students who have been.”

Yesterday, the nonprofit Center for Democracy and Technology released the latest in a slew of reports documenting the crisis, finding that 15 percent of high schoolers reported hearing about an AI-generated image that depicted someone associated with their school in a sexually explicit or intimate manner. Previously, a report co-authored by a group at the United Nations Interregional Crime and Justice Research Institute found that 50 percent of global law-enforcement officers surveyed had encountered AI-generated child-sexual-abuse material (CSAM).

Generative AI disrupts the major ways of detecting and taking down CSAM. Before the technology became widely available, most CSAM consisted of recirculating content, meaning anything that matched a database of known, abusive images could be flagged and removed. But generative AI allows for new abusive images to be produced easily and quickly, circumventing the list of known cases. Schools, meanwhile, aren’t adequately updating their sexual-harassment policies or educating students and parents, according to the CDT report.

Although the problem is exceptionally challenging and upsetting, the experts I spoke with were hopeful that there may yet be solutions. “I do still see that window of opportunity” to avert an apocalypse, one told me. “But we have to grab it before we miss it.”



High School Is Becoming a Cesspool of Sexually Explicit Deepfakes

By Matteo Wong

For years now, generative AI has been used to conjure all sorts of realities—dazzling paintings and startling animations of worlds and people, both real and imagined. This power has brought with it a tremendous dark side that many experts are only now beginning to contend with: AI is being used to create nonconsensual, sexually explicit images and videos of children. And not just in a handful of cases—perhaps millions of kids nationwide have been affected in some way by the emergence of this technology, either directly victimized themselves or made aware of other students who have been.

Yesterday, the Center for Democracy and Technology, a nonprofit that advocates for digital rights and privacy, released a report on the alarming prevalence of nonconsensual intimate imagery (or NCII) in American schools. In the past school year, the center’s polling found, 15 percent of high schoolers reported hearing about a “deepfake”—or AI-generated image—that depicted someone associated with their school in a sexually explicit or intimate manner. Generative-AI tools have “increased the surface area for students to become victims and for students to become perpetrators,” Elizabeth Laird, a co-author of the report and the director of equity in civic technology at CDT, told me. In other words, whatever else generative AI is good for—streamlining rote tasks, discovering new drugs, supplanting human art, attracting hundreds of billions of dollars in investments—the technology has made violating children much easier.

Read the full article.


What to Read Next

On Wednesday, OpenAI announced yet another round of high-profile departures: The chief technology officer, the chief research officer, and a vice president of research all left the start-up that ignited the generative-AI boom. Shortly after, several news outlets reported that OpenAI is abandoning its nonprofit origins and becoming a for-profit company that could be valued at $150 billion.

These changes could come as a surprise to some, given that OpenAI’s purported mission is to build AI that “benefits all of humanity.” But to longtime observers, including Karen Hao, an investigative technology reporter who is writing a book on OpenAI, this is only a denouement. “All of the changes announced yesterday simply demonstrate to the public what has long been happening within the company,” Karen wrote in a story for The Atlantic. (The Atlantic recently entered a corporate partnership with OpenAI.)

Months ago, internal factions concerned that OpenAI’s CEO, Sam Altman, was steering the company toward profit and away from its mission attempted to oust him, as Karen and my colleague Charlie Warzel reported at the time. “Of course, the money won, and Altman ended up on top,” Karen wrote yesterday. Since then, several co-founders have left or gone on leave. After Wednesday’s departures, Karen notes, “Altman’s consolidation of power is nearing completion.”


P.S.

Earlier this week, many North Carolinians saw an AI-generated political ad attacking Mark Robinson, the disgraced Republican candidate for governor in the state. Only hours earlier, Nathan E. Sanders and Bruce Schneier had noted exactly this possibility in a story for The Atlantic, writing that AI-generated campaign ads are coming and that chaos might ensue. “Last month, the FEC announced that it won’t even try making new rules against using AI to impersonate candidates in campaign ads through deepfaked audio or video,” they wrote. Despite many legitimate potential uses of AI in political advertising, and a number of state laws regulating it, a dearth of federal action leaves the door wide open for generative AI to wreck the presidential election.

— Matteo


