Keeping track of events during a natural disaster was hard enough in the past, before people with dubious motives started flooding social media with sensational images generated by artificial intelligence. In a crisis, public officials, first responders, and people living in harm’s way all need reliable information. The aftermath of Hurricane Helene has shown that, even as technology has theoretically improved our capacity to connect with other people, our visibility into what’s happening on the ground may be deteriorating.
Beginning late last week, Helene’s storm surge, winds, and rains created a 500-mile path of destruction across the Southeast. To many people’s surprise, the storm caused catastrophic flooding well inland—including in and around Asheville, North Carolina, a place that had frequently been labeled a “climate haven.” Pictures that many users assumed had been taken somewhere around Asheville began spreading rapidly on social media. Among them were photographs of pets standing on the rooftops of buildings surrounded by water; another image showed a man wading through a flood to rescue a dog. But news outlets that took a closer look noted that the man had six fingers and three nostrils—a sign that the image was a product of AI, which frequently gets certain details wrong.
The spread of wild rumors has always been a problem during major disasters, which typically produce power outages and transportation obstacles that interfere with the communication channels that most people rely on from day to day. Most emergency-management agencies gather information from local media and public sources, including posts from local citizens, to determine where help is needed most. Noise in the system hinders their response.
In past crises, emergency managers at all levels of government have relied on local media for factual information about events on the ground. But the erosion of the local-news industry—the number of newspaper journalists has shrunk by two-thirds since 2005, and local television stations face serious financial pressure—has reduced the supply of reliable reporting.
For a time, the social-media platform formerly known as Twitter provided countervailing benefits: Information moved instantaneously, and by issuing blue checks in advance to authenticated accounts, the platform gave users a way of separating reliable commentators from random internet rumormongers. But under its current owner, Elon Musk, the platform, renamed X, has changed its algorithms, account-verification system, and content-moderation approach in ways that make the platform less reliable in a crisis.
Helene seemed to prove the point. X was awash in claims that stricken communities would be bulldozed, that displaced people would be deprived of their homes, even that shadowy interests were controlling the weather and singling out some areas for harm. The Massachusetts Maritime Academy emergency-management professor Samantha Montano, the author of Disasterology: Dispatches From the Frontlines of the Climate Crisis, declared in a post on X that Helene was “Twitter’s last disaster.”
It was also AI’s first major disaster. The fake images of devastation that proliferated on X, Facebook, and other platforms added to the uncertainty about what was happening. Some users spreading those images appear to have been trying to raise money or commandeer unsuspecting eyeballs for pet projects. Other users had political motives. To illustrate claims that Joe Biden and Kamala Harris had abandoned Helene’s victims, right-wing influencers shared an AI-generated image of a weeping child holding a wet puppy. Another fake viral image showed Donald Trump wading through floodwaters.
Disinformation—fast and unreliable—filled a vacuum created by power outages, bad cell service, and destroyed transportation routes; it then had to be swatted back by legacy media. Local print, television, and radio newsrooms have made heroic efforts to cover Helene and its aftermath. But they, too, are forced to devote some of their energies to debunking the rumors that nonlocals promote on national platforms.
Unfortunately, the unfolding information crisis is likely to get worse. As climate change produces more frequent weather-related disasters, many of them in unexpected places, cynical propagandists will have more opportunities to make mischief. Good sources of information are vulnerable to the very climate disasters they are supposed to monitor. That’s true not just of local media outlets. In an ironic turn, Helene’s path of destruction included the Asheville headquarters of the National Oceanic and Atmospheric Administration’s National Centers for Environmental Information, which tracks climate data, including extreme weather.
More disasters await us. We need to view reliable communications as a safety precaution in its own right—no different from sea walls or tornado shelters.
Over time, technological advances should allow for ever more precise monitoring of weather conditions. But our broader disaster-response system is buckling, because it relies on communication and collaboration among government officials, first responders, and citizens—and some of the assumptions under which it developed no longer hold. Officials cannot reach everyone through local media outlets; photos and videos purportedly taken in a disaster are not definitive proof; the number of people who deliberately spread misinformation is nontrivial, and doing so is getting easier. Government officials need to keep these constraints in mind in all their communications with the public. FEMA is adapting; it now has a webpage dedicated to dispelling rumors.
But the burden also falls on average citizens. Emergency managers regularly urge people to stockpile 72 hours’ worth of food and water; Americans should plan their disaster-media diet with similar care. That means following only known sources, learning how to identify doctored photos and videos, and understanding the danger of amplifying unverified claims. In moments of crisis, communities need to focus on helping people in need. The least we all can do is avoid adding to the noise.