Four theories that explain AI art’s default vibe

The image-makers are stuck in a pattern.

Illustration by The Atlantic, depicting many samey, AI-looking images in a series of frames. Sources: Getty.

This is Atlantic Intelligence, a newsletter in which our writers help you wrap your mind around artificial intelligence and a new machine age. Sign up here.

At this point, AI art is about as remarkable as the email inviting you to save 10 percent on a new pair of jeans. On the one hand, it’s miraculous that computer programs can synthesize images based on any text prompt; on the other, these images are common enough that they’ve become a new kind of digital junk, polluting social-media feeds and other online spaces with no particular payoff to users.

But their big spam energy isn’t just a question of volume—these images also tend to look pretty similar. As my colleague Caroline Mimbs Nyce writes in a new story for The Atlantic, “Two years into the generative-AI boom, these programs’ creations seem more technically advanced … but they are stuck with a distinct aesthetic.” By default, these models are inclined to produce images with bright, saturated colors; beautiful and almost cartoonish people; and dramatic lighting. Caroline spoke with experts who gave her four theories on why that is.

Ultimately, her reporting suggests that although tech companies are competing to offer more compelling image generators, the products aren’t actually all that different in the end—the situation is more “Pepsi vs. Coke” than “Toyota vs. Mercedes.” Perhaps people will simply use whichever image generator is most convenient. That may explain why companies such as X, Google, and Apple are so eager to build these models into existing platforms: Image generators aren’t magic anymore, but a feature to be checked off.



Why Does AI Art Look Like That?

By Caroline Mimbs Nyce

This week, X launched an AI-image generator, allowing paying subscribers of Elon Musk’s social platform to make their own art. So—naturally—some users appear to have immediately made images of Donald Trump flying a plane toward the World Trade Center; Mickey Mouse wielding an assault rifle, and another of him enjoying a cigarette and some beer on the beach; and so on. Some of the images that people have created using the tool are deeply unsettling; others are just strange, or even kind of funny. They depict wildly different scenarios and characters. But somehow they all kind of look alike, bearing unmistakable hallmarks of AI art that have cropped up in recent years thanks to products such as Midjourney and DALL-E.

Read the full article.


What to Read Next

  • Trump finds a new Benghazi: Earlier this week, Donald Trump falsely claimed that Kamala Harris had “A.I.’d” a photograph of a crowd at one of her campaign rallies—alleging, in other words, that she had doctored or outright fabricated an image in order to exaggerate the number of people cheering her on. As Matthew Kirschenbaum writes for The Atlantic, Trump’s use of the term may have less to do with the technology per se and more to do with giving his supporters something to post about—“a way of licensing them to follow his example by filling up the text boxes on their own screens.”

P.S.

AI art may actually be at its best with an audience of one. “Approaching generative image creators in order to produce a desired result might get their potential exactly backwards,” Ian Bogost wrote for The Atlantic last year. “AI can give them shape outside your mind, quickly and at little cost: any notion whatsoever, output visually in seconds. The results are not images to be used as media, but ideas recorded in a picture.”

— Damon


