In his essay “The Question Concerning Technology,” Martin Heidegger shares the words of the poet Hölderlin: “But where danger is, grows / The saving power also.” In other words, the cure to our cultural maladies can emerge from their very cause, precisely by virtue of the danger’s heightened stakes. This may be our technological situation today. In the rise of artificial intelligence, we may be witnessing the growth of the power that will save us from one of the subtler ailments of our age: the triumph of the image over the word.

At the beginning of man was the word. For it is the gift of language—of the ability to communicate through reason with others—that elevates him above the other animals. Our discursive powers are at the heart of our uniquely political nature; as Aristotle explains in the Politics:

why man is a political animal in a greater measure than any bee or any gregarious animal is clear. For nature, as we declare, does nothing without purpose; and man alone of the animals possesses speech. 

The word thus relies on and bolsters man’s rationality and sociality—his ability to form ideas and convey them to others.

But later appeared the image—the immediate, language-less depiction of some thing, the imposition of pure sensory impression. In contrast to the word’s propositionality, the image—in the form of the photograph, television show, Instagram post, or TikTok video, to name a few—has no content that can be judged true or false. And there is similarly no context in which to situate an image and to inform our engagement with it; when a picture or video appears on our X feed, we have no idea where it came from or why we are seeing it.

To be fair, the word remains a force in America today; we are still capable, for example, of recognizing children’s falling literacy rates as a disaster. But as the decline of reading and the rise of television attest—and more recently, the proliferation of pictures and video through social media—the image has long had the upper hand, and its dominance is growing. And as Daniel Boorstin, Neil Postman, Jacques Ellul, and others foresaw, the transition to a post-literate age has had terrible consequences for the rationality and seriousness of our public life.

Because of the image’s inherent ambiguity, one can never say what it “means,” except through recourse to words; one image cannot explicate the meaning of another. True, words can also be reduced to gobbledygook, but only by having their meaning corrupted. With images, there is no certain meaning to corrupt, no proposition to judge true or false. As Nietzsche observed, “One cannot refute a disease of the eye. . . . The concepts ‘true’ and ‘untrue’ have, as it seems to me, no meaning in optics.” Truth, and the rationality its pursuit sustains, risk obsolescence among a people of the image.

And since the image has no articulable meaning, it a fortiori has no meaning that can be shared between people. A society governed by the image must therefore be marked by solipsism, with each person trapped in a web of impressions and “vibes” that only he encounters. Our culture’s decline from rationality into absurdity, from logos to barbaric yawp, thus goes along with our declining sense of community; a shared engagement with the reason of the word is replaced by a solitary perception of the non-rationality of the image. In losing a public, deliberative language, we lose what makes a community possible, so that it would be more correct to say that we are becoming, not a people of the image, but a mass of persons of the image. Imagistic media work, as Ellul put it, by “enveloping us in a haze”; but it is a customized haze, in which each person is enveloped alone.

Is there any way out of this haze? Paradoxically, a solution may be emerging from the belly of the beast, in the form of the most insidious type of image yet: the deepfake.

In the past few years, generative AI has released waves of fake images and videos, adapting people’s appearances and voices to produce remarkably plausible doppelgängers. These deepfakes can already mimic anyone, from celebrities and politicians—you can even watch a never-ending (vulgar) Biden–Trump debate—to middle school girls. In January, the internet was flooded with graphic deepfakes of Taylor Swift, prompting X to block all searches for the singer. The deepfakes will only improve over time, making it impossible to distinguish fact from fiction on the basis of their content alone. And the quantity of deepfakes will grow along with their quality. While the text-to-video model Sora, developed by OpenAI, is not yet available to the public, anyone can purchase access to OpenAI’s text-to-image model, DALL·E 3. As access to and use of these programs spread, the number of deepfakes will skyrocket.

In the short term, the result will be chaos. We will all probably be duped at least once by a homily from Pope Francis, or an interview with LeBron James, that never happened. The upcoming presidential election is likely to accelerate the spawning of deepfakes and forgeries, sowing confusion and prompting alarm about “misinformation” from those who seek to control what the public has access to. Yet the real danger lies not in our rejection of the image but in our complacent acceptance of it.

Over time, this chaos could free us from our reliance on the image, and out of our suspicion and even paranoia could emerge a more sober skepticism of the pictures and videos bombarding us. For too long, we have thought that a picture could be “truer” than words; you can spin an event however you want, but no amount of obfuscation, it has seemed, can refute the veracity of a picture. But as AI-generated pictures and videos spread and crowd out their human precursors, we may come to realize that this was an illusion.

When it becomes impossible to tell whether any image is real or fake, it will be pointless to rely on images at all, and the new default assumption may become that any image is doctored. Instead of believing whatever images we encounter online, we may learn to grant the assumption of veracity only to those images and videos shared by the most credible sources—friends, family, and colleagues who have earned our trust. Just as AI-generated images depend on human ones—the models train on real images taken by humans—the ability to deceive depends on a general assumption of truthfulness. But this is an unstable, self-undermining relationship. Realize that one has been told enough lies, and one learns not to trust everyone anymore; realize that enough of the images one encounters are fake, and one learns not to be fooled anymore. The skepticism we will feel upon seeing any image whose source we do not know, or do not know to be credible, will simply be what we should have felt all along.

What’s more, the realization that AI images are parasitic upon real ones may impel people to reduce the sharing of their own images. As people realize the perversities and surveillance that their public selfies and YouTube confessionals are enabling, we may reach a new equilibrium in which our images return to being a private, controlled matter, shared only among a trusted few and leaving the word to fill in the new public vacuum. In this way, social trust would be not so much reduced as redistributed from vast networks, where no image is certain, to tighter circles of reliable sources that can prove their integrity.

Ellul, then, may have been too pessimistic in seeing only one possible outcome for the image’s victims: that “being plunged into an artificial world . . . will cause them to lose their sense of reality and to abandon their search for truth.” Certainly, AI should move us to abandon our search for truth through the image. But abandoning the search for truth in the wrong place need not ruin the search for truth itself.

In a number of domains, AI has the potential to undo our age’s worst trends precisely by accelerating them and heightening their dangers. Those who anticipate technology’s ushering in an apocalypse aren’t wholly wrong, then, if we remember that an apocalypse is a disclosing. By disclosing the folly of our modern worship of the inarticulable Golden Calf, AI could put us on the path to becoming again a rational, serious people, a people of the word. Stranger felix culpas have happened.
