Technology changes us. Electric light bulbs, and gas lamps before that; automobiles and gasoline-powered farm equipment; gunpowder and steam locomotion; clocks and the printing press; bronze and iron: these technologies took hold because they helped us to achieve what we wanted, and they created opportunities for new things we didn’t know we wanted. In doing so, they also changed our patterns of behavior, our relationships, family dynamics and economic institutions, the nature of political authority and social status, and our very sense of self. In other words, these technologies changed not only what we wanted, but also the we that were doing the wanting.
This observation is neither new nor controversial. Plato, in the Phaedrus, has Socrates remark that the very invention of writing had psychic and social costs. We are used to promoting writing as an essential skill of a well-developed soul (and Plato used it to great effect), but Socrates argues that as a tool for reminding, writing weakened the faculty of remembering—“implanting forgetfulness in [our] souls.” He warned that, after the advent of writing, people will “seem to know much, while for the most part they know nothing, . . . filled, not with wisdom but with the conceit of wisdom, they will be a burden to their fellows.”
Digital devices are changing us—especially portable devices and their most popular applications, the social media platforms that combine communication with entertainment. This was less well understood several years ago, but now it is common knowledge. The phenomenon invites multiple levels of analysis, and has been addressed from the perspectives of political theory, media studies, sociology, childhood development, and more. If you are interested in how digital media influences your sense of political and bodily agency, read Matthew Crawford’s The World Beyond Your Head: On Becoming an Individual in an Age of Distraction. If you are interested in how digital media has fostered social alienation and isolation, learn from Sherry Turkle’s book Alone Together: Why We Expect More from Technology and Less from Each Other. For practical advice about how to discipline your use of digital media, try Cal Newport’s Deep Work: Rules for Focused Success in a Distracted World. And for neuroscientific analysis of the way the new online ecosystem puts such stress on our old biological hardware, there is The Distracted Mind: Ancient Brains in a High-Tech World by Adam Gazzaley and Larry D. Rosen.
Following Plato’s lead, we can also ask about the effect of the smartphone at a deeper level. What does it change in our soul? What appetites does it feed, and what does it starve? What powers does it exercise, and which does it allow to atrophy?
In this essay, I want to explore how digital technology is changing our understanding of ourselves—our sense of self-awareness, our interior life, our very experience of agency. I want to examine not only in what ways those are changing, but by what mechanisms. Increasingly, we are aware that the personal effects of digital technology are not accidental—that in many ways the devices, and the applications they run, are designed to change us, engineered specifically to capture and modify our sense of self, how we think, and what we do. Most attention to the problem of social media focuses on other effects—political polarization, media manipulation, economic exploitation, depression and anxiety, relationship breakdown, disrupted family dynamics, body image and bullying—but to understand those things, we can’t stop there. We need to ask what the new technology does to our inner life.
The designers of digital technology have paid particular attention to the closest thing our contemporary culture has to a classical analysis of the powers of the soul: cognitive science, and especially neuroscience. One doesn’t need to be an expert in these areas to appreciate how the insights and limits of modern neuroscience can illuminate, and be illuminated by, some basic insights of classical philosophical psychology. Neuroscience helps us understand how digital media is changing us, but we need a more classical language about the powers of the soul to understand, and protect ourselves from, the most ominous of these changes.
A Brief History of the Digital Experiment
The term “smartphone” was coined in the 1990s, but the word—and for most people the very idea—didn’t become common until the first iPhone was released in 2007. Within three years, Apple released the first iPad, the iPhone was well established in world markets, and other web-browsing, app-based, touch-screen pocket computers were competing for market share.
2010 was also the year Nicholas Carr published The Shallows: What the Internet Is Doing to Our Brains. Carr had noticed that spending more time online had changed his habits of attention as a reader and writer, and he turned to neuroscience to explain his experience. The way we interact with the internet, he said, “rewires our brains”; surfing the web “makes us shallow thinkers.” The interruption, scrolling, and clicking all break up attention and prevent information from getting from working memory to long-term memory, and the abundance of stimuli presents too much “cognitive load.” The brain adapts, thanks to its neuroplasticity, literally reconfiguring its patterns of activity, making it easier for us to keep scrolling and “multi-tasking,” but harder to concentrate, to remember, to contemplate. Although concerned mainly with the internet in general, rather than specifically with portable devices, Carr described phenomena that the advent of the smartphone greatly accelerated.
Carr’s attention to the neurological effects of online life is well complemented by a reflection from Andrew Sullivan, written five years ago when he checked himself into rehab for digital addiction. The 7,000-word essay he published about that experience—what led him there, how close he was to breaking down, what it was like to suffer withdrawal—highlights the spiritual stakes. The original print title was “My Distraction Sickness and Yours” (New York Magazine, September 19, 2016). Online, and in his 2021 collection Out on a Limb, it is: “I Used to Be a Human Being.” It describes not just his personal experience but “a new epidemic of distraction,” which “is our civilization’s specific weakness.” The problem is spiritual: as Sullivan put it, the “threat is not so much to our minds, even as they shape-shift under the pressure. The threat is to our souls. At this rate, if the noise does not relent, we might even forget we have any” (emphasis added).
Sullivan aptly draws on traditional religion to make sense of this challenge.
The Judeo-Christian tradition recognized a critical distinction—and tension—between noise and silence, between getting through the day and getting a grip on one’s whole life. The Sabbath—the Jewish institution co-opted by Christianity—was a collective imposition of relative silence, a moment of calm to reflect on our lives under the light of eternity.
According to Sullivan, the loss of a cultural habit of Sabbath “changes us. It slowly removes—without our even noticing it—the very spaces where we can gain a footing in our minds and souls that is not captive to constant pressures or desires or duties. And the smartphone has all but banished them.”
I want to try to bridge the gap between these two discourses: brains and souls, the neuroscientific objectivity of Carr and the spiritual interiority of Sullivan. I believe philosophical approaches to the soul help to bridge that gap. But first, it will help for us to examine an attempt to bridge it technologically: the project of “artificial intelligence.” Advances in computer processing that have made portable digital devices possible have also enabled the collection and analysis and use of data on a scale and in a manner never before imagined. Developments in artificial intelligence are informing the design of digital technology. It is not that digital media devices just happen to change us; more than any other previous technology, they are designed to change us.
Admitting We Have a Problem
We know that social media is not really “free.” We aren’t the customer, but the product. That is to say, by using social media, we are voluntarily providing information about ourselves, and that information is valuable. But a common misunderstanding is that social media companies collect data to sell it. This leads many to assume the problems we face relate primarily to privacy and economic exploitation. In fact, however, social media companies don’t exactly sell the data they collect. That’s because it is too valuable to sell.
Every major smartphone app, especially social media, is the interface for an artificial intelligence “algorithm” which constantly processes everything it “learns” about you, updating a virtual representation of you, testing hypotheses about it against your real behavior, and continuing to update the model. The goal is not merely to predict your patterns of behavior, but, by presenting you with customized digital stimuli, to actually shape what you do. What is commodified is not information from and about you, but your very attention and behavior.
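To make the mechanism concrete, here is a deliberately simplified sketch, in Python, of the kind of feedback loop described above: the system keeps a running model of the user, serves the stimulus it estimates will hold attention longest, observes the actual response, and updates. Every name and number here is invented for illustration; nothing reflects any company’s actual code, and real systems are vastly more elaborate, but the loop (observe, model, test stimuli, adjust) is the same in kind.

```python
import random

# Hypothetical content categories the system can serve (illustrative only).
CATEGORIES = ["outrage", "nostalgia", "envy", "cute_animals"]

# The user's true responsiveness is hidden from the system; we simulate it
# here only so the loop has something to learn. These numbers are made up.
TRUE_CLICK_RATES = {"outrage": 0.55, "nostalgia": 0.30, "envy": 0.45, "cute_animals": 0.25}

def simulated_user_clicks(category: str) -> bool:
    """Stand-in for the real person: did this stimulus capture attention?"""
    return random.random() < TRUE_CLICK_RATES[category]

def run_feedback_loop(rounds: int = 5000, explore_rate: float = 0.1) -> dict:
    """Observe behavior, update a model of the user, choose the next stimulus, repeat."""
    shown = {c: 0 for c in CATEGORIES}    # how often each category was served
    clicked = {c: 0 for c in CATEGORIES}  # how often it captured attention
    for _ in range(rounds):
        if random.random() < explore_rate:
            choice = random.choice(CATEGORIES)  # test a hypothesis about the user
        else:
            # exploit the current model: serve the highest estimated click rate
            choice = max(CATEGORIES, key=lambda c: clicked[c] / shown[c] if shown[c] else 0.0)
        shown[choice] += 1
        if simulated_user_clicks(choice):   # observe real behavior
            clicked[choice] += 1            # update the virtual representation of "you"
    # report the model's learned estimate of what holds this user's attention
    return {c: round(clicked[c] / shown[c], 3) if shown[c] else None for c in CATEGORIES}

if __name__ == "__main__":
    print(run_feedback_loop())
```

Even this toy version makes the point: nothing in the loop understands the person; it merely adjusts stimuli until behavior bends toward whatever holds attention longest.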
The closest analogy is to the insidious, absurd, but dangerous manipulation of demons as described by C. S. Lewis in The Screwtape Letters. Like Screwtape and Wormwood, digital technology companies observe and gather and analyze information about you, and it is not the “data” itself they seek to harvest, but your very mind and your will. Jaron Lanier, a former artificial intelligence innovator who has become a sharp critic and an evangelist for more responsible technology, clarifies that the “product” of social media is not information or attention but “the gradual, slight, imperceptible change in your own behavior and perception.”
Weaponized Neuroscience
If unaccountable powers are using weaponized neuroscience in the form of AI algorithms to manipulate us, maybe there should be some oversight? It is increasingly common to advocate greater regulation of tech companies, and last year a Facebook employee “blew the whistle” on her company—asking the government to direct Facebook’s political influence. If the algorithms are manipulating us, we might want to appeal to a power strong enough to make sure they manipulate us the right way.
Alternatively, maybe we could design the algorithms better, or contrive a new algorithm to protect us? Thrive Global is a company that helps organizations create a “behavior change ecosystem.” Its website currently touts its “BEHAVIOR CHANGE TECHNOLOGY PLATFORM,” which promises “A holistic approach to increasing your people’s well-being and resilience.” The Thrive app uses artificial intelligence to develop the “Whole Human”: “Supporting your people’s physical, mental and emotional well-being in one comprehensive platform.”
Thrive’s holistic behavior modification app was enabled by the purchase, in 2019, of a company called “Boundless Mind,” a “neuroscience-based artificial intelligence company” offering an “[AI] platform for behavior design.” Website copy from 2018 bragged about building “engagement and retention” and helping app usage to “Becom[e] a user’s habit.” Our “habits are programmable. . . . The Boundless AI optimizes when and how to praise and encourage each user uniquely.”
“Boundless Mind” was itself a strategic 2017 rebrand of a company originally founded in 2014 as Dopamine Labs. Its website, usedopamine.com, was unapologetic about the drug-addiction metaphor: “Keeping users engaged isn’t luck: it’s science. Give users the right [hit] of dopamine at the right moment and they’ll stay longer, do more, and monetize better.” Cofounder Ramsay Brown said: “We crafted for it to learn something about the structure of how human motivation works. It is now gathering enough data on its own to make meaningful observations to change human behavior.” Its home page announced it was “build[ing] the future of web-scale mind control.”
Notice the progression: from “get them hooked” and “brain hacking” and “mind control,” to “behavior design” and “behavior engineering,” and then to “behavior change” for “holistic” “well-being.” Rebranded from engineering addiction to cultivating wellness, the packaging has changed, and perhaps the intention has too; but the method and mechanism are the same. “Artificial intelligence” does not fortify the self against manipulation. It only submits it to a more comprehensive, benevolent form of control.
It is understandable that we would respond to the dangers of technology by seeking better oversight or more humane design. But if that is all we do, we are only accepting the technocratic paradigm that gave rise to our concerns—a paradigm that sees people as means, not ends; that doesn’t take responsibility for action; that doesn’t care about truth. A technocrat only focuses on the effectiveness of a means to a given end, with no wonder about what the end should be. That may well describe the operation of artificial intelligence, but it can’t be the attitude of intelligent human beings who want to know how to resist the influence of the algorithm.
If the algorithm is functionally demonic—if what it achieves through digital processing is exactly what C. S. Lewis depicts the fallen angels doing in his famous portrayal of temptation—we should not want to centralize, regulate, or even tame its influence. We must find some way to protect ourselves from it, and to strengthen the powers of our soul against it.
How Is the Soul in the Body?
How threatening you find the insidious algorithm will depend, to some degree, on how you think human consciousness is related to our biology. For if you think human agency and awareness are entirely a function of and dependent on physical processes, it would be very scary indeed to realize just how much power modern technology can and will exercise over your body. It implies the possibility of not merely technological temptation and manipulation, but of technological oppression and possession.
That agency and awareness are functions of physical processes is a common assumption—but not a necessary assumption—of modern neuroscience. The project of neuroscience proceeds with the working hypothesis that by understanding the operations of physical, physiological systems, we will be able to explain the cognitive life of human beings.
This hypothesis goes way back. Plato, in the Phaedo, puts an argument in the mouth of Simmias: that our life or psyche or soul is a function of the physical activity of our parts. To communicate this, he strikes on a charming metaphor: the soul is like the harmony of an instrument. This is an attractive hypothesis, for it seems to account for features of the life-power that we regard as mysterious—like music coming from an instrument, it is invisible, it is valuable, it can’t exactly be reduced to physical structure, nor located in one part or another of the physical instrument. And yet one can “kill” a harmony by physically damaging the instrument. The soul as harmony—as a function of physical activity—implies all this.
In the Phaedo, Socrates quickly demolishes the theory, reasoning that a harmony has no power to direct or rule the instrument; rather, the instrument directs or rules the harmony. A soul, however, is more like the musician than like the music: it directs the actions of the body. Socrates does not deny that the soul is affected by the body, nor that we can learn about the soul’s powers by studying the body. But, in large part due to its ruling or providence over the body, the soul is even more marvelous than a musical harmony.
There is much to wonder about here, and we could say, to modify a phrase from Plato and Aristotle describing philosophy, that neuroscience begins with wonder.
How are particular life functions related to bodily activity? This is the key question that motivated the Aristotelian understanding of the soul, which proceeded by differentiating various life powers—the different things a living organism does to move and grow and negotiate its environment. To understand human life, we especially need to differentiate various cognitive powers or modes of awareness, some but not all of which we share with other animals. We tend to think of consciousness or intelligence as one function, but the mystery of it is that it draws on and unifies so many different functions: sensing, feeling, imagining, evaluating, judging, anticipating, wishing, guessing, remembering, calculating, intending, deliberating, wondering, contemplating. We seem to have more control over some of these functions than others, more responsibility to direct or exercise them at will.
Neuroscience imagines each of these as a function of something physical in our bodies, and so, in principle, detectable and even replicable. For hundreds of years we didn’t have tools (like magnetic resonance imaging, or x-rays, or even sophisticated surgery) to detect that activity; nor did we even have the theoretical models (like chemistry, or atomic and electromagnetic theories) that have made modern neuroscience what it is; nor did we have the ability to try to build replicas or simulations of such activities. In fact, all these arise and advance together.
But something like the neuroscientific aspiration—to discover and explain the inner bodily workings of cognitive experience—did evolve even without the assumption that all cognitive experience is a function of bodily activity. Neuroscience didn’t begin with materialism, it began with wonder, and it proceeds, as Plato’s Simmias did, through metaphors.
Modeling the Soul
As Plato’s Phaedo shows, the key question raised by neuroscience is: To what shall we analogize the soul’s activity? Empirical study of the mechanisms of life function does not require the Simmias hypothesis that the soul is a harmony, but it seems to draw energy from having some metaphor or other. We can see the development of neuroscience as grasping for ever more advanced and complicated metaphors for how the wondrous activity of mind could be connected to bodily activity.
Some ancient thinkers thought of the body as a series of pumps and vessels. Descartes thought of the body as a machine. Indeed, he took the metaphor so literally that he believed that life and consciousness were in principle separable from the body. He regarded consciousness and embodiedness as so different that the one could be imagined totally without the other—I may be dreaming, and the I that is dreaming may only be an immaterial being, an angel.
The discovery of electricity gave rise to a new set of metaphors—the soul as charges moving through circuits—and the invention of computers provided an even more elaborate metaphor. Indeed, computers are so sophisticated we started by comparing their functions to human ones: storage as “memory,” and processing as “decisions.” But now that the computer is more familiar and comprehensible than the mysterious brain, it has become the root of the metaphor for human “processing” and “retrieval.” Many have imagined that human consciousness is software that could, at least in principle, be uploaded to run on another platform. The technological details have changed, but we are not far from Simmias’s metaphor of a musical instrument’s harmony.
Increasingly, however, neuroscientists themselves are facing the fact that the mechanistic models are inadequate, and that trying to isolate the brain from the rest of the body may itself be as much a mistake as Descartes’s isolating the pineal gland from the rest of the brain. It seems we think with our whole bodies—an important branch of neuroscience explores the cognitive significance of our intestinal tract!—and organic life has proven stubbornly irreducible. Empirical research itself is running up against the limits of the brain-as-computer model. Both the complexity of the brain and evolutionary accounts of its development lead us to compare it to an organism, or even to a collection of organisms, in a kind of mysterious parliament. With his “Thousand Brains” hypothesis, Jeff Hawkins imagines the brain as a kind of socio-political entity, with collections of structures building consensus, voting, vetoing.
Neuroscience often learns from, and contributes to, the field of “Artificial Intelligence”—attempts to produce (simulate? replicate? approximate?) these living functions in machines. Along the way, both neuroscience and artificial intelligence find greater need to attend to psychology and philosophy, in the form of “theoretical neuroscience,” “cognitive science,” or “cognitive psychology.” In other words, even the empirical study of animal life activity cannot leave behind the original Aristotelian reflection on the different kinds of consciousness or cognition.
Which metaphor-hypothesis neuroscientists work with informs their research into the brain, and it also suggests the best hopes of replicating the brain’s work by artificial means. So maybe your consciousness is not a harmony, a pump, a network of gears and levers, a collection of circuits, nor even a deterministic software program. Maybe, instead, it is a complex, “self-learning” algorithm, rewriting itself in iterative feedback loops of parallel processing. Maybe.
Yet there remain some pesky functions, used even in pursuing empirical science, which are not captured in even our most advanced metaphors or models. Human memory is not mere retrieval but an act of conjuring, inherently creative and akin to imagination. Deciding is not simply running a subroutine but an exercise of agency, of will. And abstract thought is more than “processing information”: it is understanding concepts and affirming truth, comprehending realities that transcend whatever physical means may encode or communicate them.
To the extent that the algorithm is still played out only in and through physical activity, it is no more like a rational soul than is a musical harmony. The same is true of any metaphor or model that would reduce living and thinking to motion in a machine. If Plato and Aristotle are right, we won’t be able to “build” a brain, only to simulate it, and really to simulate only some of its cognitive functions, without the ones that most make us human.
Spiritual Armor and Weapons
Practical evidence of this seems to be that the most insidious algorithms are limited to reading and stimulating physical phenomena; like angelic natures, they cannot directly control our intellect and will. They seek them, but can never possess them without our consent. Defense against the new dark arts of Silicon Valley thus relies on the same tools as ancient spiritual warfare, especially custody of our attention.
Demons attack our weaknesses, the vices that make us vulnerable. Hence, virtues have been called “spiritual armor” protecting us from assault. Screwtape counsels Wormwood to do anything possible to distract his victim from engaging in basic exercises of will and reason. Going for a walk, reading a book, even asking questions—these are all powerful human defenses against the distractions of the devil. Individual acts of thinking and choosing for oneself, exercising self-awareness and taking responsibility for one’s actions and thoughts—the distinctive activities of the rational animal—are themselves safeguards against the soul-snatchers’ designs.
Reading a philosophical essay, I hope, can be an occasion for soul-strengthening too. Understanding concepts, interpreting language, following arguments, all require a disciplined focus of spiritual powers. Contemplating, wondering, and asking why are all exercises of rational attention. All of these intentional human acts are, in the face of temptation, subversive and protective.
Perhaps this can also give us new appreciation for the power of prayer, sometimes described as a spiritual weapon. More than any other deliberate activity, prayer activates and directs the soul’s various modes of cognition, disciplining them and orienting them to deeper understanding of self and union with God. Think of the four phases of traditional lectio divina—reading, meditation, prayer, and contemplation—each focusing and directing the intellect and will and ordering them to God: receiving and interpreting words, considering their meaning and application, addressing God in specific intentions, and lovingly receiving God’s presence.
Or consider the Spiritual Exercises of Saint Ignatius of Loyola, a classic handbook for retreat. Its cycles of prayer repeat three steps, each demanding discipline over one’s attention:
- First, composition: exercising imagination and memory—recalling sins, visualizing oneself in the presence of other people, even imagining particular sensations of smell, touch, hearing—all to be more aware of one’s soul and put oneself in the presence of God.
- Second, analysis: activating the intellect: conceiving, understanding, and assenting to truths, reasoning about their implications and connections, contemplating them.
- Third, colloquy: reflecting on choices and principles of choice, resolving to make good decisions, exerting the will in acts of humility and love.
This method of prayer exercises the traditional Trinitarian powers of the soul—Memory, Intellect, and Will. These are three powers that, incidentally, the algorithm wants but cannot access without our cooperation, three powers that the most advanced “artificial intelligence” will prove incapable of simulating.
At a very basic level, simply by directing our attention—taking responsibility for that to which we give our cognitive energy—we experience the mysterious agency that Simmias’s harmony theory could not account for, and we discipline the very thing that digital media is so eager to distract.
The powers that neuroscience thus finds most elusive are the powers that can protect us from assault. As it ever has been, the central challenge of spiritual discipline is: Are we choosing where to give our attention? Attention is the inner energy of the soul, and when it is sucked away and diverted, the soul falls into acedia, spiritual sloth, a failure of the will to act. A path to the deadly apathy of acedia is the vice of curiositas, which we may call a cognitive intemperance: discharging the energy of the soul’s attention without the discipline of intention.
Digital technology depends on, fosters, and exploits this cognitive intemperance. This is why so many of the books mentioned at the beginning address the problem of distraction. In our natural environment, we have plenty of things to distract us, but also plenty to remind us where we must give our attention and to move us to action. What is unprecedented about the environment of social media is its potentially limitless distraction.
The age of digital media has unleashed a profoundly threatening human experiment. By drawing us to waste not only our time, but our attention, social media seduces us to waste our souls. Our brightest engineers have trained our most powerful technology to act with the psychological craftiness of demons. To protect ourselves from the tempting distractions of technology, we can begin by asking about it and recognizing it for what it is, and by wondering about our nature and remembering who we are. Then, we can go for a walk or read a book; we can philosophize and pray.
This essay is adapted from a lecture developed for campus chapters of the Thomistic Institute. A version was delivered as a 2021 Faith and Reason Lecture for the Newman Centre at the University of Toronto.