
At this point, it is a cycle, some form of repeated and exhausting pattern that society at large finds itself endlessly, fruitlessly, duplicating. This generalized anxiety surrounding the written word, the power that it holds to shape our understanding of the world—it’s a cliché, now; didn’t Plato want to cast the poets out of his ideal republic? I’ve heard that one before. More so than any other art form or communication technology, writing is democratic, available to anyone with a pen and paper or, more recently, the blinking cursor of a computer screen and its attendant keyboard.
Which is precisely why entire swaths of the population seem, as of late, so eager to do away with reading and writing altogether. A literate populace is a thinking populace, and this worries conservative powers-that-be, politicians and tech leaders alike. And so, for what feels like the umpteenth iteration in a very small span of time, American media, publishing, and the arts are being undercut in yet another variation of the infamous “pivot to video” push that this time is being fueled by the fantasy that is A.I.
I’m old enough—I’m in my thirties—and have had enough jobs in audience development and engagement at various media entities to feel simultaneously perturbed and jaded by this newest version of the digital panic. There is no question that American media and publishing, as industries, moved too slowly when it came to adapting to the internet; over the weekend, I read the media reporter Michael M. Grynbaum’s new Empire of the Elite: Inside Condé Nast, the Media Dynasty That Shaped America, and the chapters towards the end detailing the company’s total failure to take the internet seriously are frustrating to read and yet, given my own memories of working there in social media in 2017 and 2018, not particularly surprising. But it is also true that Silicon Valley, in its quest to dominate, has in fact fully lived out its “move fast and break things” catchphrase, leaving a fundamentally broken industry—and along with that, the negative implications this holds for American intellectual life and democracy—in its wake.
When I was in college, the fear was about ebooks. When I was in my mid-twenties, in 2017 and 2018, Facebook’s algorithm shifted towards rewarding video and suppressing text and link posts, even as, the Wall Street Journal reported in 2018, the company was massaging the numbers and reporting misleading data to publishers. Twitter—I can’t bring myself to call it X—has recently begun suppressing links, ensuring that fewer users read the work writers and publications use the site to link to. Last week, I read the latest Pew report detailing the behaviors of Google users following the company’s introduction of A.I.-generated summaries at the top of the search page. Unsurprisingly, it’s bad news: “users who encountered an A.I. summary clicked on a traditional search result link in 8 percent of all visits.” In comparison, users without an A.I. summary had a clickthrough rate of 15 percent, nearly twice as high. You might think this is foolish on the part of these companies—how is Google not completely cannibalizing its own business model?—and yet the larger media ecosystem it inhabits doesn’t seem to be much of a concern for the company: In 2023, after the passing of a Canadian law that would force tech companies to pay publishers for content—thereby mitigating the loss in revenue that tech has caused the industry—Google responded by announcing it would block links to Canadian news articles.
A loss in traffic and readership means a loss in subscriptions. With fewer readers, outlets will continue to close. There is a ripple effect for all writers, even those who are not really journalists—it impacts all of us. “It’s a tough time in the industry,” is the current standard publishing refrain, one I’ve heard from multiple editors and agents—my own and others—over the last year-and-a-half. There are fewer than seven full-time staff book critics in the United States, a viral tweet in the spring noted, a consequence of the death of local news and newspapers. The New York Times recently announced it is attempting to “expand” its arts coverage “beyond the traditional review,” meaning, yes, video. Do readers actually want this? (My guess is not really, considering the amount of click behavior and analytics I’ve pored over during my years at different publications, but when the platforms control what kinds of content get elevated by the algorithm, there’s only so much the publication can do.) There is no mid-list anymore, a wise friend told me a few months ago as we discussed the state of publishing. More so than MFAs, translators, or the literary tastes of women, it is the simultaneous hollowing out of those twinned industries, media and publishing, paired with the last forty years of their corporate conglomeration, that has impacted what does and does not get published these days.
And yet I remain fascinated by and in debt to my computer, seduced by the promise of the wild, weird internet I grew up on in the early 2000s—a less-corporate version that no longer seems to exist. My story is a common millennial one; I had a complete lack of adult interference when it came to the internet, and so I saw and experienced things that both frightened me and expanded my worldview. I wouldn’t have read nearly half the books I did by international authors if it weren’t for all of the poetry and translation LiveJournal and later Tumblr accounts I stumbled across in high school and in college, which changed my own approach to the English language and its capabilities. I’m grateful for this, though sometimes I wonder if this process can, or should, be replicated.
I wrote a novel called The Table-rappers, which I’d been working on, on and off, since 2015, following an article I wrote back then for Aeon about the rise of “creepypasta” on the internet. In it, a computer scientist in 2014 returns to his hometown of Boston and gets drawn into a series of seances through an old college friend, encountering a medium whom he becomes obsessed with exposing as a fraud. It’s based loosely on the relationship between Harry Houdini and the medium Mina Crandon in the 1920s, and as I wrote it, I found myself growing increasingly fascinated by the idea of the computer scientist as a figure of cultural relevance. I wanted to get inside his head, to peek into the mindset of someone who really believed in computers—who, unlike myself and most people I spend my time with, considered the invention of the computer to be, on the whole, a moral good rather than a potential Faustian bargain. I read a lot of programmer forums, as well as older interviews that the sociologist Sherry Turkle conducted with MIT undergraduates in the eighties. Their sense of awkward optimism was at once refreshing and naïve. Again and again, I encountered the idea that the computer and the digital self it offered could be soothing, the straightforward binaries from which it is composed a comfort when compared to the real world’s troublesome ambiguities. It is an idea that I am sympathetic to: that the flattened world inside our screens offers a sense of safety, that it’s easier to parse.
But what literature relies on is its opposite. A novel cannot be black-and-white; it depends on shadow and contrast, shades of grey: the ambiguities inherent to human relationships. “The problem was twofold,” I wrote early on, as the character, walking along the Charles River, thinks to himself about his relationship with his machines.
In the flesh world, the office and my sister’s house and even here, the empty bridge, there were ambiguities. There were double-entendres and hidden meanings, people who never fully believed what they said but instead presented themselves and their ideas archly, artificially, I always felt that I was somehow the butt of the joke. Life was easier on the screen, freed from word associations, references, and jokes. On a computer perfection was possible, even achievable. There was always an answer, always a clearly-delineated path, a binary; you were right or you were wrong. Zeroes and ones. I typed them out endlessly, the numbers that I used to code, and their languages became a part of my body, too. At last I was a part of something, I belonged to something. I was inside of something—because that’s what it feels like, when you’re really in the zone. That’s what we called it, the zone, like something out of science fiction.
… A computer doesn’t need human relationships. A computer doesn’t get bored, or sad, or angry; it’s not scared about its future. And we knew this. The others like me. Knew it and envied it. Those of us who inhabited this secret world, who had come of age with it, those of us who trusted chips, processors, hard drives, the very existence of the cloud over the weakness of human memory. What we knew deep within ourselves was that the mind was a machine. Not like a machine. We rejected all suggestions of metaphor, we refused to consider anything outside of a binary. Every part of the brain, we believed, could be easily mapped out by some iteration of digital or analogue logic, these were the same forms of logic that we and the generations previous to us, the generations that had invented telephones and fax machines and, yes, the atom bomb, had used to build the machines we now surrounded ourselves with, it was the same logic that we humans had once used to invent math itself. It was exhilarating, we found all of this exhilarating. For hundreds of years, humans had been working towards a shared goal—knowledge—and we were on the precipice of reaping its rewards. Our computers were the answer; we could create the world fully in our image. We are our own gods.
In writing this, I realized that these are also many of the same feelings I experience when I am on my computer. Online life provides something of a paradox; though you are physically alone, you think of yourself as surrounded by other people, even though all you are really interacting with are avatars, pictures, words. Similarly, the idea of the body so prevalent today—that it is a machine—is an attractive one, because it in some ways reduces the body and removes the mystery from it. If a machine breaks, you can fix it. You can understand it. This is not necessarily true of humans.
The idea that something as complex and frightening as human consciousness can be transposed into machine learning—what a fantasy! What an appealing, soothing fantasy, the idea that we can replace ourselves. Alas: the ideas that the computers come up with are stilted, the words they string together uninteresting at best. Instead, in this dream of machinery, of whirring and twisting gears—densely twining lines of code creating algorithms—turning and turning, the ever-widening gyre—in this endless pivot away from what might most define us as human, we are, more and more, being confronted with our own obscurities.
Parting gifts
- If you’re curious to read more about the history of artificial intelligence and the mystery of human consciousness, I cannot recommend Meghan O’Gieblyn’s 2021 book God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning enough.
- I once wrote an essay for The Baffler about social media management and the “business of boredom” in relation to Henry James’s novella In the Cage, which follows a telegraph operator in late nineteenth-century London.
- Or you might enjoy my short story “Pornography,” published last summer in the print quarterly of The Los Angeles Review of Books, in which a woman crawls through a computer screen to try to hold a conversation with the man watching her on the other side.