This week, I guest-posted over at EELS, a weekly engine of book recommendations run by my funny and amazingly well-read friend Susan Howson. (Naturally, I urge you to subscribe.) It doesn’t take long to write short blurbs of three books you’ve read and loved, but of course, I imagine it would be far quicker if I asked ChatGPT to do it instead. ChatGPT has consumed books—inhaled them by the millions—even if it hasn’t “loved” any of them in the way we think of that word. Despite that, it would have produced something. (Here is that something, in fact.) Would it still have read well, though? Would it have sounded like me? Would it mean anything at all, to anyone?
In other words, as the strap to my review-essay in the New Republic puts it: “If a computer can write like a person, what does that say about the nature of our own creativity?”
The essay is a reflection on two new books: Who Wrote This? by Naomi S. Baron, and Literary Theory for Robots by Dennis Yi Tenen. It also has a tiny link to a funny reading story (insofar as these exist). Years ago, in the thick of my newfound admiration for David Shulman, I picked up a book called The Rasa Reader, believing it was by him. I had just finished Vikram Chandra’s Mirrored Mind, which deals with (among other things) ancient Sanskrit grammarians and dramatists, and their theories of aesthetics. I’d also read and re-read Shulman’s SUBLIME (there is no other word for it) essay on Kudiyattam, which I recommend so often that I keep a pdf of it on my phone, ready to send to anyone I mention it to.
On a long train journey, I cracked open The Rasa Reader, primed for prime Shulman. I’d read around a hundred pages—all illuminating, but also denser and harder work than Shulman usually is—before I realised my mistake. While buying the book, in a moment of mental flatulence, I’d mixed up my Indologists, mistaking Sheldon Pollock for Shulman. Mortified, I sank back into my train seat—but continued reading, because The Rasa Reader is really all kinds of rewarding and thought-provoking. (More on that in another multi*storied, perhaps.) Anyway, all of which is to say that when I was reading Baron’s and Tenen’s books, I found myself unexpectedly reminded of Pollock’s research and wrote about it in my essay. Would ChatGPT have made that particular connection? I fancy not.
The first third or so of the essay is below, and the full essay is here. If you need a pdf, you know whom to ask: the guy who keeps emergency pdfs on his phone.
The most nauseating, addictive thing about writing is the uncertainty—and I don’t mean the is-anyone-reading? or will-I-make-rent? kind. The uncertainty I’m talking about dogs the very act. This business of writing an essay, for instance: Which of ten thousand possible openings to choose—and how to ignore the sweaty sense that the unseen, unconceptualized ten thousand and first is the real keeper? Which threads to tug at, without knowing where they lead, and which to leave alone? Which ideas to pick up along the way, to fondle and polish and present to an unknown reader? How to know what sentence best comes next, or even what word? A shrewd observer will note that I am complaining about the very essence of writing itself, but that has been the long-held privilege of writers—and they enjoyed it in the secure comfort of their uniqueness. Who else was going to do the writing, if not the writers who grouse about writing?
Now along come these language engines, with suspiciously casual or mythopoeic names like ChatGPT or Bard, that suffer not an iota of writerly uncertainty. In what can only be called acts of emesis, they can pour out user manuals, short stories, college essays, sonnets, screenplays, propaganda, or op-eds within seconds of being asked for them. Already, as Naomi S. Baron points out in her book Who Wrote This?, readers aren’t always able to tell if a slab of text came out of a human torturing herself over syntax or a machine’s frictionless innards. (William Blake, it turns out, sounds human, but Gertrude Stein does not.) This unsettles Baron, a linguist who has been writing about the fate of reading for decades now. And it appears to be no lasting consolation that, in some tests, people still correctly recognize an author as artificial. Inexorably, version after version, the AIs will improve. At some point, we must presume, they will so thoroughly master Blakean scansion and a chorus of other voices that their output—the mechanistic term is only appropriate—will feel indistinguishable from ours.
Naturally, this perplexes us. If a computer can write like a person, what does that say about the nature of our own creativity? What, if anything, sets us apart? And if AI does indeed supplant human writing, what will humans—both readers and writers—lose? The stakes feel tremendous, dwarfing any previous wave of automation. Written expression changed us as a civilization; we recognize that so well that we use the invention of writing to demarcate the past into prehistory and history. The erosion of writing promises to be equally momentous.
***
In an abysmally simplified way, leaving out all mentions of vector spaces and transformer architecture, here’s how a modern large language model, or LLM, works. Since the LLM hasn’t been out on the streets to see cars halting at traffic signals, it cannot latch on to any experiential truth in the sentence, “The BMW stopped at the traffic light.” But it has been fed reams and reams of written material—300 billion words, in the case of GPT-3.5—and trained to notice patterns. It has also been programmed to play a silent mathematical game, trying to predict the next word in a sentence of a source text, and either correcting or reinforcing its guesses as it progresses through the text. If the LLM plays the game long enough, over 300 billion or so words, it simulates something like understanding for itself: enough to determine that a BMW is a kind of car, that “traffic light” is a synonym for “traffic signal,” and that the sentence is more correct, as far as the real world goes, than “The BMW danced at the traffic light.” Using the same prediction algorithms, the LLM spits out plausible sentences of its own—the words or phrases or ideas chosen based on how frequently they occur near one another in its corpus. Everything is pattern-matching. Everything—even poetry—is mathematics.
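That prediction game can be caricatured in a few lines of code. Below is a toy next-word predictor that simply counts which word most often follows which in a tiny made-up corpus; it stands in for what an LLM does with billions of words and vastly richer context. This is not ChatGPT’s actual machinery, only the shape of the game.

```python
from collections import Counter, defaultdict

# A made-up corpus standing in for the 300 billion words an LLM trains on.
corpus = (
    "the bmw stopped at the traffic light . "
    "the car stopped at the traffic signal . "
    "the bmw waited at the traffic light ."
).split()

# Tally how often each word follows each other word (a bigram model --
# real LLMs condition on far longer contexts, but the game is the same).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("stopped"))  # at
print(predict_next("traffic"))  # light ("light" follows twice, "signal" once)
```

The model knows nothing about cars or lights; “danced” simply never follows “stopped at the traffic” in its data, and that absence is all “more correct” means here.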
We still don’t know precisely how humans grasp language, although it isn’t the LLM way; no infant that I know of consumed 300 billion words before saying “Mama.” But in his slim new book, Literary Theory for Robots, Dennis Yi Tenen, an associate professor of English at Columbia University, proposes that the way we use language to create works bears some similarities to the machines’. “Thinking and writing happen through time, in dialogue with a crowd,” Tenen maintains. “Paradoxically, we create new art by imitating and riffing off each other.” Subconsciously or otherwise, a writer milks inspiration out of libraries and conversations, and draws assistance from dictionaries, thesauruses, and style guides. “We think with our bodies, with tools, with texts, within environments, and with other people.” A writer relies on the books she has ingested in a less calculating fashion than an AI does, but they’ve made her into a writer all the same. It was always an error, Tenen writes, “to imagine intelligence in a vat of private exceptional achievement”—to buy into the fable of the writer in her lonely garret, manufacturing words and ideas de novo.
In this notion of distributed intelligence, there is something both democratizing and destabilizing—a sneaky but egalitarian mode of murdering the author. Tenen insists, though, that we shouldn’t agonize too much over the source of intelligence. Who cares if our thinking is closer to the synthesis of LLMs than to the divinely ordained originality held dear by the Romantics, as long as we have an effect upon the world? Certainly not Aristotle. “In the Aristotelian model,” Tenen writes, “intelligence is the GOAL of thought.” (The caps lock letters are Tenen’s, not mine or Aristotle’s.) It’s Plato who held intelligence to lie within the department of the interior—a private, nebulous thing that occasionally led to enlightenment. Pick your philosopher.
Even at the summit of literary creation, fiction writers yielded to the seeming inevitability of recombination. Tenen’s potted history of authorial hacks, the richest section of his book, begins with Georges Polti, an enterprising Frenchman who in 1895 published a book called The Thirty-Six Dramatic Situations, to help dramatists write new plays. Once you’d eliminated supplication, deliverance, vengeance, pursuit, disaster, revolt, and the other 30 symptoms of the human condition, he implied, what else was left? (Polti wasn’t afraid to get specific: Among the subtypes of the “pursuit” situation were “pursuit for a fault of love” and “a pseudo-madman struggling against an Iago-like alienist.”) “They will accuse me of killing imagination,” Polti wrote, but in fact, his primer aspired to free playwrights from the pursuit of mere novelty, so they could devote themselves to truth and beauty. Mark Twain invented a self-gumming scrapbook for authors, into which they might paste notes, newspaper snippets, and images, for subsequent inspiration. (His secretary once filled six scrapbooks with clips about the Tichborne trial in London, involving a no-name butcher who claimed the title to an English peerage. Twain concluded that the tale was too wild to be of use to a “fiction artist”—but it did form the basis of Zadie Smith’s latest novel, The Fraud.) Companies sold devices like the Chautauqua Literary File and the Phillips Automatic Plot File Collector, into which writers stuffed their reference materials, so that they could later pluck out a setting, a character, or the seed of a plot. It was ever thus, Tenen implies—the magpie approach to thinking, the collage as the modus operandi of writing. Why are we unnerved by LLMs following those same principles?