Just last week I started reading the French author Annie Ernaux, winner of the 2022 Nobel Prize in Literature, whose memoir / autobiography / autofiction work makes no mention of computer intelligence but whose very existence has given me the courage to piece together the disparate notes I have been collecting on AI. I am only halfway through The Years, Ernaux’s magnum opus, so this is by no means a review, but reading the 82-year-old’s excavations of moments from her life has reminded me of the bilateral ontology between AI and literature. That is, not only can the emergence of AI reveal to us what it means to write, but the work of writers is also crucial in informing the ways we read texts and images generated by AI.
Among the fragments of memories Ernaux describes in The Years, one involves the liberation of relapsing into the “original tongue” at home, away from the correct grammar of the French she had to recite at school:
“Then as soon as we got home, without a second thought, we reverted to the original tongue, which didn’t force us to think about words but only things to say and not to say — the language that clung to the body, was linked to slaps in the face, the Javel water smell of work coats, baked apples all winter long, the sound of piss in the night bucket, and the parents’ snoring.”
It is difficult to articulate what is so great about Ernaux’s writing, or even what it is, in the face of its mastery. The sheer relentlessness with which she gives the reader fragments of moments, events, images, objects, and quotes from her memory of things past makes it hard to pin her subject down to anything but the generality, and simultaneously the specificity, of none other than her own life. Her writing, as the work of great writers does, reminds me how much we take for granted the transcription of experience into words, because she does it with such acuity and affect. We think we all do this, but to deploy words, to articulate, with the purest sincerity is an act combining the visceral and the cerebral. The very act of articulation overcomes the Cartesian split: meaning and matter are enchained, a point I recall the philosopher Michael Marder making in a presentation he gave in 2020. Marder was reflecting on dust as a gateway to understanding the correlation between words and worlds, a fascinating method that I encourage listening to, but here I’ll highlight the first proposition he made:
“Worlds are inconceivable without words. Words are impenetrable without worlds.”
Marder expands on this by suggesting that some acts of translation involve nothing but the dust of worlds and of words. In fact, he explains that this is how non-human agents interpret their worlds without connecting them to words. Applied to AI as a non-human agent, meaning and sense arise from repetition that is generalised (interpolated) from patterns of vibrations (statistical regularities in text) or spatial configurations (the proximity and sequence of bits).
It is therefore pointless to anthropomorphise AI, in both design and perception. Even if we grant that AI contains its own agency, the body with which each model moves through its world (its own version of the internet) clings to language in ways that are still being developed and only beginning to be understood.
Op-eds by a number of top journalists have been quick to dismiss LLMs as hallucinatory liars or petulant gaslighters when they are not being uncreative or over-diplomatic. And while a lot of the responses from Bing’s chatbot or the original ChatGPT do indeed come off that way, the voices of these AI critics are less critical than they are incredulous and fear-mongering. On the other hand, this type of interaction and provocation is to be expected from the ‘chat’ format in which LLMs are being packaged for us to interface with. We have to ask: since we are still in the beta phases of accessing the technology, is the question-and-answer format one that helps us understand how AI functions, how machines learn, and what we can do with them? How can we invoke AI’s non-human writerliness and make sensible tools out of it?
We could be a little more creative (should I say ‘generative’?) in understanding AI than trying to fit, and limit, it into the shoes of artists, architects, graphic designers, doctors, therapists, and so on. My friend, the artist and technologist Ruben Ramos Balsa, likes to reach for the metaphor of Pinocchio. While lifelike and animated with the curiosity and roguishness of the boy he resembles, the only part of Pinocchio that grows is his nose, and only when he lies, tricking Geppetto, his maker. The only living matter in Pinocchio’s body is, ironically, also what betrays his semblance of a real boy.
Incorporating natural language processing abilities into programs may be useful in ways we have yet to imagine, but designing machines to talk, write, and move like humans (their final trick), whether in function or in interface, is the least human thing to do with the technology. Nor are we overcoming anthropocentric idealism if we synthesise non-humans in the model of the ‘perfect’ human, or at least in the model of humans performing services perfectly, whatever those services may be. The paradox we need to disentangle is designing for the human condition without falling prey to the tendency towards human exceptionalism. I believe that, especially when it comes to LLMs, this is an opportunity to dig deeper into what it means to articulate, whether as the outcome or the process of connection between words and worlds, or, as Marder speculates, between their dusts. We are already witnessing a diminished patience for reading words. As we scale out products that can produce infinite content, the least we could do is learn to read, once again.
The world is so big,
Jing
❍ ❍ ❍
There is just so much out there being written on AI right now. I thought I’d share a list (to date) of writings that have been helpful to me:
AI Reveals the Most Human Parts of Writing, Katy Ilonka Gero. / Link.
Ted Chiang’s analogy of LLMs as Xerox photocopying machines is very helpful in understanding what “generative” means. Note: Chiang used to write technical manuals alongside sci-fi books, i.e. he knows what he’s talking about. / Link.
What LLMs can and cannot do, according to computational linguist Emily M. Bender. / Link.
A team of researchers at Epoch AI published a paper in October 2022 predicting that ChatGPT and other programs running on LLMs will run out of high-quality reading material by 2027. / Link.
The Model is The Message, Benjamin Bratton and Blaise Agüera Y Arcas. / Link.
Nothing on AI but if I have convinced you the work of writers is important,
shared his approach(es) and process to writing in . It is a beautiful, comforting read. Interestingly, this portion of his Substack is where he answers questions sent in by readers, which are always heartwarming in their sincerity. / Link.
Annie Ernaux is often compared to Marcel Proust, but I find her writing reminds me much more strongly of Susan Sontag, whom I had initially included in this piece because she is another grand example of what a writer is. I edited her out for focus, and also because Brian Dillon does a much better job than I could have. / Link.
Musician and writer
asked ChatGPT to write a song that he and his bandmate supposedly would have written, then realised it produced what the A&R people wanted from them but couldn’t get: “the AI bot version of us, defined by our interests but filled with generic content”. / Link.
🤍🤍🤍