Background: Coming Home to a Dog, and the Chinese Room
I’ve been told that AI can never understand a dog. An AI chatbot can converse about a dog and describe one… but AI can’t experience the affection or loyalty of a dog, nor how it feels to come home to one. I agree with this argument when it comes to dogs, and to cats and turtles and many other things humans experience directly. But it’s a badly misleading argument when used to posit what AI can or can’t do with its limited “understanding” of things. To discuss what’s wrong with this understanding-a-dog framework, let us return to a famous (and maybe over-exposed) argument against AI understanding.
The “Chinese Room” is a well-known argument from philosopher John Searle, circa 1980, against what was then called ‘Strong AI.’ I can’t do justice to the original, but I will summarize briefly. Imagine an English speaker with zero knowledge of Chinese. This person is placed in a room, into which two strings of Chinese characters are introduced. The first string is a story in Chinese; the second string is a question about the story. The English speaker knows nothing of the symbols themselves, and hasn’t been told what they mean.
Now assume the person is also given an instruction set, written in English, which allows him to relate the first string to the second, based on the symbols alone, and to produce a third string of Chinese characters. Importantly, the English instructions provide no translations or semantic descriptions of any characters… so the reader can’t decipher the meaning of any Chinese symbol. But by following these dense and complex instructions, the person can assemble a third string of Chinese symbols, which we’ll call an “answer,” and which correctly responds to the question posed.
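To make the symbol-shuffling concrete, here is a minimal sketch in Python of what the room’s “instruction set” might amount to. The rule table and example strings are invented for illustration (Searle’s thought experiment doesn’t specify any particular rule format); the point is only that the lookup operates on the shapes of symbols and never touches what they mean.

```python
# A toy "instruction set": purely syntactic rules that map symbol
# patterns in the story and question to an answer string.
# The rules and example strings are invented for illustration.

RULES = {
    # (pattern in story, pattern in question) -> answer symbols
    ("苹果", "吃了什么"): "苹果",
    ("编译器", "用了什么"): "编译器",
}

def chinese_room(story: str, question: str) -> str:
    """Produce an 'answer' by matching symbol shapes, nothing more."""
    for (story_pattern, question_pattern), answer in RULES.items():
        if story_pattern in story and question_pattern in question:
            return answer
    return "不知道"  # fallback string, equally meaningless to the operator

# The operator only pattern-matches; no rule explains what any
# character means, or even why the output counts as a good answer.
print(chinese_room("他吃了一个苹果", "他吃了什么？"))  # prints: 苹果
```

From outside the room the output looks competent, yet nothing in the lookup required knowing what any character means, which is exactly the intuition Searle is drawing on.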
The question Searle poses is: does the person in the room know Chinese? Searle believes it’s fairly obvious the answer is no, despite the illusion of knowledge that would be visible to observers outside the room. After all, they asked questions and got good answers, all in Chinese. But the person in Searle’s argument knows nothing at all of Chinese. The Chinese characters in the room are stripped of all objective meaning; they become pure syntax. In philosophical terms they are no longer grounded in human experience; nor indeed grounded in anything.
Exercise #1: Insert an Apple
Let’s use a short example of an apple, to show what the Chinese Room Argument tells us. The Chinese word for apple (actually two symbols together, 苹果), written by someone who understands Chinese, has objective meaning. And it has similar objective meaning for a Chinese reader. Both the reader and the writer can imagine an apple: how it looks, how it feels in your hand, how it tastes when you bite into it.
The written word ‘apple’ carries this richness along with itself, for both the Chinese writer and reader. But for the English speaker knowing no Chinese, Searle would point out that the Chinese characters 苹果 are merely symbols and nothing more. Searle highlights the difference between the bare symbol ‘apple’ and the experience-informed idea of ‘apple.’ We can all probably agree there’s a big difference between the two.
But many things aren’t like apples.
Exercise #2: Insert a Compiler
The Chinese word for compiler (actually three symbols together, 编译器), written by someone who understands Chinese, has an objective meaning. Similarly for a Chinese reader. But for the majority of readers in any language, the experience of a compiler is indirect by its nature.
A compiler is a construct in computing and software; a translator of sorts. Most people come across the idea of compilers when they learn to code, or in some other context of computing. Only actual software programmers know how to use a compiler. But even for coders, the use of a compiler is fairly automatic and trivial: a step something like “select ‘compile’ to compile the code.”
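To show how trivial and invisible that step usually is, here is a tiny sketch using Python’s built-in compile() and exec(). The source string is made up for illustration; in real projects the equivalent step is usually just an IDE button or a build command.

```python
# A small illustration of the compiler as an invisible middle step.
# The source string below is invented for illustration.

source = "greeting = 'hello, world'\nprint(greeting)"

bytecode = compile(source, "<example>", "exec")  # the "compile" step
exec(bytecode)                                   # the "run" step

# For most working programmers even this much is hidden behind a
# button or a build command; "compiler" remains an abstraction.
```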
So even for most of those with experience of a compiler, the word “compiler” is mostly an abstract idea, a mere stepping stone between writing the code and running the code. Only a select few computer scientists, a sliver-within-a-sliver of individuals, have ever written a compiler or know the nuances of how one works.
In short: a compiler is not an apple. A compiler has no look nor shape nor taste nor scent. A compiler is entirely divorced from our complex sense impressions. For the large majority of us, the word “compiler” isn’t grounded in any experience whatsoever… it’s just a placeholder for an abstraction.
If we really care what a compiler is, we can look it up on Wikipedia. But when we do, what do we find? Just a bunch of references to other abstractions! We find references to computer languages and logic structures and memory access and other fairly lifeless stuff. It seems the compiler is by its nature a highly abstracted thing; a thing without shape or smell or taste; a thing which can only be understood in the context of other lifeless, shapeless, tasteless things.
And here’s where the practical diverges from the academic. Searle would say that the Chinese symbol for compiler, 编译器, has no inherent meaning to an English speaker, because it lacks ‘grounding’: grounding in human experience, in sense perceptions, in other words or ideas which are themselves informed by sense impressions. But even for a Chinese speaker, the word “编译器” isn’t grounded in human experience, and neither are the computing constructs used to define and describe “编译器”. The idea of a compiler is referential only to other abstracted ideas and words… not to direct sensory experience, and seldom even to indirect sensory experience.
For a compiler, unlike for an apple, we must understand a thing which offers no sense perceptions. No taste or smell or touch is available to us. Instead we learn about this abstraction (compiler) based solely on its abstract relations to other abstractions (compute functions, logic functions, memory registers, etc.). This sensory-free relationship of one abstraction to another may not quite be Searle’s “Chinese-to-Chinese dictionary,” which was supposedly useless to an English-only speaker. But it’s not too far off.
So the compiler is not grounded in sense perceptions the way the experience of an apple is grounded. And it doesn’t matter. I don’t need rich sense impressions to understand what a compiler is. I have my mental model, and I build it up over time as I understand more about “compiler”: what it is and what it does. Just give me the instruction manual and I’m good.
Apples, Compilers, and the Future of Work
For better or worse, the workday of a modern knowledge worker is mostly devoid of apples. Instead, our working world overflows with things like compilers. Things like spreadsheets. Router configurations. Network data. Scripts to process network data. And so forth.
Searle’s conclusion is that the Chinese Room is not thinking; it’s merely processing information. Maybe so. But isn’t that what we’re all doing most of the time? A large portion of our working world… our day-to-day thinking, discussion, and human engagement in the workplace… is about abstracted concepts which are not strongly grounded in any fundamental way. So a dominant portion of what we do with our brains is to intelligently associate one abstracted concept with another.
In an academic sense, I agree we need a bit of meaning from grounded experiential things, even though we converse in abstractions. Sooner or later I need grounding to understand any semantics. So no, I’m not saying that I can learn about the world from a Chinese-to-Chinese dictionary, or learn only from Wikipedia and then understand the whole of computer science.
But I am saying that we over-value unique human experience when we argue about things AI can never comprehend. Examples like apples and dogs and cats and other life-affirming stuff are all misleading, because they’re too grounded. The day-to-day work of knowledge industries is all about how one abstraction relates to another abstraction. If that dialogue is “grounded,” it’s only marginally so.
And that’s one reason LLMs seem so good at human conversation. Sure, the full experience of an apple may require the uniquely biological richness of our evolved human brains, which ChatGPT lacks. But mostly our brains aren’t spending the day meditating on the richness of nature. Most of our time is spent getting things done. And you can get a lot of things done, including lots of value-added interaction in text format with other human beings, without such richness. Just like an LLM does. It’s not Shakespeare or Dostoyevsky, and maybe it’s not truly intelligent in a formal sense. But it works.
Does this suggest that AI will steal your job in the future? Probably.
But look on the bright side: maybe more AI means we’ll get to spend more time at work thinking about the interesting sense-dominated stuff. Maybe the future of human work will tend toward apples and oranges and lemons, while AI worries about the spreadsheets and compilers and data-processing scripts. Let us work exclusively with the good stuff of life, and let AI deal with the lifeless stuff of work. If things go well, who knows, someday this Substack may evolve into articles about sensory, tangible things. Or poetry, even. That will be fine with me.