
I’m not here to diss philosophers. I enjoy reading philosophy and I get a lot out of it. When it comes to big ideas, big-picture frameworks, and imagination, philosophy is great. “I think, therefore I am”… such a brilliant line, and so inspirational!
But when it comes to hard questions that matter in a practical way, we don’t typically call a philosopher for the answer. For practical topics in life we have better options… electricians, plumbers, engineers, coders, doctors, and sometimes even economists. Philosophers are confined, in some ways by definition, to the non-practical. Confined to the bigger picture, if you will.
But occasionally the bigger-picture question becomes a very practical question. For example: will AI ever become truly human-like in its capability and intuition?
Are human insight and expression possible with AI? Or are machines destined to remain machines, chock-full of logic and information but lacking the human range of ideas, imagination, and knowledge?
Since the days of Socrates and Protagoras, philosophers have debated the nature of human knowledge. Does pure knowledge exist as something independent of the human mind, so that our task is to discover it using our powers of reason? Or is knowledge empirical by its nature, so that knowledge and our experience are built of the same stuff?
The renowned philosopher David Hume was a champion of the latter viewpoint. He was an empiricist… not just any empiricist, but in many respects the empiricist of his time (the 1700s) and ever since. Hume famously posited a paradox: we cannot use reason to infer future outcomes from past events. We do infer the future from the past, all the time. But it’s not based on reason; it’s just a kind of patched-together guess from our experience, and nothing else. For example, we may watch a few games of billiards, after which we can intuit what will happen when future billiard ball A strikes future billiard ball B at a certain angle. We do it, and we’re pretty good at it. But it’s not a logical exercise based on firm rules. Instead our prediction is merely empirical… it’s just our projection of the future based on what we’ve seen in the past. At the heart of human knowledge, according to David Hume, empirical knowledge is all there is. If you peel the onion of human thought looking for pure reason at its center, you will never find the center. All you will find is layer after layer of experience. Our reams and reams of human experience, accumulated over years and years of day-to-day living, are the foundation of knowledge for Hume.
I’m not sure how many coders, engineers, policymakers or business executives read David Hume these days. I’m guessing not many. But maybe they should. Because what those AI coders are building is Hume’s pure-experience-based thinking machine. After all, what is AI except reams and reams of data, weighted in a fabric that ties those data points together into a view of reality? Layer upon layer of data, with no rule-based rationalist logic at its center. Sound familiar? If data is experience (and I think these are just two sides of a single coin), then AI works the way our human mind works.
And therefore: if David Hume’s view is correct, AI will eventually be capable of behaving in very human ways. As an amateur empiricist myself, that’s how I’ll place my bets. And “eventually” may be in, like, a couple of years.
I can’t predict the future… but I can see some blind spots in the current arguments against AI’s capability to be human. The biggest of those blind spots relates to the co-mingling of “AI” and “Large Language Models (LLMs).” The two are related; LLMs such as ChatGPT are a form of AI. But LLMs are not the only form of AI, not even close. So, for example, I’ve seen articles where people ask LLMs to do math problems and conclude that LLMs have major blind spots when it comes to solving even basic math.
Mathematics is the home turf of the rationalists. In the rationalist view, we humans use logic and reason to solve math problems. Our human minds can’t intuit solutions, except for the most basic math (how to count and maybe some simple addition and subtraction). To learn any real mathematics we must employ more than experience. We must employ the rational and logical powers of the human mind. Reading a math book, evaluating a mathematical proof, listening to a lecture and understanding the true correctness of a solution based on logic and reason… these are the ways we learn math! So when an LLM makes a simple math error, it’s viewed as a limitation of empiricism: LLMs don’t have rational capabilities and so can’t do math well. Or so goes the rationalist argument: “See? It can’t generalize solutions to even simple math problems with any reliability.”
True. But highly misleading. After all, when we ask LLMs to do math, we’re employing a tool built on language training to perform a task grounded in mathematics. (The word “language” is literally embedded in the name!) You might as well prepare your teenager to ace the English portion of the SAT and then, after months of rigorous training in English, hand them the math portion. They wouldn’t do well. And they would rightly protest the unfairness of it all.
Put more simply: if we want to ask computers about math, we shouldn’t use an LLM. We should instead use an (easy-to-create) Large Math Model, trained on mathematics rather than on language. Or better still: we could just use a basic calculator function, the kind that has existed on computers since the 1980s.
After all, how hard is it to add a calculator to an LLM? Not hard at all. Once the one program can call on the other as needed, AI’s combined powers of mathematics suddenly far exceed those of any human on the planet, at least for 99% of practical math problems. Remember, AI can learn more about chess in one hour than grandmasters know after a lifetime. Do we really think AI is doomed to math insufficiency over the long haul, just because it doesn’t use rational, rules-based logic?
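To make that concrete, here is a minimal sketch in Python of what “one program calling on the other” can look like. The llm_decide function is a hypothetical stand-in for a real language model; commercial LLM APIs expose this same pattern under names like “tool calling” or “function calling.” The point is simply that the arithmetic itself is handled by boring, deterministic code.

```python
# Minimal sketch of an LLM delegating arithmetic to a calculator "tool".
# llm_decide() is a hypothetical stand-in for a real language model that
# reads the question and emits a structured tool-call request.

import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

def calculator(a: float, op: str, b: float) -> float:
    """The deterministic calculator that has existed on computers since the 1980s."""
    return OPS[op](a, b)

def llm_decide(question: str) -> dict:
    """Hypothetical: a real LLM would parse the question and request a tool call."""
    # Pretend the model has read "What is 12.5 * 8?" and produced this request:
    return {"tool": "calculator", "args": {"a": 12.5, "op": "*", "b": 8.0}}

def answer(question: str) -> str:
    call = llm_decide(question)
    if call["tool"] == "calculator":
        result = calculator(**call["args"])
        return f"The answer is {result}"  # a real LLM would phrase this in prose
    return "I'm not sure."

print(answer("What is 12.5 * 8?"))  # -> The answer is 100.0
```

The language model handles the language; the calculator handles the math. Neither needs to be good at the other’s job.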
Similarly for physics: generative AI creates various images and videos with physically unrealistic representations of the world… rain falling upward, objects passing through each other, and so forth. These failings are taken by some as evidence for the rationalist viewpoint. On this view, AI lacks the rational thinking necessary to ‘get’ physics; humans have it and AI lacks it. And indeed, physics knowledge is lacking… in generative AI trained on language and image input.
But I can tell you from experience: physics knowledge is absolutely not lacking in the kinds of physics-oriented AI that power machine vision, autonomous robotics, and self-driving cars. Robotics engineers train AI not on reams of language and images, but on reams of physics data: accelerometer and gyroscope readings, videos of bodies in motion, force-sensor measurements, and so on. Guess what? After training on physics data, those AI models know physics. Like, really, really know physics, so well in fact that PhD-level scientists and engineers are sometimes moved off projects because AI has made them obsolete. Maybe these AIs can’t articulate Newton’s third law of motion with the elegance of Newton himself. But they can consistently predict the correct outcome of physics problems based on their reams of relevant data, data that is consistent with Newton’s third law. If AI can always solve the practical problem, who really cares if it can’t articulate the answer with elegance? (Maybe these physics AIs can reach out to their LLM pals to help them write it more elegantly. My teenagers do that sometimes.)
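For readers who like to see the point in miniature, here is a toy sketch of my own (nothing to do with the robotics projects described above): fit a model to simulated observations of falling objects and let it predict a case it never saw. No law of gravity appears anywhere in the fitting code; the familiar ½·g·t² relationship is discovered from the data.

```python
# Toy illustration of "learning physics" purely from observation data.
# We simulate noisy measurements of how far dropped objects have fallen,
# fit a generic quadratic, and use the fit to predict an unseen case.
# The 9.81 appears only in the simulated sensor, never in the learned model.

import numpy as np

rng = np.random.default_rng(0)

t = rng.uniform(0.0, 2.0, size=300)                  # observation times (s)
fall = 0.5 * 9.81 * t**2 + rng.normal(0, 0.02, 300)  # measured fall distance (m), noisy

coeffs = np.polyfit(t, fall, deg=2)    # purely empirical fit, no physics hard-coded
print(coeffs)                          # ~[4.9, ~0, ~0]  ->  fall ≈ ½·g·t²

print(np.polyval(coeffs, 3.0))         # predicted fall after 3 s: ≈ 44.1 m
```

Scale the idea up from a quadratic fit to a deep network, and from simulated drops to real sensor logs, and you have, in spirit, the empiricist recipe described above.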
It’s true that elegant solutions found in nature are discovered and distilled into language by humans, and AI is not yet ‘elegant’ in the traditional sense. But maybe we need to reconsider the meaning of the word ‘elegance.’ Way back in 2016 I heard a talk from Karl Greb, formerly of Texas Instruments and later at Nvidia. Greb is a leading author of several standards for electronic systems safety, encompassing both traditional software and leading-edge self-driving and AI functions… a truly knowledgeable guy and someone to be taken seriously. At that conference 8+ years ago, he gave a summary presentation about an AI-based self-driving prototype built on a very simple-to-understand idea: something like “just don’t hit stuff.” It worked, in a basic sense.
Just as many today don’t believe in a human future for AI, most people back then did not believe in the possibility of fully self-driving vehicles. But Greb’s presentation showed the early seeds of exactly that: a computer-driven car, implemented using AI. Like all AI, it was enabled at its core not by formal rational logic, but by empiricism… by zillions of experience-based data points crunched by zillions of GPUs running AI-based code.
Was it elegant? In the words of Greb himself, “Brute force contains an elegance all its own.”
I’m guessing David Hume would agree.