The AI+Philosophy Series
Introducing a four-part series devoted to some big-picture questions raised by AI
This is the first of a four-part series on big-picture questions posed by AI. Using the insights of major philosophers as a guide, these articles aim to provide practical, relevant guidance for generalists. Meaning: for everyone.
Let me guess: AI is in your feed. I knew it. On social media, on Substack, and across the internet generally, it feels like AI is ever-present. Like it or not, it’s hard to escape the current trend: AI everything everywhere, all at once.
I’ve seen cycles of hype before. We’re in one now. But that doesn’t mean AI will under-deliver in the long run. Many of us remember the dawn of the internet, the emergence of the smartphone, and the rise of e-commerce … all overhyped at first, and all pervasive, world-changing paradigms soon thereafter. AI is next up. And it’s gonna be big. (For a bit of this perspective, check out this article.)
What does it all mean? I’m not asking whether DeepSeek is better than ChatGPT, or whether the US truly leads China by a large margin, or whether Nvidia is worth a few billion dollars more or less. All good questions. But all too small. We need bigger questions.
When questions get big, the answers get philosophical. With the advent of AI, big philosophical questions have become essential to our real-world understanding.

We face some roadblocks on our way to enlightenment. Philosophy can be frankly boring. Metaphysics, the study of philosophical first principles, has a special reputation for being unintelligible. Voltaire, no lightweight himself, once explained that “When one man speaks to another man who doesn’t understand him, and the man who’s speaking doesn’t understand himself: that is metaphysics.”
Following Voltaire’s lead, many intelligent people have spent years in education without studying metaphysics at all. Anecdotally, no one seems to miss it. But suddenly, we need it. AI creates the need for it. We can’t get our heads around an AI-driven future unless we’re willing to wade into metaphysical questions.
Although I believe that last sentence, I can’t really prove it. I don’t have the skill. But I can do the next best thing, which is to demonstrate what I mean with some real-world examples. The remainder of this article lays out three big philosophical questions, each of which I’ll dig into in future essays over the next few weeks. A few years ago these would have felt like academic sideshow questions, maybe interesting in theory but with little practical application. But with the emergence of AI, each of these questions demands an answer… not just for philosophy doctoral students, but for regular people trying to live their daily lives.
AI Philosophy Question #1: Are consciousness and complex knowledge exclusively the province of the human mind?
I hope this first question demonstrates my point.
As a kid I watched sci-fi movies in which computers became sentient, and then did things like take over the spaceship (2001: A Space Odyssey) or take over the world (Terminator, The Matrix). What a scary idea! Back then our parents reminded us that it wasn’t real. It was just a movie.
Today that reminder has been rescinded. A thinking, self-aware AI program could plausibly emerge in the next few years. Don’t believe it? AI is already rapidly exceeding human capabilities in a wide variety of domains. It’s simulating realistic (and sometimes unsavory) human behaviors in ways that earn its human creators millions of dollars. It’s walking, driving around, writing software, and discussing philosophy in its spare time. What more does AI have to do before we consider the possibility that it could become a conscious, sentient, thinking being?
I don’t say it absolutely will happen. Just that it could happen. And if it’s possible, then we need to sharpen our minds to the possibility of sentient AI, and prepare for both the opportunities and the risks. The next article in this series will offer some basic philosophical insights on what to expect, and how to manage expectations.
AI Philosophy Question #2: What is the nature of truth? Is it fixed or relative?
As a kid, when I wasn’t watching sci-fi movies with sentient AI, I was watching crime dramas on TV. And those crime drama detectives were always searching for the same thing: hard evidence.
But these days ‘hard evidence’ is easily faked by AI. Image, voice, and video content can be generated to depict any person, doing any thing. Today’s “deepfakes” are very good but not yet perfect; perfection is just around the corner. Deepfakes are already exposing and enabling the darker corners of human behavior. And faked content is as easy to create as real content… in fact, maybe easier. When fakes and reality co-mingle pervasively, the truth comes under pressure. AI will increase that pressure. A lot.
The means by which we distinguish truth from untruth, and the degree to which truth can (or cannot) be relative, have long been debated by philosophers. The ancient Greeks argued over truth and relativism; the language-oriented philosophers of recent decades waded into the same waters. I’ll review some of their thinking, and attempt to derive practical advice for the AI world, in a future article.

AI Philosophy Question #3: What is the nature of cause and effect… and does it matter?
Causation was a hot topic in 18th-century philosophical circles. Hume and Kant debated it. Newton, Leibniz, and Galileo had all relied on it. Everyone felt they had a grasp of what it was. But upon further review, causes and effects are often slippery.
The discussion of cause and effect pre-dates the emergence of AI. But AI distorts and darkens the cause-and-effect paradigm. AI does stuff… and there’s no exact reason why. It just does. We can describe AI outcomes with high-level phrases, like “the AI’s training data led to that outcome” or “the AI is just regurgitating what it’s seen before.” But we can’t get precise about cause. If and when AI gets a big decision wrong, we’re very limited in our ability to explain why.
Actually, some philosophers argue that “causes” are not 100% real, metaphysically speaking. But the idea of cause and effect seems very real to our human minds. And that matters. In order to live side by side with AI, we’ll need some consensus on what it means to cause an outcome. Warning: AI may have its own opinion on this, and many human thinkers may disagree. But let’s at least start the discussion.

In coming weeks I’ll post three new essays in this series, one for each question listed here. These will be in a philosophical vein, with reference to major philosophers and direct applications of philosophy to current AI topics. Most importantly, these will be practical essays, written for (and by) non-philosophers in simple language. I don’t know about you, but I don’t have time to go back to school for a philosophy degree. I need practical advice today on how to live my life tomorrow. That’s what I hope to develop in the coming articles. I hope you’ll read, comment, engage, and enjoy.