Discussion about this post

AKcidentalwriter

A great article to pause with and deliberate on in peace and tranquility, because it will be a core issue in everyone's life going forward.

Marginal Gains

This is an interesting post. However, can we honestly avoid being changed by our tools? As a quote attributed to Marshall McLuhan says, “We become what we behold. We shape our tools, and then our tools shape us.” I find it hard to imagine that tools like LLMs won’t change us over time. We’re likely to outsource more and more tasks to them in the name of convenience, time-saving, and shortcuts. It will start slowly—just one task here or there—and then become a habit. I can see how getting a quick answer in the name of productivity or even under external pressures from organizations or peers would become essential. After all, most organizations don’t care whether their employees are learning; they care about productivity and output quality.

As you have also said, the critical question for me is whether we’ll take the outputs of LLMs at face value or apply common sense, intuition, and tacit knowledge to assess whether they are true. My concern is that the more these tools become part of our work and lives, the more we’ll trust them, and the more we’ll outsource our thinking. I can see this happening gradually: the convenience of these tools makes them indispensable, until we no longer feel the need to question their outputs.

I think the shift will happen in how we approach questions and answers. As I see it, "why" will become far more important than "how." Instead of focusing on how to get the answers or solve a problem, we’ll need to focus on why we trust the output or a solution. But the danger is that we might stop asking "why" altogether. If we rely too heavily on tools that provide answers instantly and effortlessly, we might lose the habit of questioning and critically evaluating the information we’re presented with.

Another point I’ve been considering is how external pressures—whether from organizations or peers—may accelerate this reliance, and people might feel compelled to use these tools uncritically over time to keep up with expectations. That’s why I think it’s so important to encourage a culture of questioning and exploration, even in environments that prioritize speed and output over deeper learning.

Staying "in the loop" as humans will require effort and vigilance. I think we need to treat AI tools like collaborators—not infallible authorities. That means questioning their outputs, cross-checking information, and applying our judgment.

I will end with a quote from Nicholas Carr: “As we come to rely on computers to mediate our understanding of the world, it is our own intelligence that flattens into artificial intelligence.”

It is a powerful reminder of the cost of over-reliance on tools at the expense of cultivating our minds.

