Yes, LLMs Will Be Transformative
Not because they're sexy. Just because they work.
With dissatisfaction surrounding the recently released GPT-5, and a reassessment of AGI among tech-utopia types, it’s worth our time to summarize an oft-criticized opinion: LLMs will be transformative, in the near term, particularly for the professional working world. Here’s why.
LLM Realities
With roughly a billion monthly users, and a high rate of adoption within large enterprise workflows, major LLMs are quickly becoming omnipresent. LLMs exist, they’re on your digital devices and they await your prompt. I think no one can argue against their market momentum on a pure downloads-and-usage basis.
But are all those LLMs doing anything useful? Are they poised to have a major impact? It seems there’s an active debate on the topic.
Some view LLMs as essentially low-impact because of their known limitations. Lack of reasoning is one such limitation. Lack of grounding or true “knowledge,” with LLM statements and assertions disconnected from any world model, is another. I saw one writer express skepticism toward LLMs because they had yet to show “lasting gains in understanding, invention, or human welfare.” I mostly agree that LLMs haven’t shown those things.
But is that really the bar?
What does it mean to be transformative? Are technologies void of transformative power if they can’t show gains in understanding, don’t contribute to invention, or don’t deliver human welfare? I wonder how technologies like the gun, the steam engine, or the internet would fare when measured against such a yardstick.
Transformation is about change. And LLMs are dramatically changing the way we do things, particularly in our professional lives.
I can observe a kind of split screen on these topics. On one screen is a crew of experts, mostly academic in nature and definitely angry at tech over-hype, declaring that LLMs cannot be very impactful because they can never provide human-level intelligence. On the other screen are people performing their day jobs using pervasive LLM technology that didn’t exist 2-3 years ago. These are LLMs that write code, and suggest outlines, and review documents, and create content. These are LLMs that write; and in fact write a lot of practical documents far better than the average knowledge worker can. This isn’t AGI, it’s not a reasoning engine and it’s not Shakespeare. But it works. And because it works, people use it. A lot.
The Act of Writing and Human Nature
Let’s get real about the act of writing. By which I mean: let’s remember that most people hate to write, mostly because they suck at it. Writing may be an elucidating and enjoyable art for 1% of the population, but it’s a terrifying, mind-numbing time-suck for the other 99%. A majority of students simply don’t write anything anymore: not for history class, not for English class, not for any purpose. LLMs write for them, better than they can write for themselves. LLMs enable anyone to produce well-articulated professional prose on demand, which most people simply can’t do on their own.
(Yes, the advent of LLM-dominated writing comes with some major downsides, such as a generation of students who never learned to write. It’s definitely a scary idea. Note I didn’t say LLMs would be “transformative and good.”)
This topic, writing with LLMs, illustrates a pervasive mistake made by the AI-skeptic class of writers. Their mistake is believing that transformative AI requires, as a prerequisite, things like complex reasoning, conceptual breakthroughs, and grounded judgments. It’s just not true. The vast majority of people, in the vast majority of settings, are not engaged in complex reasoning or making conceptual breakthroughs. (And don’t even get me started on “making grounded judgments.”) Professors and think-tank researchers aside, this is not how real human people spend their time.
If you write about big ideas for a living, your job is probably safe from the coming onslaught of AI. But the vast majority of knowledge workers spend most of their time not performing any of these high-minded feats of logic. What are we doing instead?
We’re writing emails to communicate basic information, and reading emails others wrote for the same purpose.
We’re updating today’s spreadsheet to make sure it fits on yesterday’s slide.
We’re refactoring shitty code from ten years ago that no one took the time to document.
We’re trying to get 20 people to decide what they want for lunch, in time to order it before noon.
…and a million other semi-repetitive tasks in the same vein.
If you know the drudgery of work, you get it. If you don’t, I applaud you. Keep living in a world dominated by human reason. When I stack up enough AI functions to do my day job for me, I’ll join you there and savor the view. But when I get there, don’t bother telling me how un-transformative it all was. I’ll know better.
Doubters and their Mental Models
LLMs will be transformative. It’s not because they match human intelligence in all dimensions. (They don’t, and won’t.) It’s because LLMs work well enough to dramatically improve on human performance in a lot of real-world tasks. That’s not sexy but it is transformative, in much the same way the automobile was transformative. An automobile doesn’t really do anything that a horse and cart can’t do. The automobile just does it a lot better. In the same way, AI does the bulk of knowledge work a lot better than most humans do. LLMs transform work by simply doing the work we suck at, or can’t do, or don’t want to do.
So why is it so hard to say this out loud?
I know one reason. We hate The Hype. We should all recognize the tech-bro ethos is really annoying, what with its predictions of near-term AGI and its deceitful optimism about an AI-enabled utopian future. It’s especially annoying to smart, thinking people, who value reason and human intelligence as uniquely valuable assets. Tech salesmen such as Sam Altman promise an AI-dominated future where machines out-think humans across the spectrum of arts, sciences and (gasp) humanities. To the guardians of humanity’s unique capabilities, these salesmen are more than annoying. They’re dangerous. Devaluing humanity by declaring AI superior! Over-promising to investors?! The horror.
Whatever to do about such overpromising? May I suggest: just ignore it? I don’t care what Sam Altman says. He’s a salesman. Objecting to him is like objecting to the sign, “World’s Best Cup of Coffee.” We all know it’s not.
I suspect there’s another reason we don’t want to admit AI is transformative (and yes I count myself here). AI creates a justified fear of a less-human future. Perhaps I’m not helping, as I breezily describe a generation of students who can’t write. I admit some of it can appear rather bleak.
But bleakness is an outlook, not a prediction. An AI-based future will be both good and bad; and I don’t think we can foresee how the good and bad will be distributed. Things evolve in ways we might not expect.
For example: If the internet is choked with AI-generated slop, maybe we’ll scroll less and go outside for a walk more often. Maybe AI can order those 20 sandwiches for the lunch meeting, so that the office manager can take some time to write poetry or sing a few songs. We might someday come to view LLMs the way a farmer looks at his pick-up truck: as a natural offshoot of work; as an obvious tool for getting things done; as just a machine and yet inspiring some trace of human feeling… loyalty, respect, camaraderie.
The future of work is uncertain. We don’t know the form it will take. We just know LLMs will impact it dramatically. It’s best to admit this and pivot toward making AI more human-centric, more ethical, better aligned and less damaging. That’s a transformative step indeed… not for the machines, but for ourselves.