The "calculator ruined the world" argument was actually studied to death once the panic subsided. Large meta-analyses of 50 years of data show it was mostly a non-problem. Students using calculators generally developed better attitudes toward math and attempted more complex problems because the mechanical drudgery was gone.
The only real "catch" researchers found was timing. If you introduce them before a kid has "automaticity" (around 4th grade), they never develop a baseline number sense, which makes high-level math harder later on.
It's a pretty clean parallel for LLMs. The tool isn't the problem, but using it to bypass the "becoming" phase of a skill usually backfires. If you use an LLM before you know how to structure an argument or a block of code, you're just building on sand.
I think the author just doesn't know how to use LLMs well.
"Because what would be missing isn’t information but the experience. And experience is where intellect actually gets trained."
From my experience, LLMs don't cause this effect. You still get to explore a ton of dead ends and whatnot, just on a much higher level.
"You get the answer but nothing else (keep in mind we are assuming that it's a good answer)."
On the contrary here - you get to answer a ton of followup questions easily, something you don't get to ask books.
"I never so far asked GPT about something that I'm specialized at, and it gave me a sufficient answer that I would expect from someone who is as much as expert as me in that given field."
LLMs are at a junior-to-mid level in any field (and climbing every year), not senior or master level. Is that anything new? Their strength lies, among other things, in making connections between fields, and in their constant availability.
If you have an option to talk to a specialist in your field that has time 24/7 to discuss ideas with you, that's great, but also highly unusual. If you don't have such a person, an LLM that is junior-mid is way better than plain books.
For myself, I’m very much a “results” guy. Have been, for all my career. I’ve been shipping (as opposed to “writing”) software for most of my adult life. People seem to like the stuff I make.
I’m currently working on my first major project that incorporates heavy LLM contributions. It’s coming along great.
I started with Machine Code and individual gate ICs, so my knowledge goes way down past the roots.
I don’t miss it, at all. Occasionally, my understanding of stuff has been helped by that depth of experience, but, for the most part, it’s been irrelevant. It’s a first-stage booster, dropping back into the atmosphere.
I will say that my original training as a bench tech has been very useful, as I’m good at finding and fixing bugs, but a lot of my experience is in the rear-view mirror.
I have been routinely googling even the most basic stuff, for many years. It hasn’t corroded my intellect (yet), and I’m doing the same kind of thing with an LLM.
Not being sneered at by some insecure kid is nice.
I'm somewhere in between. I'm excited about building more things faster and extending my capabilities. But I also love thinking about the underlying language, runtime, algorithms, the wider system. I want LLMs to enhance this for me, I want my understanding to go up as I write less code. It's also key to my job as a lead that I maintain understanding of the system for debugging, security etc.
So if I can do both with these tools, then great. I want to cognitively offload in a way that allows me to focus on the important bits. And I'm writing instructions to the LLM to help me do that, e.g. 'help teach me this bit'. A builder and tutor at once.
I usually use ChatGPT, as a chat (as opposed to an agent).
It explains everything quite well (if sometimes a bit verbose).
Today, I am going to do an experiment: I’ll be asking it to rewrite the headerdocs in one of my files (I tend to have about a 50% comment/code ratio), so it generates effective DocC documentation. I suspect the result will be good.
Would you be aware of it if that was the case? I don’t mean this to be hostile or anything, but the scenario in which one doesn’t notice it oneself, and it goes unnoticed or silently accepted externally, doesn’t seem too far-fetched to me.
Probably, yes. If we don’t have a clear understanding of the fundamentals, it can make life difficult. My personal experience has been “layers,” with the new layer building upon, and often subsuming, the substrate.
But it does mean that there’s limits. A lot of folks start at points higher than mine, and can go much further than me.
That’s fine; as long as I understand and accept my limitations, as well as my strengths.
I have found that having a humble, generous, and non-caustic approach to other people, has been very good for my personal mental health (and career).
I have spent my entire career, being the dumbest guy in the room, and I’m not exactly a dunce. It can sometimes be quite humbling, but I’ve had great opportunities to learn.
People will often be willing to go out of their way to help you understand, if you treat them with respect; even if they are being jerks.
Life’s too short, to be spending in constant battle.
I understand the author’s sentiment but I would like to give a counter example:
I like to read philosophy and after I read a passage and think about it, I find it useful to copy the passage into a decent model and ask for its interpretation, or if it is something old ask about word choice or meaning.
I realize that I may not be getting perfect information, but LLM output gives me ideas that are a combination of live web searching and whatever innate knowledge the LLM holds in its weights.
Another counter example: I have never found runtime error traces from languages like Haskell and Common Lisp to be that clear. If the error is not clear to me, sometimes using a model gets me past an error quickly.
All that said, I think the author is right-on correct that using LLMs should not be an excuse to not think for oneself.
> I realize that I may not be getting perfect information, but LLM output gives me ideas that are a combination of live web searching and whatever innate knowledge the LLM holds in its weights.
I don't mean to be judgemental. It's possible this is a personal observation, but I do wonder if it's not universal. I find that if I give an inch to these models' thinking, I instantly become lazy. It doesn't really matter whether they produce interesting output; rather, I stop trying to produce interesting thoughts because I can't help wondering whether the LLM wouldn't have output the same thing. I become TOO output-focused. I mistake reading an interpretation for actually integrating knowledge into my own thinking, and I disregard following along with the author.
I love reading philosophy as well. Dialectic of Enlightenment profoundly shaped how I view the world, but there was not a single part of that book that I could have given you a coherent interpretation of as I read it. The interpretations all come now, years after I read it. I can't help but wonder if those interpretations would have been different, had my subconscious been satiated by cheap explanations from the lie robot.
I mean it can also depend on scale. I use hundreds of sub-agent instances to do analysis that I just would not be able to do in a reasonable timeframe. That is a TON of thinking done for me.
For some work, similar to GP's philosophy example, LLMs can help with depth and quality; they're additive to your own thinking. -> quality approach
For other things, I take a quantity approach: having 8 subagents research, implement, review, improve, review (etc.) a feature in a non-critical part of our code, or investigate a bug together with some traces. It’s displacing my own thinking, but that’s OK; it makes up for it with the speed and amount of work it can do. -> quantity approach
It’s become mostly a matter of picking the right approach depending on the problem I’m trying to solve.
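The quantity approach above can be sketched in a few lines of Python. The `review_with_llm` function here is a hypothetical stand-in for an actual model call (the name and behavior are illustrative assumptions, not a real API); the point is the fan-out shape, not the call itself:

```python
from concurrent.futures import ThreadPoolExecutor

def review_with_llm(task: str) -> str:
    # Hypothetical stand-in for a real LLM call; in practice this would
    # hit a model API. Assumed for illustration only.
    return f"review of {task!r}"

def fan_out(tasks: list[str], workers: int = 8) -> list[str]:
    # Quantity approach: run many independent reviews concurrently,
    # trading per-task depth for overall throughput. Results come back
    # in task order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(review_with_llm, tasks))
```

Each task displaces a bit of your own thinking, but eight of them running at once is work you would never have gotten to sequentially.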
While "I don't have to think, I just get the LLM to do the task" is a bit careless (or a "hype" way of putting it)... I'd reckon it's always been true that you want to think about the stuff that matters and the other stuff to be done for minimal effort.
e.g. By using a cryptography library / algorithm someone else has written, I don't need to think about it (although someone has done the thinking, I hope!). Or by using a high level language, I don't need to think about how to write assembly / machine code.
Or with a tool like a spell-checker: since it checks the document, you don't have to worry about spelling mistakes.
What upsets people is the imbalance: tasks which previously required some thought/effort can now be done effortlessly. Stuff like "write out a document" used to signal that effort had been put into it.
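The cryptography-library point can be made concrete with a minimal Python sketch using only the standard library: the caller hashes and verifies a password without ever thinking about the key-derivation internals, because someone else already did that thinking.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    # All the hard cryptographic thinking lives inside pbkdf2_hmac;
    # the caller only supplies a password and stores the salt + digest.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    # Recompute the digest and compare in constant time; again, the
    # subtle part (timing-safe comparison) is outsourced to the library.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

The same delegation happens at every layer: the library author thought about PBKDF2 so this code doesn't have to, just as the compiler thought about machine code.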
I think it could be. It doesn't have to be one or the other.
In my opinion it's entirely comparable to anything else that augments human capability. Is it a good thing that I can drive somewhere instead of walking? It can be. If driving 50 miles means I get there in an hour instead of two days, it can be a good thing, even though it could also be a bad thing if I replace all walking with driving. It just expands my horizon in the sense that I can now reach places I otherwise couldn't reasonably reach.
Why can't it be the same with thinking? If the machine does some thinking for me, it can be helpful. That doesn't mean I should delegate all thinking to the machine.
No, you outsource it because it's not your core competency. I think humans should be able to do anything and not narrowly specialise as narrow specialisation leads to tunnel vision. Sometimes you need to outsource to someone because of legal reasons (and rightly so, mostly because the complexities involved do require someone who is a professional in that area). Can some things be simplified? Of course they can, and there are many barriers that prevent such simplification. But it's absolutely insane to say - nah, we don't need to think at all, and something else can do all the work.
I google a lot (or rather, Kagi). I loved to explore the web when I was younger. But over time I lost any interest in trying to gather informational bits from increasingly shittier websites designed to have more ads and hide relevant information for as many ad slots as possible. These days I hit the quick answer button inside Kagi more often and just accept that I might have some false information in there. If it is critical to be right, I usually consult primary sources directly anyway.
I see the SDE pushback on LLMs, but most of it is unfounded. Like any new tool, if used irresponsibly of course bad things can happen. Most of the backlash from devs seems rooted in:
1. It causes a step change in productivity for those that use it well, and as a result a step change in the expectations on productivity for dev teams. Folks simply expect things to be done faster now and that’s annoying folks that have to do the building.
2. It’s removed much of the mystique of dev. When the CEO is vibe coding legit apps on their own, suddenly the SDE team is no longer this mysterious oracle that one can’t challenge or question because nobody else can do what they do. Now everyone can do what they do. Not to the same degree, yes, but it’s completely changed the dynamic and that’s just annoying some devs.
SDEs aren’t going away, but we will likely need fewer moving forward, and the expectations on how long things take have changed forever. Like anything in tech, we’re not going back to the old way, so you either evolve or you get cycled out. Honestly, some devs right now look like switchboard operators yelling at the ability of people to self-dial a telephone. Did they do it better? Yeah… but the switchboard isn’t coming back.
> “I’m Feeling Lucky” intelligence is optimized for arrival, not for becoming. You get the answer but nothing else (keep in mind we are assuming that it's a good answer). You don’t learn how ideas fight, mutate, or die. You don’t develop a sense for epistemic smell or the ability to feel when something is off before you can formally prove it.
All you're saying is that you can't imagine working on a task that is longer than 1 Google Search.
If "I'm feeling lucky" works by magic, that doesn't mean your life is free of all searching, it just means you get the answer to each Google Search in fewer steps, which means the overall complexity of tasks that you can handle goes up. That's good!
It doesn't mean you miss out on the journey of learning and being confused, it just means you're learning and being confused about more complicated things.
> you had to build a model of the world just to survive the tension?
The world the author is describing currently has LLMs in it. Whether the author likes it or not, they are here to stay. So to build a model of the world, you would still need to consult an LLM, understand how it can give plausible-looking answers, learn how to effectively leverage the tool, and make it part of your toolkit. It doesn't mean you stop reading manuals, books, or blogs. It just means you include LLMs in that list of things.
I don't think the argument is correct. A reasoning LLM will check itself and search multiple sources. It's essentially doing the same mental process as a human would. Also, consulting multiple LLMs completely breaks this argument.
IME, even when an LLM is right, a few follow-up questions always lead to some baffling cracks in its reasoning that expose it has absolutely no idea what it's talking about. Not just about the subject but basic common sense. I definitely wouldn't call it the "same mental process" a human does. It is an alien intelligence, and exposing a human mind to it won't necessarily lead to the same (or better) outcome as learning from other humans would.
Author’s central point is that an LLM answer “is optimized for arrival, not for becoming” (to paraphrase from the Google “Lucky” part).
So a reasoning LLM that does the comparisons and checks “like a human” still fails the author’s test.
That said, this still feels like a skill issue. If you want to learn, see opposing views, and gather evidence to form your own opinions, LLMs can still help massively. You just have to treat them as research assistants instead of answer providers.
Agreed, all this "but if you don't need certain skills any more, you'll lose them!" is tiring, and even more tiring because it's missing the entire point: yes, because I don't need them any more!
It feels like I'm reading an article crying "if you buy a car, you will lose your horse-shoeing skills!" every day lately.
If people used LLMs more we would have fewer instances of misinformation. Lots of comments in social media could easily be dispelled by a single LLM search.
Bad journalists are biased. Good journalists will present a story as factually as possible and as void of bias as possible (of course it's impossible to not have any biases). Opinion pieces can have as much bias as they like as long as they're strictly marked as opinions.
That's not true. Any journalist would tell you that picking the stories you choose to cover is just as much a bias as how you choose to cover them. Even then, the specific words you pick, how you ask the interviewees, how you place the story on the page, what you pick as the "related stories" — all of that is editorial and reflects an opinion.
Good journalists are open about their angle. Bad journalists tell you they are "unbiased" and "just bringing you the facts".
Or it could give out bad information and make everything worse because a subset of people seem to think LLMs are infallible or gods rather than aggregates of the knowledge they’ve consumed.
Do you really think e.g. Opus 4.6 is less reliable than the average facebook/x post (which is where most people get their news from today)?
People already just blindly believe whatever is put in front of them by the algorithm gods. Even the "@Grok is this real" spam on every X post is an improvement.
Even a small % of incorrectness quickly produces compounding effects, if you view LLMs as an information source. True or false statements are made with equal confidence, because the LLM can’t distinguish true from false.
Yawn. Just another post about high-horse attitudes regarding "muh expertise". And yet the top of the top experts in their fields (Terence Tao, Karpathy, hell, even Linus) are finding ways to make them useful. That's the crux, imo. If you can't find a way to make these tools useful for you, you are the problem, not the LLMs. There's something there, even if currently not much, but there's something there for everyone at this point.
This tech isn't going away anytime soon. It might become prohibitively expensive for individuals but it's here to stay. It's worth trying to find a use for it while it's cheap.
How very adult of you.
An analogy would be - if GPS allows you to not worry about which turn to take, you can finally focus on where you want to get.
Those are all things for which many people outsource their thinking to other people.
Just in case you didn’t know, you can append ? to any query and get a quick answer straight away.
The downside of the internet is that we get to see people agonizing over their inability to adapt to change.
Bias is useful and inevitable
LLMs are much harder to fact-check because they can make anything up based on their training data and weights without sources.
i.e. Russian networks flood the Internet with propaganda, aiming to corrupt AI chatbots.
https://thebulletin.org/2025/03/russian-networks-flood-the-i...
Hmmmmm.