I think you're misunderstanding the paradigm shift completely -- AI does not just generate code Nx faster. It thinks Nx faster, it researches Nx faster, it tests Nx faster. There are hundreds of tasks that engineers are offloading to AI every day. The major hurdle right now is pivoting LLMs beyond just generating code and integrating those other tasks into workflows. This is why tool use and agentic workflows have taken engineering by storm.
Debugging, sanity checking, testing, etc. are the best uses of LLMs. Much better than writing code.
Developers should write their own code and use LLMs to design and verify. Better, faster architecture and planning, pre-cleaned PRs and no skill atrophy or loss of understanding on the part of the developer.
Funny, I have the complete opposite impression after using Claude Code for a while. I would never trust it to design anything. Never again. But it can code pretty well given a very tight and limited scope.
To clarify, AI should not do the design itself. You develop the design in conversation with AI.
I come in knowing what I need to build and at least one idea of how it should be done. I present the problem, constraints, and potential solutions, and ask for criticisms and alternatives. I can keep it as broad as possible or get more granular: struct layouts, API endpoints, etc. I go back and forth until there's an approach I prefer, and then I code that approach.
> it can code pretty well given a very tight and limited scope.
It's wildly better at tight and limited scope than large scale changes but even then I would rather code it myself.
The article addresses exactly this objection. Most importantly, it cites evidence that AI coding tools have a detrimental effect on software stability, which is basically the raison d'être of our profession. When AI produces more robust software and handles on-call shifts better than humans, I will consider programming done.
Not sure why you are downvoted, but I agree. Additionally, perhaps LLMs are just another higher-level programming language, as the author said, and they still need someone to steer them.
I'm sure it was very difficult to program in machine code, but if now (or soon) anyone can just write software using an LLM without any sort of learning, it changes everything. LLMs can plan and create something usable from simple instructions or ideas, and they will only get better.
I think LLMs will be (and already are) useful for many more things than programming anyway.
> I'm sure it was very difficult to program in machine code, but if now (or soon) anyone can just write software using an LLM without any sort of learning, it changes everything. LLMs can plan and create something usable from simple instructions or ideas, and they will only get better.
Did you read the section "Power to the People"? In it, the author dismantles your thesis with powerful, highly plausible arguments.
While I think the author is entirely right about 'natural language programming' in the current day, if LLMs (or some other AI architecture) continue to improve, it is easy to believe touching code could become unnecessary even for large projects. Consider that this is what software company executives do all the time: outline a high-level goal (a software product) to their engineering director, who largely handles the details. We just don't know yet whether LLMs will ever manage that level of intelligence and independence in open-ended tasks.

And, to expand on that, I don't know that intelligence is necessarily the bottleneck here. They can clearly tackle even large engineering tasks, but the common complaints are that they miss important architectural context or choose a suboptimal solution. Maybe with better training, context handling, and documentation, these will cease to be problems.
I don't buy that's true. The "only" part, anyway. Look at how UX with software has evolved. This is gonna be an old-man-yells-at-clouds take, but before smartphones, there were hotkeys. And man, you could fly with those things. The computers running things weren't as fast as they are today, but you could mash in a whole sequence thru muscle memory and just wait for it to complete. Now you have to poke at your phone, wait for it to respond, poke at it some more. It's really not great for getting fast at it. AI advancement is going to be like that. Directionally it will generally be better, but there's going to be some niche where, y'know what, ChatGPT-4o really had it in a way that 5.5 does not. (Rose-colored glasses not included.)
Claude, connected to Postgres (read-only, obviously) and Datadog MCP servers in addition to the codebase, can debug prod issues remarkably quickly. That’s easily a 10x win compared to a senior engineer doing the exact same debugging steps. IMHO that’s where the actual productivity boost is.
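A minimal sketch of what such a setup might look like, written out from Python as a project-scoped `.mcp.json` for Claude Code; the Datadog server package and the read-only connection string are assumptions for illustration, not details from this comment:

```python
# Hypothetical sketch of a project-scoped .mcp.json for Claude Code.
# The Datadog server package and the DSN are placeholders, not verified.
import json

mcp_config = {
    "mcpServers": {
        "postgres": {
            # Reference Postgres MCP server; hand it a READ-ONLY role's DSN
            # so the agent can inspect prod data but never mutate it.
            "command": "npx",
            "args": [
                "-y",
                "@modelcontextprotocol/server-postgres",
                "postgresql://readonly_user@prod-replica:5432/app",
            ],
        },
        "datadog": {
            # Placeholder for whichever Datadog MCP server you run; it gives
            # the agent logs/metrics/traces alongside the code it already sees.
            "command": "npx",
            "args": ["-y", "<your-datadog-mcp-server>"],
            "env": {"DD_API_KEY": "<redacted>", "DD_APP_KEY": "<redacted>"},
        },
    }
}

with open(".mcp.json", "w") as f:
    json.dump(mcp_config, f, indent=2)
```

The read-only role is the load-bearing part: the agent can correlate a stack trace, a slow query, and a dashboard in one loop without ever being able to write to prod.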
>> Within just this group the ratios between best and worst performances averaged about 10:1 on productivity measurements and an amazing 5:1 on program speed and space measurements!
> (although I’m personally skeptical of the “10x programmer” concept, the software industry overall does seem to accept it as true)
To be fair, this statement from Brooks doesn't entirely match the "10x programmer" we talk about. My take is that when someone says "10x programmer" today, they mean 10x more productive than the average, not 10x more productive than the worst. Brooks' statement is about the latter. If he'd looked at the difference between average and best, I would assume you'd get something more like a 2x or 4x programmer.
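A quick back-of-the-envelope illustration of that reading; the assumption that performance is spread roughly log-normally (so the "average" sits near the geometric mean) is mine, not Brooks':

```python
# If the worst performer is 1.0 and the best is 10.0, and the spread is
# roughly log-normal, the average sits near the geometric mean, sqrt(10).
import math

worst, best = 1.0, 10.0
average = math.sqrt(worst * best)  # geometric-mean assumption, ~3.16

print(f"best vs worst:   {best / worst:.1f}x")    # 10.0x (Brooks' ratio)
print(f"best vs average: {best / average:.1f}x")  # ~3.2x
```

Under that assumption, the same study that yields a 10:1 best-to-worst ratio only supports roughly a 3x best-to-average programmer, squarely in the 2x-4x range above.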
I didn't get the sense that the author is nervous. What I tend to see are people who are nervous that going all-in on LLM workflows might not have the payoff they are expecting, and are becoming increasingly fanatical as a result.
Just one more harness bro. Just one more agentic swarm. Please bro, just one more Claude Max subscription. Please bro.
Even without writing code, LLMs are a huge help: analyzing code, doing code reviews, documenting code, etc. Without generating a single line of "code," they hugely speed up development and take away the annoying/boring work.
I really enjoyed this article. It's well written and does a good job of dismantling the flawed arguments of language-model maxis while presenting a more realistic outlook on where we are now and where we are going.
I think the biggest benefit language models have provided me is in the auxiliary aspects of programming: search, debugging, rubber-ducking, planning, refactoring. The actual code generation has been mixed.
I had an LLM try to implement a fairly involved feature the other day, providing it with API spec details, examples from other open-source libraries, and plenty of specifications. It's also the kind of thing readily available in training data, but still fairly involved.
At first glance it looked great, and had I not spent the time to investigate deeper, I would have missed some glaring deficiencies and omissions that render its implementation worthless. I am now going back and writing it by hand, but with language models providing assistance along the way, and it's going much better.
I think people are being unrealistic in thinking that the usage of language models in their side projects represents something broader. It's almost the perfect situation for language models: small, greenfield code bases, no review, no responsibility, and no users. It goes up on GitHub with a pretty readme, and then off to social media, where they post about how developers are "cooked". It's just not a very realistic test.
In the end we will probably see large productivity increases from integrating language models, but they won't replace developers so much as augment them.
I honestly couldn't force myself to finish yet another blog post about how "we're not yet sure what impact LLMs will have on society" or whatever belabored point the author was attempting to make.
"Some random person's take on LLMs" was maybe interesting in 2024. Today it is not even remotely interesting.
There are a gazillion more interesting things happening today that ought to be of interest to the median HN reader. Can we talk about those instead?
I'm confused. If you don't want to talk about LLMs then why didn't you just flag the post and move on? Submit something interesting, upvote and comment on interesting posts, instead of feeding the engagement on this thread.
It sounds like you actually do want to talk about how much you don't want other people to talk about LLMs.
I am an AI engineer and I honestly agree. Talking about LLMs feels like the new crypto, with some nuances (many innovative things are possible and being done with LLMs, whereas crypto innovations were… few and far between).
it’s felt like the new crypto to me for about 2-3 years now.
i was doing an ML Sec phd a year or two before all this hype took off. i took one of the OG transformer papers along to present at our official little phd reading group when the paper was only a few months old (the details of this might be a bit sketchy here, was years ago now).
now i want nothing to do with the field in any way shape or form. i’m just done.
The problem with this article is that he is right, of course, but only right now. There is no reason to believe that future AI platforms won't be able to review code themselves and manage some aspects of themselves with minimal human oversight. Yes, we will likely always need a few experts.
> There is no reason to believe that future AI platforms won't be able to review code themselves and manage some aspects of themselves with minimal human oversight
There are, IMHO, fewer reasons to believe they will be able to do that than reasons to believe they won't, though.
> There is no reason to believe that future AI platforms won't be able to review code themselves and manage some aspects of themselves with minimal human oversight
Really? That's like someone during an economic boom saying "The economy is the worst it'll ever be. There is no reason to expect things to not continue to improve".
I have no stake in Fred Brooks. But No Silver Bullet seemed to be taken as gospel on this board. Sufficiently productivity-enhancing technology? Gimme a break man. Maybe you’ll get a 30% boost. Not a 10X boost.
> If its two empirical premises—that the accidental/essential distinction is real and that the accidental difficulty remaining today does not represent 90%+ of total—are true, then the conclusion which rules out an order-of-magnitude gain from reducing accidental difficulty follows automatically.
The article goes on to assume there’s no 10x gain to be had but misses one big truth.
Needing to type the code is an enormous source of accidental difficulty (typing speed, typos, whether you can be arsed to put your hands on the keyboard today…) and it is gone thanks to coding agents.
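For what it's worth, the arithmetic behind the quoted premise is just an Amdahl-style bound, easy to sanity-check (a minimal sketch; the specific fractions are made up for illustration):

```python
# Amdahl-style bound behind Brooks' premise: if a fraction `a` of total
# effort is accidental difficulty, removing ALL of it gives at most a
# 1 / (1 - a) speedup. A 10x gain therefore needs a >= 0.9.
for a in (0.3, 0.5, 0.7, 0.9):
    print(f"accidental = {a:.0%} -> max speedup {1 / (1 - a):.1f}x")
```

So the disagreement reduces to an empirical question: is typing (plus the rest of the accidental work agents absorb) really 90%+ of the job, or closer to a third of it?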
One issue is that most developers and teams are still fairly novice at pushing AI to the limit. You have to learn to step up, become an architect, and let AI work for you. Create a perfectly modular system and force the AI to produce great-quality code within bounded contexts (a sketch of what I mean follows below). All the studies that look at performance when using AI are not looking at the best AI developers. They are looking at the average joe.
To make my point clearer: you give an F1 car to a normal driver and then measure performance. It will suck. Wait a year, then measure again, and you will see an extreme increase.
I have been pushing the limits of AI since 2022, always with the goal of producing perfect quality and architecture with AI, and I would not settle for less. It's definitely doable: working fast and producing great quality.
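One hedged reading of "bounded contexts" here (the interface below is hypothetical, purely to illustrate the shape of the constraint): pin the agent to a single narrow seam, let it fill in the implementation, and review only that.

```python
# Hypothetical "bounded context" handed to a coding agent: it may
# implement this Protocol (plus its tests); callers and storage are off
# limits. Review stays small, and the architecture stays yours.
from typing import Protocol


class RateLimiter(Protocol):
    def allow(self, key: str, now: float) -> bool:
        """Return True if `key` may proceed at time `now`."""
        ...


class SlidingWindowLimiter:
    """Agent-written implementation; must satisfy RateLimiter exactly."""

    def __init__(self, max_events: int, window_seconds: float) -> None:
        self.max_events = max_events
        self.window_seconds = window_seconds
        self._events: dict[str, list[float]] = {}

    def allow(self, key: str, now: float) -> bool:
        window = self._events.setdefault(key, [])
        # Drop timestamps that have left the window, then check capacity.
        window[:] = [t for t in window if now - t < self.window_seconds]
        if len(window) >= self.max_events:
            return False
        window.append(now)
        return True
```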
I'm reminded of this scene from The Matrix: https://www.youtube.com/watch?v=cD4nhYR-VRA where the older wise man discusses society's reliance on AI:
"Nobody cares how it works, as long as it works"
We're done. I for one welcome our new AI Overlords, or, more accurately, the tech bro billionaires who are pulling the strings.
> That's like someone during an economic boom saying "The economy is the worst it'll ever be. There is no reason to expect things to not continue to improve".
Until recently. dramatic pause
And then AI happened.
> It's definitely doable: working fast and producing great quality.
Everyone will end up there eventually.