Multiple times per week I have the same conversation. It goes something like this:
- AI will make developers irrelevant
- Why?
- Because LLMs can write code
- Do you know what I do for a living?
- Yes, write code?
- Yes, about 2-5% of the time. Less now.
- But you said you are a developer?
- I did
- So what do you do 95-98% of the time?
- I understand things and then apply my ability to formulate solutions
- But I can do that!
- So why aren't you?
The developers who still think their job is about writing code will perhaps not have a job in the future. Brutal as it may sound: I'm fine with that. I'm getting old and I value my remaining time on the planet.
Business owners who think they can do without developers because they think LLMs replace developers are fine by me too. Natural selection will take care of them in due course.
There are also those for whom that percentage is higher, let’s say 6-50%.
> I understand things and then apply my ability to formulate solutions
The AI is coming for that too.
You might just be lucky to be in circumstances that value your contributions or an industry or domain that isn’t well represented in the training data, or problem spaces too complex for AI. Not everyone is, not even the majority of devs.
People knocking out Jira tickets and writing CRUD webapps will often end up with their livelihood taken away. Or bosses will simply expect more output for the same or less pay, forcing them to use AI to keep up.
There are periods of time where I might spend 80% of my time "coding", meaning I have minimal meetings and other responsibilities.
However, even out of that 80% of my time, what fraction is actually spent "writing code"?
AI can be an enormous accelerator for the time I'd normally spend writing lines of code by hand, but it doesn't really help with the rest of the work:
- Understanding the problem
- Waiting for the build system and tests to run
- Manually testing the app to make sure it behaves as I'd like
- Reviewing the diff to make sure it's clear
- Uploading the PR and writing a description
- Responding to reviewer feedback
There are times when AI can do the "write the code" portion 10x faster than I could, but if it's production code that actually matters, by the time I actually review the code, I doubt it's more than 2x.
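That 10x-versus-2x gap is essentially Amdahl's law applied to the workflow: if only the code-writing slice gets faster, everything else bounds the total speedup. A minimal sketch with illustrative fractions (these numbers are assumptions, not measurements):

```python
# Amdahl's-law-style estimate of overall speedup when only the
# "writing code" portion of the job is accelerated.
# The fractions and speedups below are illustrative, not measured.
def overall_speedup(coding_fraction: float, coding_speedup: float) -> float:
    """Total speedup when only `coding_fraction` of the work gets faster."""
    return 1.0 / ((1.0 - coding_fraction) + coding_fraction / coding_speedup)

# If hands-on-keyboard coding is 20% of the job and AI makes it 10x faster:
print(round(overall_speedup(0.20, 10), 2))  # 1.22
# Even at 50% coding time, a 10x coding speedup yields under 2x overall:
print(round(overall_speedup(0.50, 10), 2))  # 1.82
```

Whatever the exact fractions, the bound explains why "10x at writing code" collapses to roughly 2x once review, testing, and shipping are counted.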
I dunno, man. I've been doing this for 20+ years and I think we're at a really important fork in the road where there are two possibilities.
The first is that AI is achieving human-level expertise and capability, but since they're now being increasingly trained on their own output they are fighting an uphill battle against model collapse. In that case, perhaps AI is going to just sort of max out at "knowing everything" and maybe agentic coding is just another massive paradigm shift in a long line of technological paradigm shifts and the tooling has changed but total job market collapse is unlikely.
The other possibility is that we're going to continue to see escalating AI capability with regard to context, information retrieval, and most importantly "cognition" (whatever that means). Maybe we overcome the challenges of model collapse. Maybe we figure out better methodologies for training that don't end up just producing a chatbot version of Stack Overflow + Wikipedia + Reddit. Maybe we actually start seeing AI create and not just recreate.
If it's the latter, then I think engineers who think they are going to stay ahead of AI sound an awful lot like saddle makers who said "pffft, these new cars can only go 5 miles per hour."
I normally say that I have zero concerns regarding AI in terms of employment. At most I am concerned with learning best practices for AI usage to stay on top of things.
Its ability to write code is alright. Sometimes it impresses me, sometimes it leaves me underwhelmed. It certainly can't be left to do things autonomously if you are responsible for its output.
Moderately useful tool, but hellishly expensive when not being subsidized by imbeciles who dream of it undermining labor. A fool and his money should be separated anyway.
What I am really concerned about is the incoming economic disaster being brewed. I suspect things will get very ugly pretty soon.
Anecdotally, it feels like something materially changed in the US software hiring market at the start of this year to me. It feels like more and more businesses are taking a wait and see approach to avoid over-investing in human capital in the next few years.
It also feels like the hiring "signal", which was always weak before, is just completely gone now, when every job you do advertise receives over 500 LLM-written applications and cover letters that all look and feel the same.
The pro-athlete comparison in this article is a bit silly IMO - there are obvious physical issues that come with aging if you rely on your muscles to make money. If you compare to other fields of knowledge work, such as law or medicine, there are loads of examples of very experienced, very sharp operators in their 40s and 50s.
> Anecdotally, it feels like something materially changed in the US software hiring market at the start of this year to me. It feels like more and more businesses are taking a wait and see approach to avoid over-investing in human capital in the next few years.
My guess is companies overhired in COVID and between that experience and an uncertain market they don't want to make the same mistake twice.
Honestly, AI doesn't feel like it's affecting hiring needs from the trenches. We don't have engineers sitting on their hands because AI wrote up everything the leadership could imagine.
Instead, we get an economy that feels like it's on the cusp of a fall or at the very least a roller coaster. Poor tax incentives to hire. ZIRP is long gone. And the hiring managers are overrun with slop.
But bosses are happy to say it's AI because that makes you sound in control.
Software engineering today is almost nothing like the role it was 30 years ago.
Maybe if you somehow stick with the same company for your entire career, it could feel somewhat similar... but I doubt it, as 'best practices' and many other things cause it to change.
The days of 'lifetime career' had already gone for most people, way before AI arrived.
I'm repeating what others have essentially said, but ask yourself what's on your resume. If it says "Software Engineer" and that's all it talks about, then yea you might not find it's a lifetime career.
But if it's a diversity of things (that include software development) then you probably have a lifetime career ahead of you.
I've been writing software for over 40 years but I've never seen myself as having a software engineering career. I've been a research assistant in geophysics, a marine technician on research ships, a game developer, an advisor to the UN, and a lot more. Yes all through that I used software, but I did a lot of other things in the process of using it.
I keep reading about how AI will be fine because people can just retrain for different careers. However, I never read what those careers are or who is going to pay for retraining.
I certainly don't have the money or time to go back to college and start a new career at the bottom.
The argument is that “that’s what always happened in the past”.
Which is true, but it’s true as long as it’s not true.
The classic example of how drastically this kind of thinking can fail is Malthusian theory: that populations would inevitably outgrow their food supply, because food production grew linearly while population grew exponentially. This held for all of history right up until Malthus actually made the observation.
At a mechanistic level, the "we have always found other jobs" argument misses that the reason we've always found other jobs is that humans have always had an intelligence advantage over automation. Even something as mechanical as human input on an assembly line was ultimately dependent on the human ability to make tiny, often imperceptible, adjustments that a robot couldn't.
But if something approximating AGI does work out, human labor has absolutely no advantage over automation so it’s not clear why the past “automation has created more human jobs” logic should continue.
Totally agree, and would add another way “that’s what always happened in the past” is a terribly weak argument. Things might have always worked out at the societal level so far, but very often do not at the individual level. Countless successful craftsmen have had their livelihoods ruined by technological changes and spent their remaining years impoverished. How many people funding AI would be willing to throw their own life away for the good of some future strangers that may or may not be born? I'm pretty sure the answer is <=0.
> Which is true, but it’s true as long as it’s not true.
It also isn't true. The story of the last 50 years has been that technology, especially computer and communications technology, has facilitated the concentration of wealth. The skilled work got computerized, or outsourced to India or China. That left U.S. workers with service jobs where they have much lower impact on P&L and thus much less leverage.
In my field, we used to have legal secretaries and law librarians and highly experienced paralegals. They got paid pretty well and had pretty good job security because the people who brought in the revenue interacted with them daily and relied on them. Now, big firms have computerized a lot of that work and consolidated much of the rest into centralized off-site back-office locations. Those folks who got downsized never found comparable work. They didn't, and couldn't, go work for WestLaw to maintain the new electronic tools. The law firms also held on to many of them until retirement or offered them early retirement packages, and then simply never filled the positions. It used to be a pretty solid job for someone with an associates degree, and it simply doesn't exist anymore.
The only thing keeping the job market together is the explosion in healthcare workers. My Gen-Z brother and sister in law are both going into those fields. In a typical tertiary American city, the largest employers are the local hospital and perhaps a university or community college. Both of those get most of their revenue directly or indirectly from the government. It's not clear to me how that's sustainable.
I'm also pretty sure that in past industrial transitions, many of the people who lost their jobs at the start of the change never found better ones. It took a generation or so for new opportunities to really be found and fine-tuned, and you're competing for those new roles with younger people anyway.
If AI does take a lot of white-collar work, is it a lot of comfort that maybe jobs in a very different sector will be better in 20 years?
Did the younger people find better jobs? You used to have all these jobs for people who were maybe a bit smarter than average with good judgment. In the 1990s, the local community college used to advertise associates degrees for paralegals. That's a job that doesn't exist in the same way anymore thanks to computers. Now it's become an internship for kids with top credentials before they go to law school. Which is fine for them, but what about everyone else?
It seems to me like all of these people are flocking now to healthcare fields. That seems totally unsustainable.
It’s also not that true, and highly dependent on a lot of factors.
Anecdotally I see a lot of schadenfreude online about tech jobs after a decade or two of lecturing everybody from Appalachians in coal country, to Midwestern autoworkers, that they should just “learn to code.”
> The classic example of how drastically this kind of thinking can fail is Malthusian theory, that populations would collapse because food growth was linear while population growth was exponential. This was true for all of history until Malthus actually made this observation.
Malthusian observation can still be true...It only has to be true once, and the only reason people say it isn't right now is due to industrial fertilizers and short memories.
> However, I never read what those careers are or who is going to pay for retraining.
There aren't any careers, and if there were, you would have to pay. Corporations certainly won't, except in extremely rare situations where they have to in order to compete.
It's not going to happen, just as it didn't happen for skilled industrial workers whose jobs got outsourced to China. The government will pay just enough in welfare to keep the situation manageable. Then they'll demonize you in the culture, as a Luddite, etc.
I think the idea of being an employee is fundamentally changing. Not saying it's good or bad, but it's shifting to a more entrepreneurial phase where people have to step out of their 9-to-5s and find ways to deliver value that others want to pay for.
We saw this pre-AI with Uber and DoorDash. I think as AI automation dies down and most companies are competing at a near-optimal level with the new tools, we'll need humans again in more traditional roles to build the next generation of innovations. And then the whole cycle will repeat.
The second part seems obvious to me: the ones who are getting retrained. If it's some kind of formal education, depending where you are, maybe the state at least for part of it.
Education in what, though? No idea. And if there was one answer, and it's true that there will be fewer software developers, you'd likely be competing with many people for few jobs.
This is the story that's been written since the Luddite revolts, as far as I know. The successors in that case were the capitalists who spent a significant amount of time and money convincing the constabulary and political figures to side with them. People were shot and jailed in the worst cases. The best case, workers were left without work or sent off to work-houses where they became indentured servants to the state.
The last work-house closed in the 1930s.
That all started not because people were afraid jobs were going to be replaced by the new loom. People had been using looms for centuries. They were protesting working conditions: low wages, lack of social protections when people were let go, child labor, work houses, etc. There were no labor laws at the time to protect workers... but there were these valuable new machines that the capital owners valued greatly. The machines were destroyed as leverage: a threat.
Since the capitalists ultimately "won" that conflict, it has been written, by technocrats, that technological progress is virtuous and that while workers will initially be displaced, the benefit to society will be enough that those displaced will find productive work elsewhere.
But I think even capitalist economists such as Keynes found the idea a bit preposterous. He wrote about how the gains in productivity from technological advances aren't being distributed back to workers: we're not working less, we're working more than ever. While it isn't about displacement of workers, it is displacement of value and that tends to go hand in hand.
I think asking, "Where do I go?" is a valid question. One that workers have been trying to ask since the Luddites at least. Unfortunately I think it's one that gets brushed under the rug. There doesn't seem to be much political will to provide systems that would make losing a job a non-issue and work optional.
That would give us the most leverage. If I didn't have to work in order to live I could leave a job or get displaced by the latest technological advancement. But I could retrain into anything I wanted and rejoin the work force when I was good and ready. I wouldn't have to risk losing my house, skipping meals, live without insurance, etc.
Also, it's not necessarily true that there will be other great careers available. This seems to just be an assumption people are making.
Of course, there are jobs that will still require human labour for some time yet, but in reality a lot of jobs that require physical human labour are now done in other parts of the world where labour is cheaper.
Those which cannot be exported like plumbing or waitressing only have limited demand. You can't take 50% of the current white-collar workforce and dump them in these careers and expect them to easily find work or receive a decent wage. The demand simply does not exist.
Additionally, at the same time as white-collar jobs are being lost, an increasing number of "low-skill" manual labour jobs are also being automated. Self-checkout machines mean it's harder to get work in retail, robotaxis and drone delivery will make it harder to find work in delivery and logistics, and robots in warehouses will make it harder to find warehouse jobs.
It seems to me there is an implicit assumption that AI will create a bunch of new well-paid jobs that employers need humans for (which means AI cannot do them) and which cannot be exported abroad for cheaper. What well-paid jobs would even fit the category of being immune to both AI and outsourcing? Are we all going to be really well-paid cleaners or something? It makes no sense.
A lot of the advice we're seeing today about retraining as a construction worker or plumber seems to assume that there's unlimited demand for this labour, which there simply is not. And even if hypothetically there were about to be a huge increase in demand for construction workers, it would take years just to have the machinery, supply chain and infrastructure in place to support millions of people entering construction.
The most likely scenario is that people will lose their jobs and will be stuck in an endless race to the bottom fighting for the limited number of jobs that are left in the domestic economy while everything else is either outsourced or done by robots and AI.
The better advice is to start preparing for this reality. Do not assume the government will or can protect you. When wealth concentrates, corruption becomes almost inevitable, and politicians have families to look after too.
Please take this seriously. Even if I'm wrong, it's better to prepare for the worst than to assume everything will be fine and you'll be able to retrain into a new well-paid career.
At least in the US, the only major non-AI growth field seems to be healthcare to deal with the swell of baby boomers living longer than people have before.
But if we're waiting to be paid to retrain there, I wouldn't hold our collective breath.
If by software engineering, one means typing code character by character into a text editor, sure it's going to be difficult to find someone to pay you for it.
If you mean creating software, well we are creating more software than ever before and the definition of what software is has never been so diverse. I can see many different careers branching off from here.
We are experiencing what Civil Engineers experienced going from slide rules to calculators. Or electrical engineers going from manual circuit path drawing to CAD tools.
The interesting thing to me is that, Software Engineering will have to evolve. Processes and tools will have to evolve, as they had evolved through the years.
When I was finishing university in 2004, we learned about the "software crisis" era, the Waterfall development process, and how new "iterative methods" were starting.
We learned about how spaghetti code gave way to Pascal/C structured programming, which gave way to OOP.
Engineering methods also evolved, with UML being one infamous language, but also formal methods such as the Z notation for formal verification, or ABC and cyclomatic complexity as measurements of software complexity.
Which brings me to today: now that computers are writing MOST of the code, the value of current languages and software dev processes is decreasing. Programming languages are made for people (otherwise we would still be writing assembly). So now we have to change the abstractions we use to communicate our intent to the computers, and to verify that the final instructions are doing what we wanted.
I'm very interested to see these new abstractions. I even believe that, given that all the small details of coding will be fully automated, MAYBE we will finally see more Engineering (real engineering) Rigor in the Software Engineering profession. Even if there still will be coders, the same way there are non-engineers building and modifying houses (common in Mexico at least)
Calculators and CAD tools do not give you non-deterministic answers. Both simply automate part of the manual work without creating anything "new". I haven't used CAD tools, but I did use level editors such as Trenchbroom, where what gets automated is the construction of the 3D shapes you want to make. Back in '96, when id Software was creating Quake, there were very few pre-drawn shapes in the level editor; they had to make the blocks themselves, so it was very difficult and time-consuming to build complex shapes such as curved walls and tunnels. Then better tools were invented, and now it is much easier to create a complex shape. But you don't type "a Quake level with theme A, and blah blah" and then get a more or less working level -- that is what AI is doing right now.
I think the right analogy to calculators and CAD tools is an IDE with IntelliSense for SWE -- instead of typing code character by character, we can tab to automate some part of it.
But I agree with your conclusion -- SWE is changing, whether we like it or not. We need to adapt, or find a niche and grit it out to retirement.
In 2020, there were two companies competing with each other. Each employed 100 programmers to do its job, and we all know how those organizations operated: perpetually behind, each feature added generating yet more possible future features. We've all lived it and are still largely living it today.
In 2026, both companies decide that AI can accelerate their developers by a factor of 10x. I'm not asserting that's reality, it's just a nice round number.
Company 1 fires 90 of their programmers and does the same work with 10.
Company 2 keeps all their programmers and does ten times the work they used to do, and maybe ends up hiring more.
Who wins in the market?
Of course the answer is "it depends" because it always is but I would say the winning space for Company 1 is substantially smaller than Company 2. They need a very precise combination of market circumstances. One that is not so precise that it doesn't exist, but it's a risky bet that you're in one of the exceptions.
In the time when the acceleration is occurring and we haven't settled in to the new reality yet the Company 1 answer seems superficially appealing to the bean counters, but it only takes one defector in a given market to go with Company 2's solution to force the entire rest of their industry to follow suit to compete properly.
The value generation by one programmer that can be possibly captured by that programmer's salary is probably not going down in the medium and long term either.
Your hypothetical ignores the distribution of programmer talent. Company 1 can pay more per person and hire 10x programmers, who can then leverage AI to produce the same or more as Company 2.
We have seen this in other knowledge industries. U.S. legal sector job count is about the same today as it was 20 years ago. But billing rates have exploded and revenues in the 200 largest firms have increased more than 50% after adjusting for inflation. Higher-end law firms have leveraged technology to be able to service much more of the demand and push out smaller regional competitors.
My concern would be whether creating that software pays enough to keep up with skyrocketing costs of living. In the past, the jobs created by automation have generally been lower paid with less autonomy.
Ignoring the preference of people generally wanting to live in HCOL areas, this only works if every company hires equally from LCOL areas. One of the benefits of living in a HCOL area is access to the job market it provides. It's much easier to get hired for a software position living in San Francisco than it is living in Deming, New Mexico.
but moving to a lower COL area can reduce that amount of public and private services one gets access to, no? network connectivity will, for example, likely be worse out in the sticks
Unfortunately, in America places with low cost of living are generally, to put it diplomatically, unpleasant places to live. That's even more the case if you don't fit into the white, cis, straight, and Christian box that rural areas are willing to accept.
This problem is not a software engineering problem nor an AI problem but a problem of the balance of power between working hard vs. investing. If the people who believe in working hard organize and slow down the tendency to rig everything for investors, then the markets should stabilize at a more generally prosperous place.
The balance of power is dictated by economic facts, not by organizing or politics. Auto workers in 1950 weren't better organized than auto workers in 2026. They just had more leverage because they weren't competing with auto workers in China. Likewise, Silicon Valley isn't paying people writing web apps $$$ because those workers are organized. They are doing it because they don't have a feasible alternative. If AI enables them to do more with less, they'll take that option.
In a world where software programming/architecting is solved by AI, value will accrue to people with expertise in other domains (who have now been granted the power of 1000 expert developers), not the people whose skillsets have been made redundant by better, faster and cheaper AI tools.
It could go either way. Don't forget that LLMs also have expertise in the other domains. Who would do better - the chemist with vibe coded app or the developer with vibe coded chemistry?
My premise is that a vibe-coded app will be indistinguishable from a ‘hand-crafted’ one. So in that scenario the chemist wins, because the developer has no value to add.
It is clear to me that SWE and ML research will be subsumed before other domains because labs are focusing their efforts there, in their quest to build self-improving systems.
There will be more software in the same way there is more agricultural output today.
The idea that productivity gains which result in more of something being produced also create more demand for labour to produce that thing is more often wrong than true, as far as I can tell. In fact, it's quite hard to point to any historical examples of this happening. In general, labour demand significantly decreases when productivity significantly increases, and typically people need to retrain.
Except we are now in the golden age where people with 20 or 30 years of experience know what quality software is - or at least what it is not. So they are able to steer the LLMs. Once this knowledge is gone - the quality could go downhill.
I really wish seemingly intelligent people would stop using the abstraction analogy (like the article does). The key word is: determinism. Every level of abstraction (incl. power tools, C, etc.) added a deterministic layer you can rely on to more effectively do whatever it is that you're doing - same result, every time. LLMs use natural language to describe programming and the result is varied at the very best (hence agents, so we can brute-force the result instead). I think the real moat is becoming the person who can actually still program.
People always say this but it’s misguided imo. Yes LLMs are not deterministic, but that’s totally irrelevant. You aren’t executing the LLMs output directly, you’re using the LLM to produce an artefact once that is then executed deterministically. A spec gets turned into code once. Editing the spec can cause the code to be updated but it’s not recreating the whole program each time, so why does determinism matter?
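The artifact argument can be made concrete with a toy sketch. This is a stand-in, not a real model call: the "generation" step is stochastic, but whatever artefact it emits is ordinary code that executes deterministically from then on.

```python
import random

# Toy stand-in for an LLM: "generation" non-deterministically picks one of
# several behaviorally equivalent implementations. Purely illustrative.
def generate_artifact(rng: random.Random) -> str:
    variants = [
        "def add(a, b):\n    return a + b",
        "def add(a, b):\n    total = a + b\n    return total",
    ]
    return rng.choice(variants)

# Generation step: which source text you get varies run to run...
artifact = generate_artifact(random.Random())

# ...but once produced, the artifact is an ordinary deterministic program.
namespace = {}
exec(artifact, namespace)
assert namespace["add"](2, 3) == 5  # holds for every variant the "model" emits
```

The nondeterminism lives entirely in which artefact you get, not in how the chosen artefact behaves, which is the parent's point about specs being turned into code once.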
Software engineering has never been a lifetime career. When I got in to the field in ~2000, grizzled old veterans who had been in it since the 70s said "Get ready to retrain every 5 years." Sure enough, I've retrained and switched specialties every ~5 years, from Java Swing development to web development to Google's proprietary stack to mobile development to data engineering to cryptocurrency back to Google's stack and then to engineering management.
The thing is, software engineering has never been about the code or the programming language. It's always about knowing how to solve digital problems. There's a reason why the programming language used in my university CS courses was "math", and they religiously avoided teaching concrete languages or frameworks or technologies that were popular at the time. If you understand the patterns of how to recognize and solve problems, and how to apply existing research toward them, you can implement that in any language.
That still holds in the AI era. I've found that just knowing "Here's the architecture of how the solution should look like" and being able to call out the LLM when it goes off track has made me way more effective at LLM-based coding. I can frame the problem quickly, I can specify the architecture, and I can keep the LLM from spinning its wheels or giving plausible but wrong answers. That's a superpower in a world where your coder might hallucinate.
Argument A: AI means you don't learn as much, so even though you are more effective, it inhibits your growth and you shouldn't use it. However, on a pragmatic level, it's effective to hire a bajillion people, fire them at will, and get AI to do everything. You will get so many JIRA tickets closed and so many lines of code written.
Argument B: AI means you don't learn as much, and the single most useful work product of a software engineer is knowing how the code functions, so it's depriving your company of the main benefit of your work. Also, layoffs are terrible business strategy because every lost employee is years of knowledge walking out the door, every new hire is a risk, and red PRs are derisking the business.
Institutional and personal knowledge seem similar, but the implications of each are radically different.
Unless I'm missing something, there's an obvious logic issue here.
If we truly need to sacrifice our skill to be productive by using LLMs that atrophy us, then the only devs that have a limited lifespan are us. The next ones won't have a skillset to atrophy since they won't have built it through manual work.
Also, I hereby propose to publicly ban the "LLMs generating code are like compilers generating machine code" analogy, it's getting old to reargue the same idea time after time.
I don't think it's just that. There's also the fact that if you're working with C or C++ or any systems-level language, you typically do know how to read assembly because you've stumbled upon it for some reason, and if you're writing low-level programs (which is typically what these languages are used for), you will definitely at some point need to know how to read assembly and maybe even write some. But with LLMs the entire field has shifted. You don't need to know anything to write any language, and you don't even need high-level computer science knowledge nowadays to get something that works -- and the world increasingly just seems to want something that works.
Was it ever a lifetime career? Haven't most people looked around and asked themselves where are all the 50+ engineers? They basically don't exist in large numbers. Ageism is real in this industry. You either save up enough money to retire early, switch into management, or get forced out of the industry eventually. AI is just accelerating the trend. I see very few junior engineers resisting AI. I see a LOT of staff+ engineers resisting it. Just look at the comments on HN. Anti-AI sentiment is real.
If you are lucky and got in early, then probably yes, it could be a lifetime career. It's like all careers, when you joined early, you got a lot of opportunities, you also rode the wave, you eventually rose to the top if you grit through.
It's a lot easier to be early than to be smart or quick.
In 1996? Software development was the hot ticket to the upper middle class in the early 80s when I was a recent CS grad, and I was already working with people who were in it for the money. By the late 90s, if you could spell “HTML”, you were making decent money as a web developer. This all came crashing down during the Dot Bomb collapse, but SW has been pretty popular for most of my career, and it just continued to get more popular, especially as salaries got more inviting.
In retrospect the Dot Bomb was a bump in the road. Yes, some people who only knew enough HTML to be a "Webmaster" might have been filtered out, but pretty quickly anyone who could really build software had opportunities greater than before.
The differentiator is augmenting reasoning with AI versus replacing reasoning with AI. But those who choose to replace their reasoning with AI probably weren't good at it to begin with; because if they were, they'd choose not to replace it. The exception is if AI can actually replace reasoning (which it can't, yet): then it's game over for a career in software engineering anyway.
Seems the solution here is the same it's actually always been if you want career progression: be more than just a code jockey. The true value of an engineer is to be plugged into overall roadmaps, broader thinking around product, how to achieve company goals, etc etc.
Yes, LLMs might dramatically reduce the amount of code we write by hand. But I'm a lot less convinced they'll solve all of the amorphous, human-interacting aspects of the job.
My experience has been that companies actively work to prevent people from becoming more than just code jockeys. For example, most of the places I've worked have viewed code delivered as the ONLY metric used to evaluate performance. Attempts to contribute to roadmaps or strategy are ignored at best and punished at worst.
80% of my day-to-day job has never been pumping out lots of code. It is a complicated career, isn't it? We do a lot of alignment, design, and thinking. I can't even agree with the idea of outsourcing thinking; I think AI is very good at helping us think clearly, but it doesn't really "think" for us.
If AI becomes good enough to easily produce maintainable, high-quality software, then I really can't see how demand for software engineers would not plummet. Even lots of the non-coding work that software engineers do, such as accurately capturing what the client actually wants, will become much less valuable: currently a misunderstanding of the client's requirements is catastrophic and can waste months of labour; with AI it could become a matter of a few hours lost at most. So I can understand the argument that software engineering careers might be safe because AI may plateau and we might never reach the level where it's actually capable of producing good software. But I absolutely don't buy that software engineering will be safe if such AI does exist. Even if your current work is just 20% actual coding, you must remember the second-order effects that will take place once quality code generation is 1000 times faster.
AI can also do alignment and pull from its vast training dataset for design and "thinking" -- because 99% of the problems in this world have already been solved, multiple times, maybe not in the exact same format, but in a very similar one.
I also see that in the future humans will adapt to AI, instead of the opposite. Why? Because it's a lot easier for humans to adapt to AI than the other way around. It's already happening -- why do companies ask their employees to write complete documentation for AI to consume? This is what I call "adaptation".
I can also imagine that in the near future, when employment plummets, when basic income becomes general, when governments build massive condos for social housing -- everything new will be required to adapt to AI. The roads, the buildings, everything physical is going to be built with ease of navigation by AI in mind. We don't need a general AI -- that is too expensive and too long-term for the capitalist class to consider. We only need a bunch of AI agents and robots coordinated in an environment that is friendly to them.
Is anything today a lifetime career? I’ve had at least five or six job descriptions over my time, and at least a few of them pretty much don’t exist anymore, or are changed beyond recognition.
Virtually the entire blog is about AI, with a ridiculous publishing rate (https://www.seangoedecke.com/page/5). Funny how I can look at this site's HTML and know right away it was done with AI.
Can we stop upvoting vibe published articles?
The arguments are flawed and don't even make sense to anyone who does software
Absolutely untrue, you could have a solid career writing back office or internal software in financial services, insurance, any number of industries. Would they make you a millionaire? No. But they'd pay for a nice house in the suburbs and raising a family.
On the contrary, in an efficient economy, every business operations manager (MBA) would be a skilled software engineer, able to comfortably manage data flows and design custom automated processes. There's so much potential energy there in unlocking this technical literacy.
Less "pure" programming, but lots more programming in general.
> I hope that this isn’t true. It would be really unfortunate for software engineers. But it would be even more unfortunate if it were true and we refused to acknowledge it.
More AI Soothsaying. Not so hard on the Inevitabilism this time.
> Construction workers don’t say that being a good construction worker means not lifting heavy objects. They say “too bad, that’s the job”
I don't know, maybe in your part of the world, but where I'm from we have a series of robust worker-protection laws that try to limit the damage the work does to you. We generally consider it a bad thing for workers to damage their bodies, and if we could build houses without that, we'd prefer it.
In this specific case we do have techniques to build software without causing damage, so why change that?
This post is arguing that maybe software engineering should start being harmful, even though we know it doesn't have to be. It's a post by a guy begging to be fed into the capitalist meat grinder. Meaningless self-sacrifice.
Was it ever? It's always seemed weird to me that people even think 'software engineering' is a career.
It's a tool for knowledge work.
No carpenter is a specialist in drills.
It seems to me that the best way to navigate a long term career is to have another specialty and use software engineering as a tool within that specialty.
> Software engineers didn’t just disappear after age 40.
At the end of the '90s and the beginning of the '00s (the dot-com bubble) it was a common saying that if, as a programmer, you didn't have a very successful company by age 30 or 40 (and were thus basically set for life), you had basically failed in life; exactly because "everybody" knew that programming is a "young man's game" (i.e. you likely won't get a programming job anymore when you are, say, 35 or 40 years old).
> Software engineers didn’t just disappear after age 40.
> At the end of the 90th and beginning of the 00th ("dotcom bubble") it was a common saying that if as a programmer, when you are 30 or 40, you don't have a very successful company (and thus basically set for life), you basically failed in life; exactly because "everybody" knew that programming is a "young man's game"
That seemed commonly held among folks participating in the dot-com bubble. Plenty of people had been doing it for decades even as the bubble was growing.
> Software engineers didn’t just disappear after age 40.
>> is rather a very recent phenomenon.
Not really. It's not that they disappeared, it's that they're a small fraction of the overall SWE population as a side-effect of how much that population has grown.
Software is wood, not drills, and if we somehow invented bacteria that gradually built an ugly but saleable house when fed on water and nutrients and nudged into shape, I bet carpenters (well, framers or whatever they're called in the US) would have an identity crisis too.
I kind of disagree. You are describing a kind of person who is extremely valuable: someone who is proficient in SWE but who also has domain-specific skills in some niche.
That's great, but it's nowhere near the norm, and people have been doing generalist software engineering for decades. There has been enough generalist work for long enough that it has been a very reasonable career.
IMO AI is the first thing that has ever actually challenged that.
I'd disagree with this analogy:
"No carpenter is a specialist in drills." And I think it's an interesting lens through which to look at the evolution of our tools.
I think there are trades where tool (or process, if I may be allowed to extend the analogy) specialists exist and are highly valued. My dad is a plumber, so I'll use that example, but I'd trust something similar is true for carpentry. There are specialists by task/output (new construction, repairs, boilers, etc.) but also tool-specialist plumbers and companies: for example, drain-clearing equipment or certain kinds of pipe for handling chemicals other than water are very specialised, and there are roles for them because the thing they enable, the criticality of the task, and often the cost and complexity of using the tool are high enough to make specialisation valuable.
IMO software has, for the 10 years I've been working in it, been in an unusual position where the tools (languages, engineering practices, tech stacks) were super technical and involved, but could also be applied to a large number of problems. That is the perfect recipe for tool specialists: a complex tool with high value and broad domain/problem-space applicability.
Because of that tool specialisation, we've separated the application of the tool to a problem/domain from the tool use itself. Reducing the complexity of applying these tools to many problems means all domain specialists will use them, relying less on tool specialists.
Imagine a McGuffin tool for attaching any two materials together, but which took a degree to figure out (loose hyperbole here), that suddenly you could use for 5 bucks and a quick glance at the first page of the manual. An industry that used to have lots of McGuffin engineers would be mega disrupted, and you could argue that those tool specialists would have to identify more with what they were building than with the McGuffin they were using.
I tell my boys (both in HS now), the combination of a specialized skill/knowledge + competent computer programming is the sweet spot. For example, my oldest wants to go into Petroleum Engineering which is great but I told him to still learn software development and get comfortable solving problems with code. Having specialized Petroleum Engineering knowledge combined with being a competent software developer is a powerful combination.
I think the logical next step is that "XYZ knowledge worker" will become a software engineer of sorts. Not literally writing code, but at minimum encoding processes/workflows into some language.
If you're a paralegal or an accountant who can't manage their workflows with AI, you're going to be way less productive than someone who can.
And if you're a paralegal or an accountant who can manage a lot of your workflows with AI, you don't need custom software (hence less dedicated software engineers).
There's no category difference between being an expert in carpentry vs masonry and being an expert in drills vs hammers. They are both just areas of expertise.
Going down the path of trying to define what is expert functions and what is "merely" a tool using anything but descriptive technique is nonsense.
Expert functions are just those areas where using a tool is sufficiently difficult to require expertise.
> The career of a pro athlete has a maximum lifespan of around fifteen years. You have the opportunity to make a lot of money until around your mid-thirties, at which point your body just can’t keep up with it.
If you believe this about your software career, how do you think you're going to switch into another career as a junior and keep up?
Are people seriously thinking that you can make yourself dumber by using a chat UI?
If talking to an AI makes me dumber and limits my career, then all the customer support people that ever existed were in the same or a worse position: talking to dumb humans on chat all day, answering tickets about the same topics, and linking the same docs over and over. This makes no sense.
You're misrepresenting the potential problem. It's more along the lines of using AI stops you exercising the cognitive processes you would doing things yourself and those encompass skills, knowledge and brain function that can atrophy. For an extreme example you can look at cognitive decline in the elderly which can be mitigated by taking part in activities that are cognitively stimulating.
Can you comment on other jobs though? The large majority of jobs require no big mental effort? Even switching from programming to management would go through that. Under that light it'd be impossible for a manager to ever become technical again because they'd atrophy so quickly?
I think you're probably catastrophizing the impact with statements like "it'd be impossible for a manager to ever become technical again", because that's not the likely outcome as I understand things. But yes, people who stop programming for an appreciable amount of time do find it harder to pick back up again.
The longer the manager is out of the game, the harder it is to return to the game. Returning to the game takes time. Depending on age and income, returning to the game may be impossible for some people over time.
I can't answer for the other guy, but my answer would be that talking to a clanker is LESS mental effort than being a manager, and that's why your reasoning atrophies so quickly.
Managers can go back to being technical, because they are still interacting with problems that require human thinking. Token farmers don't.
If you constantly pawn a task or cognitive load onto someone else (AI or not), you'll eventually get worse and worse at that particular type of thinking. Your overall mind doesn't necessarily get weaker, but you definitely start to get worse at anything you don't regularly practice.
Terribly written article that failed to make any point. Anyone who's read AI-generated code from the best models and who understands how LLMs work knows this statement is complete BS.
The problem partially is that AI can also fix AI slop. At this point I am in doubt whether code quality matters anymore in most non-critical software. You can ask an LLM if the code has quality issues and refactor to a _better_ version. It will reason through, prepare a plan and refactor. So now with this "better" code you can expect that your LLM will be able to deliver higher quality results but that's all the quality that is needed.
Actually, at this point I feel that the value in software engineering is moving from coding to testing and quality assurance.
Why do people think there will be jobs fixing AI-slop software? I see that opinion here and there on HN. The cost of codegen is next to nothing. It makes no sense to spend large sums of money having an engineer fix something that could be regenerated over and over until the gods of stochasticity come in your favour.
We've entered a period of single-use-plastic software, piling up and polluting everything, because it's cheaper than the alternative
AI can be an enormous accelerator for the time I'd normally spend writing lines of code by hand, but it doesn't really help with the rest of the work:
- Understanding the problem
- Waiting for the build system and tests to run
- Manually testing the app to make sure it behaves as I'd like
- Reviewing the diff to make sure it's clear
- Uploading the PR and writing a description
- Responding to reviewer feedback
There are times when AI can do the "write the code" portion 10x faster than I could, but if it's production code that actually matters, by the time I actually review the code, I doubt it's more than 2x.
> The AI is coming for that too.
In that case all [1] non manual work is doomed, until robotics has an LLM moment.
[1] With the exception of all fields protected by politics or nepotism.
The first is that AI is achieving human-level expertise and capability, but since they're now being increasingly trained on their own output they are fighting an uphill battle against model collapse. In that case, perhaps AI is going to just sort of max out at "knowing everything" and maybe agentic coding is just another massive paradigm shift in a long line of technological paradigm shifts and the tooling has changed but total job market collapse is unlikely.
The other possibility is that we're going to continue to see escalating AI capability with regard to context, information retrieval, and most importantly "cognition" (whatever that means). Maybe we overcome the challenges of model collapse. Maybe we figure out better methodologies for training that don't end up just producing a chatbot version of Stack Overflow + Wikipedia + Reddit. Maybe we actually start seeing AI create and not just recreate.
If it's the latter, then I think engineers who think they are going to stay ahead of AI sound an awful lot like saddle makers who said "pffft, these new cars can only go 5 miles per hour."
Its ability to write code is alright. Sometimes it impresses me, sometimes it leaves me underwhelmed. It certainly can't be left to do things autonomously if you are responsible for its output.
Moderately useful tool, but hellishly expensive when not being subsidized by imbeciles that dream of it undermining labor. A fool and his money should be separated anyway.
What I am really concerned about is the incoming economic disaster being brewed. I suspect things will get very ugly pretty soon.
It also feels like the hiring "signal", which was always weak before, is just completely gone now, when every job you do advertise receives over 500 LLM written applications and cover letters that all look and feel the same.
The pro-athlete comparison in this article is a bit silly IMO: there are obvious physical issues that come with aging if you rely on your muscles to make money. If you compare to other fields of knowledge work, such as law or medicine, there are loads of examples of very experienced, very sharp operators in their 40s and 50s.
My guess is companies overhired in COVID and between that experience and an uncertain market they don't want to make the same mistake twice.
Instead, we get an economy that feels like it's on the cusp of a fall or at the very least a roller coaster. Poor tax incentives to hire. ZIRP is long gone. And the hiring managers are overrun with slop.
But bosses are happy to say it's AI because that makes you sound in control.
Maybe if you somehow stick with the same company for your entire career, it could feel somewhat similar... but I doubt it, as 'best practices' and many other things cause it to change.
The days of 'lifetime career' had already gone for most people, way before AI arrived.
But if it's a diversity of things (that include software development) then you probably have a lifetime career ahead of you.
I've been writing software for over 40 years but I've never seen myself as having a software engineering career. I've been a research assistant in geophysics, a marine technician on research ships, a game developer, an advisor to the UN, and a lot more. Yes all through that I used software, but I did a lot of other things in the process of using it.
I certainly don't have the money or time to go back to college and start a new career at the bottom.
Which is true, but only until it isn't.
The classic example of how drastically this kind of thinking can fail is Malthusian theory, that populations would collapse because food growth was linear while population growth was exponential. This was true for all of history until Malthus actually made this observation.
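Malthus's linear-vs-exponential claim can be made concrete with a toy calculation (my own illustrative numbers, not Malthus's actual figures): however slow the exponential growth rate, it eventually overtakes any linear supply.

```python
# Illustrative numbers only: food supply grows linearly,
# population grows geometrically (Malthus's claimed dynamic).
food = 100.0
population = 10.0

year = 0
while population <= food:
    year += 1
    food += 10          # linear growth: +10 units per year
    population *= 1.3   # exponential growth: +30% per year

print(year)  # first year the population outstrips the food supply -> 12
```

With these numbers the crossover comes in year 12; changing the growth rates only moves the date, never removes it, which was Malthus's point.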
At a mechanistic level, the “we have always found other jobs” argument misses that the reason we’ve always found other jobs is that humans have always had an intelligence advantage over automation. Even something as mechanical as human input on an assembly line was ultimately dependent on the human ability to make tiny, often imperceptible, adjustments that a robot couldn’t.
But if something approximating AGI does work out, human labor has absolutely no advantage over automation so it’s not clear why the past “automation has created more human jobs” logic should continue.
It also isn't true. The story of the last 50 years has been that technology, especially computer and communications technology, has facilitated the concentration of wealth. The skilled work got computerized, or outsourced to India or China. That left U.S. workers with service jobs where they have much lower impact on P&L and thus much less leverage.
In my field, we used to have legal secretaries and law librarians and highly experienced paralegals. They got paid pretty well and had pretty good job security because the people who brought in the revenue interacted with them daily and relied on them. Now, big firms have computerized a lot of that work and consolidated much of the rest into centralized off-site back-office locations. Those folks who got downsized never found comparable work. They didn't, and couldn't, go work for WestLaw to maintain the new electronic tools. The law firms also held on to many of them until retirement or offered them early retirement packages, and then simply never filled the positions. It used to be a pretty solid job for someone with an associates degree, and it simply doesn't exist anymore.
The only thing keeping the job market together is the explosion in healthcare workers. My Gen-Z brother and sister in law are both going into those fields. In a typical tertiary American city, the largest employers are the local hospital and perhaps a university or community college. Both of those get most of their revenue directly or indirectly from the government. It's not clear to me how that's sustainable.
If ai does take a lot of white collar work, is it a lot of comfort that maybe jobs in a very different sector will be better in 20 years?
It seems to me like all of these people are flocking now to healthcare fields. That seems totally unsustainable.
Anecdotally I see a lot of schadenfreude online about tech jobs after a decade or two of lecturing everybody from Appalachians in coal country, to Midwestern autoworkers, that they should just “learn to code.”
Malthusian observation can still be true...It only has to be true once, and the only reason people say it isn't right now is due to industrial fertilizers and short memories.
There aren't any careers, and if there were, you would have to pay. Corporations certainly won't, except under extremely rare situations where they have to in order to compete.
We saw this pre-ai with uber and door dash. I think as AI automation dies down and most companies are competing at a near optimal level with the new tools we'll need humans again in more traditional roles to build the next generation of innovations. And then the whole cycle will repeat.
Education in what, though? No idea. And if there was one answer, and it's true that there will be fewer software developers, you'd likely be competing with many people for few jobs.
Like a politician who's asked about this in a town hall, but thinks that "our plan is to do absolutely nothing" doesn't sound very appealing.
The last work-house closed in the 1930s.
That all started not because people were afraid jobs were going to be replaced by the new loom. People had been using looms for centuries. They were protesting working conditions: low wages, lack of social protections when people were let go, child labor, work houses, etc. There were no labor laws at the time to protect workers... but there were these valuable new machines that the capital owners valued greatly. The machines were destroyed as leverage: a threat.
Since the capitalists ultimately "won" that conflict it has been written, by technocrats, that technological progress is virtuous and that while workers will initially be displaced the benefit to society will be enough such that those displaced will find productive work else where.
But I think even capitalist economists such as Keynes found the idea a bit preposterous. He wrote about how the gains in productivity from technological advances aren't being distributed back to workers: we're not working less, we're working more than ever. While it isn't about displacement of workers, it is displacement of value and that tends to go hand in hand.
I think asking, "Where do I go?" is a valid question. One that workers have been trying to ask since the Luddites at least. Unfortunately I think it's one that gets brushed under the rug. There doesn't seem to be much political will to provide systems that would make losing a job a non-issue and work optional.
That would give us the most leverage. If I didn't have to work in order to live I could leave a job or get displaced by the latest technological advancement. But I could retrain into anything I wanted and rejoin the work force when I was good and ready. I wouldn't have to risk losing my house, skipping meals, live without insurance, etc.
Of course, there are jobs that will still require human labour for some time yet, but in reality a lot of jobs that require physical human labour are now done in other parts of the world where labour is cheaper.
Those which cannot be exported like plumbing or waitressing only have limited demand. You can't take 50% of the current white-collar workforce and dump them in these careers and expect them to easily find work or receive a decent wage. The demand simply does not exist.
Additionally, at the same time as white-collar jobs are being lost, an increasing number of "low-skill" manual-labour jobs are also being automated. Self-checkout machines mean it's harder to get work in retail, robotaxis and drone delivery will make it harder to find work in delivery and logistics, and robots in warehouses will make it harder to find warehouse jobs.
It seems to me there is an implicit assumption that AI will create a bunch of new well-paid jobs that employers need humans for (which means AI cannot do them) and which cannot be exported abroad for cheaper. What well-paid jobs would even fit the category of being immune to both AI and outsourcing? Are we all going to be really well-paid cleaners or something? It makes no sense.
A lot of the advice we're seeing today about retraining as a construction worker or plumber seems to assume that there's unlimited demand for this labour, which there simply is not. And even if hypothetically there were about to be a huge increase in demand for construction workers, it would take years to get the machinery, supply chain and infrastructure in place to support millions of people entering construction.
The most likely scenario is that people will lose their jobs and will be stuck in an endless race to the bottom fighting for the limited number of jobs that are left in the domestic economy while everything else is either outsourced or done by robots and AI.
The better advice is to start preparing for this reality. Do not assume the government will or can protect you. When wealth concentrates, corruption becomes almost inevitable, and politicians have families to look after too.
Please take this seriously. Even if I'm wrong, it's better to prepare for the worst than to assume everything will be fine and you'll be able to retrain into a new well-paid career.
But if we're waiting to be paid to retrain there, I wouldn't hold our collective breath.
If you mean creating software, well we are creating more software than ever before and the definition of what software is has never been so diverse. I can see many different careers branching off from here.
The interesting thing to me is that, Software Engineering will have to evolve. Processes and tools will have to evolve, as they had evolved through the years.
When I was finishing university in 2004, we learned about the "software crisis" era, the Waterfall development process, and how new "iterative methods" were starting.
We learned about how spaghetti code gave way to Pascal/C structured programming, which gave way to OOP.
Engineering methods also evolved, with UML being one infamous language, but also formal methods such as the Z notation for formal verification, and ABC or cyclomatic-complexity measurements of software complexity.
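As a concrete example of one of those measurements, McCabe's cyclomatic complexity can be computed from a control-flow graph as V(G) = E - N + 2P (edges minus nodes plus twice the connected components). A minimal sketch, using a toy hand-built CFG of my own rather than any real tooling:

```python
def cyclomatic_complexity(edges, num_nodes, num_components=1):
    """McCabe's metric: V(G) = E - N + 2P."""
    return len(edges) - num_nodes + 2 * num_components

# Toy control-flow graph for a function with a single if/else:
# entry -> cond -> (then | else) -> exit
edges = [("entry", "cond"), ("cond", "then"), ("cond", "else"),
         ("then", "exit"), ("else", "exit")]
print(cyclomatic_complexity(edges, num_nodes=5))  # -> 2 (one decision point + 1)
```

Real analyzers build the CFG from source automatically; the metric itself is just this count of independent paths.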
Which brings me to today: now that computers are writing MOST of the code, the value of current languages and software dev processes is decreasing. Programming languages are made for people (otherwise we would continue writing in Assembler). So now we have to change the abstractions we use to communicate our intent to the computers, and to verify that the final instructions are doing what we wanted.
I'm very interested to see these new abstractions. I even believe that, given that all the small details of coding will be fully automated, MAYBE we will finally see more (real) engineering rigor in the software engineering profession, even if there will still be coders, the same way there are non-engineers building and modifying houses (common in Mexico, at least).
I think the right analogy to calculators and CAD tools is the IDE with IntelliSense for SWE: instead of typing code one character at a time, we can tab to autocomplete some part of it.
But I agree with your consensus -- SWE is changing, whether we like it or not. We need to adapt, or find a niche and grit to retirement.
In 2026, both companies decide that AI can accelerate their developers by a factor of 10x. I'm not asserting that's reality, it's just a nice round number.
Company 1 fires 90 of their programmers and does the same work with 10.
Company 2 keeps all their programmers and does ten times the work they used to do, and maybe ends up hiring more.
Who wins in the market?
Of course the answer is "it depends" because it always is but I would say the winning space for Company 1 is substantially smaller than Company 2. They need a very precise combination of market circumstances. One that is not so precise that it doesn't exist, but it's a risky bet that you're in one of the exceptions.
In the time when the acceleration is occurring and we haven't settled in to the new reality yet the Company 1 answer seems superficially appealing to the bean counters, but it only takes one defector in a given market to go with Company 2's solution to force the entire rest of their industry to follow suit to compete properly.
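Using the comment's own hypotheticals (100 developers implied by "fires 90 and keeps 10", a 10x speedup), the arithmetic of the two strategies can be sketched as:

```python
# The comment's hypothetical numbers, not real data.
SPEEDUP = 10
DEVS_BEFORE = 100  # implied by "fires 90 ... does the same work with 10"

# Company 1: cut headcount, hold output constant
c1_devs = DEVS_BEFORE // SPEEDUP        # keeps 10 devs
c1_output = c1_devs * SPEEDUP           # 100 units: same output, lower cost

# Company 2: keep headcount, multiply output
c2_devs = DEVS_BEFORE
c2_output = c2_devs * SPEEDUP           # 1000 units: 10x the shipped work

print(c1_output, c2_output)  # -> 100 1000
```

The point of the thought experiment is that Company 1 banks the cost saving while Company 2 banks a 10x output gap, and in most markets the output gap compounds.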
The value generated by one programmer that can possibly be captured by that programmer's salary is probably not going down in the medium or long term either.
We have seen this in other knowledge industries. U.S. legal sector job count is about the same today as it was 20 years ago. But billing rates have exploded and revenues in the 200 largest firms have increased more than 50% after adjusting for inflation. Higher-end law firms have leveraged technology to be able to service much more of the demand and push out smaller regional competitors.
(Also why I think Google and Meta might not make it to the end of the AI 'revolution'...)
You might need to relocate to a place with much lower costs of living.
This was the idea behind remote working discussed during COVID-19 times:
- the company can pay less money because the employee lives somewhere much cheaper than the expensive city where the company is located
- on the other hand, even with a smaller salary, the employee has more money at the end of the month because of the lower cost of living
So both sides win.
This will change for the better if more and more educated people relocate there.
s/creating software/typing correspondence/
In a world where software programming/architecting is solved by AI, value will accrue to people with expertise in other domains (who have now been granted the power of 1000 expert developers), not the people whose skillsets have been made redundant by better, faster and cheaper AI tools.
It is clear to me that SWE and ML research will be subsumed before other domains because labs are focusing their efforts there, in their quest to build self-improving systems.
The idea that productivity gains which result in more of something being produced also create more demand for labour to produce that thing is more often wrong than right, as far as I can tell. In fact, it's quite hard to point to any historical examples of this happening. In general, labour demand significantly decreases when productivity significantly increases, and people typically need to retrain.
The thing is, software engineering has never been about the code or the programming language. It's always about knowing how to solve digital problems. There's a reason why the programming language used in my university CS courses was "math", and they religiously avoided teaching concrete languages or frameworks or technologies that were popular at the time. If you understand the patterns of how to recognize and solve problems, and how to apply existing research toward them, you can implement that in any language.
That still holds in the AI era. I've found that just knowing "Here's what the architecture of the solution should look like" and being able to call out the LLM when it goes off track has made me way more effective at LLM-based coding. I can frame the problem quickly, I can specify the architecture, and I can keep the LLM from spinning its wheels or giving plausible but wrong answers. That's a superpower in a world where your coder might hallucinate.
Argument B: AI means you don't learn as much, and the single most useful work product of a software engineer is knowing how the code functions, so it's depriving your company of the main benefit of your work. Also, layoffs are terrible business strategy because every lost employee is years of knowledge walking out the door, every new hire is a risk, and red PRs are derisking the business.
Institutional and personal knowledge seem similar, but the implications of each are radically different.
If we truly need to sacrifice our skill to be productive by using LLMs that atrophy us, then we are the only devs with a limited lifespan. The next generation won't have a skillset to atrophy, since they won't have built one through manual work.
Also, I hereby propose that we publicly ban the "LLMs generating code are like compilers generating machine code" analogy; rearguing the same idea time after time is getting old.
Compilation: "Here is a precise specification (the source code). Deterministically translate it into machine code."
LLM code generation: "Here is an intent/specification. Invent code that hopefully satisfies it."
Does the compiler analogy provide value under those terms? I don't think it does. In fact, I think it provides negative value.
We don't need to use tortured analogies to express excitement over these tools.
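The contrast can be made concrete with a toy sketch. This is purely illustrative: `compile_source` and `llm_generate` are made-up stand-ins, not real tooling, but they capture the relevant property (deterministic translation vs. sampling from plausible candidates):

```python
import random

def compile_source(source: str) -> str:
    # A compiler is a deterministic translation: the same input
    # always yields the same output. (Uppercasing stands in for
    # "machine code" in this toy.)
    return source.upper()

def llm_generate(intent: str, seed=None) -> str:
    # An LLM samples from a distribution over plausible programs:
    # the same intent can yield different code on different calls.
    rng = random.Random(seed)
    candidates = [f"def f(): return {i}  # attempt for: {intent}"
                  for i in range(3)]
    return rng.choice(candidates)

# The compiler's output is reproducible by construction...
assert compile_source("mov eax, 1") == compile_source("mov eax, 1")
# ...while llm_generate("add two numbers") may differ run to run,
# so its output must be reviewed and tested, not trusted blindly.
```

That difference in guarantees is exactly why the analogy breaks down: a compiler's correctness is a property of the tool, while generated code's correctness has to be re-established every time.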
It's a lot easier to be early than to be smart or quick.
I'm not discounting ageism in the industry, but how popular of a career was it 30+ years ago compared to now?
Yes, LLMs might dramatically reduce the amount of code we write by hand. But I'm a lot less convinced they'll solve all of the amorphous, human-interacting aspects of the job.
If you do that, then... you're likely very replaceable.
I also see that in the future humans will adapt to AI, instead of the opposite. Why? Because it's a lot easier for humans to adapt to AI than the other way around. It's already happening -- why do companies ask their employees to write complete documentation for AI to consume? This is what I call "adaptation".
I can also imagine that in the near future, when employment plummets, when basic income becomes universal, when governments build massive condos for social housing -- everything new will be required to adapt to AI. The roads, the buildings, everything physical is going to be built with ease of navigation by AI in mind. We don't need a general AI -- that is too expensive and too long-term for the capitalist class to consider. We only need a bunch of AI agents and robots coordinated in an environment that is friendly to them.
Virtually the entire blog is about AI, with a ridiculous publishing rate (https://www.seangoedecke.com/page/5); funny how I can look at this site's HTML and know right away it was done with AI.
Can we stop upvoting vibe-published articles? The arguments are flawed and don't even make sense to anyone who does software.
Because people want to discuss the topic of the headline.
Less "pure" programming, but lots more programming in general.
More AI Soothsaying. Not so hard on the Inevitabilism this time.
https://news.ycombinator.com/item?id=47362178
I don't know; maybe in your part of the world, but where I'm from we have a series of robust worker-protection laws that try to limit the damage the work does to you. We generally consider it a bad thing for workers to damage their bodies, and if we could build houses without that, we'd prefer it.
In this specific case, we do have techniques to build software without causing damage, so why change that?
This post is arguing that maybe software engineering should start being harmful, even though we know it doesn't have to be. It's a post by a guy begging to be fed into the capitalist meat grinder. Meaningless self-sacrifice.
It's a tool for knowledge work.
No carpenter is a specialist in drills.
It seems to me that the best way to navigate a long term career is to have another specialty and use software engineering as a tool within that specialty.
I’m kind of confused how you might think it wasn’t. Going through a career as a software dev until retirement was very common.
Software engineers didn’t just disappear after age 40.
At the end of the '90s and the beginning of the '00s (the dot-com bubble), it was a common saying that if, as a programmer, you didn't have a very successful company by the time you were 30 or 40 (and thus were basically set for life), you had basically failed in life; exactly because "everybody" knew that programming was a "young man's game" (i.e. you likely wouldn't get a programming job anymore at, say, 35 or 40 years old).
So,
> Software engineers didn’t just disappear after age 40.
is rather a very recent phenomenon.
That seemed commonly held among folks participating in the dot-com bubble. Plenty of people had been doing it for decades even as the bubble was growing.
> Software engineers didn’t just disappear after age 40.
>> is rather a very recent phenomenon.
Not really. It's not that they disappeared, it's that they're a small fraction of the overall SWE population as a side-effect of how much that population has grown.
That's great, but its nowhere near the norm, and people have been doing generalist software engineering for decades. There has been a sufficient amount of work for a long time to be performed by generalists that it has been a very reasonable career.
IMO AI is the first thing that has ever actually challenged that.
I think there are trades where tool specialists (or process specialists, if I may extend the analogy) exist and are highly valued. My dad is a plumber, so I'll use that example, but I'd trust similar is true for carpentry. There are specialists by task/output (new construction, repairs, boilers, etc.), but also tool-specialist plumbers and companies: for example, drain-clearing equipment, or certain kinds of pipe for handling chemicals other than water, are very specialised. There are roles for them because the thing they enable, the criticality of the task, and often the cost and complexity of using the tool are high enough to make specialisation valuable.
IMO software has, for the 10 years I've been working in it, been in an unusual position where the tools (languages, engineering practices, tech stacks) were super technical and involved, but could also be applied to a large number of problems. That is the perfect recipe for tool specialists: a complex tool with high value and broad domain/problem-space applicability.
Because of that tool specialisation, we've separated the application of the tool to a problem/domain from the tool use itself. Reducing the complexity of applying these tools to many problems means all domain specialists will use them, relying less on tool specialists.
Imagine a McGuffin tool for attaching any two materials together, one which took a degree to figure out (loose hyperbole here), that suddenly you could use for 5 bucks and a quick glance at the first page of the manual. An industry that used to have lots of McGuffin engineers would be mega-disrupted, and you could argue that those tool specialists would have to identify more with what they were building than with the McGuffin they were using.
If you're a paralegal or an accountant who can't manage their workflows with AI, you're going to be way less productive than someone who can.
And if you're a paralegal or an accountant who can manage a lot of your workflows with AI, you don't need custom software (hence less dedicated software engineers).
There's no category difference between being an expert in carpentry vs masonry and being an expert in drills vs hammers. They are both just areas of expertise.
Going down the path of trying to define what counts as an expert function and what is "merely" a tool, using anything but descriptive criteria, is nonsense.
Expert functions are just those areas where using a tool is sufficiently difficult to require expertise.
If you believe this about your software career, how do you think you're going to switch into another career as a junior and keep up?
If talking to an AI makes me dumber and limits my career, then all the customer-support people who ever existed were in the same or a worse position: talking to dumb humans in chat all day, answering tickets about the same topics, and linking the same docs over and over. This makes no sense.
Managers can go back to being technical, because they are still interacting with problems that require human thinking. Token farmers don't.
Actually, at this point I feel that the value in software engineering is moving from coding to testing and quality assurance.
We've entered a period of single-use-plastic software, piling up and polluting everything, because it's cheaper than the alternative.
This is sarcasm, but it's probably also going to get sold as a feature at some point.