133 comments

  • jdw64 13 hours ago
    The real issue, in my view, is not AI itself.

    The problem is a management pattern: removing people and organizational slack because they don’t generate immediate profit, and then expecting the knowledge to still be there when it’s needed.

    Short-term cost cutting leads to less junior hiring, and removes the slack that experienced engineers need in order to teach. As a result, tacit knowledge stops being transferred.

    What remains is documentation and automation.

    But documentation is not the same as field experience. Automation is not the same as judgment. Without people who have actually worked with the system, you end up with a loss of tacit knowledge—and eventually, declining productivity.

    AI is following the same pattern.

    What AI is being sold as right now is not really productivity. In many domains, productivity is already sufficient. What’s being sold is workforce reduction.

    The West has seen this before, especially in the case of General Electric.

    GE pursued aggressive short-term financial optimization, cutting costs, focusing on quarterly results, and maximizing shareholder returns. In the process, it hollowed out its own long-term capabilities. It effectively traded its future for short-term gains.

    The same mindset is visible today.

    The core problem is that decision-makers—often far removed from actual engineering work— believe that tacit knowledge can be replaced with documentation, tools, and processes.ti cannot.

    Tacit knowledge comes from direct experience with real systems over time. If you remove the people and the learning pipeline, that knowledge does not stay in the organization. It disappears.

    • vishnugupta 12 hours ago
      > removing people and organizational slack

      You are spot on w.r.t every assertion you've made. When bean-counters took over the ecosystem they optimised immediate profitability over everything else. Which in turn means, in their mind, every part of the system needs to be firing at 100% all the time. There's no room for experimentation, repair, or anything else.

      I've commented about lack of slack on several times here on HN because when I notice a broken system now a days, 90% of it is due to lack of slack in the system to absorb short term shocks.

      • aleqs 4 hours ago
        The problem is, in the minds of these people 'firing at 100% all the time' generally means doing busywork and/or thinking of ways to cheat/manipulate their customers and the market for maximum gain whole delivering minimum value. I would have loved to be 100% engaged working on solving real problems in honest ways at some of my past jobs, but alas MBA/marketing leadership, which has taken over much of tech has very little interest in actually building good things and solving real problems in honest ways.
        • buzzerbetrayed 26 minutes ago
          > generally means doing busywork and/or thinking of ways to cheat/manipulate their customers and the market for maximum gain whole delivering minimum value

          When I read comments like this I can’t help but wonder where people like you work. It’s completely unrepeatable to me. I work with really good people, all the way to the tip, and no try to make money by increasing value for our customers.

          Apple, Google, Walmart, Amazon, Home Depot, Anthropic, Toyota, and a hundred other companies all offer me incredible value for so cheap. Why are people so cynical about a world that offers them unimaginable riches everywhere they look.

          Sure there are bad companies. And if you work at one of those, go get a new job.

          • rapnie 1 minute ago
            The parents are talking about things in-the-large, negative societal trends, while you are talking your anecdotal experience and perhaps survival bias striking it so lucky with your employer. The world offers unimaginable riches, but at what cost really? Who benefits most? Where does it lead? Big picture.
        • WalterBright 4 hours ago
          Profit maximization is a continuous process that has generated our high standard of living.

          P.S. I welcome all attempts to prove me wrong!

          • Yoric 36 minutes ago
            I would argue that profit maximization has had very many effects.

            On the one side, it has succeeded at reducing costs, which has indeed given rich societies unprecedented access to consumer goods.

            On the other, it has outsourced from us both jobs and knowledge, which has resulted in higher unemployment and dissatisfaction, with as consequences the political dominoes we see falling internationally. That and the shoddy US health system (which the rest of the world seems to have decided to follow, for some reason).

            And there is the small fact that we're in the process of optimizing the planet to death, and that not-so-rich countries (as well as formerly-rich ones) have starved to death for this high standard of living.

            So, let's appreciate our standard of living, but not assume that it's necessarily a good thing in the grand scheme of things.

          • spiorf 3 hours ago
            No, the process has impeded even higher standard of living, because it misallocates resources from value generation to value appropriation. It's the extreme short term profit maximization that makes the economy a zero sum game. Otherwise it is not.
            • WalterBright 3 hours ago
              Do you really think the economy is zero sum?

              For example, who was Elon Musk's wealth transferred from?

              What do you think about all that money being invested into AI? Is that "extreme short term profit maximization"?

              • cowmix 2 hours ago
                I don’t know who his wealth was transferred from, exactly. But I do know what he’s using it for now: as a gravitational force to unilaterally screw with public institutions and systems the rest of us depend on.

                Even if you agree with some of DOGE projects’s goals, the way it operated was wildly thoughtless about consequences beyond Musk’s personal wishes, and almost completely unaccountable.

                I’m honestly sick that my personal Model Y purchase helped add to that power.

                And I say that as someone who was a huge Musk fan for years, despite the warning signs — the Thai “pedo” comments, and his very public turn during COVID.

                • WalterBright 2 hours ago
                  Musk's money came from creating a business that was worth a lot of money. I.e. he created his wealth.

                  It was not transferred to him.

                  The same goes for Bezos, Jobs, Gates, etc.

                  • jfengel 22 minutes ago
                    He ended up with all of the profits, but he didn't put in all of the work. A lot of people worked very hard for that wealth, either for salary or a minuscule fraction of the profits.

                    He gets to keep the lion's share of the profits because he was the one who took the risk with his capital. And bully for him; well done that man. But we treat him as if he had all of the ideas and wrote all of the code, and that's simply not true. It's a myth that the wealth owners tell us.

                    All of these products would exist with the CEOs who get the credit. If it weren't them it would have been someone else. Their expertise as CEOs is not to get the product done, but to get it done a week earlier than the other guy. And even that much is more luck than skill.

                    There's no way to say "Musk took this money from someone else". But that's different from "He has a million times as much wealth because he is a million times more valuable."

                  • sghiassy 1 hour ago
                    I won’t deny he created his own fortune from PayPal.

                    But if you come from “so much money we couldn't even close our safe” as his father put (quote from Wikipedia

                    Then you do have a leg up

                  • cowmix 1 hour ago
                    The “created vs transferred” thing is more nuanced than that.

                    At some point, accumulated wealth becomes power, and that power can be used to pull energy, attention, labor, and public resources out of the system for one person’s agenda.

                    And in Tesla’s case, the stock value creation story is insanely unorthodox, to be charitable. A lot of that valuation was supercharged by years of market-moving hype done personally by Elon: 10+ years of FSD timelines that never happened, the more recent “buy a Tesla, it’s an appreciating asset!” super-lie, the mission gradually being abandoned, GOP craps on EVs and it’s crickets from Elon, etc.

                    Now we have the ultimate example of wealth gravity distortion: Musk helped put Trump back in power, and the relationship looks openly transactional. But Elon is not just benefiting from Trump’s transactional nature. He is also benefiting from an administration where white collar crime and regulatory accountability seem to have basically stopped being real things.

                    So with all that, the kind of shady behavior Elon pulled that might normally trigger government scrutiny or enforcement is now being smothered by political influence.

                    Even today, Tesla appears to be the highest P/E outlier in the S&P 500 among profitable companies. So the market is not just valuing the current business. It is valuing the story, the hype, and increasingly Musk’s ability to buy influence to stay outta trouble.

                    And to be clear, I’m not trying to pretend nothing real was built. SpaceX is impressive. Tesla really did help kickstart the EV market. But that does not make up for the harm, distortion, and unaccountable power being exercised now.

                    So yes, maybe the wealth was “created” in an accounting sense. But the concern is what happens when that created value becomes an unaccountable force acting on everything else.

                  • kmijyiyxfbklao 2 hours ago
                    You can always trick investors. For example all the overpromising Musk has done over the years. Also when you are that famous you can sell lower quality goods for a higher price that people will buy because they are associated to you.
                    • WalterBright 1 hour ago
                      All true. That's why there are laws against fraud.

                      What do you think of the tricks that California pulled to have billions vanish with not a mile of track laid? Or all those hospices with no beds? Or this fun one:

                      https://www.seattletimes.com/seattle-news/politics/does-noth...

                      Unfortunately, as a taxpayer, I am on the hook for those tricks. With a business, I can do some due diligence and then decide if I want to get in or not.

                      I suspect you and I are being fleeced far more by government waste and fraud than by businesses.

                      • lossyalgo 1 hour ago
                        > I suspect you and I are being fleeced far more by government waste and fraud than by businesses.

                        And do you think that firing half of the employees at various gov't agencies has and/or will decrease the waste and fraud?

                  • c22 2 hours ago
                    Wealth is not created, it's stripped from the natural commons and it is, despite being massive, obviously finite. What you're referring to is "value creation"--the transformation of this natural wealth into a form that some other humans find valuable at a point in time. This value creation is rewarded via capitalism by the accumulation of monetary instruments which very much represent an appropriation of this wealth. If this system wasn't zero sum then we wouldn't have inflation?
                    • WalterBright 1 hour ago
                      Where was all this wealth 200 years ago?

                      Inflation is caused by printing money not backed by goods and services. Inflation is another form of taxation.

                      • c22 1 hour ago
                        The vast majority of it was stored underground as petrochemicals. A fact made immediately apparent by looking at like any chart. Is this a serious conversation?
                        • fc417fc802 1 hour ago
                          In the vast majority of cases that energy could have come from other sources, though the cost would have been somewhat higher. In the hypothetical case of solar would you still describe it as being finite or stripped from the natural commons? I suppose raw land area or 1 AU solar sphere surface area could be viewed that way but it seems reductionist to me.

                          What if I use what would otherwise be a waste product to create something people are willing to pay for? For example sawdust. Is that not value creation?

              • AngryData 43 minutes ago
                On the short term every economy is effectively zero sum. Even if you invented something magical that is worth trillions, the economy can't pull trillions of dollars out of nowhere in any short period of time without devaluing everything else.
              • rkuodys 2 hours ago
                What is the point on Musk you are making? The monetary success does not neccesserily correlate to the common good they have created. In case of Musk there is a lot of governement subsidies, lots of market manipulation and false claim s. So not all activieties that bring profit to the richest are good to the rest. And stats on inequality just highlights that trend
                • WalterBright 1 hour ago
                  > subsidies.

                  The government gave Musk a trillion dollars?

                  > So not all activieties that bring profit to the richest are good to the rest

                  I didn't say they were all good. I said he created his wealth, it was not transferred to him.

                  > And stats on inequality just highlights that trend

                  All those stats highlight is some people create more wealth than others.

              • YZF 1 hour ago
                These two positions are not mutually exclusive:

                - Over-optimizing for short term profit can hurt innovation and value creation.

                - The economy is not a zero sum game and new value is created out of thin air.

              • SquibblesRedux 2 hours ago
                Musk got a huge leg up through the government, whether it be tax credits, incentives, side-stepping regulations, etc.

                Bezos ran at a loss for so long it drove out actual and potential competitors.

                Most (or all?) of the recent titans seem like each has his own company town. (See https://en.wikipedia.org/wiki/Company_town )

                While their activities certainly fall in the realm of capitalism, and are just blips at longer time scales, it certainly feels like capitalism has been a bit under the weather for the past couple decades.

                Regarding the money invested in AI, it all gives me "irrational exuberance" vibes.

                • WalterBright 2 hours ago
                  > Musk got a huge leg up through the government, whether it be tax credits, incentives, side-stepping regulations, etc.

                  Nope. (Any government incentives were available to all the other car companies.)

                  > Bezos ran at a loss for so long it drove out actual and potential competitors.

                  Where do you think he got the money to sustain those losses? Investors! Including me. That is not a "transfer" of wealth, as in exchange the investors received an ownership share of the company.

                  > Regarding the money invested in AI, it all gives me "irrational exuberance" vibes.

                  It still is the opposite of short term thinking.

                  • SquibblesRedux 1 hour ago
                    I was not making an argument that the economy is zero-sum, or that Musk or Bezos did not build wealth. I merely pointed out methods they used to build their empires. For example, Musk did take advantage of government incentives, sidestepped regulations, etc.

                    Again, I never claimed there was any sort of zero-sum transfer of wealth. I'm simply pointing out there are varied ways to build up wealth; people have various opinions about those ways. It's right to call out misconceptions or outright falsehoods, but it's also good to understand what leads people to form or accept those misconceptions in the first place.

          • danmaz74 3 hours ago
            "Profit maximization" on its own would have left most people working 12+ hours a day 6 days a week, like it was very common in the 19th century. Luckily, it's never been the only force shaping our societies.
            • WalterBright 3 hours ago
              Working long hours was necessary in those days because productivity was much lower.

              Productivity has gone up so much people can work a lot less, and vast part of the population doesn't work at all.

              • danmaz74 2 hours ago
                Sure, productivity increase is hugely important, but if you only pursue profit maximization, then all the productivity increase goes into profits, which means that the general population doesn't increase their well being much if at all.

                The 40hr work week didn't come by as a consequence of the profit maximization mentality, but as a consequence of hard fought battles by the workers/employees against that mentality. And when I say "hard fought" I mean in the literal sense, with at least 1,000 workers killed just in the US in those days. https://en.wikipedia.org/wiki/List_of_worker_deaths_in_Unite...

                • WalterBright 2 hours ago
                  The Law of Supply and Demand is in play, and it means a company cannot dictate prices, wages, or working conditions in a free market economy. Rising productivity would have reduced the average work week regardless.

                  If you still aren't convinced, consider that the benefits package routinely offered to employees is worth around 40% of their pay.

                  • nobodyandproud 9 minutes ago
                    A free market is bound by the rules of the market, which is trade agreements and government.

                    Meaning it can be changed and adjusted.

                  • KittenInABox 1 hour ago
                    > Rising productivity would have reduced the average work week regardless.

                    Do you have evidence of this?

                    > consider that the benefits package routinely offered to employees is worth around 40% of their pay

                    Please define "routinely" and "employees". Part-time employees do not get benefits packages, much less benefits packages worth 40% of their pay. PTO, Sick time, family leave, and other "benefits" are actually legally mandated and I do not see any evidence that companies would offer this if they were not mandated to do so.

              • AngryData 38 minutes ago
                Working long hours was not necessary in those days, it was forced upon people by declining wages as profit was transferred from individuals, families, and small businesses towards the capital class. The entire movement to introduce the 40 hour work week was based on people wanting to reduce their hours towards what their grandparents worked and survived on. The entire luddite movement was based on declining wages and worsening work conditions compared to the generations before them.
              • amanaplanacanal 2 hours ago
                Depends what you mean by work.

                Most people do a lot of work themselves that the Richie Rich would pay somebody else to do, like cooking, cleaning, childcare, gardening, etc. If it counts as work when you hire somebody to do it, it should equally count as work if you do it yourself.

                • WalterBright 2 hours ago
                  People still did cooking, cleaning, childcare, and gardening in those times of 12 hour work days.

                  BTW, cooking in those days was an all day affair. The wood stove required continuous feeding and watching. Today one can just put the food in a microwave.

                  I cook a steak now and then, it's the only cooking I do. It takes about 10 minutes. The dishwasher does the cleaning.

                  Rich people hire others to do the cooking because the rich peoples' jobs pay off far more per hour of work. For example, if my profession pays me $100/hr, it makes perfect sense to pay someone $30/hr to do the cooking for me, as I am still $70/hr ahead.

                  • nobodyandproud 8 minutes ago
                    Division of labor. The men worked, the women stayed home.

                    Things went to hell for single-parent families

          • keeda 2 hours ago
            I think it's more accurate to say it is a process that has resulted in our high standard of living faster than other processes... so far.

            There is no guarantee it will keep working for the majority of us going forward; as is becoming very clear all around the world, it also has downsides especially without checks and balances (which was predicted and observed in the past, which is why other processes were conceptualized in the first place!)

            As a trivial example, profit maximization is directly responsible for the enshittification we're seeing everywhere, which definitely is negatively impacting our standard of living.

            • WalterBright 2 hours ago
              > I think it's more accurate to say it is a process that has resulted in our high standard of living faster than other processes... so far.

              Nobody has found a better process. Not even close.

              > As a trivial example

              It's not an example, it's a generalization. If you have a specific example, let's have a look at it!

              Have you ever wondered why communist countries never exported products?

              Also, if you want better products and believe people will want to buy them, go into business making them and make yourself wealthy!

              • keeda 1 hour ago
                > Nobody has found a better process. Not even close.

                Maybe, but as they say, "Past Performance Is Not Indicative of Future Results." The point was, this process may work up to a point, then it may work more against the interests of the population.

                I mean, history is already littered with examples of the downsides of this process, like all the rampant anti-worker things profit maximizers tried to get away with that had to be fought literally with blood. Without that bloodshed it is highly likely the average standards of living of the general population would be much, much lower.

                You don't even have to look at history, the process is literally playing out right now in various countries.

                > It's not an example, it's a generalization. If you have a specific example, let's have a look at it!

                I thought enshittification was a pretty specific example? You can find the many articles written about the various ways things are degrading on the Internet with a Google search (the experience of which is probably an even more specific example in itself ;-))

                I chose that example because I think it's a microcosm of this process: in the beginning, the profit motive creates great innovation and products. But at some point, like when the market is saturated or monopolized, the profit incentive creates anti-consumer dynamics because companies turn to extractive methods rather than innovative ones.

              • contingencies 1 hour ago
                Your line of reasoning misses the clear example that China pulled 1.4 billion people out of poverty creating mass-literacy before embracing capitalism.
              • striking 2 hours ago
                Which country experiencing communism at any point in time didn't export any products?
                • WalterBright 2 hours ago
                  The Soviet Union.

                  I don't recall ever seeing USSR products in stores, while plenty of manufactured goods from other countries were. (By products I meant manufactured products, not extracted resources like oil.)

                  • AngryData 33 minutes ago
                    I got some Soviet Union produced wrenches and drill from my great grandfather and East Germany made drill bits from an auction despite nobody in my family living outside the US in 120 years. No it isn't common, but I wouldn't expect the Soviet Union's biggest rival to be importing many of their products to start with, so the fact I possess them at all is decent evidence of their significant production volume.
                  • keeda 29 minutes ago
                    I think that may have been a result of the political divide of that era. The USSR did export some machinery and arms, but those were traded largely within other Communist countries and "third world" countries.
                  • Yoric 2 hours ago
                    I did, so it probably depends of where you live.
                    • WalterBright 1 hour ago
                      What Soviet products did you see?
                      • Yoric 1 hour ago
                        I remember books (there was a famous soviet science publisher, which I believe we learned later had gulag deportees working on their printing presses) and I seem to recall toys and some foods.

                        My memory from the period is far from perfect, though, as I was a kid when the USSR collapsed.

      • t-3 9 hours ago
        I think the bean counters get a bad rap for this a bit unfairly. The past century has seen more progress in knowledge and technology than the rest of human history combined. The world and business environment are changing too rapidly to make longtermist thinking practical.

        Few care if you have a lifetime warranty and excellent service or replacement parts if the majority will upgrade in a few years! Mature technologies increasingly become cheaply available as services, eg. laundry, food, transportation. That further reduces demand on production, as many can get by with the bare minimum and don't need the highest quality, longest lasting appliances. Software is even more ephemeral and specialized.

        Developing education and training pipelines is wasting money if the skills you need are constantly changing! There is plenty of "slack" in the workforce so this works just fine in most cases - somebody will learn what they need to get paid. There are very few fields where qualified worker shortages are a real problem.

        R&D can be outsourced or bought and subsidized by the government in universities, so why do everything yourself? Open source software has even further muddied the waters. Applications have only a limited lifetime before being replicated and becoming free products (this has only been intensified by the introduction of AI), so companies develop services instead.

        Technology and knowledge deepening and rapidly becoming more specialized makes the monolithic corporation much less practical, so companies also need to specialize in order to effectively compete. Going too far in the name of efficiency can destroy core competencies, but moving away from the old model was necessary and rational.

        • aleph_minus_one 9 hours ago
          > R&D can be outsourced or bought and subsidized by the government in universities, so why do everything yourself?

          Because some problems that many companies in very specialized industries work on are so special that outside of this industry, nearly all people won't even have heard about them.

          Additionally, many problems companies have where research would make sense are not the kind of problems that are a good fit for universities.

          • SoftTalker 5 hours ago
            Those fields still develop in-house expertise and world-leadning products. General Electric was cited above, but their turbine engine division is producing the most fuel-efficient, reliable, and lowest TCO aircraft engines there have ever been. The materials science and engineering expertise needed to do this isn't something you can find in a freshly-graduated university student.

            Products like jet engines, though, are still those where quality matters. They are so costly that there's room in the finances to deliver it. Unlike household appliances, where consumers make decisions mostly on the basis of price and being $5 cheaper than the competition is what will get you the sale even if it means using plastic instead of cast or forged metal parts.

            • Zak 5 hours ago
              > Unlike household appliances, where consumers make decisions mostly on the basis of price and being $5 cheaper than the competition is what will get you the sale even if it means using plastic instead of cast or forged metal parts.

              A part of this is that consumers usually don't have very good information about products like that. I would almost always pay twice as much for an appliance that's going to last three times as long, but I usually can't find a review that's based on a teardown and rebuild or testing to destruction.

              Aircraft engines are subjected to both.

            • com 5 hours ago
              Not quite; for wide-bodies at least RR pips GE for fuel-efficiency, but there’s not much in it for the latest generation of power plants.
            • WorldMaker 3 hours ago
              > Unlike household appliances, where consumers make decisions mostly on the basis of price and being $5 cheaper than the competition is what will get you the sale even if it means using plastic instead of cast or forged metal parts.

              Some of this seems reverse causal to me. There were many consumers interested in options other than a race to the bottom. I certainly remember 90s Consumer Reports-era consciousness of consumers trying to find the best products as they all seemed to race to the bottom.

              The irony seems to be that now that GE has sold GE Appliances they've been returning to higher quality and cutting fewer corners just because activist US shareholders wanted slightly higher dividends each quarter. It feels like only a matter of time before Heier finishes the next steps in the Lenovo playbook and stops paying GE to license their brand and stop giving credit to a US company that stopped caring about consumers and consumer product quality decades ago.

        • gopher_space 2 hours ago
          > Developing education and training pipelines is wasting money if the skills you need are constantly changing! There is plenty of "slack" in the workforce so this works just fine in most cases - somebody will learn what they need to get paid. There are very few fields where qualified worker shortages are a real problem.

          Here's the problem with your reasoning. This paragraph is simply wrong, with each sentence being untrue. Education and training are never wasted money, the skills aren't changing that quickly, there isn't any slack in the workforce, and qualified worker shortages are being reported in every trade across the board. Someone needs to solve the problems you hand-wave away.

          > this works just fine in most cases - somebody will learn what they need to get paid.

          That's me. I specialize in learning new domains. I cost like 8x more than the random junior you'd be able to hire with a functional onboarding program.

        • danmaz74 3 hours ago
          "The world and business environment are changing too rapidly to make longtermist thinking practical." Tell that to the Chinese...
        • watwut 6 hours ago
          Universities dont do product oriented research. They do more general research. And also, they should not do product oriented research, that is companies role.

          And universities research capabilities are being destroyed too right now.

      • numpad0 8 hours ago
        > Which in turn means, in their mind, every part of the system needs to be firing at 100% all the time

        Not just that, you have to be always doing less for more gains. Real work is bad work. Shrinkflation good. I don't know what it is if it wasn't a pure scammer mindset.

        • flybrand 6 hours ago
          > Which in turn means, in their mind, every part of the system needs to be firing at 100% all the time

          This is a classic Goldratt / Theory of Constraints mistake.

      • acomjean 12 hours ago
        I’ll note at the end of the last century I worked at IBM research which had a budget of 6 Billion dollars. Management was trying very hard to get better return on that investment. Even today IBM though often ridiculed in the tech space (sometimes they do deserve it) spends a lot on R&D.
        • NordStreamYacht 12 hours ago
          Lucent at the same time went through the same issue: how to monetise Bell Labs.

          Bell Labs greatest work came out when AT&T was a monopoly. Once they were broken up (1984?) they started feeling the pain.

          When the Lucent spinoff took place, the new entities had no Monopoly money to fund unconstrained research while management's behaviour never changed.

          I don't know how BL fared under Alcatel and now Nokia, but haven't heard of anything interesting for years.

          • Zigurd 7 hours ago
            I've been to the Holmdel office in the decline years. It was very sad. A fraction of the former staff was rattling around in what could've been used for a post apocalyptic sci-fi set. In its heyday it must've been magnificent. Imagine taking an entire great research university and putting it into a single architectural masterpiece. I've also been to Nokia HQ after Elop ruined the place. Also sad.
        • rvba 12 hours ago
          Did anything come out from those billions?
          • swiftcoder 11 hours ago
            > Did anything come out from those billions?

            Per wikipedia:

              IBM employees have garnered six Nobel Prizes, seven Turing Awards,
              20 inductees into the U.S. National Inventors Hall of Fame, 19 National Medals of Technology,
              five National Medals of Science and three Kavli Prizes. As of 2018,
              the company had generated more patents than any other business in each of 25 consecutive years.
            • xienze 11 hours ago
              > the company had generated more patents than any other business in each of 25 consecutive years.

              A couple things about those patents, from a former IBMer who has quite a few in his time there.

              First, not all patents are created equal. Most of those IBM patents are software-related, and for pretty trivial stuff.

              Second, most of those patents are generated by the rank and file employees, not research scientists. The IBM patent process is a well-oiled machine but they ain't exactly patenting transistor-level breakthroughs thousands of times a year.

              • fao_ 10 hours ago
                Why do you need to generate transistor-level breakthroughs multiple times a year? Those breakthroughs are hard to generate, but they're important and industry-spanning. The problem is we've mostly stopped generating them.
                • xienze 10 hours ago
                  I wasn't saying anything about that, I was just pointing out that yes, IBM produces a ton of patents, but they're mostly trivial junk that regular employees generate en masse in order to earn accomplishments and make up for the insultingly low bonuses.
                  • swiftcoder 10 hours ago
                    > they're mostly trivial junk that regular employees generate en masse in order to earn accomplishments and make up for the insultingly low bonuses

                    We did that at Meta and Amazon too (for polycarbonate puzzle pieces, with no monetary award at all!). Every now and then something meaningful came out of it

                    • flymasterv 8 hours ago
                      I still have my “Get fucked, employee! Love, Jeff” puzzle pieces.
                      • NateEag 4 hours ago
                        What are these? I'm extremely curious.
                    • dboreham 7 hours ago
                      Not even Lucite!
              • raddan 8 hours ago
                I also worked (briefly, as an intern) at IBM and IBM’s management also sometimes undermined the R&D that happened at the company.

                I started at the tail of one research group’s mass exodus. It was like a bomb had gone off; the people left behind were trying to pick up the pieces. In essence, this group developed a sophisticated new technique, which the company urged them to commercialize. Pivoting to commercialization was a big effort, and not naturally within the expertise of this group, but they did it, largely at the expense of their own research productivity—for several years. They even hired programmers (ie, not people who are primarily computer scientists) and got it done. But just before launch, IBM pulled the plug.

                This infuriated the researchers in the group. Keep in mind that career advancement in research is largely predicated on producing new research. In effect, IBM asked people to take a time out and then punished them for agreeing to do it. The whole group was extremely demoralized. Google was the largest beneficiary of this misstep.

                I also had a similar, frustrating experience working for Microsoft, so it’s not just IBM, but the same dynamics were at work: bean counters asking researchers to commercialize something and then axing a project as it becomes deliverable.

                If AI replaces any role in the company of the future, please let it be the managerial class.

            • mschuster91 10 hours ago
              The thing is, Nobel Prizes and other awards don't pay the bills.

              Patents do, but in most cases it's trivial patents or patents for a "mutually assured destruction" portfolio (aka, you keep them in hand should someone ever decide to sue you).

              That's a fundamental problem with how the Western sphere prioritizes and funds R&D. Either it has direct and massive ROI promises (that's how most pharma R&D works), some sort of government backing (that's how we got mRNA - pharma corps weren't interested, or how we got the Internet, lasers, radar and microwaves) or some uber wealthy billionaire (that's how we got Tesla and SpaceX, although government aids certainly helped).

              All while we are cutting back government R&D funding in the pursuit of "austerity", China just floods the system with money. And they are winning the war.

              • cyberax 3 hours ago
                mRNA is not a good example. If anything, it's a demonstration of why the Western capitalist model is superior to anything else. Most of the mRNA research was funded by venture capital as a high-risk high-reward investment.

                In the world of government-sponsored research, mRNA likely would have been passed over in favor of funding research with more assured results.

            • smallstepforman 9 hours ago
              Every year they grant prizes. If hardly anyone is doing core R&D because of cost cutting, there is a higher chance those doing the smallest amount of R&D get the prizes.

              A Nobel in 2026 doesnt carry the same weight as a Nobel in 1955.

          • mandolin4 2 hours ago
            Toshiba, IBM and Siemens had a DRAM joint development program 1993-1998. Several generations of DRAM was developed there. Also, while IBM exited the DRAM business, the knowledge survived in Rambus to an extent.
      • stephen_cagle 3 hours ago
        I believe private equity ownership represents this in an aggressive form. The 2 and 20 percent takes that PE usually mandates as part of their purchase agreement means that they are highly highly incentivized to maximize short term "wins" over long term survival.

        I think Chesterton and Taleb also had pretty reasonable things to say about understanding a system before you make changes and fragile/anti-fragile systems as well.

      • chanux 11 hours ago
        > When bean-counters took over the ecosystem [...] in their mind, every part of the system needs to be firing at 100% all the time.

        This is only fair, because they themselves are firing at 100% all the time IYKWIM ;)

      • abustamam 6 hours ago
        I haven't read this book but I see it often mentionedin contexts like this. it was written in 2001 and I think its synopsis still stands.

        Slack by Tom Demarco (2001)

        https://www.goodreads.com/book/show/123715.Slack

      • SlinkyOnStairs 10 hours ago
        They also took out all the quality, though in pure business terms one can argue that's a kind of "slack" by itself.

        The beancounters have cut all the corners on physical products that they could find. Now even design and manufacturing is outsourced to the lowest bidder, a bunch of monkeys paid peanuts to do a job they're woefully unqualified for.

        And the end result is just a market for lemons. Nobody trusts products to be good anymore, so they just buy the cheapest garbage.

        Which, inevitably, is the stuff sold directly by Chinese manufacturers. And so the beancounters are hoisted by their own petard.

        We've seen it happen to small electronics and general goods.

        We're seeing it happen right now to cars. Manufacturers clinging on to combustion engines and cutting corners. Why spend twice the money on a western brand when their quality is rapidly declining to meet BYD models half the price.

        ---

        And we're seeing it happen to software. It was already kind of happening before AI; So much of software was enshittifying rapidly. But AI is just taking a sledgehammer to quality. (Setting aside whether this is an AI problem or a "beancounters push everyone into vibecoding" problem)

        E.g. Desktop Linux has always been kind of a joke. It hasn't gotten better, the problems are all still there. Windows is just going down in flames. People are jumping ship now.

        SaaS is quickly going that way as well. If it's all garbage, why pay for it. Either stop using it or just slop something together yourself.

        ---

        And in the background of this something ominous: Companies can't just pivot back to higher quality after they've destroyed all their inhouse knowledge. So much manufacturing knowledge is just gone, starting a new manufacturing firm in the west is a staffing nightmare. Same story with cars, China has the EV knowledge. And software's going the same way. These beancounters are all chomping at the bit to fire all their devs and replace them with teenagers in the developing world spitting out prompts. They can't move back upmarket after that's done.

        Even when the knowledge still lives, when the people with the skills requires have simply moved to other industries and jobs, who's going to come back? Why leave your established job for the former field, when all it takes is the management or executive in charge being replaced by another dipshit beancounter for everyone to be laid off again.

        • ekidd 8 hours ago
          > E.g. Desktop Linux has always been kind of a joke. It hasn't gotten better, the problems are all still there.

          Desktop Linux has gotten better, though much of the improvement happened decades ago. I believe the first person to prematurely declare "the year of Linux on the desktop" was Dirk Hohndel in 1999: https://www.linux.com/news/23-years-terrible-linux-predictio...

          And speaking as someone who was running desktop Linux in 1999, I remember just how bad it was. Xfce, XFree86 config files, and endless messing around with everything. The most impressive Linux video game of 2000 was Tux Racer.

          But over the next 10 years, Gnome and KDE matured, X learned how to auto-detect most hardware, and more-and-more installs started working out of the box.

          By the mid-2010s, I could go to Dell's Ubuntu Linux page and buy a Linux laptop that Just Worked, and that came with next day on-site support. I went through a couple of those machines, and they were nearly hassle free over their entire operational life. (I think one needed an afternoon of work after an Ubuntu LTS upgrade.)

          The big recent improvement has been largely thanks to Valve, and especially the Steam Deck. Valve has been pushing Proton, and they're encouraging Steam Deck support. So the big change in recent years is that more and more new game releases Just Work on Linux.

          Is it perfect? No. Desktop Linux is still kind of shit. For examples, Chrome sometimes loses the ability to use hardware acceleration for WebGPU-style features. But I also have a Mac sitting on my desk, and that Mac also has plenty of weird interactions with Chrome, ones where audio or video just stops working. The Mac is slightly less shit, but not magically so.

          • SlinkyOnStairs 3 hours ago
            > Desktop Linux has gotten better

            This is on me for being a bit too snarky.

            So yes, Desktop Linux has "gotten better". What it hasn't done is solved any of the systemic problems.

            The Open Source development quirks that created the shitshow of the 1999 is still here. Gnome is better but still suffers massively from mainstream features being declared stupid by the maintainers. (A power button that turns off the machine? Heretical.)

            Valve's recent successes are pretty illustrative here. They used their money to directly hijack the projects their products rely on.

            For what it pertains the comparison, Windows is not without this "slow" improvement either. 95 and 98 are lightyears behind contemporary Windows in so many ways. Until quite recently it still made about as much sense to use Linux as it did back then; Not much.

            Take your Linux Laptop example. Sure, Linux finally kind of worked on some specific models that were tested for it. Meanwhile, Windows had moved from "it'll work with some mucking about with drivers" to "It works universally, on practically all hardware". Really, by the mid 2010s Windows would finally be quite tolerant of you changing the hardware.

            Hence my original point; Desktop Linux hasn't really caught up with Windows in any meaningful sense. Windows is just nose-diving into the ground in the last few years.

            • noisy_boy 2 hours ago
              > The Open Source development quirks that created the shitshow of the 1999 is still here. Gnome is better but still suffers massively from mainstream features being declared stupid by the maintainers. (A power button that turns off the machine? Heretical.)

              Gnome have been chopping off their own limbs because it reduces weight. All in the name of simplicity. I think they are not the best example of Open Source development.

              KDE on the other hand had a hard fall once and basically recovered and invested long term in Plasma and that has paid off handsomely. Today, it is one desktop that I can say is closest to typical/standard desktop paradigms out of the box while retaining a high degree of flexibility for those who choose to customise it. I have been using KDE on Fedora for a while now and it has been basically solid.

              • SlinkyOnStairs 1 hour ago
                > I think they are not the best example of Open Source development.

                They're not. I'm using them as an example of the "bad" in Open Source development.

                But it's also not so much the individual OS components that are a problem, their interactions are just as fragile and usually subject to neither party taking ownership of the problem.

          • tim-projects 7 hours ago
            For some reference back in Ubuntu 6 days around 2005 I switched. It took me 2 weeks to get X Org to run with my nvidia card at the time. 2 weeks of messing with config files. I only persisted because I was so sick of windows.
        • TeMPOraL 7 hours ago
          > And in the background of this something ominous: Companies can't just pivot back to higher quality after they've destroyed all their inhouse knowledge. (...) They can't move back upmarket after that's done.

          The knowledge isn't the problem. It can be quickly regained, and progress of science and technology often offer new paths to even better quality, which limits the need for recovering details of old process.

          The actual problem is, there is no market to go up to anymore. Once everyone is used to garbage being the only thing on offer, and adjust to cope with it, you cannot compete on quality anymore. Customers won't be able to tell whether you're honest, or just trying to charge suckers for the same garbage with a nicer finish, like every other brand that promises quality. It would take years of effort and low sales to convince the customers to start believing you're the real deal, which (as beancounters will happily tell you) you cannot afford. And even if you could, how are you going to convince people you're not going to start cutting corners again a few years down the line? In fact, how do you convince yourself? If it happened once, if it keeps happening everywhere around across all economy, it's bound to happen to your business too.

          • pocksuppet 3 hours ago
            Wrong on the first point, right on the second. Institutional knowledge can't be easily regained. To build up the knowledge to, say, make a transistor, you need a bunch of people experimenting with a bunch of things. Published scientific papers and patents will get you part of the way there, but the final stretch is still up to you, including things like which equipment to buy, purity of supplies (and where to get them!), how long the chip needs to be bombarded by each kind of particles, how much air the cleanroom needs to move. All the tiny details. You have to discover them by trial and error. Actual chip manufacturing companies have found themselves unable to get good yield until they copied the floor plan of another working fabrication plant, and they still have no idea why that mattered, but that's an extreme case. Maybe nobody expected miniscule air contamination from one process step was affecting another nearby process step, and in the original plan they were farther apart.

            Yes if you want to wire a neighborhood for internet you can skip DSL and go straight to fiber. That's not the problem. The problem is that nobody in your company knows how deep to put the fiber to minimize problems, how much redundancy is needed, how strong the mechanical armor around the fiber needs to be, how many fibers per cable to meet future capacity needs without excessive costs, which landlords are friendly to you, nobody has the right connections to city hall to get digging permits approved expediently, and so on.

          • applied_heat 4 hours ago
            10 year warranty on appliances instead of 1 would show the manufacturer was serious about quality !
          • SlinkyOnStairs 3 hours ago
            > It can be quickly regained

            I'm not sure what you mean with this?

            Sure, hypothetically e.g. any western car manufacturer could poach a bunch of BYD employees. But it's not really practical for most businesses.

            > The actual problem is, there is no market to go up to anymore.

            This is the "Market for Lemons" problem, yes.

            It's less of a problem than you might think. Convincing the entire wider world that you're legitimate is a problem. One made infinitely worse by store marketplaces like Amazon preferring to push "aqekj;bgrsabhghwjbgawrjwsraG" brand garbage.

            So you just don't. The trick is to start small. The smallest you can sustain. (This doesn't work for cars, or anything that's sufficiently complex. You won't be taking on Salesforce.)

            But so long as you can find a market niche where there's demand for quality, you can carve out a living, and from there, scale up.

            The problem with that is twofold: Venture Capital has supplanted other forms of investment and "small business generating single digit millions in revenue" is utterly unappealing to VCs, even though the investment required is downsized accordingly.

            And problem #2: The cost of starting a business is too high right now. Real estate and cost of living just make it unaffordable to even try. + Healthcare if you're in the US.

          • auggierose 4 hours ago
            So, is that a market failure, or is that the market functioning as intended?
            • rawgabbit 2 hours ago
              Market failure. Customers cannot differentiate between quality and crap due to no fault of their own.
        • GorbachevyChase 6 hours ago
          I think you’re not blaming political leadership enough. NAFTA, and other programs were always going to lead to the state of affairs we have now. This was a choice. Blaming greed is like blaming gravity.
          • gzread 5 hours ago
            Are you saying we need more tariffs to have quality?
        • the_af 5 hours ago
          > Desktop Linux has always been kind of a joke. It hasn't gotten better, the problems are all still there.

          Desktop Linux mostly works these days. It does everything most regular people would want of it, with zero fuss. Including playing games. In some respects, it's easier to use than Mac or Windows.

          When it has trouble with some things, one must remember neither Mac nor Windows is perfect, and they can be extremely frustrating at times.

          Time to update those prejudices!

        • esseph 9 hours ago
          > E.g. Desktop Linux has always been kind of a joke

          And yet I run it every day, and it's by FAR the most enjoyable platform and tooling to use (for me).

      • ahussain 3 hours ago
        Engineers seem to think business people don’t know what they are doing, but if your post were true, then companies would add slack to outperform their competitors.

        The broken system likely doesn’t have enough business impact to justify the investment to maintain it.

        • array_key_first 14 minutes ago
          It's a measurement problem, which engineers also fall prey to, perhaps even more.

          It's the danger of data driven decision making. Cutting people and resources right now gets you a measurable gain. Not cutting them gets you a gain tomorrow.

          But, that gain is unmeasurable! Because in order to measure it you would need to know what happens in an alternate universe where you cut those people. So, if you're only making data driven decisions, you would cut the people 100% of the time.

          But that's why companies aren't run by algorithms, they're run by people. The algorithm would run the company into the ground.

        • ozim 3 hours ago
          Adding slack works over years.

          Cutting slack gets you quarterly bonuses.

          When you plan working 3-5 years in a single company you don’t care if it crashes and burns month after you leave just to burn down next one.

          Conversely we see the same dynamic with engineers, they build stuff to prop up their CV and don't care if company still supports crap they did after they leave.

        • idle_zealot 2 hours ago
          > companies would add slack to outperform their competitors.

          I think if they did this they'd get buried by the market. Your slack is someone else's opportunity to undercut you. It's a systemic problem, it's in every individual's self interest to work towards instability.

        • wredcoll 3 hours ago
          This would be true if everyone was optimizing for the same thing.

          It's not terribly difficult to imagine someone optimizing for, say, a bonus at the end of the year.

      • WalterBright 4 hours ago
        > optimised immediate profitability over everything else

        Which is the usual complaint that businesses are focused on short term results, sacrificing long term results.

        If that would be generally true, the stock market would be going down steeply, not up, as stock prices are based on expectations of future profits.

        • mikem170 4 hours ago
          Are stock market profit expectations mostly long term? Stock markets have been wrong before.

          Besides that, the U.S. stock market went up over several decades while manufacturing capabilities were transferred overseas. That has had, and will continue to have, domestic ramifications that might not be captured by investor profits.

          • WalterBright 3 hours ago
            The stock market has been going up for a couple hundred years. When do you expect the pervasive short term thinking will crater the economy?
        • wredcoll 3 hours ago
          > as stock prices are based on expectations of future profits.

          I thought stock prices were based one what I thought I could sell it for next week.

          • WalterBright 3 hours ago
            If you could sell it for more next week, that means the next week buyers are expecting higher profits.
            • wredcoll 3 hours ago
              No, it just means they're expecting someone else to buy it from them at a higher price. See also: bitcoin.
              • WalterBright 3 hours ago
                Bitcoin is not a company selling a product.
      • pembrook 4 hours ago
        > Without people who have actually worked with the system, you end up with a loss of tacit knowledge—and eventually, declining productivity.

        > You are spot on w.r.t every assertion you've made.

        Huh? What happened to the concept of "debate" on HN. It's just a bunch of people agreeing with each other. Yet the data doesn't support any of OP's thesis.

        Here's a chart of the rise in productivity per hour worked in the United States since 1947. It's a steady linear increase every single year: https://fred.stlouisfed.org/series/OPHNFB

        Yours is the type of story big company workers tell themselves to feel important while refusing to learn anything new and never taking any risks. But the truth is 99.999% of companies are not doing anything that unique or complex. Most companies are not ASML.

        If I had a nickel for every time I've heard someone justify their do-nothing position within a giant bureaucracy while saying the phrase "institutional knowledge" I'd be rich. This is just a sign of a poorly run giant company full of engineers building esoteric and overly complex in-house solutions to already-solved problems as job security.

        The truth is all of this "institutional knowledge" is worthless in the face of disruption, and it has a half life that's getting shorter every day.

        Everybody talks shit about global just-in-time supply chains and specialization...but just because we had a fake toilet paper shortage for a few months during a 100-year global pandemic doesn't mean running things like it's 1947 for the last 70 years would have been better. You enjoy a much higher quality of life today due to these "evil" JIT supply chains which it turns out are far more durable than people want to claim.

        • jdw64 3 hours ago
          US aggregate productivity metrics fail to address this nuance. There is a fundamental difference in abstraction layers between a macro-system becoming more efficient and an individual enterprise experiencing operational failure. As a software engineer, distinguishing between these layers is critical. Your argument is akin to claiming that because the Google Play Store sees a higher volume of app releases (increased productivity), the intrinsic quality of individual apps has naturally improved.

          In this analogy, the individual app represents a company, and the Play Store represents the broader US market. Silicon Valley’s highly liquid labor market allows talent to flow freely, which opens up and elevates the baseline of the overall market. However, that is entirely distinct from the fact that individual companies are suffering severe drops in internal quality and productivity.

          Furthermore, in software architecture, 'productivity' and 'quality' are rarely directly proportional. With AI coding tools, we can ship an app orders of magnitude faster. Historically, it took me three months to write 60,000 lines of code; recently, I am generating that same volume in just two weeks. My productivity has undeniably spiked, but can I confidently claim the code quality is better than when I manually scrutinized every single line?

          The real issue is not whether the broader economy has grown more productive since 1947. The core issue is whether a specific organization bleeds capability when the exact people who understand its real-world constraints, failure modes, and operational history walk out the door.

          Both realities can co-exist: National productivity can trend upwards, while individual companies simultaneously suffer operational regressions due to botched migrations, failed refactors, or the loss of tacit knowledge.

          I agree that 'institutional knowledge' is sometimes weaponized to defend unnecessary complexity. However, the opposite fallacy is treating all localized, domain-specific knowledge as worthless. While some of it is merely job-security folklore, the rest is literally the only surviving documentation of why the system functions in the first place

        • gzread 1 hour ago
          Most measurements measured in dollars are just stealth measurements of inflation. Even inflation adjusted measurements, because official inflation metrics are always lowball numbers with shady methodology.
    • netcan 12 hours ago
      >. In many domains, productivity is already sufficient. What’s being sold is workforce reduction.

      This is a blindspot to many. People working on entrepreneurial projects need to build a lot. They start with nothing. They need (for example) features. There's a lot to do.

      Most firms are not that. Visa, Salesforce, LinkedIn or whatnot. They have a product. They have features. They have been at it for a while. They also have resources. They are very often in a position of finding nails for a "write more software" hammer.

      It's unintuitive because they all have big wishlist and to do lists and and a/b testing system for pouring software into but...

      If there were known "make more software, make more money" opportunities available, they would have already done them.

      Actual growth and new demand needs to come from arenas outside of this. Eg companies that suck at software(either making or acquiring) might be able to get the job done.

      The Problem, bringing this back to the article, is fungibility. A lot of this "human capital" stuff cannot be easily repackaged. It's a "living" thing. Talent and skills pipelines can be cut off, and vanish.

      A danger in Ai coding (and other fields) is that it leverages preexisting human capital and doesn't generate any for later.

      • lazystar 7 hours ago
        > doesn't generate any for later.

        "any" is quite an assumption.

        • netcan 4 hours ago
          I didn't mean this as an absolute statement. Relatively, and in the short term.
      • Terr_ 12 hours ago
        > If there were known "make more software, make more money" opportunities available, they would have already done them.

        Sometimes they're available, but not palatable, when the opportunity could threaten their existing investments or patterns. That might mean "self-cannibalism", or changing the ecology so that the main product niche is threatened.

        Then those opportunities are ignored, or actively worked-against via lobbying, embrace-extend-extinguish, etc.

        • netcan 10 hours ago
          Ok... but this just generalizes into the "known things" type.

          Whether the reason of strategic (like your example), internal politics, insufficient knowledge.... The point is that there is a local equilibrium, and most mature firms are at this equilibrium.

          More resources via Ai, at first order, goes after that diminishing returns part of the curve... which is a cliff especially for highly resourced firms topping the S&P500.

          A lot of Ai-optimist:s " mental model" of the economy do not account for this stuff at all.

          "Save time/money" outcomes are not similar at all to "make more stuff" outcomes. Firing employees does freeze up labour... but reutilizing this labour is non-trivial... as this article demonstrates quite well.

      • lo_zamoyski 1 hour ago
        I agree that any sufficiently complex human operation - whether industrial or scientific or whatever - requires a culture and a living tradition that develops over time and communicates knowledge and understanding across generations. In fact, many problems in our culture can be attributed to a contempt for tradition that developed. (It is true that tradition can ossify. That's can be a problem with attitudes toward tradition rather than tradition itself, or a sign that something needs to be addressed. A good tradition is a dialogue spanning history.)

        However, it is also true that technology develops and produces changes that in the short term cause pain, but in the long term produce a better outcome in some desirable sense. Coding is not an end in itself. Just as switchboard operators and human computers are obsolete, because the conditions that caused the need for them ceased to exist, it may be the case that a certain manual style of programming is also becoming obsolete.

        You can imagine human computers decades ago thinking that computing technology is bad, because people will loose numerical facility. But this misunderstands the structure of the value of practical skills and the difference between knowledge of principles and practical skill. Sure, few if any people today can perform numerical computation as quickly and competently in their heads or on paper as human computers, but...

        1. that's different from understanding the principles of computation which is closer to a theoretical grasp and has eternal or at least lasting value

        2. the value of the practical numerical facility was rooted in the need for obtaining results as quickly as possible, and that particular set of techniques or skills is no longer practical

        Perhaps manual coding is like that. I don't know why people are surprised. Generative programming has always been a desired end in CS for along time. CS grads can still and should still learn the principles of their field and learn them well, but the profile of practical industrial techniques and needed skills is changing. As software eats more and more of the world, it is becoming increasingly impractical for manually fiddling with silly bits of plumbing. We obviously haven't been able to develop abstractions well enough to avoid it, and part of the reason is that appetite comes with eating. Once you make something easier, it makes it easier to achieve even greater things more easily...hence new plumbing and implementation complexity.

        Let's be honest here. Much of programming is intellectually dull. It's is plumbing. It's not algorithmically interesting. It's not interesting from a modeling perspective. It's not interesting conceptually. It's not interesting as a matter of system design. Most programming out in the wild is the same old crap being recapitulated a million times over. If all you want is to become skilled in doing the same thing over and over again, then I can understand why you might find LLMs threatening. Your market value as a maker of yet-another-flask-web-app has plummeted hard. People who enjoy that kind of programming are generally not very intellectually motivated people - at least not where programming is concerned - and likely prefer the tedious comforts of rehearsed ephemeral detail. LLMs can keep us from rabbit holing and focused on the domain.

        In any case, I don't think LLMs are a threat to the field per se. I just think that the skill set is shifting and developing. I think we are still figuring out what it means to develop the right understanding and intuitions to develop software without the benefit of having had done it manually. Time will tell. However, I also think being able to read code has become relatively more important than writing it. When you have to verify the quality of LLM-generated code and put your name behind it, you have to be able to understand it, and that's a somewhat neglected skill in my view. Programming very often prefer to write code than to read it. LLMs might be just the thing to coerce an improvement in the latter sort of literacy. With this also comes a greater importance of formal specification. That's where I would expect the future of the field to shift.

    • d3Xt3r 15 minutes ago
      > The problem is a management pattern

      No, the problem is much more far-reaching than being limited to just corporations - it's a societal problem.

      The article says the west is forgetting how to code, but actually the west is forgetting how to do math, how to draw and edit images, how to make music, how to write, how to read and even how to think.

      And have you interacted with any kids recently? They're doing ALL their homework using ChatGPT. Forget kids, adults are even worse. At least kids can be supervised, but who's supervising the adults? How many of us have enough self-control to not reach out to the convenient AI app in our phones for every little thing?

      This has massive repurcarions for our society as a whole. The bleak future depicted in Idocracy is becoming more and more of a surefire reality.

    • aleph_minus_one 9 hours ago
      > The core problem is that decision-makers—often far removed from actual engineering work — believe that tacit knowledge can be replaced with documentation, tools, and processes. [It] cannot.

      I am not so certain:

      For example, I think that a lot of my knowledge about the system that I work on could be documented, and based on this documentation someone new could take over the system.

      The problem rather is: the volume of documentation that I would have to write would be insane; I'd consider ten thousands of dense DIN A4 pages to be realistic - and this is a rather small system.

      So, a new person who could take over this system would have to cram and understand basically all the details of this documentation insanely well.

      This insane effort (write the documentation; new workers on the project then have to cram and understand every detail of this incredibly bulky documentation) is something that no employer wants to spend money on: this is in my experience the real reason why it isn't done.

      • Joeri 4 hours ago
        The deeper I wade through Microsoft’s Azure documentation the more I feel the reality of this. There’s so much of it that it basically is unreadable in real terms, most employees will never get the time allocated, and when you do try to exhaustively read up on a specific area you find that the documentation is incomplete and wrong in subtle but important ways. I’m sure Microsoft spends a lot of resources on that documentation, but it seems somewhat of a hopeless mission.
      • chanux 8 hours ago
        There are certain things that are too obvious to some person at a given time. Hence they would not consider it's worth documenting. Some of those things are important bits and pieces of the theory[1] of the program.

        [1] https://pages.cs.wisc.edu/~remzi/Naur.pdf

      • ianstormtaylor 5 hours ago
        This is such a weird counter-argument, that only serves to prove OP’s point.

        “It’s not that it’s not documentable. It’s just that it would take tens of thousands of pages and no one would be able to write that or read that to effectively take over the project.”

        Okay, so surely this is what OP had in mind when they said documentation doesn’t work… Is it no longer safe to assume reasonable expectations when making an argument? Why the need to “well actually” them with this response?

      • torginus 6 hours ago
        I think it's an important property of a system to be documentable not just documented. What I mean essentially, is the system was designed with sound principles, and said principles were written down and followed.

        I have seen this work only once in my life, and it was so nice to see, but yeah, most code is just a ball of twine, and even if there was a guiding principle beneath, it has been long abandoned, and overruled, and the only way to understand the system is to take it all in at once.

        • everforward 5 hours ago
          I think it’s reasonably easy to design a system that’s documentable and documented. It’s very, very hard to maintain and iterate on a system while maintaining those properties.

          Hacky things will make their way in because it takes a month to do the documentable thing and a week to ship the hacky thing.

          It takes a lot of skilled people from varying disciplines to figure out what things are going to survive long enough and be important enough to spend the resources doing the right thing instead of the hacks.

          It bites both ways. I’ve seen core business products crippled by years of digital duct tape, but I’ve also seen internal tooling that never really becomes useful because they insist on doing the “correct” thing and it’s constantly a year behind what we need it to do.

      • iugtmkbdfil834 8 hours ago
        << [belief that] knowledge can be replaced with documentation, tools, and processes. [It] cannot. << volume of documentation that I would have to write would be insane

        I am not sure those are mutually exclusive. We all know if situations where a person knows of tiny and typically undocumented system quirks. We even have a corporate name for it: institutional knowledge. The issue is that executives think it can ALL somehow be done, when even cursory real life project lift will quickly teach one how insane average gap between documented and undocumented tends to be. Add to that near constant changes to API, versions, systems, people and I can't help but wonder at executives, who really do think this way.

      • wcfrobert 4 hours ago
        But you've just perfectly described the tacit knowledge problem.

        Yes, you can spend all your time writing docs, or just mentor a junior and let them grok the system through osmosis.

        Also your doc won't ever have 100% coverage unless you write an absolute tome. Tacit knowledge are things that are so obvious that you wouldnt even think of writing it down in the first place.

      • paganel 8 hours ago
        It’s way easier (for this type of scenarios) and far more effective to learn by doing than to learn by reading (even tens of thousands of pages of) documentation, that is the crust of it.
        • aleph_minus_one 8 hours ago
          > It’s way easier (for this type of scenarios) and far more effective to learn by doing than to learn by reading

          I don't think so: the problem is that there exist lots of parts in the system that are quite complicated but which one very rarely has to touch - except in the rare (but happening) case that something deep in such a part goes wrong a for requirement for this part pops up.

          If you "learned by doing" instead of reading, you are suddenly confronted with a very subtle and complicated subsystem.

          In other words: there mostly exist two kinds of tasks:

          - easy, regular adjustments

          - deep changes that require a really good understanding of the system

          • jakub_g 8 hours ago
            I tend to document some tricky non-obvious pieces of knowledge directly above the relevant code. "We have to do X below instead of obvious-first-idea-Y because Z".

            Any time a refactoring comes up which moves code around, AI (or my coworkers) remove those comments without thinking twice, and I need to tell them "hey this is still valid".

          • flyinglizard 8 hours ago
            It's kind of a learning JIT. It's no use to go through and memorize something you don't need in the short term. It's hard to memorize well and by the time you need to draw on the knowledge it's already hazy. This is why you can think of such documentation more as a reference manual and not just plain documentation.

            In any case, AI is great for traversing a codebase and producing at least a draft of such documentation.

        • chrisweekly 8 hours ago
          crust (edge/border) -> crux (heart/essence)
          • paganel 5 hours ago
            Thanks , am on phone with a tiny input. I was most probably also a little hungry.
    • adamddev1 31 minutes ago
      > The problem is a management pattern: removing people and organizational slack because they don’t generate immediate profit, and then expecting the knowledge to still be there when it’s needed.

      Exactly. In direct contrast to this would be how Xerox and Bell funded laboratories just to pursue knowledge, without demands of profit. They ended up creating incredibly profitable things when driven my knowledge, and not profit.

      I also read a book about math where the author argued that while the Greeks were driven to pursue truth for truth's sake, they ended up being far more productive and innovative. The Romans who were more driven to work for solutions to immediate practical needs, ended up being not so productive and innovative. He used this as a defense for efforts in pure math that seems to have no immediate application but ends up being massively, surprisingly powerful and productive for practical applications down the road. I think the same could be said for software development focussed on truth and correctness, rather than immediate productivity.

    • Fr0styMatt88 11 hours ago
      I feel like it’s something more fundamental and broad than that. We slowly remove excuses to talk to other people.

      The thought crossed my mind the other day — if I’m asking the AI a question, that’s replacing a human interaction I would have had with a coworker.

      It’s not just in coding, it’s everything. With ChatGPT always available in your pocket, what social interactions is it replacing?

      The thing that gets me is, we are meant to fundamentally be social creatures, yet we have come to streamline away socialisation any chance we get.

      I’m guilty of this too — I much prefer Doordash to having to call up the restaurant like in the old days, for example.

      • MattJ100 10 hours ago
        We see this in our open-source community. We've had a community channel for over two decades, where community members help newcomers and each other solve problems and answer questions.

        Increasingly we have people join who tell us they've been struggling with a problem "for days". Per routine, we ask for their configuration, and it turns out they've been asking ChatGPT, Claude or some other LLM for assistance and their configuration is a total mess.

        Something about this feels really broken, when a channel full of domain experts are willing to lend a hand (within reason) for free. But instead, people increasingly turn to the machines which are well-known to hallucinate. They just don't think it will hallucinate for them.

        In fact I see this pattern a lot. People use LLMs for stuff within their domain of expertise, or just ask them questions about washing cars, and they laugh at how incompetent and illogical they are. Then, hours later, they will happily query ChatGPT for mortgage advice, or whatever. If they don't have the knowledge to verify it themselves then they seem more willing to believe it is accurate, where in fact they should be even more careful.

        • strange_quark 4 hours ago
          > In fact I see this pattern a lot. People use LLMs for stuff within their domain of expertise, or just ask them questions about washing cars, and they laugh at how incompetent and illogical they are. Then, hours later, they will happily query ChatGPT for mortgage advice, or whatever. If they don't have the knowledge to verify it themselves then they seem more willing to believe it is accurate, where in fact they should be even more careful.

          The AI companies have taken all the wrong lessons from social media and learned how to make their products addictive and sticky.

          I’m a certified hater, but even I’ve fallen into the exact trap you’re describing. Late last year I was in the process of buying a house that had a few known issues with a 30 day close. I had a couple sleepless nights because I had asked ChatGPT or Claude about some peculiar situation and the bots would tell me that I was completely screwed and give me advice to get out of the contract or draft a letter to the seller begging for some concession or more time. Then the next day I’d get a call from the mortgage guy or the attorney or the insurance broker and turns out, the people who actually knew what they were doing fixed my problem in 5 minutes.

          • NateEag 4 hours ago
            So have you stopped using ChatGPT and Claude?
        • ethagnawl 5 hours ago
          This _is_ all true but what's also true is that there's an historical pattern (in many communities) of "n00bs" not being or (at least) _feeling_ welcome. So, I can't say I blame people for spinning in circles with LLMs instead of starting with forums or mailing lists where they may be shamed or have their questions closed immediately as "duplicate" or "off-top" (e.g. SO).

          I think if we want newcomers to lead with human interactions, the onus is on us community leaders/elders/whatever need to be a little warmer, understanding and forgiving. (Of course, some communities and venues are already very good about all of this and I'm generalizing to make the larger point.)

        • torginus 6 hours ago
          I have switched to OpenWRT during the LLM era. I wanted to set up some special network configs, and ChatGPT happily spit out the necessary configs.

          From what little I understood from OpenWRT everything looked fine, but nothing worked. I still to this day have no idea what I (or ChatGPT) did wrong.

          I just reset the router, actually took the time to do everything by the docs, and then it worked.

          Debugging someone's broken code that never worked is a nightmare I wouldn't wish on anyone.

        • 2ndorderthought 10 hours ago
          Personally this type of behavior played a large part in why I left 2 oss communities.

          A lot of the passerbys nowadays feel like trolls. They come in copy pasting chatgpt responses spamming they need help instead of chit chatting asking questions. We fix their problems, they don't trust us or understand at all. Or worse we tell them their situation is unreasonably bad and they should start over, they scream at us about how some unimaginably bad code passes tests and compiles just fine and how we are dumb.

          They tell us we don't need to exist anymore in one way or another. They try to show off terrible code we try to offer real suggestions to improve it, they don't care. Then they leave the community once their vibe/agentic coding leaves that part of their code base. Complete waste of time, they learned nothing, contribute nothing, no fun was had, no ah-hahs, just grimey interactions.

          • skydhash 9 hours ago
            I’m subscribed to a couple of mailing list and follow the archive of a few others. I wonder if the friction associated with the medium is why I haven’t seen those shenanigans?
            • 2ndorderthought 9 hours ago
              I should look into mailing lists. That would be a great filter for the "I need it now at any cost" interactions. Thank you for the indirect advice.
      • notnullorvoid 5 hours ago
        People are losing their ability to reason without prompting an LLM first.

        It's affecting their ability to collaborate. They retain the confidence of years of experience, but their brain isn't going through the appropriate process anymore to check their assumptions.

        I've seen a similar thing happen to engineers who move into management, but this is now happening at such a large scale.

      • musebox35 1 hour ago
        I am rereading the Asimov robot novels. A decrease in human to human interaction is a major side effect that he has foreseen. Decreasing interaction and collaboration are some of the core themes.
      • lxgr 10 hours ago
        > if I’m asking the AI a question, that’s replacing a human interaction I would have had with a coworker.

        Importantly, you're removing a signal: If I'm not asked things anymore, I don't know which aspects of our domain are causing the most confusion/misunderstandings and would as such benefit most from simplifying the boundaries of.

      • 2ndorderthought 10 hours ago
        There is a lot of wisdom in this.

        At the end of the day chatgpt won't be there to hold our hands in the hospital, have a laugh over failing to pick up a date, get invited to a bbq, groan over the state of the code in utils.c, or recommend us for our next job/promotion. They say software is social for a different reason than most of these examples.

        It's good to be efficient, whatever that means, but there are no metrics on the gains that get made by talking to people. In a lot of ways those gains are what life is about.

        • avmich 9 hours ago
          > At the end of the day chatgpt won't be there

          Are you sure it won't?

          • 2ndorderthought 8 hours ago
            Yes. 100%. Chatgpt can't get drunk with you share personal experiences grill food for you or network with humans for you. At some point certain people have to choose to live a life otherwise why have one anyways.
      • gonzalohm 9 hours ago
        I think you are right, but it also makes sense. Human communication is inherently inefficient. Points of view, miscommunication, interpretation... It's the obvious point to automate. Not defending it, just my thoughts
      • gzread 42 minutes ago
        Apps like Doordash have introduced me to many good restaurants which I've then visited in person.
      • hnthrow0287345 6 hours ago
        You could have done this with Google search or Wikipedia or reading through books though
      • croisillon 9 hours ago
        i see what you did there :)
    • throwaw12 9 hours ago
      This shows Western government system is broken.

      In ideal world (where we don't live):

      * Corporation - optimizes for mid-to-short term profits (remove slack, run everything thin)

      * Government - optimizes for long term profits (introduce regulations to keep the slack time, keep and attract the talent so state gets better)

      * Individual - optimizes for their life time (career, family and tries to leverage market conditions to learn skills and get more opportunities from existing pool)

      In the west, government is optimizing for "loads and loads of moooney", because of lobby groups and MBAs controlling the corporations which are pushing these ideas through lobbies

      • sph 8 hours ago
        > In the west, government is optimizing for "loads and loads of moooney"

        More appropriately, government is optimizing for 4 year electoral terms. No one cares about longer timescales necessary to tackle hard problems.

        This is where autocracies like China, or monarchies for example, win over democracies.

        • mancerayder 4 hours ago
          Counter-examples are France and Japan. Democracies, electoral terms. High-speed rail that the world looks up to, investment in infrastructure everywhere. In France you have Grand Paris, a programme to transform the suburbs into denser housing and commercial space, a calculation and planning that INCLUDES public transport.

          And the green initiatives in France. These, transit, Grand Paris, and much more are initiatives that take many years to realize.

          Now let's move over to New Jersey and New York City. The most densely populated state (NJ) has some of the worst transit despite being in the NYC greater metropolitan area. An old tunnel between the two needs to be replaced, but politicians with four year mental horizons canned it until recently (ARC project). Infrastructure is a fight between Federal, two states and a city politically and partially from a funding perspective.

          We could go on, but I just wanted to point out that the United States is a poor example of good governance. And that we don't need to live in a totalitarian nightmare just because we acknowledge the US fails to produce innovation and investment for the public good.

          And let's not talk about debt, as if it is a unique problem to France or anything new.

          • yason 1 hour ago
            Counter-examples are France and Japan. Democracies, electoral terms.

            A democracy doesn't prevent long-term planning if only the electorate appreciates long-term projects. Democracies can build stuff across parliaments if differences between parties aren't so overwhelming so as to the majority of them can agree on developing something even if the relative power of elected parties varies over terms.

            In a lot of countries the major parties agree on many core views and social code because they share a common nation/society, and the political differences happen merely on the edges or linings of the value spectrum. A government of highly polarised parties voted by a highly heterogeneous pool of voters is not ideal for long-term efforts.

            • mancerayder 1 hour ago
              Strong words, but NJ and NY have had Democrats in power continuously for a while. While Christie, a Republican (and a corrupt person) torpedoed the ARC project I mentioned, the other issues seem more related to a lack of centralization.

              Something like public transit that powers a region that's a sizable fraction of US GPD should not be a state affair in the first place. What Europeans have that Americans lack are perma-technocrats with a long term vision that are entrenched in power.

        • hermitcrab 4 hours ago
          >This is where autocracies like China, or monarchies for example, win over democracies.

          Autocracies like China, are able to plan longer term. But, because they don't regularly change their leadership like a democracy, the leaders become old, tired, schlerotic and surrounded by 'yes men'. Hence "Democracy is the worst form of government, except for all the others.".

        • markus_zhang 6 hours ago
          I think that has something to do with the prerequisites of democracy.

          I believe one important factor for a democracy to work properly, is to have a large number of citizens who 1) can stand up and push back when they feel something is wrong, and 2) is sufficiently knowledgeable. We don’t have that anymore. Of course I’m also to be blamed for that.

          • mjevans 6 hours ago
            Democracy requires informed thoughtful voters to function.

            Public education was supposed to deliver that. This is a dream that has failed in the US.

            Possibly the most lacking tools are Critical Thinking (not directly taught as a subject AFAIK) and some class with a focus on how government(s) work. The latter was an elective I took in high school (not a core requirement, it should be).

            At least when I was in college it helped to have critical thinking skills, but was not a basics (100 level) course. Political studies might be a different degree, but again not a core course. I find that ironic since everyone has to interact with government regulations and vote.

        • throwaw12 7 hours ago
          Western democracy is very interesting.

          Corporations promote people to Principal or distinguished engineer only when they prove their worth by running long running large scale projects.

          But when it comes to governing the whole country: lobby, marketing and boom, you are a president for next 4 years, which is anyway not enough to deliver anything big and see the impact. (Except the destruction, destruction is easy to cause)

        • hrimfaxi 7 hours ago
          I wonder what longer cycles with easier recall methods might yield.
          • torginus 6 hours ago
            I dunno if cycle length is the key here, the Soviets and the Chinese went with five-year plans, and done properly, it seems like thats a long enough amount of time to accomplish very important things.

            WW2 took slightly less than 6 years, when we count it from the invasion of Poland to the fall of Nazi Germany.

            The moon landings took little less than 7 years, so I don't think we are terribly off by the timeframe.

            Considering the world's been getting faster (just think about how different the US was before Trump took power a bit more than a year ago), I think 4 years is fine.

        • fullshark 7 hours ago
          It's also where autocracies fail spectacularly and lead to decades of misery for their citizens.
        • kibwen 6 hours ago
          > This is where autocracies like China, or monarchies for example, win over democracies.

          This is the wrong characterization, and in fact it's where monarchies lost out to democracies. Without an organized system of replacement in response to poor performance, autocracies with a poor leader are stuck with that poor leader for life. Ask North Korea how that's going. The upside is that if you have a brilliant leader, then you also get the benefit of that brilliant leader for life. The variance in an autocracy is absolutely huge, and that's their weakness in the long term. Democracies take the edge off, and are intentionally designed to have both less upside and less downside, trading performance for stability. Xi Jinping looks good comparatively because we have gormless losers like Trump and Biden to compare to him to, but he makes plenty of his own mistakes as well (the whole Taiwan situation is a unforced error driven by his own ego, similar to Putin with Ukraine), and we've seen historically what China looks like when it's stuck with a shit leader for decades (Great Leap Forward, anyone?).

        • techpression 7 hours ago
          I think of the four year cycle as one year to whine about the previous (if different) government you took over from, two years of governing and the last as a ”get ready for election”. So in the most optimal scenario you get three ”peaceful” years. It’s very few things that can be done well in three years at ”ruling a country”-scale.
      • markus_zhang 6 hours ago
        I always think that’s the failure of citizens, not just the officials. Eventually history is going to blame us for not taking action, not pushing back, and pretty much sleep tightly when things fall apart around us.
    • throwaw12 10 hours ago
      > The problem is a management pattern .... Short-term cost cutting

      Absolutely agree with this. Most MBAs are taught to optimize and reduce the slack.

      It works fine with machinery and materials, but not with humans.

      When machinery is optimized and run thin, when one of them breaks, you can get exact same in couple days (you usually prepare for it earlier), but with humans, they train their brain and next person is different from the first person.

      Humans also break in different ways:

      * They stop caring - you wouldn't notice it immediately, they will close tickets, but give bare minimum thought

      * Communal brain will not be trained when there is not enough room for experiments and learning - which reduces the innovation eventually

      This is exactly the reason it is difficult for US companies to compete with Chinese companies in manufacturing, because their communal brain have already trained and produced very good talent.

      Next is the knowledge, more you outsource, more you lose it

      • gnz11 6 hours ago
        Perhaps US companies should invest more in their employees then? Advancement, promotions beyond %1-3% COLAs, career paths, etc would go along way to keep employees interested in seeing their employers succeed instead of jumping ship every couple of years. The would require some effort from the C-suite however and since they jump ship every few years as well, I don't see that changing anytime soon.
        • mancerayder 4 hours ago
          Unfortunately the Wall Street accountants who run our companies don't mind if you jump ship after your 2% 'reward' raise. Because when someone new comes and costs 10 % more plus recruiting costs, that latter person has 'proven' their worth in the market, similar to when a house goes up in value due to scarcity.

          If you were to explain the costs of knowledge lost, of training, of taking a risk on a new unknown person, of relationships, there's no answer because it doesn't show up in any operating expense worksheet.

          What you're supposed to do is find another job, and explain that you love this job so much, but the other offer is really good, can they come up close to it and you'll stay. Repeat this every few years or find a new job and move to it.

        • throwaw12 6 hours ago
          Invest in employees is very broad statement.

          Before investing to employees I think it should revisit management practices and strategies, which starts in MBA and university.

          Instead of teaching how to increase shareholder value in the short term, it should also teach how to increase value to the society in the long term as well (and focus on it highly) - not just say: if you win society wins kind of generic fluff.

          Without changing management strategies everything becomes short term after a while

          • ranger_danger 5 hours ago
            > teaching how to increase shareholder value in the short term

            Problem is this is quite often a requirement for a public company, to maximize company value/profits above all else.

            Also execs who are more right-wing are typically not interested in helping the larger society in general.

            • Kamq 2 hours ago
              This is BS. A shareholder lawsuit against the CEO/board/executives for investing in the employees in the hope of long term profits would never succeed. The idea of a fiduciary duty doesn't mean that. It means the CEO can't take actions that intentionally hurt the company.

              And there are very few large public companies with active enough investors to oust a CEO over this, and even fewer that have both active and activist investors that would be interested in such a thing.

              • ranger_danger 3 minutes ago
                There can be other valid perspectives than your own, especially when one makes sweeping black-and-white generalizations without any evidence.

                > the CEO can't take actions that intentionally hurt the company

                And who gets to define what "hurt" means?

    • osigurdson 3 hours ago
      Agree. There is so much focus on "let's do the same thing we are doing now with fewer people". It is very boring and uninspired. How about "let's do something that we couldn't do before", instead?
    • samiv 12 hours ago
      Why would anyone have a sight longer than a quarter? I mean how does long term thinking help the execs get their compensation this quarter? Sheesh..worst case scenario is that the work done now will benefit someone else when they've already left.

      Also when companies grow big enough "business" becomes the main business of the company. By that I mean everything unrelated to the actual original domain, such as playing in the financial markets, doing stock buybacks, lobbying, cheating etc. When your CEO is an MBA and your real market is Wall Street any actual product RD and support is a real annoying cost that just cuts into the profits and thus into the exec compensation.

      • baq 11 hours ago
        > Why would anyone have a sight longer than a quarter? I mean how does long term thinking help the execs get their compensation this quarter?

        Vesting schedules, conditional grants, contractual equity ownership requirements

        • cucumber3732842 8 hours ago
          >Vesting schedules, conditional grants, contractual equity ownership requirements

          In those filthy low margin industries that HN loves to regulated across the oceans out of sight out of mind capital investments have service lives measured in decades.

      • derf_ 11 hours ago
        > ...any actual product RD and support is a real annoying cost that just cuts into the profits...

        Worse, it might not generate a return. If you have enough profits, you just buy anyone who successfully produced something innovative. Let them take the risks. As Cisco used to say, "Silicon Valley is our R&D lab."

        It is a very difficult mindset to argue against.

      • BoingBoomTschak 10 hours ago
        Would be interesting to get a law that says that all positions supposed to take long-term decision should be paid with X% of their salary in (non-redeemable until Y years?) stocks.
    • bsenftner 9 hours ago
      That 'real issue' is the lack of formal effective communications training across the board in the United States, and probably all of Western Culture.

      The Problem is wider than management, it is understanding the extended ramifications of action, understanding the larger systems one is a member and then identifying with them, protecting them, because you and all your peers understand their extended foundational need.

      That type of critical analysis and secondary considerations tacit knowledge is developed through effective communications training, which is an entire perspective, a way of seeing the world. This can be gained by reading a wide diversity of literature, of the Nobel Literature quality; the reason being such literature is first person accounts of institutions crushing individuals, and individuals finding the power within themselves to defeat the institutions. That personal transformation is practically a Nobel Trope, but it teaches the reader how to have such insight and perseverance. Read a half dozen or more such novels, and you are materially a different person. A better, deeper considering person with a longer perspective horizon. We need this civilization wide.

    • cjfd 12 hours ago
      This sounds all true to me, but I think there is more. It is not just decisions by management, it is also the wider economic context. Low interest rates and, for the US, having the world reserve currency as your own currency both seem to make many of these changes attractive or even inevitable. Low interest rates lead to 'innovation' which I put in scare quotes because besides real innovation it can also mean something that passes as innovation but in the end just turns out to be a bubble of stuff that was not valuable enough. The 'innovation' then crowds out investments in more boring sectors like manufacturing. This is also not good for the population in general because fewer jobs are left for people who are not suited for working in highly 'innovative' sectors.
    • Lio 11 hours ago
      There’s even a management tutorial game which demonstrates the dangers of removing too much slack from systems.

      It’s called The Beer Game[1].

      One of the funny things about it is even people that have played and discussed it before _still_ make the same fundamental mistakes next time.

      Short-termism is the death of companies.

      https://en.wikipedia.org/wiki/Beer_distribution_game

      • dragontamer 10 hours ago
        Wut?

        The point of the beer game is that buffering in the supply chain makes the bullwhip effect worse.

        • wry_durian 9 hours ago
          If "winning" the beer game means not overreacting to short-term signals, then you can view that as a form of slack. You're sometimes paying a bit extra to hold onto something that you have no immediate short-term use for.
          • dragontamer 3 hours ago
            You are supposed to play the beer game twice. Once where you use the obvious slack and buffering.

            And a 2nd time where you minimize buffering and use just in time inventory management.

            People who have played the game knows which one outperforms. By significant margins.

        • avmich 9 hours ago
          I'm not sure it's the same kind of buffering. I would assume the "winning" strategy for the case when the known final demand is fixed is to maintain fixed the upstream orders, and buffer outcome, and for non-fixed final demand is to model that demand as good as possible and keep upstream orders accordingly to maintain outcome matching the demand model. Large penalties for buffering may make this approach not working, I guess...
    • notnullorvoid 5 hours ago
      It's much more than a management problem, experienced software engineers are actively opting into apathy and atrophy of their craft.

      I see many peers getting worse in their abilities. It's especially disheartening to see people I admired for their problem solving devolve into someone who delegates more and more of their reasoning to LLMs. It really negatively affects working with them. If you have a concern or criticism of "their" approach to a problem they either dismiss it off hand as invalid, or they go discuss it with their LLM of choice making themselves a bottleneck to collaboration.

      As the article suggests I suspect we're in for a real dark age of software as companies struggle to know who to keep, if they can even trust that those who have vital knowledge and skill today will retain it going forward.

    • Aurornis 3 hours ago
      > The core problem is that decision-makers—often far removed from actual engineering work— believe that tacit knowledge can be replaced with documentation, tools, and processes.ti cannot.

      You need some experienced people around, but companies that rely on institutional knowledge to get everything done have always been doomed to fail.

      Even before AI, turnover was a real thing. People churn jobs a lot in tech even when the pay is good. They get bored and jump companies, leave to join their friends' startup, or move to another city.

      Every company I've worked for that operated on a belief that institutional knowledge was king and documentation and processes couldn't replace it eventually had to face the music when key employees left. Ironically this problem was at its worst at a company that compensated very well, because those key employees would often realize they had enough money to retire early or go take some risky startup job instead of sticking around to be the insitutional knowledge base.

    • AdamN 1 hour ago
      I'm seeing that in Big Tech now - there's no room anymore and it's super short term thinking (couched in the language of long term thinking). It's a really dangerous game for an incumbent to fritter away the ability to innovate because there is only enough capacity to focus on the here and now.
    • fsloth 8 hours ago
      "McKinsey comes to town".

      Basically same shaped taylorism-derived industrial management has imposed itself as the "default dogma" in private and public administration.

    • Zigurd 7 hours ago
      This behavior is strongly incentivized by the fact that recruitment and on boarding and training costs don't show up in the quarter, or maybe even the fiscal year, where layoffs are made. You can also hide a bit of age and wage discrimination in layoffs and intentionally dumb down your organization to goose up the quarter a bit more.

      Quarterly financial reporting is an obvious target for a rethink. Managers get instantaneous readings from dashboards, but they also like the room for shenanigans that quarterly reporting to shareholders enables. It's going to be hard to get management to give up information asymmetry.

    • thegrim33 5 hours ago
      The way the system is supposed to work is that companies that make bad decisions fail, and provide room for companies that do not make bad decisions to appear or grow bigger. Which works as long as you have an environment with fair competition where people are free to start and grow companies without running into entrenched interests or undue hardship.
      • RataNova 4 hours ago
        That's true in theory, but I think the hard part is timing
    • qwertywert_ 4 hours ago
      It's interesting because i find when I'm less busy/stressed at work is when I spend more time motivated, doing better work, and fixing issues that otherwise would get left behind.
    • giantg2 6 hours ago
      The real issue isn't the management pattern. The real issue is outsourcing. Offshore The manufacturing and coding and you wont have the facilities and personnel to do that type of labor anymore. Management has a and in that, but the people and the officials they elect have an even bigger impact (regulation vs invisible hand and all that).
    • WalterBright 4 hours ago
      On the other hand, we have government operations that spend staggering amounts of money, and accomplish nothing at all. This one even has lost $8 million and nobody knows what happened to the money:

      https://www.seattletimes.com/seattle-news/politics/does-noth...

    • lolive 8 hours ago
      I came to comment EXACTLY about this issue. Management lives in a world where they have absolutely no expertise on what they are supposed to manage. So they try to objectify their decisions, with generic KPIs based on efficiency or cost or whatever. And miss MANY additional decision axis very focused on WHAT they are supposed to build. That is a MASSIVE issue, in my opinion.
    • ezoe 2 hours ago
      What you describe had been happened already when programming task became using search engines, passing data between libraries, and delegating coding to off-shore workers.
    • stingraycharles 13 hours ago
      Seems to me that - optimistically - this would shift the job of a software engineer into a more formal engineering role, and that the actual implementation is done by AI. In the same way in other areas, engineering and implementation differ and implementation can be (and is) automated.

      No idea how this should take form, though, and if it’s even realistic. But it seems like due to AI, formal specs and all kinds of “old school” techniques are having a renaissance while we figure out how to distribute load between people and AI.

      • ted_dunning 12 hours ago
        That sounds right, but it can be superbly wrong because that presupposes that you can debug what the AI gets very confidently wrong.

        There are three legs to the stool: specification, implementation, and verification. Implementation and verification both take low-level knowledge and sophisticated knowledge of how things break.

        • adrian_b 11 hours ago
          Indeed, even if were possible for someone to create any program most of the time just by directing a team of AI agents, when something does not work one needs the ability to zoom in through the abstraction levels and understand exactly the program that is executed, so only knowing to generate prompts becomes insufficient.

          This is the same with compilers. Most of the time a programmer needs to know only the high-level language that is used for writing the program. Nevertheless, when there is a subtle bug or just the desired performance cannot be reached, a programmer who also understands the machine language of the processor has a great advantage by being able to solve the bug or the performance problem, which without such knowledge would be solved in much more time or never.

          • SleepyMyroslav 10 hours ago
            I don't think compilers are a good example. The economics of software development has won a long time ago. For example in Gamedev with well known soft real-time requirements people (mostly) stopped doing that machine code dance many hardware generations ago. Like it happened with memory optimizations: people measure memory in GB now not in KB =)

            I am sure programmers cherish every case when they can do micro optimization but in the retrospect the high level cuts is what made the system fit the perf or memory budget.

            • atq2119 6 hours ago
              Gamedev dev is a good example actually. True, handwritten assembly has gone out of style. But knowing how caches work, and how to lay out data to improve performance is important. And stuff like vector intrinsics also gets used.
          • don_esteban 9 hours ago
            1) luckily, nowadays compiler's bugs surface very rarely, as the average programmer does not have capability to solve such issues

            2) unfortunately, LLM's, by their very nature (not having a model of what they do, are prone to introducing subtle bugs, i.e. it is like programming in high-level language whose compiler likes to wing it

      • torginus 6 hours ago
        Personally my experience has been that once I manage to describe a problem in good enough detail that a junior engineer would be able to solve it, it's good enough for an LLM as well.

        Which creates incentives I'm not wholly comfortable with, but the fact is that I'm more productive now alone, than I used to be in a team.

        • batshit_beaver 5 hours ago
          My experience is that if I manage to describe a problem in enough detail for a junior or LLM to be able to solve it, it would have been faster to do it myself.

          Prior to LLMs the idea was to involve juniors in the engineering process to give them an opportunity to learn rather than necessarily to improve the team's immediate productivity. Some companies famously (and consciously) refused to hire juniors to avoid the performance hit even prior to genAI (eg Netflix).

          Involving LLMs in our engineering processes has very suspect implications for both productivity and quality of our output, since unlike juniors the LLMs don't even learn.

      • cucumber3732842 8 hours ago
        > this would shift the job of a software engineer into a more formal engineering role

        If only you knew how the civil engineering sausage was made.

        The amount of yolo'ing stuff based on vibes goes up when testing is expensive/impractical. They just paper over it all with disclaimers of the sort that would get laughed at for being non-starters in the software industry.

    • layer8 8 hours ago
      > But documentation is not the same as field experience.

      Even if it were, creating good documentation or assessing its quality requires experience in using good and bad documentation. And how would juniors build up that experience if they are using AI for everything.

    • zelphirkalt 12 hours ago
      And the next level of this is, that even companies that realize this, mostly go ahead acting like this anyway, because they think someone else can train the juniors. Some other company will appear to do that, but nimby! Over time the lack of good judgement will lead to a decline in their products' quality, which will be difficult to recover from.
    • RataNova 4 hours ago
      AI is almost a distraction from the older pattern: management discovers a way to make the spreadsheet look better this quarter and the hidden cost only shows up years later
    • aleqs 4 hours ago
      I think the problem is even more general than that, and has existed since before LLMs. All of the decision makers are incentivized to chase short term gains and ignore everything else. Many tech companies already had huge gaps in knowledge around their own codebases simply because such knowledge and expertise is basically treated as a liability/expense rather than an asset.

      I'm actually very optimistic about LLMs/AI for basically the opposite reason tech leadership/MBAs are - I think it will allow us to overcome organizational/business/marketing ing hurdles that tech companies rely on short-sighted MBA-style 'leadership' for in the first place. And not because I believe in OpenAI and Anthropic - I think the future is self-hosted or community -hosted open models, and open collaboration among willing peers, building open software to solve real problems in honest ways, rather than hierarchical top-down corporate hellholes pumping out pre-enshitified crapware full of ads, tracking and dark patterns.

    • Cthulhu_ 4 hours ago
      There's economic / capitalist pressure to reduce cost / increase revenue and optimize for short-term profits; that's on the corporate side, anyway.

      But applying the military hardware stuff to software is IMO a bit of a leap; I get the similarities, but where demand for software hasn't slowed down at all, demand for military hardware and ammunition just wasn't there.

      The alternative would have been to keep all the factories alive, maintained, staff employed (or training staff ready to onboard rapidly hired staff when capacity had to go up), supplied stockpiled (and rotated), etc. And who would be willing to pay for that?

      In times of peace the voter wouldn't want the government to spend billions on the military if it wasn't necessary... except for the US which still spends billions a year on the military even in peacetime. But not on their production facilities it seems.

    • pelorat 12 hours ago
      In the case of the military I'd say the real reason is political. After the fall of the Berlin wall, Europe collectively agreed (knowingly or not) that war is now a thing of the past and the goal should be the complete dismantling of militaries worldwide, starting with Europe. Lead by example, etc.
      • rini17 11 hours ago
        It's subtler than that. Europe was just constantly reminded by its big brother not to duplicate NATO structures, which are dependent on the US.
        • don_esteban 9 hours ago
          This.

          Plus, of course, each European country has to support their own defense industry, so each one of them needs to have their own howitzer/tank/whatever and they can't agree on common approach that would actually allow for the economy of scale.

      • brabel 11 hours ago
        They agreed that war was a thing of the past, but still continued to push for NATO to allow new members anyway, ironically causing Russia (and China and everyone who is NOT in NATO) to suspect that war was NOT a thing of the past and therefore never quite abandoning their military completely. Unpopular opinion: the West should either NEVER have abandoned its military production (so as to maintain NATO actual preparedness for war, given that's the only reason for its existence) OR it should just have dismantled NATO and announced to the world that it strongly believes war is a thing of the past, and that other countries are advised to follow suit. But we actually chose the easy, halfway path: keep NATO, keep our militaries "looking strong" (which gives the signal our rivals should also do the same, obviously), but not actually be ready for any sort of major war and as the article points out, even lose actual capacity to become ready for war within any realistic timeframe. The worst possible outcome :(.
        • avmich 9 hours ago
          It could be matching theory for outcome though. The unpopular opinion may still be wrong too. Russia was quite different in 1999, or better in 1992, to the point of joining NATO, and China was nowhere the threat of today, and it could be different reasons- not keeping NATO - which caused today's standup. So, basically, the situation seem to be more complex.
        • LtWorf 10 hours ago
          USA had no part in that push?
          • 0xDEAFBEAD 9 hours ago
            NATO expansion was pretty controversial in the US

            https://time.com/archive/6731121/how-clinton-decided-on-nato...

            • LtWorf 2 hours ago
              Clinton was the president of the USA, I think you're proving I was correct. I never stated each and every USA citizen was pushing for it.
          • bluGill 10 hours ago
            Perhaps but the US was pushing NATO to invest more in war for years suggesting they didn't believe war was in the past
            • don_esteban 9 hours ago
              Correction: The US was pushing NATO to invest more in US gear.
              • bluGill 8 hours ago
                That was the hope but nothing stopped the EU from making their own stuff.
                • mitjam 3 hours ago
                  This is not true, since the 1990 the us was strongly opposed on the eu building or relying more on own defense industry and more closely aligned defence policy, even threatening end of NATO. Lookup the "three Ds" articulated later by Secretary of State Madeleine Albright: No Duplication of NATO assets, No Decoupling of European security from the US, and No Discrimination against NATO members who were not in the EU.
                • don_esteban 8 hours ago
                  Nothing, only the gentle hand of US influence over their internal affairs, and the typical nationalistic bickering among themselves.
                  • bluGill 7 hours ago
                    Those are impediments for sure, but they are not blockers, and if they had the will, they would have overcome that. Most, probably all NATO countries do have their own military industrial complex of some sort and the US is buying from them. Although, it certainly is the case that the US is the largest supplier of military equipment and so, yes, the US would benefit most from efforts to increase military spending.
                    • don_esteban 7 hours ago
                      I agree, the will was never there.

                      We will see whether the new realities of the war in Ukraine and Trump's approach to Europe will substantially change that.

                      The whole economic and societal tensions in the west make things tougher...

            • LtWorf 10 hours ago
              That's because they have more to gain from that.
    • adam_patarino 9 hours ago
      Most workforce reductions are using AI as a cover up for greedy short term bonuses.

      Any exec using AI to pay fewer people lacks imagination.

    • cultofmetatron 7 hours ago
      china is run by engineers, America is run by bankers.

      the consequences are significant

    • surgical_fire 10 hours ago
      > But documentation is not the same as field experience. Automation is not the same as judgment. Without people who have actually worked with the system, you end up with a loss of tacit knowledge—and eventually, declining productivity.

      This tracks the experience throughout my carreer, in all sorts of companies. From established body-shop consulting, to minor early-stage startup, to FAANG, and everything in between.

      Essentially everywhere I worked, you would benefit to switch jobs. Companies would at times do quite an effort to hire you, but wouldn't try anything to keep you around.

      This always sounded bonkers to me, but as I directly benefited with a rapidly increasing salary when I job-hopped, my response was a vague shrug. "Those who care don't know and those who know don't care".

      The thing is, in every place, you typically is at your least useful when you just joined. It takes months, sometimes years, to learn the intricacies of the business, the knowledge that informs your skills so you can make better decisions, better designs, better implementation, better initiatives.

      This is, of course, just one facet of a larger trend of how things are typically mismanaged. The article brushes on it when it talks about how governments in the US and Europe had to scramble to get 50-year old manufacturing going anywhere.

      This is why I laugh whenever I hear someone talking about "governments should be administered like a business". Bitch, businesses are typically mismanaged due to terrible incentive loops, institutional blindness and corporate rot. That anything seemingly works is more a result of inertia and conformity than a sign that things are well managed.

    • reaperducer 5 hours ago
      Modern managers: You have to be in the office for synergy and the serendipitous exchanges with coworkers that lead to innovation.

      Also modern managers: You were in the bathroom for six minutes. I'm docking your pay.

    • dismalaf 3 hours ago
      The real problem is that the west is fractured and sees itself only as individuals, corporations and nations.

      China sees itself as a civilization. Russia does too, ish (look up Dugin's Eurasian Empire).

      The West has a hard time believing anyone wants to destroy it, so doesn't take the threat seriously. Meanwhile other civilizations are working to both destroy the west and ensure their own place in the future.

      So we were OK outsourcing our production and knowledge for a quick buck and it's coming back to haunt us.

      Even now, we still don't see ourselves as a civilization, actively work to undermine ourselves and help our enemies who openly want to destroy us, and are barely doing anything to defend ourselves. We seem to have also given up on the idea of a "democratic world", which was in vogue when I was growing up (Bush 2 years).

      As for the thesis of this article, the positive is that code and knowledge, because of the fact preserving it is basically free, is still there. AI hasn't been good enough to displace it. And our technological advantage is still pretty wide and our military industrial complex is, for better or worse, coming back.

    • gzread 5 hours ago
      I'm not sure if the tweet was a joke, but some companies are apparently hiring junior developers back because it's cheaper than AI.
    • DrBazza 12 hours ago
      > The problem is a management pattern: removing people and organizational slack because they don’t generate immediate profit, and then expecting the knowledge to still be there when it’s needed.

      It's always seemed to me that the problem is corporate profit and personal profit above all. 'Management' is a subset of this, and so is pretty much everything else, including the current drive for AI.

      It's the Western, perhaps American, approach to business and emphasived by MBAs and the media. Lowering costs, driving share price, dividends and corporate profit.

      This race over the few decades has hollowed out most Western companies.

      Listen to any entrepreneur podcast, or read any website, and it's all about 'how quickly can I get to exit', i.e. personal profit.

      Capitalism is the worst form of economic system, apart from all the rest.

      • 2ndorderthought 10 hours ago
        I have worked for companies in different countries.

        I think the striking thing is how US companies tend to have no idea how to be wealthy. Record profits, so the ceos use all of their tricks to get rich quick? They are already rich! Don't fix what isn't broken. Not every company needs to expand into 10 new markets, or have 5% lay offs or double in revenue. Some of this is investor pressure, but often it's not. Some guy who made it to the top is bored, doesn't feel like he is obviously doing enough, so he keeps making decisions to justify his position.

        This isn't to insight flames but the European companies I worked for knew how to be wealthy! The market took a down turn from COVID, they ate the cost to keep their people. Some flashy new vertical is trending. They decided it's not for them, they have a brand and customers that they should focus on while everyone else works out the kinks. The company decides, why go public at all, we are successful and don't need anyone else's influence over us.

        People say "you cannot project beyond 1 quarter". This is true in terms of catastrophe or gambler success. But its not true, if you act in q1 like there will be a q2 or even 5 years from now or heaven forbid a second or third generation you make different moves. You value different things.

    • api 6 hours ago
      Most of what you describe here is overfitting:

      https://sohl-dickstein.github.io/2022/11/06/strong-Goodhart....

    • gmerc 11 hours ago
    • dominotw 7 hours ago
      > The core problem is that decision-makers—often far removed from actual engineering work— believe that tacit knowledge can be replaced with documentation, tools, and processes.ti cannot.

      my promotion packet at work always included how great of a document-er i am

    • jmyeet 5 hours ago
      This is all basic economics.

      Companies can grow organically or through strategy and adding new verticals to a point. Eventually they're too large for that. They own the whole market, they can't get regulatory approval for acquisitions and so on. At this point they only really grow at the same rate the industry or the economy does.

      At this point (or often long before), the only way to increase profits is to raise prices and/or reduce costs. Profits tend to decline over time so there is constant pressure to reduce costs to statisfy the insatiable need for increasing profits.

      This is the real product AI is selling: cutting wages. It's a combination of displacing workers (which, thus far, hasn't been all that successful). Where it is successful is to have the threat of layoffs hanging over your workers, getting them to do extra unpaid work for the same wages and making sure they can't ask for raises.

      That's what's paying for all this AI investment.

      So I agree with you: the real problem isn't AI. It's capitalism.

    • yapyap 11 hours ago
      > The real issue, in my view, is not AI itself

      in shootings technically the guns are not the issue since they dont fire on their own.. they do enable the ability to shoot though

      • 2ndorderthought 9 hours ago
        Only in 2026 is AI the answer to everything and when the negative traits of our behaviour are amplified due to AI it clearly has nothing to do with AI even when the article exactly about that.
    • palmotea 13 hours ago
      > The problem is a management pattern: removing people and organizational slack because they don’t generate immediate profit, and then expecting the knowledge to still be there when it’s needed.

      I think that's still a symptom. The real problem is ideology: the monomaniacal focus on profit-making business, which infects our political leaders, down to capitalists and business leaders, down to the indoctrinated rank-and-file. Towards the end of the cold war, the last constraint on it were abolished, the the victory over the Soviet Union made it unquestioned.

      The Chinese don't have that ideological problem. Their government appears to not give a shit about how much profit individual business make, they care about building out supply chains and a capabilities. They will bury the West, so long as the West remains in the thrall of libertarian business ideology.

      • AnthonyMouse 12 hours ago
        The US is stuck in this weird irony where they recognize that Soviet-style central planning is a disaster but can't recognize that it's what megacorps do when they're insulated from competition. Internal politics, perverse incentives and a system that can sustain massive inefficiencies right up until the point that it doesn't.

        In general productive economic activity generates a surplus and that surplus allows for slack. Human beings intuitively understand this. Hobbies are frequently de facto training for things that aren't currently happening but might later. Family-owned and operated businesses are much less likely to try to outsource their core competency for the sake of quarterly profits.

        But regulatory capture and market consolidation causes the surplus to go to the corporate bureaucracies capturing the regulators instead of human beings with self-determination and goals other than number go up, and then the system optimizes for capturing the government rather than satisfying the people. "When you legislate buying and selling the first things to be bought and sold are the legislators." You throw away the competitive market and subject yourselves to the unaccountable bureaucracy, and then try to pretend it's not the same thing because this time the central planners are wearing business suits.

        • torginus 6 hours ago
          I wonder if it would work if top US companies implemented a system like the NFL draft, where companies competing for top engineers out of college get to pick from the best engineers inversely proportionally based on how they did before financially.

          While it sounds counter intuitive, it maintains a good distribution of talent across the industry.

          But that system would only work if healthy competition was the goal, not moneymaking.

          • AnthonyMouse 41 minutes ago
            The thing that sustains these companies isn't having the best engineers.

            Suppose someone new made a better mobile OS than Android. What would happen?

            Google has convinced a lot of third party apps to use their anti-competitive attestation system that has no real security value but makes it so those apps won't run on a competing operating system even if it implements all of the same APIs, so it immediately has a major barrier to gaining traction. That should be an antitrust violation.

            Then the incumbents would copy any innovative features the new OS has so it no longer has an advantage. On paper this is what the patent system is supposed to prevent, but in practice is does the opposite. If you as a practicing entity tried to sue a major incumbent for patent infringement, they would counter-sue and have an arsenal of thousands of patents that could keep you bogged down in litigation for years. Just the cost of litigation could bankrupt a smaller company even if they won, and there is enough ambiguity in the system that they're pretty likely to lose when the incumbent only has to find one patent out of thousands they were unintentionally infringing, or a court is willing to enforce one that ought not to have been granted. So then everyone has to file a bunch of patents for the purposes of mutually-assured destruction and can't really enforce them, which favors larger companies that can afford the overhead. It certainly doesn't protect the little guy and we would be better off without them.

            Your suggestion is also essentially unworkable. NFL teams all have the same number of players and play the same game. Amazon and TSMC each have a very different business and largely aren't competing for the same people. Does Garmin get the same number of picks as Foxconn even though they don't employ nearly as many people? How many picks does a startup get? Do we count Apple as doing poorly because the gross margins on hardware are lower than they are on pure licensing, or Nvidia as doing poorly because they have higher margins but lower revenue? How are we accounting for engineers in other countries? What about engineers in China who work for companies in China who are contractors for international companies? What if a corporate contractor has more than one corporate customer? How do we account for open source? How do we distinguish engineers from IT staff or research mathematicians or prevent companies from giving willing workers a job description that doesn't match the work they're doing?

            It's also probably worth pointing out that systems like that are essentially price fixing schemes by the industry to pay workers less than they would otherwise be able to get if other companies weren't prohibited from outbidding them for talent.

        • NordStreamYacht 12 hours ago
          > megacorps do when they're insulated from competition. Internal politics, perverse incentives and a system that can sustain massive inefficiencies right up until the point that it doesn't.

          You just described Lucent.

          • AnthonyMouse 11 hours ago
            That's the end stage. The bigger problem is the companies rotting from the inside even though they're still alive, because they use their resources to suppress your alternatives to them while they're slowly dying on top of you.
        • TheOtherHobbes 11 hours ago
          Yes - ultimately it's the same system. Far from being daring and innovatory, it's backward-looking, unimaginative, and bureaucratic.

          Vision for the future is limited to grandiose fantasies straight out of 1950s pulps and the "heroic" creation of narcissistic corporations that are cynically extractive and treat employees and customers with equal contempt.

          The differences which used to provide a convincing cover story - no single Great Leader, a functional consumer economy, votes that appear to make a difference - are being dismantled now.

          What's left are the same mechanisms of total monitoring (updated with modern tech) and reality-denying totalitarian oppression, run for the exclusive benefit of a tiny oligarchy which self-selects the very worst people in the system.

        • adrian_b 11 hours ago
          Yes, many Americans and other Westerners believe that the so-called "socialist" economies, like those of the Soviet Union and of Eastern Europe were non-capitalist.

          This is only an illusion created by the fact that the communists were careful to rename all important things, to fool the weaker minds that the renamed things are something else than what they really are.

          In reality, the "socialist" economies were more capitalist than the capitalist economies of USA and Western Europe. They behaved exactly like the final stage of capitalism, where monopolies control every market and there is no longer any competition.

          Unfortunately, after a huge sequence of mergers and acquisitions started in the late nineties of the last century, the economies of USA and of the EU states resemble more and more every year the former socialist economies, instead of resembling the US and W. European economies of a few decades ago.

          • AnthonyMouse 11 hours ago
            Everyone wants to tag the evil with their opposition's name. The evil is concentration of power. But no one wants to call it that because then they can't pretend that it's something different when they're doing it themselves.

            Witness the people who keep proposing to solve market consolidation with higher taxes. Higher taxes go to the government, and therefore the interests that have captured the government. Are we going to solve it by taking money from Warren Buffet and giving it to Larry Ellison? Do we benefit from increased funding for Palantir? No, you have to break up the consolidated markets through some combination of antitrust enforcement and peeling back the regulatory capture that prevents new competitors from entering the market.

            • anon7725 9 hours ago
              > Higher taxes go to the government, and therefore the interests that have captured the government.

              There is at least a chance for it to be redistributed, unlike private wealth.

              • AnthonyMouse 3 hours ago
                Let's have a quick look at the federal budget. The big ticket items are social security, medicare, net interest and military/VA. Together those are more than half the budget.

                Social security is the biggest of them. Older people have more wealth than younger people on net and social security is structured to make higher payments to people who made more money when they were younger, which is significantly correlated with having more wealth right now. So it's a massive transfer payment system that transfers money from the poor to the rich. Meanwhile it uses its own special tax which is significantly more regressive than the ordinary income tax and doesn't tax corporate income at all. Notice in particular that we could instead be solving "grandma doesn't starve" with a UBI that makes uniform payments to everyone and not disproportionate payments to the rich, and comes from a tax which is also paid by corporations.

                Net interest is a naked transfer to people with enough capital to invest in government bonds.

                Most of the military and VA budgets go to government contractors who work hard to sustain an uncompetitive bidding process with thick margins.

                Medicare uses the same bad tax as social security and those dollars go to the healthcare industry which has thoroughly captured the government. The AMA lobbies to limit the number of medical residency slots and sustain a doctor shortage and healthcare corporations have established a thicket of laws to limit competition, impair price transparency and promote over-consumption.

                That's where the majority of the government budget goes, and the remaining minority of the money is also going in significant part to government contractors and regulatory capture industries. The government takes tax money from the middle class and gives it to the rich and huge corporations.

                We don't need any more "redistribution" like that. If you think you can get the government to stop doing that and instead give the money the poor and middle class then first prove you can do it with the existing money before even thinking about collecting more. You have a nutrient deficiency because you're infested with tapeworms, not because you don't have enough food.

            • esseph 9 hours ago
              I'd argue we need both massive antitrust, and higher taxes on the wealthy to prevent them from amassing the power to prevent the antitrust.
              • don_esteban 9 hours ago
                And change in laws regarding the legalized corruption (Citizens United, ...). And fight for real freedom of speech.

                This is very complex problem that needs to be tackled from all sides simultaneously, the entrenched interests are already well setup to defend themselves.

                • AnthonyMouse 2 hours ago
                  Citizens United was a pretty pro-speech decision and is unfairly maligned, and "money is speech" predates it by quite a few years. The real problem is when huge corporations control the flow of information.

                  Which is a bigger problem, that corporations can pay for political ads, or that one corporation has 90% search market share? That there are political ads on Facebook or Twitter, or that those corporations control what's in the feed of hundreds of millions of people because use of their algorithm is tied to the network effect instead of having a federated system like RSS or email?

              • AnthonyMouse 3 hours ago
                The things amassing power to prevent antitrust are corporations, not individuals. It does nothing to make Bill Gates sell shares in Microsoft to pay taxes when the corporation stays the same size. If anything it makes it worse because then more corporations are controlled by Wall St rather than founders and they're significantly more inclined to turn the screws to juice short-term profits.
              • atq2119 5 hours ago
                Plus a systematic way of keeping the Gini coefficient of wealth small in a sustainable way. I'm a fan of establishing sovereign wealth funds whose dividends are paid out equally per capita for this purpose.
                • AnthonyMouse 2 hours ago
                  A sovereign wealth fund has the government deciding what to invest in, which is both a magnet for corruption and a good way to get below-market returns through mismanagement. It also requires an extremely oppressive build-up period where the government is collecting money in taxes to seed the fund instead of providing services to the population, which is why the countries that have one are basically all countries that net export huge amounts of oil, and China which exports everything else.

                  Meanwhile you don't need the government to use tax dollars to buy stocks in specific companies. If you want a UBI then use VAT. Then it comes from every company instead of having government bureaucrats choose which ones, and gets paid out immediately instead of needing a generation of build-up.

          • dzonga 6 hours ago
            wow!! straight to the dome.

            thought about this too - but not as expressively as you put it.

            e.g in China - for early stage ventures - there's cut throat competition - then as Thiel would put it with heavy competition profits trend towards 0 - by then the tech is perfected or close to perfect - then the state uses its funds to back a monopoly. that's how you get a BYD.

          • samiv 10 hours ago
            And to complete the reversal what is now referred to as the "golden age of capitalism" i.e the post WW2 USA was actually very socialist. Strong social movement and unions and social spending that created a wealth working/middle class with a bunch of spending power.

            Inequality society producea inequal economy (and vice versa) which is the economy of any developing country. Few rich,. miniscule middle class and lots of poor people in slums snd poverty.

      • fxtentacle 12 hours ago
        West: We need profits and then we’ll try to build something useful.

        China: We need to build this useful thing and then later let’s try to make profits, too.

      • andy_ppp 9 hours ago
        What do you think the war in the gulf is about, the US cannot compete with China so they are destroying the global system that enabled them. There is no plan to have a peace with Iran, only perpetual war and the destruction of the middle east, starvation in East Asia and poverty and nationalist wars in Europe, potentially with Russia taking over vast swathes of Eastern Europe again. Suddenly Russia is the one in charge of the China-Russia relationship. It's such a stupid plan for the US that you might think it was designed by Putin himself.
        • don_esteban 8 hours ago
          You started well, but then the train got derailed...

          Russia has no need for Eastern Europe (they have enough land and resources, why saddle yourself with hostile population?), as long as the said Easter Europe is not threatening them with NATO bases/missiles (US has repeatedly shown that they do not hesitate to use their muscle if they think they can get away with it, so Russia's paranoia is not entirely unfounded).

          Even if Russia somehow took over Eastern Europe (most likely way: they learn from US how to do soft 'regime change'), they have no chance against China (China is just so much bigger and better organized; the population's mentality also matters a lot). China and Russia are rather complementary, there is not reason for confrontation between them.

          But you are correct, what US is doing is really totally stupid ... although it seems designed by Netanyahu, not Putin.

          • Supermancho 4 hours ago
            > Russia has no need for Eastern Europe

            They sure do like to sell to them.

            > Easter Europe is not threatening them with NATO bases/missiles

            This never made much sense. Attacking a NATO buffer pre-emptively, brining your forces out and closer to existing NATO weapons, is basically putting you in the same situation with less resources. The issue is not about weapons "threatening". ICBMs can reach anywhere and smaller munitions from local seaboards (subs). This idea that NATO is somehow threatening by proximity is not credible. The answer to it would not be to rush headlong into a conflict to bring those forces to bear and bring your border to theirs anyway.

            It looks more like the Ukraine conflict has been about securing resources, testing capabilities, and demographics (tied to capabilities). Russia wanted more resources to sell to partners and wanted to test the (declining) capability of it's own forces.

            • don_esteban 3 hours ago
              You are applying western thinking (acquiring captive markets, NATO is a force of good, surely not threatening) to Russia. Big fail, they think differently.

              It is obviously clear that Ukraine is not about securing resources: Given the costs of war (Russia knew the sanctions will be coming, just did not think their funds will be frozen), the cost-benefit is simply not there. Given the obvious economic drawbacks of attacking Ukraine, the only explanation that makes sense is the national security one. You go to war to 'test capabilities' only if it is a minor thing without serious consequences, which Ukraine war definitively does not fit.

          • andy_ppp 8 hours ago
            If China cannot get oil from the middle east what happens to China and China-Russia relations? I didn't say there would be hostilities just Russia would become potentially the more dominant partner.

            If NATO expansion is the reason for the war in Ukraine (not imperialism) then why has the war not stopped now we know Ukraine will never join NATO?

            • don_esteban 8 hours ago
              1) Russia will happily supply China with oil and other resources, and China will pay by industrial good and all other stuff they produce. China is working really hard on getting rid of dependence on foreign energy sources, any leverage Russia might get if it became the sole supplier of oil/gas to China is very temporary and Russia knows it. Furthermore, unlike USA, it has no delusion of ever dominating China - China already has them by the balls.

              2) mostly face saving, but also: Ukraine will remain openly hostile, NATO or not, planning to have hostile (EU) forces on its territory as 'security guarantor'. Russians still believe Ukraine will collapse (those men will eventually run out/economy will collapse/EU will not send its children to die on the eastern front) and they will be able to have a friendly (or at least truly neutral) government there. Russia's paranoia about the west is really strong, well founded and well documented.

              • andy_ppp 7 hours ago
                You seem to be extremely fond of Russian propaganda.
                • don_esteban 7 hours ago
                  That's the easy way out, isn't it? Why argue on merit of anything you don't like, just name it Russian propaganda.

                  Or, perchance, you want to provide a concrete argument why are my statements incorrect? (No, 'it fits Russian narrative' is not argument about correctness, it is an argument about the narrative.)

                  • andy_ppp 6 hours ago
                    I think this is the wrong place to debate politics tbh, better luck next time.
              • mopsi 7 hours ago

                  > Russia's paranoia about the west is really strong, well founded and well documented.
                
                It's an act, and everyone in Russia knows that it's an act. Acting this way gets the dumber kind of Western politicians to carefully tiptoe around Russia; that is the value this act provides.
                • don_esteban 6 hours ago
                  There are many western authoritative sources documenting that.

                  Have a look at William Burn's 'Nyet means nyet' depeshe. Or Merkel's memoirs. Or George Kennan's statement's in the 90's on the wisdom of expanding NATO.

                  But, ultimately, one believes what he/she wants to believe....

                  Do you think it is better to not carefully tiptoe around Russia? Do you consider full-on sanctions, total refusal (except Trump) to diplomatically engage them, open intelligence, military and financial support of Ukraine 'carefully tiptoing'?What do you propose instead? Open WW3? I am really curious.

                  • mopsi 5 hours ago
                    You listed joke sources. Merkel, in particular, has been utterly discredited for her naivety toward Russia. Her sucking on Russian gas left Germany lagging in the transition to renewables and EVs, and the German economy is now paying a double price by also having to bear a part of the economic burden of the war.

                    As to Russia, virtually no-one in Russian academic foreign policy circles, nor in the influential semi-formal circles of imperialists and neo-nazis, nor anywhere inbetween, is paranoid about the US, NATO and the West in general. What is there to be paranoid about? They see the West in general as utterly impotent, making big words, but not backing it up with a stick. This week one year ago, Trump wrote "Vladimir, STOP!" in response to a massive air attack on Kyiv. Putin didn't, and what followed? A bunch of nothing.

                    The answer to your question about tiptoeing is abundantly clear to anyone familiar enough with Russian culture to know what zek and kagebeshnik mean and how to deal with them. Politely asking them to stop has never worked. The idea that you have to talk with people in the language they understand is hardly a novel one.

                    • don_esteban 4 hours ago
                      Sigh, joke sources. Burns and Kennan also, right? Anybody who actually understood Russians is a joke. Study a bit, and not only neocon think tank sources, but from the people who actually understood Russians (there are practically none left in recent administrations).

                      Russians are paranoid, among other things, about nuclear decapitation strikes. For the same reasons, they have repeatedly explicitly strongly opposed missile sites in Poland and Romania.

                      I am really curious, what do you think the west should have done? Bomb Russians directly? I mean, what else is left?

    • vrganj 9 hours ago
      The problem, in other words, is quarterly earnings in specific and shareholder capitalism in general.
    • aidenn0 2 hours ago
      [dead]
    • brrraaah 13 hours ago
      [flagged]
      • stingraycharles 13 hours ago
        You can really reduce almost any problem to a “it’s a problem because of people”, so that adds very little to a discussion.
        • brrraaah 13 hours ago
          [flagged]
          • stingraycharles 12 hours ago
            A claim that fits every possible observation equally well isn’t an explanation. What does it help you predict, when everything falls under that label? How does it help you predict behavior of different institutions?
            • brrraaah 12 hours ago
              Given a sufficiently long timeline all predictions breakdown (see prior comment on attenuation and entropy and Nostradamus)

              And on shorter timescales you aren't really predicting anything of consequence. You're just assuring all that effort trying to predict Apple's next move (for example) keeps Apple itself alive in the public debate whether they do the thing or not; they'll have missteps but our 24/7 fetishizing of what they'll do next, overall, just distracts us from our own lives and boosting the lives of the mega rich

              You really don't seem to have a grasp of how gamified and propagandized you are

              • jrflowers 12 hours ago
                > just distracts us from our own lives and boosting the lives of the mega rich

                So you’re saying we are being distracted from boosting the lives of the mega rich, which we should get back to doing

              • stingraycharles 12 hours ago
                [dead]
          • teiferer 12 hours ago
            If you find discussion forums pointless then why are you participating in one?
    • Aaargh20318 11 hours ago
      > What AI is being sold as right now is not really productivity. In many domains, productivity is already sufficient. What’s being sold is workforce reduction.

      And workforce reduction is a nobel goal. In fact, I think it's one of the most important things humanity should focus on. We should strive for a workforce of zero. Humans currently was an enormous amount of their life working instead of more worthwhile pursuits.

      I despise the rhetoric around this, we didn't "lose jobs" over AI, we saved ourselves a lot of work. What it does do is highlight a problem in our current society: the link between labour and the access to resources (e.g. money).

      I don't think that AI is the ultimate answer to the problem of work, but it can contribute to it.

      • only-one1701 10 hours ago
        The time to solve that resource problem is before AI concentrates power, not after. It’s LESS likely to happen when a tiny elite increases their already huge amount of power.
      • esseph 9 hours ago
        Jobless people normally can't feed themselves in a modern world.

        And uh, healthcare. Among other things.

        • Aaargh20318 3 hours ago
          Exactly, so that problem needs to be solved. Labour and income should not be linked.
          • esseph 1 hour ago
            I think you could go at least another 10,000 years and that still wouldn't be "solved".
    • sgt 11 hours ago
      You sound convincing, but it also reads very AI generated. A lot of people will stop reading half way.
      • christofosho 2 hours ago
        ... Yeah, that was my issue as well. I'm pretty sure it _is_ AI generated, and flagged it. Hopefully moderation can follow up.
        • sgt 2 hours ago
          The worst part is that several of the replies are probably AI generated too. So AI feeds AI, essentially. Is this going end forums like HN? I hope not.
    • HeavyStorm 9 hours ago
      You're absolutely right. And the root cause is simple: the stock market / shareholders. The incentive is for quarterly returns, not long term. That's why CEOs look for that - that's the job they are assigned by shareholders and the board. For a shareholder what matter is the stock going up. Heck, you can make money even if it goes down, but you can't if it stands still.
      • lnsru 9 hours ago
        No. It’s pure greed dominating the world. My employer is owned by bigger private company and the shitshow is the same as in big megacorporation. There are hordes of colleagues to stab one for 100€ more salary a month. Disgusting.

        The company is manufacturing special computers. The initial owner/founder ordered CPU modules and memory cards always looking at the price break. His question was always „how many to buy to get best price?“. So he ordered sometimes 200-300 parts more than needed immediately. Then the follow up order came and he emptied the storage. Now new manager always orders EXACT amount memory cards as ordered computers. Price is secondary thing, most important thing to work without warehouse and get things delivered just in time. What doesn’t work at all for the while already. The high prices buying small quantities is eating up the profit, so people are getting fired to save costs. It is pure greed dominating western world. Everything is done to look accounting nicely at every cost, get whole bonus despite ruining the company long term. I see this pattern recently very often.

  • liendolucas 11 hours ago
    I still code daily without any coding assistance mostly because I believe this is the way to not forget how things are done, even trivial things.

    My main point against using AI is that I do not want to depend basically on anything when I'm in front of the screen (obviously not including, documentation, books, SO and alike).

    I closely see people that are 100% dependent on AI for literally everything, even the most trivial daily tasks and I find that truly scarly because it means that brain effort drops drammatically to a minimum level. To be stolen mental effort is not a minor thing.

    Giving away that at least for me means to become a dependent zombie. Knowledge comes basically from manual trial/error almost daily.

    Technology being technology if anything has shown us that we can be pushed and manipulated in every single conceivable way. And in my opinion depending on AI is the ultimate way for companies to penetrate and manipulate a very delicate ability of a human being: to think and wonder about things.

    • andai 4 hours ago
      Recently, after a month of heavily AI assisted programming, I spent a few days programming the good old fashioned way.

      I spent most of the time confused and frustrated, straining painfully against the problem. I spent most of my 7 hour session this way, and the task was successfully completed.

      But I was startled by the difficulty. I began to worry that I had given myself some kind of brainrot from disuse. Then I remembered, my goodness, it always felt that way, if I was ever doing something new. That's just what it feels like, grappling with a problem you haven't seen before.

      It was always as hard as that, I was just no longer used to the feeling. You get used to the difficulty, and then it feels normal.

      Or indeed: you get used to its absence, and then it suddenly feels overwhelming and "wrong" !

      I think maintaining the capacity to tolerate difficulty and discomfort is a "muscle" well worth preserving.

      • MengerSponge 3 hours ago
        I'm biased, but I think you're on to something. People are writing about this under the broad framing of "cognitive surrender"
    • afarah1 5 hours ago
      I've had the "problem" of forgetting syntax before any AI, with IDE autocomplete. It was only ever a problem when switching jobs and being expected to write syntactically correct code on platforms without syntax checks or autocomplete. So I did some exercises on such platforms in preparation for interviews.

      In the real world, reliance on syntax autocomplete and checks was never an issue. The important thing has always been understanding the core concepts of the language and the runtime, e.g. how the event loop works with Node.js and how to write asynchronous and event driven programs.

    • coffeefirst 26 minutes ago
      I really don't understand the people who use it for everything.

      It's become my first stop for search because it's doing it in bulk—read 50 results and lead me to something useful.

      But I just got Claude MCP connected to my personal email/calendar/etc and I can't figure out what to do with it. It wrote a summary of my inbox that took as long to read as flipping through my inbox. And since it makes no sense to delegate decision making, I'm not sure what the actual work I'm supposed to give it would be.

    • whywhywhywhy 7 hours ago
      I'm the opposite I don't think I've read a single line of code I've shipped in over 6 months.

      I'd say it's far more tiring working that way though, you're breaking the satisfaction loop so you never really get the dopamine you used to get coding by hand, when you had a problem figuring it out was like solving a puzzle and you feel satisfaction at the end of it. With AI it feels most of my day is spent being a QA than a puzzle solver and its exhausting and even when it solves difficult problems for me the LLM slot machine is far less satisfying than if I'd figured it out myself.

      • cableshaft 7 hours ago
        Agree with you for my day job (which is coding corporate web app), for sure. I'm still letting A.I. drive more nowadays, but it does feel less fulfilling than it used to.

        But for my personal projects, I work on games, and by offloading a lot of the coding work to A.I., my puzzle solving is no longer 'how to fix this stupid library spitting stupid errors at me' or 'how to get this shader working' or 'why is this upgrade breaking all the things' and more 'what does this game need in order to be fun and good?', which I find a lot more fulfilling.

        It's also why I switched my focus to board game design for the longest time. I didn't have to fight my tools or learn some new api or library frequently. And if I wanted to try a new mechanic, I didn't need to spend 20 minutes or 2 hours or 2 days implementing it, I could write something on an index card in five seconds and shift mid-game most of the time.

        A.I. just brought video games closer to that experience, which actually has made them more fun to work on again, because board games has the immense (financial/logistical if self-publishing or social/networking if attempting to get published through a publisher) challenge of getting physical games published to worry about.

      • tim-projects 7 hours ago
        I find this interesting as someone who does primarily devops, my satisfaction has increased with ai. Since for me the code isn't the puzzle but an annoying inconvenience in the way of completing the entire system. For me QA is a big part of solving the puzzle.
        • app134 6 hours ago
          DevOps is a huge part of my job as a systems engineer and I too have found increased satisfaction with AI.

          I think the reason (for me, at least) is that my markers of success were always perched precariously atop a mountain of systems that I had varying levels of understanding of anyway. Seeing a pipeline "doing the thing" is satisfying regardless of how I sorted it out.

      • confiq 7 hours ago
        why I agree with both of you?
      • Forgeties79 6 hours ago
        >I'm the opposite I don't think I've read a single line of code I've shipped in over 6 months.

        This feels unfair to the people dealing with your (LLM’s) code. You don’t vet it at all? Or am I reading this wrong?

        • WhatsTheBigIdea 6 hours ago
          What does "fair" have to do with anything? This is exactly the issue the author is writing about. Take the easy way, reap the profits, then someone suffer the obviously predictable consequences at some point in the unforeseeable future... likely not you! "Fair" is not relevant.

          The original author points to the consolidation of military suppliers as a major issue, but the truth is that the economies of the western world have been massively dependent on this sort of consolidation and outsourcing for a large portion of the "growth" that they have achieved for a generation.

          It would be convenient to think that the real question is "how do we climb back out of this hole?" but I feel the more pressing question is actually, "when and why will we start trying?"

          The profit motive simply does not drive society in this direction.

          The crises are catastrophic and perhaps even existential, but they are not profitable. You have to be a really lucky market timer to bet on crisis and win.

          Avoiding crisis over the longer term is simply not investable.

          "Fair" is not a relevant or useful conception in this context.

          • Forgeties79 5 hours ago
            > What does "fair" have to do with anything?

            Not wasting other people’s time when they expect your work to at least pass a cursory check. It’s selfish and disrespectful. It reflects poorly on you. I don’t know about all that other stuff you wrote but it’s not really what I’m talking about so I’ll clarify.

            I don’t know what your high school/college was like, but we used to trade papers for editing. It was universally considered bad practice to send rough/first drafts. It’s disrespectful and wastes the time of people who are being generous with it for you. You’re offloading your work in a selfish way.

            Simply put: If I want an LLM’s raw results, I’ll prompt it myself. Why are you involved if I don’t want your work? Your expertise? Want to use an LLM then go for it but don’t just wipe its muddy boots on my work. At least look at the results.

            Unfortunately, this is becoming even more common with LLM’s. I have no problem confronting people about it because 100% of the time they don’t want it done to them. It’s not even an argument, it’s catching them being selfish and they know it.

            • cramsession 3 hours ago
              Are the people paying your paycheck being fair to you? Are the executives of your company paid orders of magnitude more than you are? Fairness starts from there. Your job is to be as unexploited as possible. I hope my coworkers also have this goal.
              • Forgeties79 3 hours ago
                What does my relationship with the c-suite/my work have to do with a colleague dumping their unedited chatgpt crap on to me? I legitimately do not understand what point you’re trying to make. There seems to be a lot of assumptions here and I’m not sure what they are.

                Sending your unedited LLM outputs to me is not sticking it to the execs. If you really want to play that game, you go ahead and ship that or hand it to someone who deals with the final output. That’s your prerogative and you can face the consequences. I am not here to clean up your AI slop. That’s not my job. At that point you are the problem, not the c-suite.

                All I hear from AI evangelists is “it’s a tool! It’s not the problem! It’s people using it wrong!” Ok, then the people using it are the problem if something is wrong. So if you act this way, which is clearly not a productive use of the tool, you are the problem.

                Edit: let me just ask you a somewhat multi-faceted question. If you ask me for a summary of something and I simply hand you what ChatGPT gave me, would you say “thanks” and be satisfied? Is that what you wanted me to do? Is there a reason you asked me to do it instead of prompting ChatGPT yourself?

                What if I did this every time I had to write anything? Every email. Every summary. Every report. Just prompt, copy, paste, send to you.

                • cramsession 1 hour ago
                  > If you ask me for a summary of something and I simply hand you what ChatGPT gave me, would you say “thanks” and be satisfied?

                  Yes. Again my job is to stay unexploited. Saying yes is the easiest option. I'll leave the worrying to the people making an order of magnitude more money than me.

                  • Forgeties79 1 hour ago
                    It seems you are either very unhappy at your job or just anti-work, that’s fine you do you/sorry if your work sucks, but there is a huge gradient between “completely not caring and doing the bare minimum to collect a paycheck” and “sacrificing everything for a company that does not care about me.” Many of us fall in that gradient. We do decent work and clock out when we’re done.

                    If you want to phone it in or act your wage or whatever go ahead but don’t make it my problem. You’re not sticking it to your employer. You’re actively making your workplace worse for everyone else. Your decisions impact others.

                    This is like working in the service industry and simply not doing your job. Management doesn’t suffer and they’ll just fire you. The people you work with have to do your job for you. What have you actually accomplished?

                    • cramsession 1 hour ago
                      First of all, I don't agree with your implication that AI produced code is bad. It's as good as the developer prompting it in my experience. Secondly, yes I'm anti-work. Capitalism does not allow for what you are desiring. Capitalism is configured such that capital is seeking maximum return for minimal costs (my pay). I am incentivized to do the opposite. Wealth inequality is a multiplier on how hard I'm going to try to achieve my goal.
                      • Forgeties79 1 hour ago
                        > First of all, I don't agree with your implication that AI produced code is bad.

                        Never said that. I said generating code with an LLM then not looking at it at all and pushing it (which is what started this whole comment thread) is a selfish and lazy decision.

                        Not everyone prescribes to a strict anti-work stance. Most people don’t in fact. So we’re at quite an impasse and it doesn’t change the fact that your decisions become your colleagues’ problems and does nothing to deconstruct/fight capitalism. I feel sorry for anyone who works with you if this is not an internet routine and reflects how you actually operate.

                        • cramsession 1 hour ago
                          And I don't agree with that. Who cares how the code got generated? Like I said most code that's written by AI is better than code written by humans.
            • coffeefirst 3 hours ago
              And this is the AI ethos: anti-contentiousness.

              Is it correct? Is it any good? Should I subject another person to this? Is it profoundly rude to not even read their email and just have a robot respond automatically?

              The slopmonger does not engage with the question at all, because they never cared.

              • Forgeties79 2 hours ago
                They also lack any imagination because clearly they didn’t think about how they would feel if they were on the receiving end.
        • leptons 2 hours ago
          What makes you think the people dealing with the LLMs' code won't also be using LLMs to "deal with it"?

          We're all now basically junior coders who have no idea what is in the codebase. Without LLMs, we won't be able to "deal" with any of it.

          And I don't like it one bit.

          • Forgeties79 2 hours ago
            Because you can’t assume everyone else is as indifferent about wasting people’s time as you are. Some of us don’t want to actively make our colleagues/customers miserable. That decision forces me to decide if I will be a part of the problem even if I generally do good work I can stand behind. You’re forcing me into a decision making process purely out of your desire to not do the bare minimum when working. That’s not right.

            I also may be staring at consequences you are not. It’s passing the buck with no regard for who is left to deal with the results at the end.

            What if we are working on, say, accessibility tasks? If I see your work won’t actually help those in society who seriously need these features, what am I supposed to do? My kneejerk is 1) fix it (more work for me, selfish on your part), 2) kick it back to your lazy hands that clearly doesn’t see this as an issue, or 3) send it up the chain where someone else has to ask these questions or - worse - it gets shipped and people who need this stuff are screwed. This is basic ethics.

        • drzaiusx11 6 hours ago
          Another tragedy of the commons tbh
    • sahilagarwal 10 hours ago
      I generally don't have as much time (or patience / fucks) anymore in my day. So, I use AI 3 days a week. On the other two days, I don't use assistants to code, just ask them to review my work after its done.

      Helps me keep sane tbh. And keeps the edge sharp.

      • wreath 6 hours ago
        At work we are literally forced to use AI and it’s part of our performance review. Even though I really like coding by hand, I have to now use AI so I can keep my job. I will try this out though, 2 days per week using AI and the rest handcoding, enough to stave off the inevitable lay off perhaps.
        • irishcoffee 5 hours ago
          Surely it can’t be hard to token max at work the same fucking way people have games Jira metrics for years and years.

          If I’m ever in that position (everything I work on it air-gapped, it’ll never happen) I would make it a priority to figure out how to game that bullshit metric so I could get on with solving actual problems.

          I imagine a lot of people do this. Metric becomes a target, etc.

          • Izkata 1 hour ago
            I recommend trying "make this code more enterprise", it seems to pull in the joke and start overengineering everything and giving them absurd names.
    • black3r 1 hour ago
      > Knowledge comes basically from manual trial/error almost daily.

      This is the important statement, although I'd swap the word "knowledge" for "experience" here. You can gain "knowledge" from books, but only trial & error will give you experience to know "which" knowledge to use in which situations.

      And what's important about this in the context of working with AI is the "error" part.

      You have to experience errors to become truly experienced. And part of the experience is to recognize when you're about to make an error - to avoid it.

      AI-driven processes mess up our natural trial & error learning curve in multiple ways:

      - the AI push forces us to ship features faster (cause if we don't, our competitors will), reviews are sloppier, we discover errors later on, the feedback loop gets longer...

      - using AI to debug and fix errors means we spend less time understanding what the error was about, which means we learn less about how to avoid the error in the first place...

      - AI itself sounds overly confident, so reading its outputs without previous experience you may be less likely to recognize when it's making an error, which makes it harder for you to recognize when you're making an error trusting it...

      On the other hand, this last point I tried to make is also why I don't think avoiding AI completely is a good strategy. Whether we like it or not, AI is becoming a part of developer's workflow. And as such, we also need to learn the trial & error process of using AI - what makes AI make errors and how to prompt it to avoid that.

    • dd8601fn 6 hours ago
      I have always had a problem, worse than most I think, where if I’m away from a language for a bit I lose my ability to write it quickly and competently, real quick.

      It doesn’t matter if I was quite competent in it… the mechanical bits fade fast.

      Doing llm assisted work is going to be like pouring bleach on my brain. I can feel it. The more I use it the worse it will be for me.

      I can still formulate what I need, and problem solve just fine, but all the nuts and bolts evaporate.

      • RataNova 4 hours ago
        There is a difference between understanding the shape of a solution and keeping the "finger memory" of a language alive
    • RataNova 4 hours ago
      The danger is not using a tool. We all use tools... The danger is skipping the part where your own brain builds a model of the problem
    • Forgeties79 6 hours ago
      To add to this issue, a lot of people then offload their mental load and work to the people downstream of their LLM results.

      Someone on HN put it well the other day: everyone wants to deliver AI results, no one wants to receive them.

    • cpursley 9 hours ago
      Another perspective: AI reduces brain effort in some domains which actually frees up brain juice that can be applied elsewhere.
      • black3r 3 hours ago
        For me AI mostly reduces time effort. AI types code faster than I do, looks up stuff on the internet faster than I do, debugs faster than I do, but doing those never required much "brain effort" from me.

        What does require "brain effort" from me is making educated decisions. Mostly during planning to figure out which pros/cons of each possible approach are actually relevant for our situation - AI does this poorly, makes lots of wrong assumptions if you don't steer it correctly, and noticing these + correcting AI on them requires "brain effort" too. Then the part of code review where you think about what can go wrong. AI still sucks at figuring out edge cases. It doesn't "know" the entire codebase like I do, its context only has "the parts of codebase deemed relevant".

        Before AI I could jump from 30 minutes of hard thinking into an hour of coding during which my brain essentially rested, before returning to hard thinking again. Nowadays those hour-long coding sessions turn into 5-10 minutes of watching AI do something.

        So for me using AI doesn't "free up brain juice", it instead makes me use my "brain effort" more, and in a workplace environment gives me less time to rest and makes me more tired, cause nowadays bosses expect us to work faster + colleagues working faster means more review requests.

      • wolvesechoes 9 hours ago
        Show us effects.

        What amazing breakthroughs were achieved thanks to brain juice freed by AI usage? What great works of art were created?

        • tim-projects 7 hours ago
          I'll bite. I've been writing music for decades but I can't sing. With ai I can write lyrics and generate ai vocals, then separate the stems and extract the vocals throwing away the rest. Add the vocals to my daw and create the rest the way I want. Saying its a great work of art is subjective, but for me I can make music I couldn't before now.
          • wolvesechoes 55 minutes ago
            You didn't bite anything.

            Parent suggests the perspective where using AI allows to free up the "brain juice", and utilize it elsewhere. What you describe is AI allowing you to mitigate some limitations that prevented you trying something. So not the same.

          • justonceokay 7 hours ago
            Sidebar: learn to sing. Singing well and “finding your voice” are in my mind equivalent. Every time I become a more confident person I get better at singing. Every time my singing gets better through practice I feel more confident. “Speak with your chest” didn’t make sense until a few years ago. Now it’s obvious to me when someone is incapable of it.
        • owebmaster 9 hours ago
          Exactly. What service got better and/or cheaper?
          • kristofferR 6 hours ago
            Most software, other than Windows and macOS, seems to have gotten better more quickly lately.

            Hard to quantify though, as most, other than the AI companies don't usually advertise their AI usage.

            • dieortin 5 hours ago
              Could you give some examples?
              • zer0tonin 29 minutes ago
                On my side, I can give many examples of random software that became significantly worse since the AI trend started.

                Trainline is practically unusable for purchasing itineraries that go accross multiple european countries. GitHub Actions now contains a bunch of extremely frustrating random bugs. Grammarly somehow gives worse copy recommendations.

              • wolvesechoes 54 minutes ago
                Of course not. If they could, they would already.
        • orthecreedence 2 hours ago
          I've got one. I'm working on a cryptographic identity system in rust. One of the stricter iterations of it demanded creating a public version and private version of each type. The best way to accomplish this is a procedural macro. I don't know if you've written proc macros by hand in rust. I have, years ago, and it was somewhat torturous. I didn't want to relearn to do it all over again and spend what would have taken weeks (this is a side project) to gain a skill I will easily forget in a month or so. So I had an LLM code it for me. This is a really great use for it: it's not building any strong logic or doing any IO, it's simply writing code that generates other code, and is entirely verifiable and testable. It built it for me so I could spend those weeks working on higher level logic and p2p syncing protocol stuff that actually matters for the project.

          I want to make it clear that I'm an LLM luddite. I mostly find the things distasteful and obnoxious. But there are definitely use-cases where they can do what's essentially bitch work and save a lot of time that would otherwise be a waste. It's a tool that can be used for specific things. I don't use them for everything.

          • wolvesechoes 52 minutes ago
            Did it became noticeably better because you used LLM to make a proc macro, therefore freed up you creative and cognitive powers to deliver something much better than you would by writing this macro yourself?
        • cpursley 8 hours ago
          So this is the classic tension between the "coding for the love of code" vs the "coding to solve problems" mindset. This cultural concept has been around since before AI was on the scene, heck well before software existed (craftsman vs builder).
          • dbalatero 8 hours ago
            I'm curious why this is a vs and you have to pick both? I've found coding for the love of code always helped me accelerate my speed and ability so that I could also deliver solidly on time and solve the problems too.
      • whompyjaw 9 hours ago
        Ya this has been my sentiment. If i need to one-off a quick script that does some processing on data, it’s nice to offload that so i can focus on pieces of my code that are more important and interesting to me. The context switching cost is still there tho…
    • ReptileMan 10 hours ago
      >I closely see people that are 100% dependent on AI for literally everything, even the most trivial daily tasks and I find that truly scarly because it means that brain effort drops drammatically to a minimum level. To be stolen mental effort is not a minor thing.

      I find myself thinking more and my thinking is of higher quality. Now I have 30 years of fucked up projects experience, so I know all the rakes I could step into.

      • mpweiher 8 hours ago
        I think it's been well-documented that people feel more productive with GenAI, even when actual productivity declines.
      • LtWorf 10 hours ago
        You probably overestimate yourself.
        • mewpmewp2 10 hours ago
          I relate to the idea of having a different level of thinking now with AI. How would you evaluate that someone is overestimating themselves?

          As in every little thing that used to be too much effort before, I can just easily get the info, the data now with prompt. The data analysis of something, which otherwise might have taken hours to figure out, I can just have AI write scripts for everything, which allows me to see more data about everything that previously was out of touch. Now you will probably ask of course "how do I know the data is accurate?" -- I can still cross reference things and it is still far faster because even if I spent hours before trying to access that data there wouldn't have been similarly guarantees that it was accurate.

          I am thinking so much more about the things now that I couldn't have possibly time to think about before because they were so far out of reach, or even unimaginable to do in my lifetime. Now I'm thinking about automating everything, having perfect visualizations, data about everything, being able to study/learn everything quickly etc.

          • fluoridation 3 hours ago
            It sounds like you're optimizing for a system of self-deception. If you never check how the data is collated, but rather whether the collation appears consistent, you will eventually be left only with data that has the appearance of consistency, regardless of how correct it is.
            • mewpmewp2 1 hour ago
              Why would you think that? I said I cross check and validate to my confidence.
              • fluoridation 55 minutes ago
                Yeah, I got it. That's what consistency means. But appearing consistent isn't the same as being correct. You can't check the latter without an exhaustive check on the data, but doing that kind of defeats the purpose of off-loading the query to an AI.
    • mewpmewp2 10 hours ago
      I hear this a lot, but also I'm curious. How can you really forget coding?

      It doesn't seem to me a thing that I could suddenly forget?

      Without AI I will feel frustrated that I'm now much slower, but ultimately it's just describing logic. So I'm a bit skeptical of the claim.

      My brain effort is also on other things now, such as how to orchestrate guardrails, how to build pipelines to enable multiple agents work on the same thing at the same time, how to understand their weaknesses and strengths, how to automate all of that. So there's definitely a lot of mental effort going into those things.

      • vjsrinivas 10 hours ago
        If you are not practicing an activity consistently, you'll forget some of the finer grained aspects. When I'm coding, I subconsciously create a continuous logic map. Having someone or something just generate (and generate so quickly) destroys that and makes it easier for bugs to slip through.
        • mewpmewp2 9 hours ago
          I mean if e.g. AI stopped existing all of sudden, it doesn't mean you would have forgot how to code and couldn't all of sudden anymore, right?

          You could forget maybe how a certain lib or framework worked or things like that, or more so how you wouldn't have been up to date with all the new ones, but ultimately code can be represented as just functions with input and output, and that's all there is to it.

          As in how could I possibly forget what loops, conditionals or functions are?

          I haven't written code myself for 1+ year (because AI does it), but I feel like I have forgot absolutely nothing, in fact I feel like I have learned more about coding, because I see what patterns AI uses vs what I did or people did, and I am able to witness different patterns either work out or not work out much faster in front of my eyes.

          • Jtarii 9 hours ago
            A writer will never forget what adjectives, verbs, and nouns are. But if they use LLMs to write for them for years they will be worse at writing on their own.
            • mewpmewp2 9 hours ago
              Well, what I'm trying to say here is that coding is conveying logic, the way you'd evaluate it is how fit it is for its purpose, and if it's long term code, how well it will scale into future.

              Now writing is something totally different. In some cases writing ability is not about writing, it's about your thoughts and understanding of life and human nature.

              You could simply become a better writer without not writing anything by just observing.

              If you are using an LLM to write, what is the purpose of that? Are you writing news articles or are you writing a story reflecting your observations of human nature with novel insights? In the latter case you couldn't utilize AI in the first place as you'd have to convey what you are trying to say within your own words, as AI would just "average" your prompt or meaning, which takes away from the initial point.

              With code it's desired that it's to be expected, with good writing it's supposed to be something that is unexpectedly insightful. It's completely different.

              • esseph 9 hours ago
                > You could simply become a better writer without not writing anything by just observing.

                To become a better X to must do more of X. There are few shortcuts worthwhile.

                • mewpmewp2 9 hours ago
                  I would disagree. If you only do X, in fact I think you will miss a lot of things that could make you better. You can become better writer by reading other great writings, if you only write yourself, you will not have the full big picture on what is possible. Then you can become better by thinking a lot, imagining a lot, etc... Same with most fields I would argue.

                  Although we were discussing about the decay of skill in something. While in some things the decay is super clear (as in running - pace, not the technique), I think there's many areas where there's no clear decay and other activities will actually significantly boost it, and any decay that there is, will be removed in just few days of practice or remembering.

                  • ccortes 6 hours ago
                    There’s a huge difference between “doing more of X” and “doing only X”

                    For someone arguing that “coding is conveying logic” seems like you need a refresher on propositional logic

            • mewpmewp2 9 hours ago
              Are we talking about observational ability, creativity, accuracy of communication or grammar here?

              There's many more ways to evaluate a writer skill in terms of what they are doing vs what is coding. Coding can be creative, but in most cases you are not evaluating coding as writing, unless it's possibly technical writing, which is still different compared to coding.

              • Jtarii 1 hour ago
                Sure, you may argue that you are becomming a better editor or project manager but your skill in the craft of programming is decaying if you are not actively typing lines of code into a computer.

                You are trading one skill set for another.

          • skydhash 9 hours ago
            Coding is a thinking avtivity. What you’ll be missing is the nimbleness in doing that activity, not the knowledge.

            So you may remember all your high school math, but not doing it every day, means you are slower than some of the students. So your knowledge of programming will be there, bit you will be slower because you no longer have the reflex that comes with doing things over and over.

            • mewpmewp2 9 hours ago
              I feel like I have to disagree here. I don't practice e.g. multiplication or doing math in my head everyday or for years really, but I feel like I'm just as fast at it as I ever was. In fact whenever I have tried things like Lumosity or brain benching games, that I used to do when I was younger, I'm actually faster than when I was younger, despite not having practiced it at all. I feel like all the real world side practice has helped me improve these abilities indirectly, they have all added to my brain's ability to notice novel patterns, see things from different perspectives, apply new intuitive strategies, that I might have not noticed because I was tunnel visioning when I was younger.

              There's also plenty of things that I have got for life just by having practiced them when I was child. E.g. I think everyone gets bicycling, but there's also handstand, walking on hands, etc, which I learned as a kid for few years, and I can still do it even if I only do it once a year. In my view code is exactly the same, and maybe in a way even more straightforward, it's easier than obscure math since you don't have to memorize any formulas to solve it easily, albeit I think a lot of math is great because you don't have to memorize formulas in the first place you just have to internalize or figure out the logic or the idea behind it, and then you just have it. I think repetition in math is specifically the wrong way to go about it, it's about understanding, not repetition.

              • black3r 3 hours ago
                Multiplication is elementary school math which doesn't require any thinking and the learned approach is simple. You can't really compare the simple stuff that's taught to kids, like basic multiplication or riding a bike with stuff that requires domain-specific knowledge and experience.

                Think more stuff like "find the angle of lines defined by (x-4y-1=0) and (x-y-2=0)", "write the number 2026 in base 7", "solve an equation sin^2(x) - sin(x) = 0".

                I plucked these from our country's high school final exam from this year. Back when I was in high school, I did mine in 60 minutes without an error when the time limit is 150 minutes and I intuitively immediately knew how to approach each task since the moment I saw it. Also all needed formulas are supplied, you don't need to remember any of them.

                I plucked these because for these I don't have the immediate "know how" now, I still understand the topics, and could solve them with enough time, but it would require some thinking and thus I would be slower at solving them than when I was in high school, even though I'm pretty sure I could still ace it in the 150 minute time limit.

                But reality goes beyond high schoool... College-level math, like derivations/integrations, sums, algebraic proofs, is even harder and solving some of them could take me hours when I could do them in minutes when I was in college.

                With code it's the same. I could solve simple Python/Pascal/C++ high school level tasks as fast or faster than when I was in high school, even if I didn't write any code for a couple of years. But we also had assembly class in college, and I would struggle at assembly if I had to code it now, 10+ years later, even though I didn't struggle with it back then.

                • mewpmewp2 1 hour ago
                  > Think more stuff like "find the angle of lines defined by (x-4y-1=0) and (x-y-2=0)", "write the number 2026 in base 7", "solve an equation sin^2(x) - sin(x) = 0".

                  > I plucked these from our country's high school final exam from this year. Back when I was in high school, I did mine in 60 minutes without an error when the time limit is 150 minutes and I intuitively immediately knew how to approach each task since the moment I saw it. Also all needed formulas are supplied, you don't need to remember any of them.

                  It seems like with just a little bit of doing it again, you'd be back at the level you were though. Especially if you can do it with formulas right. You would be slower for only a very short amount of time. All those things are in my view if you understood them at some point in your life, you will understand them to the exact same extent with just a little bit of reminding. I would say with most of those concepts, it would take less than 1 hour to be back at similar level. Like for instance number in another base etc.

                  • black3r 29 minutes ago
                    Depends on the complexity of the task. That's what I tried to hint at by also mentioning college-level math. For the high-school level tasks yeah, couple of hours and I'd be as fast as I was in high school again. For the number in another base it could be as quick as less than 1 hour as again that's a simpler task than the other two.

                    For derivations/integrations it'd take more time. Less than what it took me to learn them in the first place, for sure. But still a lot more than 1 hour.

                    Cause I forgot how to "do them" in the first place, which is what the discussion was about in the first place. I still know the "theory" behind, so I can "figure it out" if needed without needing anyone to "instruct" me, or needing "classes" to learn how to do them. But essentially all the "practice" I had back then is forgotten.

                    And again the same goes for code and technology knowledge, which is what the discussion was about in the first place.

                    As a senior developer with 10+ years of experience I've already encountered situations where I needed knowledge I knew I had at some point, but already forgot. In my case as a backend developer working for the same company for 5+ years my favorite example is payments processing. There are tons of special/edge cases - e.g. how a failed recurring payment during a subscription is processed. That's something you set up once, then don't touch for years, and suddenly need to study again if a change is needed. How a subscription goes "past due", what you can do in that case, what your code actually does, how it reverts to correct state once a retried payment follows through, what options you offer a customer if his payment method expired and he wants to switch it, ...

                    And this is also a good example why "domain knowledge" and "code ownership" is a good thing in larger companies. Because under usual circumstances I don't have to deal with these, cause we have a dedicated person who's maintaining the payment-related code. I only fill in in urgent cases happening when he's on vacation.

                    And juniors designing stuff like this AI-first without properly thinking about all these cases won't learn all the edge cases this flow can contain. So if something goes wrong, you end up with nobody who has the "maintainer experience" for that code - you don't have the one person who is knowledgeable about that topic - everyone in your company is in the same spot as me - having to research the topics again to understand them enough to be able to debug the incident which happened.

      • phpnode 6 hours ago
        I used to be an expert at php but now I haven’t written any in over a decade, I can still read it but it would take me a little while to get back to where I was (hopefully I’ll never need to), same thing could easily happen due to ai
      • Jtarii 9 hours ago
        If your internet died you would likely be worse at programming that you were in 2020. I think is what people are getting at.
        • djyde 8 hours ago
          I always compare AI programming to Google. If that's the case, then without internet, without Google, without Stack Overflow, my abilities would be worse than they were in 2000.
        • mewpmewp2 9 hours ago
          If my internet died in 2020 I would also be useless because probably I couldn't install/download all the libs/frameworks, etc.

          But if I didn't need those things, and there was a simple pseudolang syntax which acted exactly the same in all versions, didn't have any breaking changes, I would argue I'd be much better at it now.

          Internet, search etc is needed to understand how to setup libs/frameworks/APIs, but logic at itself isn't something that I could possibly forget. AI will help to get those setups quicker without me having to search, but arguably it's all useless information, that will get out of date, that I really don't even need to know. I don't need to know top of my head what the perfect modern tsconfig setup should look like or what is the best monorepo framework and how to set it up, so it would scalably support all different coding languages for different purposes.

  • TonyAlicea10 10 hours ago
    “Money was never the constraint. Knowledge was.”

    The irony is how difficult it is to read this obviously AI-generated article due to its unnatural prose and choppy flow full of LLM-isms. The ability to write is also a skill that atrophies.

    Even when AI is understandably used due to language fluency, I’d prefer to read an AI translation over a generated article.

    If you don’t care enough to write it, why should I care enough to read it?

    • barankilic 9 hours ago
      I am really amazed at how we are really okay with LLMs writing code end to end (without human in the loop) / dark factory concept but when it comes articles, HN is suddenly against LLMs writing words. I do not see the difference between writing code and writing prose. Both have keywords, grammars, syntax, meaningful combinations (function or chaining in code / collocations in words). If we think that AI-generated words are not meaningful or easy to follow that same must apply to AI-generated code, which may be harder to read or understand since it is not written by human. Let's stop being hypocrites.

      Note: My comment is not specific to this comment. I just wanted to express myself at somewhere and this is where I think it may be suitable.

      • 01100011 6 hours ago
        Who is the 'we' here? When did I become ok with LLMs writing code end to end or against LLMs being used to assist writers? I wasn't aware I held either of these positions.
      • avocabros 9 hours ago
        That's because the purpose of code is to be used, not to be read.

        The only purpose of the written word is to be read.

        • recursive 4 hours ago
          I've always tried to write code for future maintainers first. That is often me.
        • SoftTalker 4 hours ago
          That's the difference to me. Code is used as instructions to computers. Written human language is used to to communicate thoughts, ideas, feelings to other humans.

          I disagree with the premise that "we" are all OK with AI slop computer code however. Even if it's just for consumption by machines, for at least some developers it is a creative outlet.

      • unleaded 8 hours ago
        The purpose of writing is to get your thoughts across in words. A prompt sufficient enough to get out an article with zero chance of it adding things you don't mean has to contain as much information as the article itself would. Just write the article.
      • wiseowise 8 hours ago
        > I do not see the difference between writing code and writing prose.

        That’s the problem.

      • barankilic 4 hours ago
        Since I cannot edit my comment, I replied my comment. I did not mean to insult HN moderators. I am actually very happy that they are protecting HN by removing and flagging AI content. I only wanted to attract attention to the topic that for some areas AI is promoted but then for some areas AI is demoted and I do not get it.

        What I mean with "we" is that there is a general perception that using AI is okay and mandatory. This idea is becoming more and more prevalent in management positions and it disturbs me deeply.

        I got some replies since I commented, but I am still in the same mind. I did not see a strong refutation to my idea. Why are some people (I didn't want to use the word "we" again) are okay with AI use in code but not in prose? I know that they are not exactly same but they have some similarities. If we are unhappy with sloppy prose, why are we happy with sloppy, potentially buggy or hard to maintain code?

      • bontaq 6 hours ago
        This is a funny point. People don't want to read LLM code either, so who knows where that puts us.
        • djeastm 6 hours ago
          It puts us in a secondary, less-rare, less-valuable role out of the driving economic loop we've grown up in.
      • joenot443 4 hours ago
        I would say that the simple reason is that writing is often artistic and coding very rarely is.

        I don’t listen to AI music or watch AI videos, I don’t want to read AI articles

      • recursive 4 hours ago
        I've been opposed to all of it the whole time. But yes, let's stop being hypocritical.
      • gib444 1 hour ago
        > I am really amazed at how we are really okay with LLMs writing code end to end

        Speak for yourself. Sounds like you're mostly referring to CTOs

      • mpalmer 4 hours ago
        What hypocrisy is there in distinguishing between the qualitative value of prose vs code? They serve entirely different purposes; your failure to recognize that is no one else's fault.
      • 1287128 8 hours ago
        We are not okay with slop code. There was healthy and widespread dissent in 2024 and beginning of 2025. Ycombinator cracked down on the dissent first by installing another moderator and then by downranking and banning anti-AI people.

        What you read here are bots and those invested in AI and an occasional retired person who uses AI as a crutch.

        • beng-nl 6 hours ago
          Such unsubstantiated claims against the integrity of HN moderators should not be thrown around so casually.
      • alansaber 9 hours ago
        Slop is slop.
    • notnullorvoid 4 hours ago
      It didn't feel at all AI written to me. It's much better than the AI written junk that HN laps up without noticing.
      • watt 3 hours ago
        It is full of these short sentences that AI writing loves, sort of to feel "punchy". Normally you would copy-edit that stuff, join them up, have the writing have some rhythm. I agree with GP, the article is hard to read because it seems to have a lot of https://tropes.fyi/
        • notnullorvoid 1 hour ago
          Twitter is full of strung together short punchy sentences, and it spread to articles long before AI.

          Not discounting the possibility that it's AI, but it didn't have the same repetition, contradiction, and inaccuracies I notice in other AI content. Though even that isn't exclusive to AI.

    • perfunctory 33 minutes ago
      > obviously AI-generated article

      how can you tell?

    • clarkdale 4 hours ago
      LLMs are trained on real life grammar written by humans. Sometimes the characteristic traits you see by LLMs are written again by human hands.
    • 7402 4 hours ago
      Is it really so obvious? It didn’t seem AI-written to me.

      Every day I seem to encounter (and skip over in disgust) a dozen or so AI-generated articles at the top of web searches, but this wasn’t anything at all like those.

      • bonsai_spool 2 hours ago
        Even the title is likely AI-generated, as are all the subject headings. I worry we're all getting inured to these writing patterns.
    • aldanor 8 hours ago
      Not the factory floor. The receiving end.

      It wasn’t one bottleneck. It was all of them.

      Not the nuclear material. The pattern.

      Money was never the constraint. Knowledge was.

      ...

    • sph 8 hours ago
      #1 rule of slop: anything that can be written, can be AI-generated now

      #2 rule of slop: even posts critical of pervasive AI usage and how it's ruining the world can be AI-generated

  • Animats 13 hours ago
    > They can’t tell you what the AI got wrong.

    AI code generators are trolls. They confidently plausible content which is partly wrong. Then humans try to find their errors.

    This is not fun. It has no flow.

    • simondotau 12 hours ago
      I beg to differ, insofar as my own experience has been the exact opposite. I enjoy fixing other people's mistakes. And I especially enjoy outsmarting the LLMs. I find that I can obsessively breathe down the neck of an LLM for far longer than I could ever stay in the traditional flow state.
      • Terr_ 12 hours ago
        I think I might enjoy it for a little bit and then become very depressed at the idea that it will never end, a future of fixing things that should never have been broken in the first place and which won't stay fixed.
      • lelanthran 11 hours ago
        > I find that I can obsessively breathe down the neck of an LLM for far longer than I could ever stay in the traditional flow state.

        I can do that too. Most programmers can.

        That's because it requires less skill! Critiquing something is always easier than doing it.

        I can literally keep an LLM fixing things forever by just saying things like "This is not scalable", or "this is not maintainable", or "this is not flexible" or "this is not robust", ... etc ad nausem.

        That doesn't take skill at the level to actually write the software. For the market which is hoping to switch to mostly LLM coding, the prize they are eyeing is skill devaluation and not just, as many think, productivity gains.

        They have no reason to double output, but they'd sure love to first halve the people employed, and then halve the salaries of those people (supply/demand + a glut of programmers in the market), and then halve salaries again because almost no skill necessary...

        • bradleyjg 11 hours ago
          That's because it requires less skill! Critiquing something is always easier than doing it.

          No, it was always the other way around. Mediocre programmers always wanted to rewrite everything because reading and understanding an existing codebase was always harder than writing some greenfield thing with a “modern language” or “modern libraries” or “modern idioms.” So they’d go and do that and end up with 100x the bugs.

          • lelanthran 10 hours ago
            > Mediocre programmers always wanted to rewrite everything

            You are comparing writing something with rewriting something. You don't know what the difference is?

          • layer8 8 hours ago
            How is that “no” and “the other way around”? The desire to rewrite comes from the ease with which one can critique existing code for being “too hard” to understand.
          • ffsm8 10 hours ago
            You can't generalize that statement.

            There is a very valid reason why the Creator of erlang back in the day said something along the line of "you need to iteratively remake your software, improving it each time"

            As your knowledge about a topic grows, your initial mistaken implementation may become more and more obvious, and it may even mean a full rewrite.

            But yes, a person which instantly says "rewrite" before they understood the software is likely very inexperienced and has only worked with greenfield projects with few contributers (likely only themselves) before.

      • neonstatic 12 hours ago
        Perhaps you have the psychological make up to thrive in this new environment. Glad it is working for you.
    • cbg0 11 hours ago
      It should have the same flow as reviewing PRs from humans.
      • t43562 11 hours ago
        Who really truly enjoys that and doesn't see it as a chore?

        I find the real way to review other people's code is to program with it and then I start seeing where the problems are much more clearly. I would do a review and spot nothing important then start working on my own follow-on change and immediately run into issues.

        • sampullman 11 hours ago
          I usually don't mind, but tend to split reviews into two types. Either I understand the context and can quickly do an in depth review, or I have to take some time to actually learn about the code by reviewing the surrounding systems, experimenting with it, etc. But in both cases I would at least run the code and verify correctness.

          I think it becomes a chore when there are too many trivial mistakes, and you feel like your time would have been better spent writing it yourself. As models and agent frameworks improve I see this happening less and less.

        • cbg0 11 hours ago
          > Who really truly enjoys that and doesn't see it as a chore?

          This is a whole different discussion, but I just see it as part of the job that I'm getting paid for, I don't need to enjoy it to do it.

          Functional testing is a must now that writing tests is also automated away by LLMs as you can get a better understanding if it does what it says on the box, but there will still be a lot of hidden gotchas if you're not even looking at the code.

          Plenty of LLM-written code runs excellent until it doesn't, though we see this with human written code too, so it's more about investing more time in the hopes of spotting problems before they become problems.

          • t43562 10 hours ago
            > Functional testing is a must now that writing tests is also automated away by LLMs as you can get a better understanding if it does what it says on the box, but there will still be a lot of hidden gotchas if you're not even looking at the code.

            Well, there you go. Letting AI write the tests is a mistake IMO. When I'm working with other people I write tests too and when I see their tests I know what they're missing out because I know the system and the existing tests. Sometimes I see the problem in their tests when I'm working on some of my own. If you absent yourself from that process then ....

      • fg137 10 hours ago
        Which is a really, really bad idea.

        Most people don't spend nearly enough time going through a code review. They certainly don't think as hard as needed to question the implementation or come up with all the edge cases. It's active vs passive thinking.

        I, for one, have found numerous issues in other people's code that makes me wonder, "would they have ever made such a mistake if they hand coded this?"

        btw, a side effect is that nobody really understands the codebase. People just leave it to AI to explain what code does. Which is of course helpful for onboarding but concerning for complex issues or long term maintenance.

      • microtonal 11 hours ago
        The problem is the LLMs completely change the equation. Before LLMs, beyond very junior (needs serious coaching) levels, reviewing was typically faster than writing the code that was reviewed. With LLMs, writing code is orders of magnitude faster than reviewing it. We already see open source projects getting buried in LLM slop and you have to find the real human or at least carefully curated contributions among the slop.

        I would not be surprised if many open source projects will outright stop taking PRs. I have had the same feeling several times - if I'm communicating with an LLM through the GitHub PR interface, I'd rather just directly talk to an LLM myself.

        But ending PRs is going to be painful for acquiring new contributors and training more junior people. Hopefully the tooling will evolve. E.g. I'd love have a system where someone has to open an issue with a plan first and by approving you could give them a 'ticket' to open a single PR for that issue. Though I would be surprised if GitHub and others would create features that are essentially there to rein in Copilot etc.

    • catcowcostume 10 hours ago
      Anything AI generated is troll. There's no logic. It's just pattern repetitions. I don't get how supposedly smart engineers fall for it
      • smallstepforman 4 hours ago
        We humans cannot scan 100’000 articles looking for the golden nugget, the AI data mining can do it and present it in seconds. Obviously we need to verify the data.

        A couple of decades ago, we didnt trust compilers, we did assembly manually. Today is same barrier, some developers will explode with productivity while others will be left behind.

      • barnabee 9 hours ago
        Because a lot of engineering is pattern repetition, which is not very fun for engineers either, and LLMs can do it much faster?
        • skydhash 9 hours ago
          Not really. Any patterns got optimized and automated. If you’re still seeing patterns, then you need to look harder, because they will be similar onlu superficially.
    • solumunus 13 hours ago
      [flagged]
  • mawadev 11 hours ago
    I highly question the ability of companies to gauge the level of experience of any dev.

    The distinction between junior, mid, senior, lead is a facade. It is a soft gradient that spans multiple areas, but is tainted and skewed by the technology du jour.

    Technically you don't have to be an employed developer to become a senior developer. It boils down to your personal willingness to learn and invest time building.

    What companies seek these days are people having the experience with (dysfunctional) organizational structure and working around the shortcomings of the organizations communication and funding patterns, nothing more.

    Does that really make you senior or just politically versed?

    The pattern shows up the most whenever failing software pokes holes in perception.

    • gyomu 10 hours ago
      There are two kinds of developers.

      There's the kind that, when given a problem, will jump in, learn what they need to learn to solve the parts they don't fully understand yet, deliver meaningful iterative results, talk to people as needed, keep you posted on their progress, loop in other team members and offer/request help to/from them, take initiative on the obvious missing parts that would benefit the project as a whole, etc.

      And then there's the rest.

      Within the first few years of someone's career, you can quickly tell which kind they are. It's almost impossible to turn someone from the latter group into the former.

      Yes, everything else is a façade. You can be a "senior" developer with 30 years of experience and still be in the latter group. And you can be fresh out of college and be in the former.

      Now some people are extremely good at other skills (politics, interpersonal communication, bullshit, whatever you want to call it) and will be able to seem to be in the first group to the people who matter (managers, execs, etc) while actually being in the second group. But then we're not talking about actual software-making skills anymore.

      You can also totally be in the first group and be underpaid, never promoted, etc. There's little correlation with actually career success.

      • cloverich 5 hours ago
        I'd color this a little. I think there's also an engineering mindset some people have, and some don't. And over 10 years in, I'm still not sure if it can be trained or not. Some people are just really good at seeing the technical solutions in terms of engineering: Where does the data live, where does it go. How does it get there, how does it change. How does it break, how will we know, how will we fix it, how will we cope with its shortcomings. All of those questions to some people are a relatively quick and intuitive part of scoping and design. And for others its like a constant cliff they run into midway through their projects, or worse (and far more common) a set of bugs that are "tech debt" (for someone else to inherit) as the slap the "Mission Accomplished" on yet another project.

        I've seen people that are very proactive and generally fall into your former group, but also don't quite seem to think like an engineer. I really want it to be trainable - I am trying - but IDK if it is or not.

      • noisy_boy 2 hours ago
        > There's the kind that, when given a problem, will jump in, learn what they need to learn to solve the parts they don't fully understand yet, deliver meaningful iterative results, talk to people as needed, keep you posted on their progress, loop in other team members and offer/request help to/from them, take initiative on the obvious missing parts that would benefit the project as a whole, etc.

        I would rather tweak that a bit and say, we need a kind that has two things: 1. aptitude - not genius or 10x something just plain being able to think clearly and having problem solving skills 2. care i.e. not just dump whatever hack works in the short-term and declare victory. It doesn't imply that you obsess over perfection and ignore deadlines etc. But basic care about the solution being sensible, good code quality and not causing a new set of problem due to shortcuts. Both are things we routinely expect from programmers and I see less and less of them. #2 is rarer than #1.

      • hnthrow0287345 6 hours ago
        >It's almost impossible to turn someone from the latter group into the former.

        Only if you're constrained by the same short-term thinking as US businesses. The way to do that is more of an apprenticeship model when someone observes/works closely with someone from the first group over years.

        Even then, the businesses don't want to pay for that, and why should the workers give that away for free? They want people to churn out code because they've chosen to hire micromanagers that need constant updates and babysitting through communication.

        • luckylion 5 hours ago
          My experience is that that plainly does not work. I work with developers of both types, and the junior ones who are part of the first group are limited in their ability by experience, but they have an inquisitive mind and don't give up quickly when they encounter something they don't understand.

          Much more experienced developers of the second type just throw their hands up and give up (or now: turn to AI). I've worked closely with them to try and reform them. Maybe I'm doing it all wrong, but it has never succeeded.

          With the ones from the first group it can work that way: you can show them how you approach problems and they will ask questions and pick up patterns and you'll see them improve.

          > Even then, the businesses don't want to pay for that, and why should the workers give that away for free?

          Businesses would need a high likelihood that they can reap the rewards of upskilling employees. Why invest a lot of money and high-talent attention into someone who might quit? At the same time, I'll happily pay three times as much for a truly skilled senior developer. I think the employee's incentives are much more aligned: it will increase their market value, it's an investment into their wealth, not the business'.

          • hnthrow0287345 3 hours ago
            >My experience is that that plainly does not work.

            The apprenticeship model isn't in practice at any scale in software, I don't see how you could believe that. Practically every career start is self-taught or university to junior positions which is not the high-attention, one-on-one focus you'd get.

            >Why invest a lot of money and high-talent attention into someone who might quit?

            What happens if you don't and they stick around? You might say 'well, I'd just fire them' but then you are going to have a culture of people always having one foot out the door. And a high amount of position switching in the industry has led us to what we have today where people don't really stay and build for the long-term, and shoddy code bases also drive people to quit.

            An apprenticeship model also helps if you can do 3-5 year agreements for training where you see the most benefit from the person in the last 1-2 years.

            As good as it has been for my career, switching often probably needs to slow down (while raises go up) and apprenticeships go into effect for better quality training.

            All this assuming there isn't another major leap in AI competency though.

      • balamatom 5 hours ago
        Thank you for this.
      • bluefirebrand 5 hours ago
        > There's the kind that, when given a problem, will jump in, learn what they need to learn to solve the parts they don't fully understand yet, deliver meaningful iterative results, talk to people as needed, keep you posted on their progress, loop in other team members and offer/request help to/from them, take initiative on the obvious missing parts that would benefit the project as a whole, etc

        You're framing this person as a good developer, and sure. Probably some people who behave like this are good. MANY people who are like this leave mountains of problems in their wake. It takes a very special person to be able to build good quality with this kind of approach.

        You're basically taking about someone getting the right answer on the fly at full speed

        It's much more likely to get a subtly wrong answer, which is then dropped on that second group to manage and maintain going forward, while the fast moving person is hurried along to go drop a subtly wrong solution on another project with another team. This has happened to me many times in my career

    • ivan_gammel 4 hours ago
      > Technically you don't have to be an employed developer to become a senior developer.

      Outside of a sufficiently large organization „seniority“ of a developer doesn‘t make any practical sense. So, technically you can assign yourself any label, but that would be weird thing to do.

      A freelancer is measured by portfolio, a computer scientist in academia by publications, an OSS contributor by the volume and impact of contributions. In either case, it‘s proportional to the effort spent on learning and building.

      Anyway, regardless of employment status the measure of your professionalism is not defined by only something you can learn from the books. Experience matters a lot: it‘s nearly impossible to succeed in stakeholder management or presentation of your solutions by reading anything. You need practice and feedback. Senior engineers aren‘t those who excel in writing code: fresh CS graduates are supposed to know algorithms better. Senior engineers can contribute at full scale of SDLC themselves and support others. That is much easier to achieve in a professional environment rather than working on amateur projects.

    • therealdrag0 3 hours ago
      Sure, we live in a society. Seniority is about your ability to make an impact, which generally requires social and organizational skills. It can be bemoaned as much as you want but that’s how the world works.
    • teaearlgraycold 10 hours ago
      > What companies seek these days are people having the experience with (dysfunctional) organizational structure and working around the shortcomings of the organizations communication and funding patterns, nothing more.

      This is depressing and seems right. And yet this is something I desperately want to be ignorant of. I don’t want to peel apart my brain for anyone. Working within these kinds of problems is pure pain.

    • brabel 11 hours ago
      > Technically you don't have to be an employed developer to become a senior developer.

      That's incredibly unlikely. Do you need to be an employed surgeon to become a senior (or whatever they call it) surgeon??

      I very much doubt you can be senior without having actually spent years doing it professionally. The experience is everything, no book will give you the sort of understanding you need. That's unfortunately human nature, we are not capable to learn and internalize things simply from reading or watching others do it, we absolutely need to do it ourselves to truly learn. Didactic books always have exercises for this reason.

      You can learn facts and techniques from books, obviously. But just because you've read a book about Michelin restaurants that you can now be a Michelin Chef.

      • lelanthran 11 hours ago
        > That's unfortunately human nature, we are not capable to learn and internalize things simply from reading or watching others do it, we absolutely need to do it ourselves to truly learn.

        That is, and has always been, true. Currently, however, the narrative that is sold (and unfortunately accepted by so many of the senior developers who post here) is that the experience of telling someone else to do something is just as valuable.

        • BoingBoomTschak 9 hours ago
          Yeah, but working in a team isn't something you can learn without doing.
      • rglover 8 hours ago
        I've never worked in a corporate environment beyond client projects.

        Picked up a book on XHTML (no, that isn't a typo) and CSS in 2007, just kept trying to build stuff I wanted to build and backfilling knowledge as I went. Not only is it possible, it's preferred. ~20 years in and I've learned how to build my own full-stack JS framework, deployment infra, a CSS framework, and an embedded database to boot.

        Not one drop of this would have been possible had I taken the traditional corporate track.

        • ivan_gammel 5 hours ago
          All of this is possible on a corporate track. Ability to build frameworks and tools do qualify a person as at least a solid mid-level professional, not having corporate experience and associated skills can be a pretty big gap in their CV.
          • rglover 51 minutes ago
            Possibilities are not results.
      • kaashif 11 hours ago
        Maybe they mean you can be not employed and build products yourself? Technically true, but that's like running your own surgeries or something, you're still doing surgery.
      • andrewstuart 11 hours ago
        Analogies to other professions give your argument an air of legitimacy, with none.

        There’s plenty of people in this world who are expert programmers without following any traditional path.

        “Oh yeah, like who”, you say.

        Con Kolivas, anaesthetist, work on kernel schedulers including the Staircase Deadline (RSDL) scheduler which was a precursor to the Completely Fair Scheduler in Linux and the Brain Fuck Scheduler and the ck Patchset.

  • allending 13 hours ago
    There's a certain irony in that the article itself is quite clearly assisted by AI. Not a criticism per se as I don't have a problem with AI assistance, but food for thought given the material being commented on.
    • rezonant 12 hours ago
      The tropes that AI introduces into articles are very noticeable, quite annoying, and very unnatural -- they unfortunately don't write well. It seems people use them to "polish" up their writing but in reality it would have read better if they hadn't.

      My current pet peave is using period instead of comma, as in:

      > My people lived the other side of this equation. Not the factory floor. The receiving end.

      Ostensibly this is supposed to add gravitas, but it's very often done in places where that gravitas isn't needed, and it comes off as if I'm reading the script for an action movie trailer.

      • lelanthran 11 hours ago
        > The tropes that AI introduces into articles are very noticeable, quite annoying, and very unnatural -- they unfortunately don't write well.

        Quite paradoxical: when its a person's native language we can spot it a mile away but there's no shortage of engineers who claim how good the code output is.

        Whatever the reason for the default tone of AI in English, it's still there when generating code. It makes me think that the senior engineers who claim that it produces awesome output just don't understand the specific programming language as a someone who thinks in it almost natively.

      • ijk 2 hours ago
        It really feels sometimes like they were trained on too much short-form fiction or something. Really stunts their sentence and paragraph texture.
      • ykonstant 10 hours ago
        Unnecessary emphasis can get... quite comical... indeed.
      • SanjayMehta 12 hours ago
        People have also started copying the AI tropes, especially your period/comma example.
        • microtonal 11 hours ago
          I am not sure if it is necessarily copied. A lot of influencer-style people used some of these patterns (periods, not X but Y). So I'm not sure who is copying who?
          • throwaway219450 7 hours ago
            These patterns are learned from magazine articles and other long-form publications. The tendency to have unnecessarily pithy/hooky section titles is one that particularly irks me, but it's not like AI invented that. I was reading some DIY books that are published by a company that does a lot of web/magazine work and they structure the text in the same way (this is all pre-LLM).

            Content creators are starting to include these traits into their scripts now, too. It's uncanny when you (literally) hear it.

            • bonsai_spool 7 hours ago
              > Content creators are starting to include these traits into their scripts now, too.

              Why would you assume this when the more likely reasonable is that the 'content creators' are just pasting LLM output?

              • ijk 2 hours ago
                I feel like the problem is that it's both. We're sanding off the long tail of human expression. It's not profitable this quarter, you see. Faster to let the AI do it.
      • concinds 10 hours ago
        The uncanny valley is an attractor basin.
    • morningsam 12 hours ago
      Made me stop reading a few paragraphs in. I don't have a "problem" in the ethical sense either, but as the sibling comment notes, the way LLMs write is rather grating. To make matters worse, a) people seem to use them to add pointless volume / "filler" to their texts, so now I have to wade through pages and pages of this stuff, and b) I have no easy way to distinguish between an article at least based on novel human insights vs entirely LLM-generated from a "write me something about X topic" prompt. I don't think it's a stretch to say that the latter just isn't worth reading given the state of the art.
      • chneu 7 hours ago
        The filler stuff is really a huge waste of time and effort. I tried to Google weather Ranch Corn Nuts are vegan and every result in the top 10 was the same AI generated slop with 10 paragraphs that had nothing to do with what I was trying to find.

        All the top results had the same AI feel to them. The same format and structure.

        The best part? None of them said yes or not. None of them answered the question. They just listed common dairy and non-vegan ingredients to look out for. So, all that AI and nobody put in the ingredients list. Lol

    • rotis 12 hours ago
      I don't have a problem with AI assistance either, but this undermines the point the article is making. For me it is like a priest preaching gay sex is wrong and then being caught in bed with a male prostitute (snorting cocaine optional). Leaves bad taste in the mouth.
      • sph 8 hours ago
        Many such cases. Both the priest anecdote, and AI-critical posts being AI-generated.
    • A_D_E_P_T 12 hours ago
      Out of curiosity, what are you basing this on?

      The text has few of the obvious AI tells. The only thing that, to me, looks characteristic of LLM-generated text is the short and terse sentence structure, but this has been a "prestigious" way to write in English since Hemingway.

      • allending 12 hours ago
        Sort of a taste receptor I’m sure many have developed now.

        The most obvious patterns here are: antithesis constructions, words choices and distribution, attempt at profundity in every paragraph but instead are runs of text that doing say anything, and even the perfect use of compound hyphenation. I think and can appreciate that there is definitely an attempt at personalization and guidance to make it less LLM-y and not just a default prompt, but it’s still kind of obvious. You could use a detector tool too of course.

      • bonsai_spool 8 hours ago
        What are the obvious tells? List them, because I think our sense of the tells may not overlap.

        This article is clearly LLM-generated, even the title. A key indicator is that it almost makes sense: we forgot how to manufacture because that got sent to a different nation. The coding thing isn’t getting sent anywhere, so humanity is forgetting how to code. The distinction undermines a lot of the emotional baggage about offshoring that the article wants you to bring along.

        • A_D_E_P_T 4 hours ago
          There are quite a lot of them: https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing

          But it's really just the usual ones that are truly obvious. "Not X but Y," em-dashes, "underscores the significance of X," and so forth.

          A terse sentence structure can be a tell, but, IMO, it's a weak one.

          • bonsai_spool 2 hours ago
            > There are quite a lot of them

            I agree - and they're on clear display in this article, I would say

      • lkm0 10 hours ago
        The blog post reads nothing like Hemingway. Here's a classic example: https://anthology.lib.virginia.edu/work/Hemingway/hemingway-...

        Hemingway writes simple sentences with a kind of detachment to make the emotional flow of his stories as transparent as possible.

        LLM slop reads more like slide bullet points extrapolated to prose-length text

      • lelanthran 11 hours ago
        Blog posts aren't typically written like Hemingway.

        Find some pre 2020 that are, and you'd have a point.

      • Kerrick 6 hours ago
        https://awnist.com/slop-cop (via https://news.ycombinator.com/item?id=47806845) points out Staccato Burst, Dramatic Fragment, Colon Elaboration, and Short-Hook Paragraph. To me, those define the tone of this article.
        • A_D_E_P_T 4 hours ago
          Interesting tool.

          I'm not trying to defend the blog post, but I gave Slop Cop 775 words of an essay by Schopenhauer (translated into English) and got "15 patterns detected."

          I fear we're approaching the point where AI-written text grows indistinguishable from human-written text, unless the AI-user is exceptionally lazy and uses an obsolete model...

  • whycombinetor 13 hours ago
    >I read the Fogbank story and recognized it immediately. Not the nuclear material. The pattern. Build capability over decades. Find a cheaper substitute. Let the human pipeline atrophy. Enjoy the savings. Then watch it all collapse when a crisis demands what you optimized away.

    >In defense, the substitute was the peace dividend. In software, it’s AI.

    Before it was AI, the cheaper alternative was remote contract dev teams in Eastern Europe, right?

    • Tade0 12 hours ago
      Not sure why that was ever the plan, as there are clearly not enough people.

      Also over here, east of 15°E we were fired all the same.

      I believe the plan is to quite simply "do less overall unless it's about AI", but everyone was waiting for others to start layoffs first.

      I spent six months working part time and the decision makers made it clear that this is preferable for them long term. Beats getting fired, but I couldn't sustain this lifestyle - I'm frugal but not that frugal.

    • NSUserDefaults 13 hours ago
      Happy to help and eventually take over.
    • Madmallard 10 hours ago
      Pretty sure cheap foreign labor is more prevalent now than ever at every major tech company.

      They really, really do not want to spend money. Especially not on Americans and their health insurance.

      It's really strange how we're just letting them get away with this. They're on a fast trajectory toward putting Americans completely out of work and without aid, even though they're American companies first and foremost.

      • dmix 7 hours ago
        America could just reduce their cost of living, optimize their healthcare, make domestic business more attractive etc instead of trying to ban everything to duct tape over deeper problems
        • Madmallard 4 hours ago
          What's your evidence that they could do that?
      • lotsofpulp 10 hours ago
        > It's really strange how we're just letting them get away with this.

        Choosing to pay less is what almost all people do, and it is consistent with almost all of human history.

        > They're on a fast trajectory toward putting Americans completely out of work and without aid, even though they're American companies first and foremost.

        When push comes to shove, i.e. paying lower prices to consume more goods and services or paying higher prices to ensure your countrymen can buy more goods and services, almost everyone will choose to pay lower prices. See political unpopularity of sufficient tariffs to stop imports.

        “American” is a nebulous term, and Americans have been choosing lower prices for many decades before the current crop of employees at the global big tech companies chose lower prices. It is no different than when someone picks up lower priced workers outside waiting Home Depot, who are there because they do not have legal work authorization in the US.

        • Madmallard 4 hours ago
          Yeah that's true.

          I think it's all bad and counter-productive toward a stable society though. I think economic sacrifices likely have to be made to ensure long-term viability. What we're doing now is accelerating the demise of everything. The entire planet even.

    • Nux 13 hours ago
      India for the most part.
      • Scroll_Swe 13 hours ago
        [flagged]
      • gitowiec 12 hours ago
        [flagged]
        • solid_fuel 12 hours ago
          Take your racist attitude elsewhere or even better, keep it yourself. The comment chain was only about where IT work is being outsourced.
    • neonstatic 12 hours ago
      It had to be H1B Indians and outsourcing to India. As a European, I have seen some "Eastern European devs" around, sure. But they were not present at every company I worked with. Indians were. Quality-wise, it was always the same story, but I'm not going to elaborate. Everyone who is ready to accept it, knows what I would be saying anyway.
      • codingdave 9 hours ago
        No, you probably need to elaborate on that. Because in my experience, the quality from people in India varies just as much as the quality from any other country, including the USA.

        What does make a difference is the company they work for. Large hourly "body shops" gives you coders whose quality tends to be lower, regardless if we are talking about an Indian firm or an American firm. Direct hires of independent individuals tend to be higher. But there is always individual variation.

        You see people from India more, sure. There are more of them. Over a billion of them, to be precise. Anyone who dismisses a billion people as "always the same" is not being clever, they are being racist. And you know that, otherwise you wouldn't have pre-empted this response with "everyone who is ready to accept it."

        Say that there are communication gaps to overcome. Say there are cultural differences. Say that those cultural differences change the assumed business expectations and the mechanisms by which people express their thoughts and opinions. Those things are all true. My recommendation to anyone who has an urge to dismiss an entire population is to instead get to know them: Step up and learn how your teammates think and work. It will make for a better team, better communication, and better results.

        • neonstatic 8 hours ago
          Okay, since you insist.

          I'm not racist. I don't care about race. I do care about culture a lot. By culture I mean a set of "default behaviors" and values that people from said culture are more likely to exhibit. That's where my issues with Indians began and continue. Of course you are right that generalizing over 1+ billion people is a futile exercise. Intellectually, I agree. And yet, in my personal experience, certain behaviors and attitudes they have just keep coming up with frequency, that just doesn't match any other group of people I have been interacting with. I live a rather international life. I interact with people from many, many cultures. I currently live in a culture, that is completely alien to my own, and I love it. It's not a problem of closed mind or some kind of supremacy thinking. I am free from that.

          Specifically about Indians - I find that great many of them prefer memorizing over thinking. In the IT consulting days of my career, I noticed that they seemed to have 4-5 solutions, that they would apply to all problems. Whether the solution would fit the problem or solve it, was secondary. If it did, great. If it didn't, well that was someone else's problem. Half of my job was fixing stuff that an Indian "fixed" before me. The appearance of having fixed something was much more important than the actual fixing. It was all about appearances with them. While people in general seek recognition, I have never met another group of people who are so eager to lie and cover things up to gain some perception of short-term bump in status. It's not isolated to work environment. You see, I suspected myself of perhaps being racist in the end, so I would challenge myself to befriend Indians if I met any - just to see. Maybe I was being judgmental and wrong? The last time I tried it, the Indian man I met kept kissing my ass so much I had to cut him off. Why did he do that? Based on what he was saying, he saw me as someone from an "upper caste" (he projected his ideals of a successful businessman on me) and desperately wanted me to know how much I have done for him (I haven't done anything other than having a few conversations about life and business in general). Took me a while to understand that all this excessive praise and ass kissing was an attempt to elevate himself by proximity to something great. Needless to say I am nowhere as great as he portrayed me to be. Later I also found that half the stuff he shared with me was made up to impress me.

          Another feature of their culture is extreme pride. They will never stop talking about India, Indian culture, Indian food, etc. They expect you to praise it, be in awe. If you aren't, they will pressure you to change your mind. Since working with them was a universally appalling experience, I wasn't impressed, so that came up a lot. You see this pride and attention seeking everywhere online. A normal person will say "Hello", "Good morning". An Indian will say "Good morning FROM INDIA". It must be mentioned, because it must be noticed and praised. It's just tiring. There is a reason why so many are waiting for country-based filters on Twitter. You wouldn't have guessed which countries are most upset about this.

          I am certain that there are reasons and explanations for all of this and that there are many exceptions. As you have mentioned, there are so many of them, they can't all be like that. And fair enough. I just find all of this so tiring, that I don't want to deal with them at all. If 1 out of a 100 is a smart and pleasant person, they are still surrounded by 99 that I don't want to deal with. It might be sad, but it is what it is.

      • mondomondo 10 hours ago
        [dead]
  • anonzzzies 12 hours ago
    I saw academic rigor fall of a cliff in exchange for 'better job alignment' between end 80s when I had my first class after finishing highschool called 'Formal verification in software' on to beginning of the 2000s when I left giving the first class to new students 'Programming in Java'. All the 'teaching how to think' was replaced with 'how to get a well paying job'.
    • mschuster91 10 hours ago
      > All the 'teaching how to think' was replaced with 'how to get a well paying job'.

      Yeah. Companies didn't want to train new employees any more as that costs money (both for paying the trainees and the teachers) so they shifted to requiring academic degrees. That in turn shifted the cost to students (via student loans) and governments.

      People call it a red flag for scams if you are supposed to pay your employer for training or whatever as a condition of getting employed... but the degree mill system is conveniently ignored.

      • Sharlin 7 hours ago
        Costs are externalized, profits are privatized. A tale as old as the society itself.
      • lotsofpulp 8 hours ago
        The problem was the government providing the blank check loans with no underwriting. Without that subsidy from future taxpayers, incentives would be properly aligned.

        No lender would have been stupid enough to give 18 to 22 year olds $200k for bullshit degrees and sports facilities.

        The onus would have remained on employers and government to pay for education, rather than a certification, because they would have been the ones paying.

        • 999900000999 7 hours ago
          College should have never been presented as the only way to the middle class. In high school they shutdown my advanced trades class, maybe I could have been ready to hop into a decent job after graduation.

          I recently spoke to a young art school grad who talked about getting on disability over a life of the corporate grind.

          Who am I to disagree ? The Pentagon has never passed an audit, the government coffers are effectively a slush fund for defense contractors.

          At this point, I think a universal basic income is the only way.

          Not enough jobs exist for everyone. Poverty doesn’t need to exist

          • lotsofpulp 6 hours ago
            >College should have never been presented as the only way to the middle class. In high school they shutdown my advanced trades class, maybe I could have been ready to hop into a decent job after graduation.

            Not unless you were willing to compete on price against people in China learning the same advanced trades.

            • 999900000999 4 hours ago
              So I assume every time you need a leak plugged you ship your whole house to Guangzhou ?
              • lotsofpulp 4 hours ago
                No, I search a few YouTube videos, and buy less than $100 of tools and supplies that were made in China from Home Depot, and fix the issue. Or a plumber uses tools and supplies made in China to fix the issue.

                “Advanced trades” would be making and fixing the machines used to make the tools and supplies to fix the leak.

  • cladopa 13 hours ago
    People are not perfect. I went to Ukraine just days before the invasion. Travel and Hotels in Kiev had become extremely cheap. You asked the Ukrainians about the possible invasion. "Not going to happen" everybody said."Russia talks always aggressively, but never does anything".

    They did not properly prepare and as a result lost 20% of its territory in days.

    Days after that I was back is Austria and could not stop thinking about some of the people I spoke with being dead.

    Since that I have also been in Dubai and Saudi Arabia as an entrepreneur and engineer. "What are you going to do when drones are used against your infrastructure?" If you followed the Russian war and first Iranian strike it was obvious that drones were going to be used against them. "not going to happen" again.

    The have lost tens of billions for lacking proper preparation. They could have been protected spending just hundreds of millions of dollars over years.

    It is about humans, not AI.

    • wiseowise 12 hours ago
      > They did not properly prepare and as a result lost 20% of its territory in days.

      Ukraine has been preparing since 2014. Without preparation there would be a Russian talking head right now in Kyiv.

      • jakub_g 7 hours ago
        According to [0] the military was basically doing under-the-radar preparations in the last few weeks before the attack, because the official narrative was that nothing's gonna happen.

        > A small group of officers at HUR, Ukraine’s military intelligence agency, did begin quiet contingency planning in January, prompted by the US warnings and the agency’s own information, one HUR general recalled. Under the guise of a month-long training exercise, they rented several safe houses around Kyiv and took out large supplies of cash. After a month, in mid-February, the war had not yet started, so the “training” was prolonged for another month.

        > The army commander-in-chief, Valerii Zaluzhnyi, was frustrated that Zelenskyy did not want to introduce martial law, which would have allowed him to reposition troops and prepare battle plans. “You’re about to fight Mike Tyson and the only fight you’ve had before is a pillow fight with your little brother. It’s a one-in-a-million chance and you need to be prepared,” he said.

        > Without official sanction, Zaluzhnyi did what little planning he could. In mid-January, he and his wife moved from their ground-floor apartment into his official quarters inside the general staff compound, for security reasons and so he could work longer hours. In February, another general recalled, table-top exercises were held among the army’s top commanders to plan for various invasion scenarios. These included an attack on Kyiv and even one situation that was worse than what eventually transpired, in which the Russians seized a corridor along Ukraine’s western border to stop supplies coming in from allies. But without sanction from the top, these plans remained on paper only; any big movement of troops would be illegal and hard to disguise.

        [0] https://www.theguardian.com/world/ng-interactive/2026/feb/20...

    • the-smug-one 12 hours ago
      I'd say that Ukraine were very prepared for the invasion, though? They managed to survive for the first 2 weeks, leading to a long-term war. The Donbas war had already been going on for 8 years, and I don't think Ukrainians were under some illusion that those weren't Russians.
    • blitzar 12 hours ago
      On the flip side, all around the world you have "leaders" talking about imaginary conflicts with foreign countries that we must spend billions (they have a friend who really should get the contract) to prepare for and if the other side (tm) gets in your whole family will be killed instantly.
      • fifilura 12 hours ago
        Killing of families is what happened in Ukraine in the Russia controlled territories.
    • teiferer 12 hours ago
      In hindsight, it's easy to be smart. You picked two examples where somebody said "never gonna happen" and then it happened. How about the countless examples where somebody said the same and then the thing actually didn't happen?

      Take millions playing the lottery. To each of them, I can confidently say "you won't win, not gonna happen". For almost all of them I'll be right. There will be one who wins, were I was wrong, and they will say "see, told you so". That doesn't mean my prediction was wrong. It means you are having a reporting bias.

      • hnfong 12 hours ago
        GP also probably had a sampling bias. The ones who were actually concerned about the impending Russian invasion presumably fled out of the country (or at least, away from the major cities to rural areas that probably see less fighting)
        • _heimdall 9 hours ago
          I was in a neighboring country in Europe at the time, not Ukraine, but we didn't see any Ukrainians move into our area until a few weeks after the war started.

          That's not to say the country wasn't prepared though. If the GP did talk to people on the ground days before it started, saying it won't happen would match the public propaganda at the time coming out of the Ukrainian government and their allies. They knew it was coming and seemed to decide they were better to faint like the weren't ready and avoid public panic before it started.

    • sofixa 12 hours ago
      > They did not properly prepare and as a result lost 20% of its territory in days.

      They did though. While nobody actually believed Putin would be dumb enough, the Ukrainian army was still, just in case, extremely busy on preparing defences, organising stockpiles, preparing defensive tactics.

      • _heimdall 9 hours ago
        > While nobody actually believed Putin would be dumb enough

        I'm not sure why you'd say nobody thought they would invade. To me it was clear in December the year before when the Russian navy began sailing the long way around Europe, getting in the way of Irish fisherman and confirmed days before the invasion when they had stockpiled medical personnel and blood on the front lines.

        • anonymars 7 hours ago
          When the US warned, days before, of the imminent invasion, the broad reaction was still one of "the boy who cried wolf"
          • _heimdall 6 hours ago
            And I didn't understand why anyone thought the warning was wrong. Who sends their navy around Europe, collects 100k troops on the border, and ships blood reserves to the front for a training exercise?
        • lotsofpulp 8 hours ago
          It was clear when they captured Crimea.
          • _heimdall 6 hours ago
            Sure that plays a role for sure, though I don't think anyone in 2014 was predicting a broader invasion 8 years later.
            • lotsofpulp 6 hours ago
              Crimea was Feb 2014, and then in Jul 2014, Malaysia Airlines flight 17 was shot down by "pro russian separatists" in eastern Ukraine. Seems like they were laying the groundwork for a while. I have no idea what portion of this wikipedia article is true:

              https://en.wikipedia.org/wiki/Novorossiya_(confederation)

              But it seems Ukraine had questionable control of the eastern parts of the country long before the 2022 "official" invasion.

    • vasco 12 hours ago
      > Since that I have also been in Dubai and Saudi Arabia as an entrepreneur and engineer.

      Why would we listen to anything related to right or wrong from you then if you don't care?

  • zero0529 12 hours ago
    Every day Peter Naur’s paper programming as theory building gets more relevant

    Link: https://gwern.net/doc/cs/algorithm/1985-naur.pdf

    • apitman 7 hours ago
      This was the first article I thought of as well. Highly recommended read.
  • VikRubenfeld 4 hours ago
    These LLMs are great for now, but they have to go by their training materials. And if people stop creating new ways to code, new languages, new coding patterns, etc, then the code LLMs produce will be stuck in 2026 forever.
    • linhns 3 hours ago
      Someone will use LLMs to create new languages, from which new patterns will arise.
  • neuderrek 12 hours ago
    I remember same complaints about junior engineers copy pasting snippets of code from StackOverflow without understanding. And without curiosity to understand, without code review and mentorship from senior engineers they never grew to the senior level. But that is only some of them, others used StackOverflow to learn, did not use the snippets without understanding them first and properly adapting to their context, and they got good coaching in their teams and now have reached senior level from there. I see the same dynamic with LLMs, just more opportunities for both juniors to learn more by following up, and for seniors to to create tooling to enforce better architectur, test coverage and fault resiliency.
    • isodev 12 hours ago
      I think you're missing the point. Nobody removed people thanks to their SO copy-paste skills. If anything, more folks were hired to troubleshoot and sort out any copy pasta blunders (since you actually need working software, at the end of the day).

      With LLMs this is no longer true - the thing can vibe a great deal before anyone notices that they have 100.000 lines of code doing what a focused, human reviewed and tested 10.000 lines can do. And as this goes on, it becomes increasingly more difficult for anyone to actually dig into and fix things in the 100.000 without the help of LLMs (thus adding even more slop on the pile).

  • RossBencina 13 hours ago
    Excellent post. Two stand-out points are deskilling through abolition of apprenticeship (or equivalent progression through the rank and responsibility), and loss of institutional knowledge, especially tacit knowledge stored in individual people. These are people problems more than they are technology problems. Without continuity of process and practice stuff gets lost. Sometimes change really is progress, for example software safety and security practices have progressed over the past 50 years, but other times change is just churn, or choices driven by misaligned incentives which will bite later, as the article describes.
    • RangerScience 13 hours ago
      What comes to mind is how the cure for scurvy was simply… forgotten, causing it to come back.
  • slibhb 2 hours ago
    Yes, and textile factories involved people "forgetting" how to weave. Whitehead:

    "Civilization advances by extending the number of important operations which we can perform without thinking about them"

    It remains to be seen whether this implies some kind of constraint on human progress. I doubt it.

    • rini17 2 hours ago
      If we suddenly needed huge amounts of textile or clothing, we would run into same issues. This is not about progress as such.
    • lowsong 1 hour ago
      We're not talking about replacing "writing code", we're talking about replacing your ability to think about the problems critically at all.
      • slibhb 1 hour ago
        People said the exact same thing about computers.
        • lowsong 45 minutes ago
          So?

          The research on AI is showing again and again that people that use AI are losing their skills not just in the specific task but more generally. This isn't a "change of skills" it's a fundamental reduction in the skills of knowledge workers.

  • b00ty4breakfast 3 hours ago
    At least here in America, we've been offshoring coding since at least the late 90s. Often, that code isn't so great (this is not an indictment of foreign workers in general but these off-shore operations are not always on the up-and-up).

    And just like offshoring dev work, we may see the rebound effect when there's all kinds of poorly written LLM outputs in production and companies are running around trying to re-hire high quality devs to fix all these fires that they themselves started.

    • linhns 3 hours ago
      And when foreigners reach some levels of decent expertise, their jobs are offshored to cheaper nations again.
  • raincole 12 hours ago
    First of all this is clearly AI-assist writing (being charitable here).

    And the premise makes no sense anyway. The only risk of forgetting how to make shells is when other countries are making shells more efficiently. Non-western countries are not going to reject AI-coding, nor are they going to make software more efficiently by hand.

    • 0xpgm 12 hours ago
      Programmers in non-western countries may not be able to afford $100 per month on vibe coding.

      They may keep taking the longer and harder route of a mixture of AI and hand coding.

      • pilgrim0 4 hours ago
        The same applies to the south. It’s shocking to read tales of people spending hundreds of dollars monthly with coding agents, that’s wholly impossible for the vast majority of devs in South America, even 20 dollars is hard to justify for most households. By economic factors alone, I bet there are a lot more people learning the hard skills in places they can’t afford to be dependent on the tools.
      • alansaber 9 hours ago
        They'll find a way. If it's not the chiptole bot, the enormous volume of low-effort AI implementations will provide a free token layer.
    • lpcvoid 8 hours ago
      >Non-western countries are not going to reject AI-coding

      If they are smart, they will. And I think they are smart.

  • sminchev 1 hour ago
    This is like saying: nobody can do woodworking with with manual tools anymore these days, because there are machines since 30 years that do woodworking.

    There will be always a room for good developers.

  • frogperson 2 hours ago
    There is no incentive to work hard for most organizations. They will take credit for all your work and maybe give you a $1000 bonus at the end of the year, if they havent laid you off by that time.
  • osigurdson 6 hours ago
    The author calls it out at the end but still spends a long time creating a false equivalency. Offshoring to humans in another country is not the same as having machines do work domestically. Really, the only thing that matters is:

    "Maybe AI gets good enough, and the bet pays off. Maybe it doesn’t."

    Of course, we are all wondering if AI will be good enough in 5 to 10 years such that you don't have to look at the code (at all). If so, then very few programmers will be needed it seems. If not, its possible that roughly the same number will be needed.

    It seems oddly binary to me since as soon as you need to understand anything about the code, you have to effectively onboard yourself to a foreign codebase and develop the needed context.

  • 0xbadcafebee 4 hours ago
    AI isn't making us forget, and we aren't in the process of forgetting. We forgot, past tense, in 2015.

    The rise of coding bootcamps destroyed the historic knowledge and expertise of professional software developers. Waves and waves of people joined the tech workforce, without taking the years of experience required to learn how programming, and professional software development, should work. The result was a lot of really bad code, and a lot of reeeeeally bad product decisions.

    Since 2018 I haven't met anyone who has read an entire technical manual on a framework, library, or tool that they use every day. By 2020 I was meeting engineering managers who said they wouldn't let engineers use a technology if they couldn't find StackOverflow snippets for it. I still meet "Senior" engineers who don't understand the most basic professional methods, like how Scrum, Agile, or Kanban actually work, and why you shouldn't just make things up as you go. Hell, the entire industry developed a collective psychosis preventing them from understanding the word "DevOps", because everyone switched entirely to learning by reading false blog posts written by clueless amateurs and upvoted in an echo chamber. If you never learn properly, and repeat misconceptions, you won't do good work.

    We neeed a professional software development license, the same way the Trades have licensed plumbers and electricians and framers. We need people to apprentice under a master engineer, so they are guided by people who know what to do and what not to do. And we need formal tests to ensure businesses don't hire clueless people who passed a two week course to write critical software. Of course nobody wants to do this, and that's why it's so necessary.

    • auggierose 35 minutes ago
      > Since 2018 I haven't met anyone who has read an entire technical manual on a framework, library, or tool that they use every day.

      That presumes that they still build frameworks, libraries and tools with technical manuals worth reading.

    • conqrr 4 hours ago
      License makes a lot of sense. Many engineering fields have licenses. Needs to be made mandatory. Tech rapidly evolves which was an excuse. Well, a lot has stabilized now in terms of technologies.
  • p0w3n3d 1 hour ago
    This Is exactly how civilizations die out. When AI was incoming I wrote similar predictions. This WILL happen, sooner or later. People will stop understanding what programming is. What is hard drive. Why storing files in cloud is not the same as having them on your HDD. And similar things
    • notepad0x90 1 hour ago
      what other civilizations died out this way?
  • tim333 5 hours ago
    The thrust of the argument seems the wrong way around. It's roughly:

    - Knowledge of how to make Fogbank etc. was lost when the people retired and or died. AI will make things worse, especially for code.

    In reality if they'd used AI, the knowledge in it would still be there as it doesn't retire or die or need paying a salary. I guess you have to keep a copy of the model file.

    The article seems AI written with punchy sentences and mixed up logic.

  • pabs3 13 hours ago
  • tjwebbnorfolk 14 hours ago
    You could say COBOL has had this "problem" for 40 years also. That's why we need to constantly be inventing new ways of making things. The old ways are always forgotten over time.

    If you REALLY need something long-forgotten, then you have lazy-load it back into being at significant cost. That's the price of constant progress.

    • LeCompteSftware 13 hours ago
      The point of the article is that sometimes the "old ways" really means "not particularly profitable or necessary in the short term" but the bill comes due in a crisis. The reason US/EU manufacturing was "the old ways" is that people could make easier money with financial engineering, an insight that extended all the way to Raytheon.

      COBOL is a bad example, but higher-level languages vs. assembly is not. If you write a lot of C you really don't need to know assembly.... until you stumble across a weird gcc bug and have no clue where to look. If you write a lot of C# you don't really need to know anything about C... until your app is unusably slow because you were fuzzy on the whole stack / heap concept. Likewise with high-level SSGs and design frameworks when you don't know HTML/CSS fundamentals.

      As the author says maybe AI is different. But with manufacturing we were absolutely confusing "comfortable development" with "progress." In Ukraine the bill came due, and the EU was not actually able to manufacture weapons on schedule. So people really should have read to the end of "building a C compiler with a team of Claudes":

        The resulting compiler has nearly reached the limits of Opus’s abilities. I tried (hard!) to fix several of the above limitations but wasn’t fully successful. New features and bugfixes frequently broke existing functionality.
      
      At least with Opus 4.6, a human cannot give up "the old ways" and embrace agentic development. The bill comes due. https://www.anthropic.com/engineering/building-c-compiler
      • tjwebbnorfolk 1 hour ago
        > sometimes the "old ways" really means "not particularly profitable or necessary in the short term" but the bill comes due in a crisis.

        yes of course. that's why I said > If you REALLY need something long-forgotten, then you have lazy-load it back into being at significant cost.

        This is all known, because it's always been this way. You can't hire a blacksmith, you need to first REMAKE the blacksmith if you really need one. It's always been this way, and it will continue. There is a cost to resurrecting old processes. This cost is a fact of life and needs to be planned for.

        It cannot be avoided except by maintaining some kind of "strategic reserve" of thousands or millions people who sit around building things nobody wants on the off chance they might be needed again -- which a democracy will not long have the patience to continue paying for.

      • tjwebbnorfolk 51 minutes ago
        Many things we must know to do our jobs are themselves artifacts of historical decisions made in a time and place that no longer makes sense, but we have to know them to do our jobs.

        Claude has allowed me to jettison many useless (IMO) skills I've developed over the years. I'm quite happy to let my bank of CSS and regex trivia expire from the cache, never to be reloaded again. I will never have to write another webpack.config.js as long as I live. So much time in programming is spent looking up SDK operations that I basically know, I just can't remember whether the dang method is called acquire_data() or load_data() .... etc

      • anonzzzies 12 hours ago
        But these are hard IT things a human programmer really struggles with as well. What % of software written is that? Very very low. Most software is dull and requires business vagueness to be translated into deterministic logic and interfaces; LLMs are pretty great at that as it is. If humans use their old ways to fix complex problems and llms do the rest, we still only need a handful of those humans. For now.
        • LeCompteSftware 12 hours ago
          "For now" is sort of the entire point of the article :)

          Even in the Before Times, it was much cognitively cheaper to write code than it is to read someone else's code closely, or manage lots of independent code across a team, or to make a serious change to existing code. It's so much easier to just let everyone slap some slop on the pile and check off their user stories. I think it will take years to figure out exactly what the impact of LLMS on software is. But my hunch is that it'll do a lot of damage for incremental benefit.

          With the sole exception of "LLMs are good at identifying C footguns," I have yet to see AI solve any real problems I've personally identified with the long-term development and maintenance of software. I only see them making things far worse in exchange for convenience. And I am not even slightly reassured by how often I've seen a GitHub project advertise thousands of test cases, then I read a sample of those test cases and 98% of them are either redundant or useless. Or the studies which suggest software engineers consistently overestimate the productivity benefits of AI, and psychologically are increasingly unable to handle manual programming. Or the chardet maintainer seemingly vibe-benchmarking his vibe-coded 7.0 rewrite when it was in reality a lot slower than the 6.0, and he's still digging through regression bugs. It feels like dozens of alarms are going off.

          https://en.wikipedia.org/wiki/The_Mythical_Man-Month

          • anonzzzies 12 hours ago
            These are good point and I am not overestimating; we are simply seeing the productivity boost in our company and the rise in profitability. We practice TDD, but only at integration level, so we have tests upfront for api and frontend and the AI writes until it works. SOTA models are simply good enough not to do;

            function add(a,b) = c // adds two numbers

            test: add(1,2)=3

            to implement

            function add(a,b) return 3

            So when you have enough tests (and we do), it will deliver quality. Having AI write the tests is mostly useless. But me writing the code is not necessarily better and certainly not faster for most cases our clients bring us.

  • sutterd 2 hours ago
    I’m working on a side project and AI is writing all the code. The code it produces is not good, and this comes from someone who has experience producing bad code. One thing I’m worried about is places like GitHub being full of AI code, which leads to AI being trained on AI code. It seems like this will lead to a downward spiral.
  • rbbydotdev 10 hours ago
    Needn’t worry, such incompetencies are rooted out by the 8th or 9th round of interviews.
    • alansaber 9 hours ago
      A key pain point addressed by Cluely or some such
  • threepts 5 hours ago
    Just like how "The West" offloaded most of its manufacturing to China as people don't have sew their own shoes.

    I see this as a sign of increase in productivity, the important software will still have a human centered development team, but we don't need another dev team on say, tinder for dogs.

  • Tade0 12 hours ago
    > The combination of technical skill and the judgment to know when the AI is wrong barely exists in the market anymore.

    Well then train them, instead of selecting 0.18% of applicants and calling it a day.

    It's not some innate, immutable property - people can be taught even in adulthood.

    Also it's not like they'll work for a year and switch jobs - not in the current market.

  • heinternets 13 hours ago
    When you've run out of ideas just portray "the west" as some monolithic portrait in some decline-porn fan fiction as clickbait.
    • simonask 10 hours ago
      I'm so tired of this, it's such a lazy take. "The West" is a giant, incongruous collection of wildly disparate nations and cultures with wildly different circumstances, policies, histories, and cultures.

      It feels a lot like someone has a cursory understanding of American politics, and thinks the US is somehow representative. It's not, it is an outlier by every statistical measure. If you want to understand the world, you need to start by forgetting everything you know about the US.

  • doginasuit 8 hours ago
    I write all my own code and then run it by the LLM for analysis and suggestions. I'm probably not a 10x developer, but this has 10x'd my own progress. I make fewer missteps and build a deeper understanding of the problem space. I used to brace myself for a lengthy debugging session whenever testing a new section of code and now it tends to run as expected, more often than not. It may take a few years but I expect people who are using LLMs in the backseat will pull out ahead as the leaders of the future of software.
  • linsomniac 4 hours ago
    This post uses the war machine to demonstrate AI=bad.

    I'm far removed from the conflict in Ukraine, but from the reporting it seems like they are making extremely good use of well understood, inexpensive technologies like drones with mundane munitions.

    I'm sure Stinger missiles have their place in the battlefield there, but a $120K stinger doesn't seem like a very good countermeasure against a few thousand dollar drone.

    So, counterpoint: We also need to understand how to embrace the changing face of software.

  • bonsai_spool 8 hours ago
    Very frustrating to have an ‘article’ so heavily AI-written take up this much space and attention.

    What’s really happening is that we are all forgetting how to think

  • phillipcarter 3 hours ago
    Just a small datapoint, but:

    > Salesforce said it won’t hire more software engineers in 2025.

    Some headline somewhere reported this, but Salesforce plenty of engineers (in the US at least) in 2025. One of them is a junior engineer on one of my scrum teams.

  • tim333 5 hours ago
    The thrust of the argument seems the wrong way around. It's roughly:

    - Knowledge of how to make Fogbank etc. was lost when the people retired and or died. AI will make things worse, especially for code.

    In reality if they'd used AI the knowledge in it would still be there as it doesn't retire or die or need paying a salary. I guess you have to keep a copy of the model file.

    The article seems AI written with punch sentances and mixed up logic.

  • Brian_K_White 23 minutes ago
    It's the trusting trust problem cubed or worse.
  • threepts 5 hours ago
    "institutional knowledge that exists nowhere in the codebase. Those engineers don’t exist yet because we’re not creating them. The juniors who should be learning right now are either not being hired or developing "

    This article passes blame to AI for developers not learning because they are not being actively hired. You do not need to be hired to learn something. You need to learn something in order to be hired.

    • techblueberry 4 hours ago
      Most people don’t know shit before they get a job. All the most gifted developers I’ve hired or worked with were idiots on day one. Less dumb than than everyone else, but I don’t care how many hackathons or coding competitions or open source projects you’ve contributed to, there’s so much education dealing with real problems and real coworkers in a real work environment.
      • threepts 4 hours ago
        I agree, most human developers have an exponential learning curve but I also acknowledge the fact companies are mostly driven by shareholder pressure to maximize earning potential and are not to keen on funding that curve.

        They just want to quick results for their 3rd quarter report and AI does that, without much investment (for now atleast).

  • rbren 1 hour ago
    Forgetting how to code is not a bad thing. Forgetting how to make software is.
  • htunnicliff 4 hours ago
    > I read the Fogbank story and recognized it immediately. Not the nuclear material. The pattern. Build capability over decades. Find a cheaper substitute. Let the human pipeline atrophy. Enjoy the savings. Then watch it all collapse when a crisis demands what you optimized away.

    >

    > In defense, the substitute was the peace dividend. In software, it’s AI.

  • culanuchachamim 5 hours ago
    Well, I think it's a good sign that society forgot how to make weapons. The sad thing is that they need to re-learn again.
  • TeMPOraL 12 hours ago
    The article makes no sense, and stars with a very wrong perspective on things.

    This kind of forgetting is normal. It's how things work when time and resources are finite. The only problem here is the belief that you can keep capacity to do something without actively exercising it, and thus the expectation that you can "just" resume doing things after a long break, without paying up a cold-start cost.

    But you can't, and there's no reason to be surprised. I bet the Pentagon and the EU weren't. They didn't need those Stingers and shells for decades, didn't expect to need them soon - but they knew they could get them if they really needed them, but it's gonna be costly.

    I don't get why people think this is unusual or surprising, or somehow outrageous and proves something about society or "mindsets of elites" - other than positive aspects like adaptability and resilience.

    This is true at all scales. Your body and brain optimizes aggressively, too. An individual saying "I need to warm up" or "I need to hit the gym a few times and then I'll be able", or "yes, I can, but I haven't done it for years so I need an hour with a book/documentation..." - all that is exactly the same as EU going "yes we can make artillery shells... though we haven't in a while so we need some time and some millions of EUR to get our supply chain sorted out first".

    • 0xpgm 12 hours ago
      > This kind of forgetting is normal

      Just as shift in power and the rise and fall of nations is normal.

      • TeMPOraL 7 hours ago
        Yes. Again, this will eventually happen to every one, some way. Of course nations always want to prevent this; it's part of the job of the government. But there's always long tail of very low probability, very destructive threats. You can't possibly safeguard against them. In fact, trying to do so is a sure way to trigger a fall of your nation (or at least your government), by draining your economy dry due to paranoia.

        The rational thing is to address a threat proportionally to it's expected damage and probability of occurrence. When war is unlikely, you scale down your defense production; when it becomes more likely, you ramp it up - paying cold-start cost is still much cheaper than paying for ongoing readiness. If your scaling down defense makes it more likely for you to be attacked - well, that's the job of your intelligence and defense departments to track. Nobody said it's a static system - it's a highly dynamic one, that's what makes geopolitics a hard thing.

      • Terr_ 12 hours ago
        For that matter, a lot of human civilization has been about identifying things that were normal and making them rare. "Normal" infant mortality of 40%, famines, floods, history being lost, etc.

        Anyway, when it comes to "this is normal" I think we should take care to distinguish between interpretations of:

        1. "This specific case should not have taken certain people by surprise."

        2. "This is a manifestation of a broader phenomenon."

        3. "This is natural and therefore cannot or should not be solved." [Naturalistic fallacy.]

        • TeMPOraL 7 hours ago
          In the specific case discussed in the article and comments, I'm advocating for another interpretation:

          4a. "If a process is unlikely to be needed any time soon, shutting it down and then paying cold-start costs if and when it's needed again, is better than keeping it going and wasting resources better used elsewhere", and

          4b. "There's an infinitely long tail of low-probability problems, and you can't possibly afford to maintain advance readiness for any of them".

          Also on the overall sentiment:

          4c. "Paying a cold-start cost isn't a penalty or sign of bad planning. It's just a cost."

    • gblargg 10 hours ago
      My thought as well. Imagine the cost if we kept active every production line of every obscure thing we haven't needed in decades. It's unreasonable to think that we should still be able to make these easily. It would hamper development of new things.
  • fauigerzigerk 9 hours ago
    The defense analogy makes absolutely no sense. All the examples are of production shutdowns or reductions. Knowledge was lost because people retired and not replaced at all. None of it was lost to automation.

    Automation is the exact opposite of tying knowledge to people. It's extracting knowledge from people and transferring it to a machine that can continue to produce the goods.

    Yes, AI can lead to problems and some of these problems will be related to gaps in knowledge that was thought to be obsolete when it really wasn't. But that's a totally different problem on a totally different scale from what happened with defense production after the end of the cold war.

    Nobody is shutting down or reducing software production. On the contrary, we're going to be making a lot more of it.

    • matwood 7 hours ago
      Exactly. The US hasn't forgotten how to manufacture, in fact a ton of manufacturing happens in the US. What's happened is that it's been automated. And automation is one of the better ways to extract knowledge from a person who will one day switch jobs, retire or pass away.
  • netfortius 12 hours ago
    This is why a comprehensive computer science degree is necessary. Seeing and working only with the trees leads to destroying some forests, eventually.
  • bit1993 13 hours ago
    Yes. Just like globalization created companies like TSMC, AI will do the same. Software engineers who don't rely on LLM code generators will have a moat because they can do it cheaply and sustainably.

    Another reason is that LLMs train on the existing code we already know, don't expect new programming languages or frameworks this means that the software engineering skills that exist today will be relevant for a long time.

    • zelphirkalt 12 hours ago
      I am not so much convinced by your last point, that point of new languages and frameworks. I think the cutoff date is closing in on our current now. If models cannot easily become bigger, they will likely advertise using "up-to-date-ness". Maybe they will be merely a few days behind. Or bigger models will make use of smaller but more up-to-date models.

      I think engineering skills will still remain relevant due to taste and proper judgement. A model trained on everything and the kitchen sink has probably not the fitting bias for given specific problems in my project. Accepting too much AI generated code without steering the ship will result in some drift of taste and ultimately make some mediocre project like done by people without good domain knowledge and without good taste. It might even be short term a business, but it lacks the long term excellence, that sets projects with good judgement apart from the common rabble.

      • bit1993 11 hours ago
        > I think the cutoff date is closing in on our current now. If models cannot easily become bigger, they will likely advertise using "up-to-date-ness". Maybe they will be merely a few days behind. Or bigger models will make use of smaller but more up-to-date models

        But they will still rely on assembly, C, Rust, Linux, HTML, TCP/IP... Doesn't matter how up to date they are, they rely on existing code they have been trained on, they can't just create new languages without the training data.

  • Scroll_Swe 13 hours ago
    "the west" ?

    You mean the world?

    Deepseek was being glazed here, Im sure chinese programmers use it like CC

    • Terr_ 11 hours ago
      To be charitable to TFA, there are a dearth of accurate and well-understood labels for the kind of X versus Y they want to make between national economies.

      Even "First/Third world" has been fraying at the edges for decades since it was originally about political alignment.

  • outloudvi 4 hours ago
    The article speaks well but the situation for coding is more severe.

    Shells are not needed once they are not in needed. Code does not: customer need is always there.

    Before forgotting how to code, The West will first get round up by their own Monsanto, voluntarily.

  • eolgun 11 hours ago
    The Fogbank example is the most chilling part. It's not just that they lost the people — they lost the ability to know what they didn't know. Nobody could even write down what was missing because the knowledge was never formalized in the first place.

    The junior hiring collapse compounds this. Senior engineers develop judgment partly by watching juniors make mistakes and correcting them. Remove that loop and you don't just lose future seniors — you quietly degrade the current ones.

    The 0.18% recruiting conversion rate mentioned here tracks with what I see in compliance and security engineering too. "Can you tell when the AI is confidently wrong?" is now the most important interview question, and almost nobody can answer it well.

    • muragekibicho 10 hours ago
      The junior hiring collapse is all so bizarre. I graduated recently and my career prospects are jarringly limited.

      I thought I'd go back for a Masters/PhD but then Trump mercurially defunded lots of STEM grad programs. Ngl, I found myself stuck. Zero job openings, zero PhD program openings. It's all so frustrating.

  • Liftyee 10 hours ago
    I wonder if the real problem is short-term thinking in culture and incentivised by markets. By optimising next quarter's profits over investing in long-term growth and capability, things like this happen.
  • jmull 8 hours ago
    I don’t see it as so dire.

    Software developers have been learning what they needed to know to do the job the whole time. That’s pretty much the job description.

    What you need to know has changed a lot recently. Like always.

    > The combination of technical skill and the judgment to know when the AI is wrong barely exists in the market anymore.

    That’s certainly not true. I’d take a hard look at my hiring process if it was performing this inefficiently.

  • drzaiusx11 6 hours ago
    The west did not "forget", it merely decided to participate in the "commoditization of everything" which has the exact outcomes we expect and easily observe today: skilled labor becomes unskilled labor in service to the machinery of capital.
  • ianberdin 10 hours ago
    I don’t know. Partly true. I came to web development, when low level things solved: frameworks, ORMs, OSs, databases. I don’t know sql nor c++ well. But I can create a system, a value based on the abstractions. Everyone told me: Ruslan, you don’t know SQL, what a shame! Well I do not have problems and did not have about it.

    Probably we are going to be fine with AI abstraction too. People will use it, stuck with problems, dig deeper, learn, improve, same as we had with frameworks and its source code.

  • RataNova 4 hours ago
    The worrying scenario is not that AI writes code. It's that companies stop creating the people who can tell whether the code is any good.
    • therealdrag0 3 hours ago
      Do you think judging code quality can’t be automated?
  • Arun2009 6 hours ago
    The west also "forgot" to calculate by hand. Because they invented calculators and computers.

    The west is not merely forgetting to code. It is creating systems that can code. They aren't standing still. They are progressing to a higher level of production.

    • dkrich 6 hours ago
      This raises a good point. The analogy in the article implies that eventually there will again be a need to know how to write code at a large scale and nobody will know how to do it. I don’t think the analogy holds if you think of AI as a sort of orchestration and abstraction layer which at the end of the day, all software development tools are.

      But I do think there’s another thing going on quietly in corporate America currently that will have major ramifications for companies that have prioritized using AI and that is a loss of technical excellence in general.

      I can’t put my finger on it but sometime around 2023 or so there was a noticeable falloff in technical competence at companies I work with because the higher ups went all in on a generative AI future. No longer were they investing in training new hires and having rigorous certification standards. Instead people were encouraged to use AI tools to answer questions and would regularly pass off the output to more knowledgeable workers for refinement. These people clearly had no idea whether what they were sending out was accurate or not but it looked and felt like real work.

      I think there will be a consolidation across the tech industry and AI will not be a differentiator and only those who are actually competent will succeed but right now AI is allowing a lot of incompetence to go undetected throughout a lot of organizations.

  • mtzaldo 2 hours ago
    This is a great article, make sure take a moment to read it and digest it.
  • mahrain 11 hours ago
    I don't agree with "the west forgot how to make things", it moved supply chains for cheap consumer goods to asia, but in the B2B space a lot of things are manufactured in Europe: companies like Bosch, Volkswagen, ASML, Alstom and Airbus are cranking out extremely complicated machines that last many years in demanding environments. It's just a different level of value-add vs. low cost electronics (for instance).
    • joker99 10 hours ago
      I remember Covid and the supply chain crisis that unfolded in Europe and the west. Most of the companies you’ve mentioned weren’t cranking out anything during that time as all of them realised that "low cost electronics" are not always readily available and that we forgot how to make them or don’t have the capacity to produce them in significant numbers anymore ourselves. A lot of basic electronic components were not available during that time and we still haven’t fully grasped the complexities of our supply chains and where they begin.

      I also remember, that EE for a while stopped using the term "jellybean parts". Turns out that most jellybeans are produced in Asia.

  • recvonline 2 hours ago
    Funny enough that this article is written with AI.
  • SillyUsername 9 hours ago
    Space programmes have this issue too - everything had to be relearnt and un-obsoleted for Artemis Moonshots
  • user2722 11 hours ago
    If the system treats you as a number, you should become a mercenary.

    I love this articles who all the coders read but none of the management.

    If possible, be a mercenary and put a high number on your expertise, so we can solve this management blind spot faster.

    If you can't, let your life/work's passion be "not starving to death", and try to change it politics-side.

  • agentbc9000 9 hours ago
    Chinese models run around $2 to $8 dollars per million tokens - Claude is 10x that cost - when will the bean counters move to chinese models, USA bans these models for national security reasons - Anthropic, Openai, Meta, X all move to China where the models will be cheaper
  • efitz 13 hours ago
    I disagree with the premise - interesting but I interpret the same fact pattern differently.

    The history of technology is the replacement of manual processes with automated ones.

    Consider a very basic process: checkout of a restaurant.

    Writing the price of each item on a sheet of paper, manually adding them and writing the total was replaced with typing in the prices and eventually with just pushing the button for the item. Paper still exists for jotting down your order but within seconds of leaving the table it’s transitioned to computer.

    This has enabled lots of desirable advances- speed, accuracy, new payment rails, and increasingly, elimination of the server in checkout- you tap a credit card on a tabletop device.

    Did we “forget” how to do checkout? No. We purposely changed it.

    But if the internet connection goes down or the backend server powering the cash register app goes down, there is an atrophied and not-regularly exercised skill set (maybe not even trained, IDK) that has to be implemented on-the-fly and it’s slow and frustrating for everyone.

    Businesses don’t exercise (or perhaps even train) this process because it’s just not needed enough to warrant the cost.

    Military procurement of weapons systems is hardly the place to point to as a technological tradition. There are lots of cases where no one pays the money to keep a production process in place; the reasons are all related to shortsighted “cost savings” or failing to anticipate changing needs.

    With coding today, we are seeing the same kind of shift in priorities as my restaurant example. Having humans write code in the 2020 (pre-GPT) tradition was extremely inefficient in terms of time-from-idea-to-implementation.

    We’ve found a new way to do the mundane part of that task (the mechanics of translating spec to implementation).

    We are figuring out how to do that while preserving quality (and a lot of it is learning how to specify appropriately).

    Will we “forget” how to “build” code?

    No, but the skills to generate source code by hand will atrophy just as the skills to draw blueprints by hand atrophied with the advent of CAD.

    Will we find examples where someone prematurely optimized away knowledge of a skill or process, incorrectly thinking it was no longer needed? Of course.

    But the productivity gains we get will be so great on average that no one will go back to doing things the old way.

    There will be old-timers and hobbyists who will preserve some of that knowledge; for most it will just be a curiosity.

    • drawfloat 13 hours ago
      Everyone is taught at a young age how to do basic addition and multiplication. That's all check out requires. People are not taught at a young age how Rust lifetimes work or how to write human maintainable code.

      I agree, as with everything in 2026, the reality lands somewhere in the middle of the discourse online. But pretending this is in practice anything like the check out example is wrong.

    • latexr 12 hours ago
      Though I do believe you are making them in good faith, I find those comparisons do not hold.

      CAD still requires you know what to do, and without CAD you can still draw blueprints by hand because you know what the result should be. Checkout is basic arithmetic you can do on a paper or even your personal phone. In both cases it is clear what the process is and what the output should be, and it doesn’t replace knowledge and training and certification.

      With coding, none of that is true. By and large, there is a trend of people who don’t know what they’re doing shitting out software, or people who should know better not verifying the very flawed output they get. That is already having negative consequences in people’s lives.

    • rglullis 12 hours ago
      The point you seem to be missing is that focusing only on optimization makes us all fragile to system shocks.

      > Businesses don’t exercise (or perhaps even train) this process because it’s just not needed enough to warrant the cost.

      Until a crisis hits. Covid and supply chain failures. Iran war and straight of Hormuz. Prolonged War in Europe with no production pipeline available. Banks collapsing after unsustainable overleveraging in supposedly "safe" mortgages.

      For every optimization and cost-saving measure that is deployed, there should be a backup plan in place. MBA types and "technologists" keep missing this. What is the backup plan for the case where most of the economy activity is built on software produced by business who overleveraged on LLM for code generation?

  • alecco 13 hours ago
    Speak for yourself. I now dare to code much harder problems and learning is bliss. No more having to sit down to dig needle-in-haystack through horrible documentation or random Stack Overflow posts.

    LLMs are a magnificent tool if you use them correctly. They enable deep work like nothing before.

    The problem is the education system focused on passivity (obeyance), memorization, and standardized testing. And worst of all, aiming for the lowest common denominator. So most people are mentally lazy and go for the easy win, almost cheating. You get school and interview cheating and vivecoders.

    But it's not the only way to use LLMs.

    Similarly, in Wikipedia you can spend hours reading banal pop-slop content or instead spend that time reading amazing articles about history, literature, arts, and science.

    • arexxbifs 6 hours ago
      Perhaps the approach to, and leverage from, using AI is different for someone who's been active on HN for two decades, and junior devs who've been brought up on iPhones in the flawed school system you're describing?

      As TFA says, the problem is that accumulating knowledge takes time and effort, and the AI hype and expectations on LLM-assisted coding helps with rationalizing ever more short-sighted decisions that squander or hinder that process.

    • rglullis 11 hours ago
      > Speak for yourself.

      Even if you are the absolute unicorn who gets paid to "code much harder problems" and "learning", the rest of the industry exists to deliver actual products and services.

      So unless you nurture some type of https://xkcd.com/208/ fantasy, this is not just about you. The industry as a whole needs to find a way to work with LLMs without automating programming away entirely, and the industry as a whole needs to find a way to ensure that newcomers are able to be productive even if code-generation tools are taken away from them.

    • eszed 10 hours ago
      > in Wikipedia you can spend hours reading banal pop-slop content or instead spend that time reading amazing articles about history, literature, arts, and science.

      I'm not saying you're personally doing anything wrong, but there's a parallel here, when smart and curious people read articles about history and literature and art and science, rather than engaging directly with the real thing.

      Or then the next level down, where creating amazing work in all of those domains depends on enough "slack" in the system for people to pursue deep work that will not be immediately profitable.

      Do you see where I'm going with that? We (and I'm very much including myself: here I am on HN, instead of reading something more substantial) skim the (Wikipedia) surface, instead of diving truly deep. AIs (right now) are the ultimate surface-skimmers, and our fascination with and growing reliance on them reflects something in our current surface-skimming cultural mindset.

      • alecco 10 hours ago
        I meant it as a simple to understand parallel. Absolutely deep reading and thought is much better than Wikipedia or an LLM chat.
  • bsder 13 hours ago
    > Optimized for minimum cost with zero margin for surge. On paper, efficient. In practice, one bad day away from collapse.

    I'm going to steal that one and add it to Stross': "Efficiency is the reciprocal of resilience."

    • californical 13 hours ago
      Yes that is one key that resonated with me. The author did a great job of putting these recurring concepts into their own words

      The other that really resonated was something that I read before along the lines of… we think that once humanity learns something, that knowledge stays and we build on it. But it’s not true, knowledge is lost all the time. We need to actively work to keep knowledge alive

      That’s why libraries and the internet archive are so important. Wikipedia, too

  • andai 4 hours ago
    Basically, we forgot the human neural nets must also be trained.
  • crusty 4 hours ago
    I feel sad that evening I read now has to pass through my "is this LLM slop?" filter, and if it gets caught, the content loses focus and the worthless puzzle of truth takes over.

    So there was this: "I run engineering teams in Ukraine. My people lived the other side of this equation. Not the factory floor. The receiving end. While Raytheon was struggling to restart production from forty-year-old blueprints, the US was shipping thousands of Stingers to Ukraine. RTX CEO Greg Hayes: ten months of war burned through thirteen years’ worth of Stinger production. I’ve seen this pattern before. It’s happening in my industry right now."

    The filter flashed the warning on the telltale signs and I stopped reading. Now I've got the puzzle I don't want to do. Did someone trying to argue against "AI assisted" coding use an LLM to author that argument?

    But this is HN, I can also just move on to the next story.

  • Meirambek_VIDI 14 hours ago
    Do you think this is a tooling problem or more about incentives and how engineers are trained now?
    • great_psy 13 hours ago
      I think the article is making the point that it is a cultural problem about cost cutting and short term thinking.
      • Meirambek_VIDI 12 hours ago
        Yeah, agreed - short-term incentives seem to drive a lot of this. Do you think tools can help, or is it mostly cultural?
        • great_psy 6 hours ago
          Give me a python script that takes a string representing the output of a sha256 algorithm and a plain string and compares if the sha256 of plain strig matches the sha256 provided.
  • skybrian 13 hours ago
    There was a time when companies had terrible development practices and could forget how to build, test, and deploy software, but is anyone seeing that now? We have much better development practices nowadays.

    It doesn’t seem much like defense industry problems.

    • disgruntledphd2 12 hours ago
      This still happens. Lots of my career has been figuring out what code is actually running in prod, and determining if it even works.
    • IronyMan100 10 hours ago
      IMHO, it's a people Thing. People developed better practices, talked about IT in conferences, maybe left the company. AS a result the knowlegde spread. On the other Hand, If the places where a skilled individual can work and honey their skills, the knowlegde become scarce, the knowlegde cannot spread anymore and it will vanish. If you only program with AI and 5 people do the Work of 100, then you end Up in such a scenario.
  • imrozim 13 hours ago
    How do you become a senior engineer if no one hires you as a junior anymore.
    • hkt 12 hours ago
      Talk confidently in your interview with non-technical managers when the last senior has left and there's nobody there to check your work.
      • blitzar 12 hours ago
        So the same as it is now, be a good salesperson.
  • bakugo 1 hour ago
    The West apparently also forgot how to write articles without AI.
  • wg0 13 hours ago
    >The combination of technical skill and the judgment to know when the AI is wrong barely exists in the market anymore.

    I see a talent pipeline collapse in next 5 years. "Software engineering is over coding is a solved problem" as being chanted by semi literate media and the AI grifter's marketing departments would further scare away the allocation of human capital to software engineering easily commanding 3x rise in salaries due to resource shortage.

    • wiseowise 12 hours ago
      First it was “learn to code” and bazillion videos of TikTok schmucks showing off slacking at work, now everything is solved. The puzzle is complete.
  • chromacity 6 hours ago
    I just can't get over the incredible irony of so many of these "AI is bad for you, mmkay" articles being LLM-generated.

    If the author sincerely believes the thesis that AI makes you vulnerable / dumb, they are either incredibly hypocritical. But more likely, they're just cynical and trying to get traffic to their website. And you're not getting back the time you spent reading this and arguing with it.

  • resters 7 hours ago
    Just like we forgot how to shoe horses, drive elevators, and mill corn!
  • chvid 9 hours ago
    I have forgotten all about Apache Wicket - will this cause the downfall of western civilization?
  • julik 7 hours ago
    But - albeit briefly - a lot of value for the shareholders has been created
  • gamblor956 2 hours ago
    America still makes a ton of stuff. California is still one of the leading manufacturing locations in the world. California isn't #1 in any industry (outside of aerospace) but is in the top 10 for pretty much everything else (in the U.S.). It's been the world's #4-6 economy for the past two decades.
  • AHTERIX5000 12 hours ago
    Is this written by a real person though?
    • lioeters 10 hours ago
      We're forgetting how to write too, apparently, and with that, forgetting how to think for ourselves.
  • roenxi 13 hours ago
    > Leadership qualities. Our last hiring round tells you how rare that is: 2,253 candidates, 2,069 disqualified, 4 hired. A 0.18% conversion rate.

    It's minor but this is just wrong. If you're going to hire 4 candidates, there could be 2,253 perfectly qualified candidates even if only 0.18% get hired. The conversion rate is meaningless; it just tells us how many jobs were on offer. There is no way that the skills this fellow wanted were so rare and difficult that only 1/500 candidates could possibly handle the job. Humans even in the 1/20 mark are pretty competent if you're willing to train them and legitimate geniuses crop up at around 1/200.

    • rotis 12 hours ago
      He writes 2,253 candidates and 2,069 were disqualified. 184 were qualified, so 1 in 12 was considered competent.
      • roenxi 10 hours ago
        Then he quotes 0.18% to show how rare a quality is, which is a wrong interpretation of the numbers. If he'd said 8% that would be realistic.
        • dublinstats 3 hours ago
          The number of actual openings is not given.

          Also the number who turned their offers down (and perhaps the number they disqualified due to being overqualified and too expensive).

          Ultimately kind of a meaningless metric.

  • pelasaco 1 hour ago
    i will never forget to program in ruby. neither c. The question is how usable it will be in the next years. I would love to be able to only code ruby forever. Everywhere.
  • dsign 13 hours ago
    This is some convoluted BS built on the premise that wars need to make sense, economically or otherwise. No, wars do not need to make sense. If a person, a dictator or a president, unilaterally starts a war that forfeits the lives of both the dictator's (possibly fabricated) enemies and its own people, that person is knowingly committing murder. Logically, such a person should be handled with at least as much prejudice as a lone wolf that opens fire on a crowd. So we need to fix our legal systems to be better at preventing wars, not our economic systems to be better at fighting them.
  • austin-cheney 8 hours ago
    The west hasn’t known how to write code for the 20 years I have been doing it, at least at major .com brands.

    It’s a 85/15 rule. These big companies hire hundreds, possibly thousands, of developers but most of them cannot code. Some of them struggle to write emails. About 15% of those people provide 85% of the value.

    Here is where it all went wrong. The goal of software, the only goal, is automation. That means eliminating human labor. The goal of these big companies is hiring, which is mostly the opposite of eliminating labor. That conflict results in people who cannot do the jobs they are hired to perform and whose goals are to retain employment in preference to automating anything.

    Worse still is that you can’t talk about if 85% of the people doing that work find this very subject completely hostile.

    It is difficult to get a man to understand something, when his salary depends on his not understanding it. Upton Sinclair.

  • pHequals7 9 hours ago
    thankfully with code and coding agents - the tacit/tribal knowledge always lives via the codebase itself unlike atoms based manufacturing processes..
  • booleandilemma 5 hours ago
    I'm not forgetting how to code. They're not gonna get me.

    Did I forget everyone's phone numbers when cell phones came out? Yes.

    But this is different. Coding is my passion. I was doing it before I got paid to do it and I'll be doing it after they no longer pay people to do it anymore.

  • komali2 4 hours ago
    If it's any consolation, this isn't unique to "the West," AI programming has completely taken over in the PRC as well.
  • j45 4 hours ago
    One thing you can't really rule out American ingenuity is deciding to do something.

    What America did with developing Shale Oil to become viable, so quickly is one example.

  • 01100011 6 hours ago
    Is this article even worth clicking on? The headline makes it sound like yet another pearl-clutching article extrapolating some trend to the extreme in divergence with reality.

    AI has been an effective coding tool for, what, 2 years at most? We've collectively forgotten all of our skills in those 2 years? Really?

    • threepts 5 hours ago
      It's a big strawman.

      "Those engineers don’t exist yet because we’re not creating them. The juniors who should be learning right now are either not being hired or developing"

      It passes the blame to AI for developers not learning because they are not being hired. You don't need to be hired to learn something. You need to learn something to get hired. It's opposite.

  • diogenescynic 4 hours ago
    We've forgotten how to do most basic things. Roads are paved terribly, food quality is equally gross, our colleges are diploma mills, homes are built like crap... Everything has steadily been going in the wrong direction my entire life. It feels like we're almost in a dark age where basic skills from a generation ago are being forgotten.
  • pwarner 7 hours ago
    Did we forget how to make things? I mean we stopped making some things, but US manufacturing output is higher than ever

    https://en.wikipedia.org/wiki/Manufacturing_in_the_United_St...

  • sorenjan 10 hours ago
    Frankly, I find the attitude towards AI coding here on HN to be both disappointing and a bit disgusting. Not long ago places like this where software developers gathered were full of various texts about how important it was to be able to reason about your code, how tech debt crept into your projects, and how skillful you had to be to write good software, various smart algorithmic tricks to squeeze more performance out of your hardware, etc.

    Now? Seems like code quality is outdated and uninteresting all of a sudden. Everything is about agentic coding, harnesses, paying hundreds of dollars to Anthropic to let their LLM do the coding for you or perhaps using a 128 GB Mac to run a local model. Do you know your code base? Doesn't matter, if there are any bugs in the future Claude will fix them! Tokenmaxxing is the new paradigm, who cares about the end result as long as it's runs for now and passes all (AI written) tests!

    But don't suggest these people shouldn't get $100k+ salaries, after all, they still "software engineers" in their minds, they're running the agent orchestration harness in the terminal after all, not everyone anywhere in the world could do that! They're special and deserve to be well compensated for their hard vibe coding work!

    This industry is rotting from the inside.

  • nobodyandproud 5 hours ago
    Leadership and management problem, exacerbated by the Chicago school.

    Good, knowledgable employees are not fungible. The in-house culture that built the engineering takes an entire generation to build.

    The winner-take-all MBA class of the 1980s to the 2000s and the congressional leadership developed during this era are squarely at fault and their policies need to be replaced.

  • muragekibicho 11 hours ago
    Odd anecdote. I completed high school in 2017 and my home country demanded us use mathematical tables, not calculators, to find logs and sines for our version of SAT math.

    I got my highest-paying numerical programming contract (in the US) just because I knew (from high school math table experience) how to use LUTs to calculate a lot of useful stuff i.e quarter squares.

    Modernization is great and all. However, it's disappointing to know lots of new programmers are oblivious of the fundamentals.

  • jongjong 10 hours ago
    This is why I advocate for making everything as simple as possible. The more complex the tech, the more likely it will be lost through the passing of time.

    It's kind of insane how much knowledge a human being needs to have to build certain technologies and it's taken for granted.

    AI might make the knowledge easier to acquire but it's still a lot of knowledge that people have to internalize.

  • arjunthazhath 13 hours ago
    Hope we dont forget humanity one day!
  • scotty79 5 hours ago
    I think comments on such posts have bimodal distribution. On one end there are people who see the utility of AI models for programming and are generally eager to see more capable models and ways of using them. On the other there are people who see AI destroying programming and have no idea how AI could change to be a force for good.

    I had idea what might be the difference between the groups. I think for the latter group the code is important part of the goal. They see software as rather ends than means. Not entirely of course.

    And the first group considers artifacts that the software produces to be the goal. So as long as AI written software is capable of producing valuable artifact they are willing and eager to go with it. And AI does that.

    If the result of my code is finetuning of a neural network, I don't really care how it happened. I can benchmark it afterwards and know if the code that AI made for this purpose was good or not. I can inspect the code, investigate it, pinpoint ideas I don't like, suggest some ideas to try that I believe could give better results. I can restart, or try doing same thing few times in parallel trying different harnesses and models. All in service of the result, that is not code.

    If you have a program that needs to do something and are willing to try AI to write it, think foremost about how you can rephrase the problem so that the output of AI written program becomes an artifact that can be independently verified, how to turn desired behavior into an artifact to evaluate.

  • zwischenzug 10 hours ago
    It's a great story, and a nicely written piece.

    But civilisations have always forgotten things and then had to re-engineer them. We only recently recreated Roman-equivalent concrete; knowledge required to create the Saturn V rockets had to be re-engineered; we can't recreate medieval stained glass exactly, or Viking Ulfberht Swords; we would struggle to create Betamax tape today.

    Many of the examples I found (as expected) relate to military or commercially sensitive technology that did not get written down (for obvious reasons).

    It also reminded me when I read Thomas Thwaites' "The Toaster Project: Or a Heroic Attempt to Build a Simple Electric Appliance from Scratch", where to make a smelter from scratch he relied on a 450 year old book ("De re metallica" by Georgius Agricola), as well as a friendly Metallurgist.

    We already lost the widespread ability to write assembler in an artisinal way. Now we have AI we will also be lazy about how we write individual bits of artisinal code. So what? Yes it will cost more (in time and money) when we need to re-engineer, but how much would it cost to keep alive all the knowledge and skills we might possibly need in the future?

    We had better make sure we write down and preserve the recorded data though :)

  • wewxjfq 13 hours ago
    While the Fogbank story is a funny anecdote, I don't see it as a fitting example for atrophied skills. It's like writing a clean implementation of some software and it just doesn't match the legacy version until you realize that the legacy version had an unnoticed bug that made it behave the way it does.
  • trhway 13 hours ago
    Isn't that is the point of technological civilization development? People for example forgot how to weave on the handloom, or all the parts production and the maintenance for the watermills. And wooden sailships - top mastery of handling and engineering developed for millennia, gone.

    As it was said - the future is here, it just distributed non-uniformly, so somebody is still and will be for some time sailing, manufacturing things and writing code.

  • FrustratedMonky 6 hours ago
    "A METR randomized controlled trial found that experienced developers using AI coding tools actually took 19% longer on real-world open source tasks. Before starting, they predicted AI would make them 24% faster. The gap between prediction and reality was 43 percentage points."

    This is weird, but does seem a common result.

    -> AI generates a ton of code fast, but then the human takes a long time to review. Every time the prompt changes. The AI takes a few minutes to generate code that the human will take hour to review.

    The reviewing is taking longer than if human just did the code. So why is it so difficult to go back to coding instead of prompting.?

  • ktallett 13 hours ago
    We have both forgotten how to make things and also decided we can make more profit letting someone else make everything for every market. We have moved to a generation fixated on maximizing profit. However there is logic there as the cost to access the ability to make things is prohibitively expensive. As someone who makes open hardware with a nod to the environment and reusability, you can not justify or even find more locally sourced options than China.

    Coding is different though, coding doesn't have a cost barrier, it has a ability barrier. I think we will loose a lot of people who never were passionate about programming and perhaps go back to a happy equilibrium. AI is only production ready if you have someone who understands software development. AI will improve speed to market if you have the right team, it doesn't remove the need for some to learn to code. You will of course end up with startups using exclusively AI but they will be those who end up with major security breaches or simply cannot scale as the AI goes in the wrong direction for the future. Tbh that's probably a positive as it weeds out the start ups that are focused on buzzwords for funding and not product.

    • xantronix 12 hours ago
      No matter what happens to the viability of software development as a career, I will always care about the craft as I have done the past twenty years and change. The imperatives to adopt LLMs in situations where they do not benefit me nor my work is what is driving me away. I have to agree with latexr; the people who seem to benefit the most from the current moment are those who see software as a means to an end without much concern for quality, longevity, nor customer experience.

      Why is speed-to-market such an important metric? I do not understand the need to mimic the largest players in the industry, nor do I see any particularly profound long term benefits to first mover advantage.

    • latexr 13 hours ago
      > I think we will loose a lot of people who never were passionate about programming

      Anecdotally, what I’m seeing right now is the opposite. People who don’t care about programming are joining, while those who do care are getting tired of the bullshit and leaving. The good programmers are the ones leaving, the hacks are extremely happy to use LLMs.

      When shit hits the fan, there won’t be many people left to clean it.

      • trick-or-treat 12 hours ago
        So you see people who don't care about programming, joining and getting comfortable with vscode and claude code and devops?

        Because it seems to me like there's a lot of coding-adjacent things they still need to be able to do even if they never look at a line of code.

        • latexr 11 hours ago
          Those examples are nonsensical. None of those are necessary to get working code. The VSCode example is particularly baffling. Firstly, I’m sure you understand there are other editors people use for code; secondly, I know even people who don’t code who have picked up VSCode for text editing and are fine with it.
          • trick-or-treat 9 hours ago
            I think you haven't dealt with a lot of non-coders. 90% of the world will not be able to even open a .py
            • threepts 5 hours ago
              90 percent of the world doesn't need to be able to open a .py, when their 20 dollar a month agent does it for them.

              Just like how we don't need to be able to open our kernels and manually update the OS, the software company does it for us. It helps knowing the kernel, but you can still get the security updates even if you don't know how to.

            • latexr 9 hours ago
              > I think you haven't dealt with a lot of non-coders.

              And I think you should avoid making assumptions about people you know nothing about. That is so far from the truth it’s not even funny.

              > 90% of the world will not be able to even open a .py

              Which is nowhere near my argument. I’d appreciate if you engaged with what I said or not at all.

  • dana321 7 hours ago
    You are not using as much of your brain (solving problems, thinking etc.) so you end up in a cycle of becoming stupider and stupider the more you use ai, so eventually your prompts get worse and worse then you start asking "why is it not working as good as it was two weeks ago"
  • clutter55561 10 hours ago
    The same “forgetting pattern” can be said of assembly, hardware, combustion cars, radio, heck, even making fire.

    There will always be specialists who can really debug stuff. Mechanics, etc. Time moves on, and we need to move with it.

    I’m amazed at this “end-of-world” crap. People use AI to write this shit, to make it even crazier.

  • epolanski 7 hours ago
    Idk, if I look at major os software it looks to me like the west still holds a huge number of brilliant contributors.
  • avereveard 5 hours ago
    Why do millions need to code? When we industrialized million lost the knowledge to sewn a garment. Why a few specialized worker producing self building solution is that bad?

    I mean beyond the obvious hacker news bias.

    If you like it nobody will remove it to you as a hobby. But the artisanal aspect of coding as a production mechansim is dying, and it was about time.

  • khalic 10 hours ago
    We’ve been automating every single industry that we touched for decades without a single word, bringing up tepid responses like “it’s capitalism” or “business is business” when called out on it.

    But now that the time has comes for us to automate and change, we’re all up in arms and using ridiculous arguments like this post to fight it.

    The hypocrisy is mind blowing

    • alansaber 9 hours ago
      I hope we'll still be able to sell our beautiful artisan chrome extensions in second hand flea markets in the future
  • croes 9 hours ago
    Just look how MS try to get rid of

    https://news.ycombinator.com/item?id=47881805

  • crabbone 11 hours ago
    The West started to forget how to code a long time before AI. At first, it was the work visas to bring programmers in, then it was outsourcing. At this point, I'm not even sure if AI is doing more harm than good in this department as it might be able to bring some jobs back to the "West", if it turns out that it's cheaper than outsourcing.

    The outsourcing was shedding more of the trivial jobs, while trying to keep key positions at home, but increasingly, it also started to lose the key positions too. It's possible that AI can make it so that the key positions will be harder to justify to outsource... but, who knows... maybe not.

  • ekianjo 12 hours ago
    > The defense industry thought peace would last forever, too.

    Not really since they are always pushing for more wars.

  • locallost 13 hours ago
    I can't not write the tired comment of how ridiculous it is to criticize AI and then use AI to write your article. It's tired, but so is this writing style.

    For the actual problem, I fear this can't be solved by warning people, the pain will need to be felt. The system we live in, basically free market capitalism, cannot do anything else except local optimization. Maybe it's for the best, I don't know. The alternative of top down planning wouldn't have this problem, but it would have other problems. I work for a mid size somewhat luxury brand, and the major goal right now is cost cutting and AI for efficiency everywhere instead of using it to create better products or better ways to reach out customers. When I think about who will buy our luxury products if all jobs were optimized out of existence, I don't have an answer, but again I think the pain will need to be felt to change course.

  • BrenBarn 13 hours ago
    > After spending an additional $69 million and years of reverse engineering, they finally produced viable Fogbank. Then discovered the new batch was too pure. The original had contained an unintentional impurity that was critical to its function.

    Same thing that happened to the unfortunate Dr. Jekyll!

  • rvz 13 hours ago
    This will end with the way of COBOL with a few people that still have the expert-level understanding of refactoring old code without causing outages or service disruption.

    We’ll see, but right now I now see developers 24/7 hooked onto their agents and in the future we will experience a de-skilling problem which clean code, best practices, security and avoiding NIH syndrome will be all flushed down the toilet.

  • throw4523ds 12 hours ago
    exactly, as they say everyone has to learn to code.
  • anovikov 9 hours ago
    Talking of military stuff: it's not a problem really. No one can keep the non-needed capacity in existence, it's not even possible if no one consumes the product. Make sufficient buffer stocks to have time to re-learn the process when needed, and that's it. There is no realistic way it could work any different, otherwise it's like: maintain entire Cold War era production capacity and keep it idle or working at 5% workload, just to be able to ramp up when needed? But it means keeping almost all of the Cold War budgets still flowing. Wasn't going to happen - and of course, in Russia it also didn't happen, and couldn't.

    In the end of the day, Russia burnt through their entire Soviet stocks in roughly 2-2.5 years, while US spent a very small proportion of theirs and Europe, maybe about half. And now consumption on both sides is similar with expenses on the Western side to feed that machine being almost invisibly small. Nothing bad happened.

  • nojvek 7 hours ago
    I wish this article wasn’t AI slop. It wasn’t X, it was Y.
  • immanuwell 13 hours ago
    when you offshore or automate away the hands-on knowledge, you don't just lose the workers, you lose the entire institutional memory, and no amount of money can buy that back overnight
  • light_hue_1 12 hours ago
    > The West Forgot How to Make Things. Now It's Forgetting How to Code

    Can we stop repeating this nonsense headline please? We did not stop manufacturing things.

    Manufacturing is a huge industry in the West. https://en.wikipedia.org/wiki/Manufacturing_in_the_United_St...

    The US manufacturing sector is the biggest it has ever been. Exports are at all time record highs. The only thing that declined about manufacturing is the jobs. We build way more than we ever did but with far fewer people.

    What we did do is decide that basic items aren't worth it. Our capacity is limited, our labor pool is limited, expenses are high, it doesn't make sense to make trinkets when we can make complex high precision parts and devices.

    But no, we did not forget how to make things. We chose to use our capacity in a smarter way.

  • notepad0x90 1 hour ago
    Opposite take here: the west is resisting AI too much, and not implementing guardrails to protect the public from human-hostile AI usage. I don't predict that will be its downfall (I think other factors have already started the process), but it will lead to losing a competitive edge over China and the rest of the world.

    Right now, silicon dominance is what's keeping silicon valley afloat. that and the power of the American consumer base. The world is having to adapt to not relying on the US for consumption due to tariffs among other things. Not only that, attempts to curb competition from China by restricting chip exports, and imports of their tech (I don't disagree in principle with either) has led them to be more self-reliant and invest more on domestic R&D.

    All this to say, there is no way around winning, and the fact of the competition is also real. You can't deny the competitive advantage proper use of LLMs brings. It's also hard to deny the destructive power of LLMs to societies.

    In China, companies are heavily regulated by the state. This means being competitive against the west is a state matter, it also means harming citizens is somewhat tolerated if the economic benefit to the whole country is good, but companies chasing their own profit at the expense of the public good isn't tolerated. I don't agree with their way of doing things, but the only thing limiting their victory over the west is their hesitation and intolerance to all things outside of the SE-Asian sphere of influence. But then again, the anti-migration trend of the US also removes that slight technical advantage the US always held.

    There are many problems that can't be solved by LLMs, and expecting developers' value to be the number of lines they type is silly. It doesn't matter so much if you use LLMs or don't use them, what matters is results. Westerners attitude in general is to resist LLMs. This is partly a result of (in my opinion), not realizing that there is non-western competition. It is absolutely possible to use LLMs to ship high-quality, performant and secure code, you just don't take the dumb approach of letting LLMs do everything and a human "reviews it"; how exactly depends on each development team and company.

    Keep in mind, that for decades outsourcing developers offshore -- where usually sub-par code is tolerated because of lower cost to ship -- has been a prevalent trend. If companies can't get Western devs to learn to use LLMs, then they can just ship it offshore to companies that do use it. That didn't lead to the west forgetting to code, and LLMs won't either.

    What will hopefully happen is you'll get less developers learning to code, which means the developers that do the work, will get paid better (it's been on the downturn) so long as they learn to sue LLMs.

    What people are having a hard time coping with, is the expectation of needing armies of developers to get things done being an antiquated concept. Computers, and then the internet have done this to many industries. You used to have lots of travel agents in the past, you still do, but very few.

    The bigger issue is refusal to learn from history. Concepts like capitalism, communism, market economy, centrally planned economy, etc.. are like half a century out of date. There was no "capitalism" 200 years ago (not in so many words at least). Economists and politicians aren't catching up to changes in technology. Historically, adapting to these changes has been brutal.

    I won't claim to predict what will happen, but one way or the other, LLMs won't go away in response to resistance from western workers, similar to how other changes in tech didn't go away like that. Economies will have to adapt or get decimated until they do. In the mean time, there is ample opportunity for the dominance of the west to fade within our lifetime, should that opportunity be taken advantage of by the competition. If China starts being less dependent on local companies, and starts importing a lot more, they can displace US and EU consumption needs, and perhaps even force the west to be producers for their domestic demand. unregulated western companies (from Coca cola to Disney!) have been trying to achieve just that, because of the large earning potential in China. But again, China could take advantage of all that, they could have more influence over the west, but they're too inward thinking. They're so afraid of relying on a hostile west, they're preventing the west from becoming reliant on them completely. But this new image of an ineffective and declining US/West, and perhaps some success over Taiwan, and establishing a solid non-western global trade economy can give them that extra confidence?

  • _the_inflator 8 hours ago
    That's why I am looking forward to be a 70 year old demanding tons of money for doing the things I came to love and was cut off by AI.

    What a bright future!

    But the rest is a big no from my side.

    "In hindsight" - Southpark, please take over.

    What if there was a continuation of producing unused weapons during the last 20 years? "Waste of money", "Old tech", "useless" - dilemma.

    Also the generalization is awfully misleading: "The west".

    Let's say all are suffering from military dementia the same way. Who do think has an easier time to recover? USA or Europe? Europe relied and relies or freeloads on USA in especially military affairs.

    As you wrote: some veterans teach building, handling cruse missiles to young guns like having an exciting time with the boy scouts.

    Germany? "Never again! Demilitarize Germany." Decades long hatred towards USA was pretty much summed up with the slur "Ami go home!" which was a phrase used to protest US military bases in Germany - and then, when most of them finally left, it was all just fun and games (losers).

    So USA has some sort of infrastructure and intellectual property to recover and never stopped treasure it as part of the country's history: Veterans' Days, Unknown Soldier, Arlington - Hegseth did a great job stopping the decline here.

    Meanwhile Europe: You couldn't have a hold out in secrecy. Some enquete commission would investigate, and addresses would be leaked and people doxed.

    Have a look at the representatives of the Germany Army: overweight nice guys. Sorry to say, but I think there is something wrong with this picture.

    Europe has nothing to restart. They never had in the first place. Many tend to forget that the US provided massive supply to all allies during WW2. Russia would have been wiped out if it wasn't for the US logistics and money. After the war there was a joke told by survivors of the Eastern front: The first Sherman got shot on the Eastern front not the West.

    Europe was always on life support. France military forces outnumbered Germany at the start of WW2. But they were tired and instead of fighting build a wall so to say. Netherlands and Denmark was taken without any resistance.

    And it is the same for programming. How many European companies dominate globally like FAANG? Exactly. None. 30 years of Internet and it is getting lonely at the top for the US.

    "The West"? Nope.

    During the 80th, while Chucky Cheese was all the rage, in Germany you got massively socially ostracised for showing your interest in computers. Playing electronic handhelds put you up on notice by teachers, demanding correction by the school administration - true stories.

    Another one: What do all FAANG like companies have in common? The founders and top managers have a background in CS. What do European managers have in common? They haven't heard of CS so far.

    Europe is a mess. US is maybe having a cold start but gets its shit done.

    Germany killed of its industrial sector. Energy producers as well. Germany does what Morgentau had in mind but what off the table: no more wars and weapons, just farmers and horses.

    USA is save in every regard. It is not that something has been lost. This happens or why do we don't know anything about Rome?

    You have to distinguish recovering from losing. Once you were at the top, at least you know how to get there while others in most cases will never get there.

    These are different abilities: conserving knowledge and rebuilding it. USA needs to reactivate, while Europe needs to build from the ground up without any starting point - without money, energy, moral support, nothing.

    USA is already the winner here. And this pattern keeps repeating. 250 years and what we have is an epoch were USA saw kingdoms rise and fall, USA is the only constant there is.

    Treasure it. You are in a save spot despite all the dire circumstances. A blessing in disguise.

  • aboardRat4 11 hours ago
    >Denis Stetskov

    Putin's propagandist, or just useful idiot.

  • mfgadv99 9 hours ago
    [dead]
  • sailpvp998 12 hours ago
    [dead]
  • subwatch_dev 9 hours ago
    [dead]
  • Rahil_Jain 3 hours ago
    [dead]
  • faidit 6 hours ago
    They "forgot" how to make them? The greatest superpower in the world? Isn't it obvious that "Stingers" were a giant hoax and we never went to Afghanistan? Maybe the US needs to go "back" there just to prove we can.. /s
  • marsven_422 12 hours ago
    [dead]
  • black_13 10 hours ago
    [dead]
  • scotty79 9 hours ago
    [dead]
  • sieabahlpark 11 hours ago
    [dead]
  • brrraaah 13 hours ago
    [flagged]
  • shevy-java 13 hours ago
    > I run engineering teams in Ukraine. My people lived the other side of this equation. Not the factory floor. The receiving end.

    With all due respect, but many european taxpayers help pay for Ukraine. I am not disagreeing on the premise of the West killing itself via systematic recessions - Trump invading Iran leading to inflation as an example - so a lot of things are going on that show a ton of incompetency both in the USA and the EU, but at the same time I also get question marks in my eyes when this criticism comes from a country that receives money from others. That money could instead go to make EU countries more competitive, for instance. I am not saying this should necessarily be the case, mind you; I fully understand the nature of Putin's imperialism. But we need to really consider all factors when it comes to strategic mistakes with regards to production - and that includes taking up debts all the time. There are always a few who benefit in war, just as they benefit from subsidies from taxpayers (inside and outside as well).

    • skhr0680 13 hours ago
      Ukraine is "receiving money from others"? We are benefactors of the Ukrainians' bravery and sacrifices. How much money could we have not spent if Hitler had been stopped in Czechoslovakia?
      • crotobloste 13 hours ago
        > Ukraine is "receiving money from others"?

        Factually correct.

        > We are benefactors of the Ukrainians' bravery and sacrifices.

        Who's we?

        > How much money could we have not spent if Hitler had been stopped in Czechoslovakia?

        Very different situation, in all aspects.

        • collinfunk 13 hours ago
          You see zero similarities between Hitler invading Poland and Putin invading Ukraine?
          • brabel 9 hours ago
            As similar as any country who ever invaded any other country?!
          • roenxi 13 hours ago
            There are some pretty substantial differences. Russia is on the strategic back foot here trying to figure out a way to stop NATO's advance. They've only turned to violence after long attempts at resolving the tension diplomatically and the US has been implacable. Putin's actually been pretty hesitant in his escalations so far; he's 70 and has a long history of trying to avoid war.

            Hitler was more about wanting more land and resources for Germany, and he saw war as being a legitimate tool for achieving his aims that he deployed early and enthusiastically.

            • defrost 12 hours ago
              NATO has advanced into which part of Russia?
              • roenxi 12 hours ago
                • rcxdude 9 hours ago
                  Eastern Europe is not Russia and Russia does not automatically get a say in what Eastern Europe does because they are nearby. Russia seems to believe it is entitled to a sphere of influence. That the US does a milder version of what they're doing (which is also wrong) doesn't make their approach OK (or even effective).
                • defrost 11 hours ago
                  So, NATO hasn't advanced into Russia then?

                  Just Russia advancing into the Ukraine (after promising not to if the USSR nukes were given to Russia)?

                  Gotcha.

                  • brabel 9 hours ago
                    I know it’s what about ism but I really hope you apply the same logic when Cuba once more tried to enter an alliance with Russia or China to defend itself against a larger aggressor next door. So while I agree that Russia should allow Ukraine and Georgia to join NATO, I also think that’s only fair if countries like Brazil, Cuba and Venezuela are freely allowed to determine their futures by joining Russia, China and Iran military alliances. But you and I know that’s not going to happen. So please let’s stop pretending we don’t have double standards.
                    • defrost 9 hours ago
                      As you've chosen to address me directly I'll reply honestly, I have zero concern about Cuba, Venezuela, any of the 190+ countries on the planet, wanting to join or form BRICs.

                      I have considerably more concern about the ability of a post MAGA USofA to successfully navigate such a world via soft power as they appear to have flushed all the competent diplomatic talent down a golden toilet.

                      • justsomehnguy 2 hours ago
                        > I have zero concern about Cuba, Venezuela, any of the 190+ countries on the planet

                        But somehow you are extremely concerned about one country which is on the other side of planet of you.

            • wiseowise 12 hours ago
              > Russia is on the strategic back foot here trying to figure out a way to stop NATO's advance. They've only turned to violence after long attempts at resolving the tension diplomatically and the US has been implacable. Putin's actually been pretty hesitant in his escalations so far; he's 70 and has a long history of trying to avoid war.

              Is that why Russians rejected negotiations when Ukraine offered to never join NATO and Russians insist on keeping invaded territories?

            • collinfunk 12 hours ago
              > There are some pretty substantial differences. Russia is on the strategic back foot here trying to figure out a way to stop NATO's advance.

              His rationale for invading Ukraine was to "demilitarise and denazify" it. The NATO point seems largely be invented by people who dislike NATO in the west.

              > They've only turned to violence after long attempts at resolving the tension diplomatically and the US has been implacable.

              I hope the "tension" you are referring to was not the little green men taking over Crimea and the Donbas in 2014.

              > Putin's actually been pretty hesitant in his escalations so far; he's 70 and has a long history of trying to avoid war.

              This is a totally unseriousness statement. Can you remind me what Putin was doing in Syria again?

              • roenxi 11 hours ago
                There's an english transcript [0] of his speech from when they went in up on the Kremin website. He opened with something like

                > I will begin with what I said in my address on February 21, 2022. I spoke about our biggest concerns and worries, and about the fundamental threats which irresponsible Western politicians created for Russia consistently, rudely and unceremoniously from year to year. I am referring to the eastward expansion of NATO, which is moving its military infrastructure ever closer to the Russian border.

                They're claiming the NATO thing is relevant. Opening paragraph justification.

                [0] http://en.kremlin.ru/events/president/transcripts/67843

            • 8954789543547 12 hours ago
              [flagged]
      • gib444 13 hours ago
        > Ukraine is "receiving money from others"?

        Yes. https://www.eeas.europa.eu/delegations/united-states-america...

        • latexr 13 hours ago
          You are completely ignoring the argument of your parent comment. They are saying that money is being spent to the benefit and best interest of the spenders, that it’s not a handout.

          You are, of course, free to disagree and make your point, but ignoring the argument does not advance the discussion.

  • dev_l1x_be 12 hours ago
    As an anecdotal evidence I code way more now with agents because i have an entity who has vast amount of knowledge about pretty much everything and I have the creativity to use that well.
    • bit1993 12 hours ago
      But you already knew how to code before LLM coding agents, juniors will jump straight into using agents without learning to code by hand, hence the premise of the article.
  • lava_pidgeon 13 hours ago
    Rather bad premise in the article. 1.) Germany, Italy and Eastern Europe are very industrial regions. The author forgets defence is not only the industry. 2.) The author doesn't show any source that Chinese developers don't use AI
  • whatever1 13 hours ago
    I don’t know, but the evidence shows that software engineering is not that deep of an art.

    People come and go at rates that would not be sustainable in any manufacturing business.

    • heisenbit 12 hours ago
      Yes, businesses tend to believe that.

      No, every time people switch knowledge gets lost and code quality degrades.

      In part I blame accounting rules justifying investments is easier than maintenance.

    • wg0 9 hours ago
      Interesting take. We are not going to talk about Office, Windows, Adobe or Autodesk products here. Neither Linux kernel.

      Just classified ads or e-commerce platforms such as gumroad and shopify are complex enough that a single person cannot master them end to end. The domain is huge to master and takes a lots of time to master.

      • whatever1 1 hour ago
        Have you ever seen a tech company calling a 65y.o. retired wizard to debug a system failure? I doubt it.

        In manufacturing it so regular, that typically senior technical people retire as soon as possible to form their consulting firms and charge much higher rates, just by selling their multi-decade expertise back to their company.

        In oil & gas, there are consulting firms that their role is to just store and provide domain knowledge to companies who lost their experts.

        In tech, consulting firms provide cheap labor.

  • throwaway2037 10 hours ago
    Click/rage bait?

    The opening paragraph is ridiculous. The FIM-92 Stinger is obsolete. It was replaced by FGM-148 Javelin. DACH (Germany, Austria, Switzerland) didn't forget how to make things. They are still world class for manufacturing. (Northern Italy is also economically part of that manufacturing mega-hub.)

    There are plenty of NLAWs (much cheaper than Javelin, and only slightly less capable) in EU/Nato stocks to satisfy Ukraine needs against Russian heavily armed main battle tanks. For everything else, you can use one or two suicide drones to kill anything with a motor.

    And now to give credit where credit is due:

    Looking at his (assumed) LinkedIn profile: https://www.linkedin.com/in/denjkestetskov/

    It looks like he was educated in Ukraine, so likely a Ukrainan national. If I were a Ukrainan, then I too would be publishing rage bait like this in an attempt to pressure allies to provide more funding, weapons, and gear.

    As a final suggestion, the writer can visually spice up his blog post with one of my all time favourite military photos from Wiki: https://commons.wikimedia.org/wiki/File%3AFIM-92_Stinger_USM...

    • InkCanon 10 hours ago
      The Stinger is an anti air weapon, the Javelin is an anti tank weapon.
    • sounddetective 10 hours ago
      So you published this comment with an anti-Ukrainian spin, and just 2 minutes after posting, your comment is already at the top of comment rankings? I hope HN mods follow inauthentic upvote / comment behaviour on this site; this looks fishy.
      • SyneRyder 10 hours ago
        New comments get posted to the top for visibility. The 2 minutes is the key point here. If the comment doesn't get enough upvotes it will sink down, like it has now about 30 minutes later.
    • numpad0 7 hours ago
      Stingers use that gas cooled spinny thingy. They're not FLIR based like 9X are.