5 comments

  • cjfd 3 hours ago
    The article claims that 'software development will be democratized', but the current LLM hype is quite the opposite. The LLMs are owned by large companies and are all but impossible for any individual to train, if only because of energy costs. The situation where I am typing my code on my linux machine is much more democratic.
    • Havoc 2 hours ago
      It is democratising from the perspective of non-programmers: they can now make their own tools.

      What you say about big tech is true at the same time, though. I worry about what happens when China takes the lead and no longer feels the need to release open models. The first hints are already showing: advance access to ds4 only for Chinese hardware makers.

      • ares623 1 hour ago
        They can rent their own tools, more like.
      • cyanydeez 1 hour ago
        The people taking the lead in most of AI in America are bootlickers of fascism, so not much difference from China on a long enough timeline.
  • bananaflag 2 hours ago
    Yeah but this time it's for real.

    All the other attempts failed because they were just mindless conversions of formal languages into other formal languages. Basically glorified compilers. Either the formal language wasn't expressive enough to cover all situations, or it was, and then it was as complex as the thing it was designed to replace.

    AI is different. You tell it what you want in natural language, which can be ambiguous and not cover all the bases. And people are familiar with natural language. And it can fill in the missing details and resolve the ambiguities.

    This has been known to be possible for decades, since (simplifying a bit) a non-technical manager can tell an engineer, in natural, ambiguous language, what to do, and the engineer will do it. Now the AI takes the place of the engineer.

    Also, I personally never believed, before AI, that programming would disappear, so the argument that "this has been hyped before" doesn't touch my soul.

    I have no idea why this is so hard to understand. I'd like people to reply to me in addition to downvoting.

    • danhau 1 hour ago
      Programmers have enjoyed an occupation with solid stability and growing opportunities. AI challenging this virtually overnight is a tough pill to swallow. Naturally, many subscribe to the hope that it will fail.

      How far AI will succeed in replacing programmers remains to be seen. Personally I think many jobs will disappear, especially in the largest domains (web). But I think this will only be a fraction and not a majority. For now, AI is simply most useful when paired with a programmer.

      • cafebabbe 52 minutes ago
        AI is useful when paired with an experienced programmer.

        Experienced through old-school (pre-LLM) practice.

        I don't clearly see a good endgame for this.

    • quotemstr 1 hour ago
      The thing about talking to computers is less the formality and more the specificity. People don't know what they want. To use an LLM effectively, you need to think about what you want with enough clarity to ask for it and check that you're getting it. That LLMs accept your wishes in the form of natural language instead of something with a LALR(1) grammar doesn't magically obviate the need for specificity and clarity in communication.
      • bananaflag 45 minutes ago
        Agree that one needs clarity, but how does that differ from my example with the manager and the engineer? The manager also (ideally) learns over time that when they are clearer, the engineer does better work.
  • ryanjshaw 2 hours ago
    Until a year ago I believed as the author did. Then LLMs got to the point where they sit in meetings like I do, make notes like I do, have a memory like I do, and their context window is expanding.

    The only issue I saw after a month of building something complex from scratch with Opus 4.6 is poor adherence to high-level design principles and consistency. This can be solved with expert guardrails, I believe.

    It won’t be long before AI employees join the daily standup and deliver work alongside the team, with other users in the org not even realizing or caring that it’s an AI “staff member”.

    It won’t be much longer after that before they start to tech-lead those same teams.

    • Roark66 1 hour ago
      After 2 years of using all of these tools (Claude C, Gemini cli, opencode with all models available) I can tell you it is a huge enabler, but you have to provide those "expert guardrails" yourself, by monitoring every single deliverable.

      For someone who is able to design an end-to-end system by themselves, these tools offer a big time saving, but they come with dangers too.

      Yesterday a mid-level dev on my team proudly presented a web tool he "wrote" in Python (to be run on localhost) that runs kubectl in the background and presents things like the versions of images running in various namespaces, etc. It looked very slick; I can already imagine the product managers asking for it to be put on the network.

      So what's the problem? For one, no auth, no concurrency handling, all queries run in a single thread, and on and on. A maintenance nightmare waiting to happen. That is the risk of a person who knows something, but not enough, building tools by themselves.

      • ryanjshaw 1 hour ago
        Yup. I’m no expert, so maybe I’m completely off base, but if I were OpenAI or Anthropic I’d likely just hire 1000 highly skilled engineers across multiple disciplines, tell them to build something in their domain of expertise, then critique the model’s output, iteratively work on guardrails for a month or two until the model one-shots the problem, and package that into the new release.
  • helsinkiandrew 1 hour ago
    I'd say that the article left out software reuse, which was talked about a lot more in the late '90s and early '00s than now.

    You could argue that coding with LLMs is a form of software reuse that removes some of its disadvantages.

  • Havoc 2 hours ago
    Reviewing history is not a great way to approach groundbreaking tech.
    • elcapitan 1 hour ago
      "Not learning from history because the present is the present" is a pretty accurate description of the world in 2026, at least.
    • forgetfreeman 1 hour ago
      We have yet to invent groundbreaking tech that transcends either human nature or the banal depravity that stems from the profit motive at scale. Prior history of major tech innovations therefore may have some insight to offer regarding expected outcomes of the current hype wave around AI. The notion that technology breaks so cleanly from underlying social paradigms as to be wholly unpredictable is one of the tech industry's most persistently naive and destructive mythologies.