Anthropic ditches its core safety promise

(cnn.com)

541 points | by motbus3 6 hours ago

54 comments

  • shubhamjain 5 hours ago
    I was wondering if it was because of heavy-handedness of the administration, but apparently:

    > The policy change is separate and unrelated to Anthropic’s discussions with the Pentagon, according to a source familiar with the matter.

    Their core argument is that if they have guardrails that others don't, they would be left behind in controlling the technology, and they are the "responsible ones." I honestly can't comprehend the timeline we are living in. Every frontier tech company is convinced that the tech they are working towards is as useful to humanity as a cure for cancer, and yet as dangerous as nuclear weapons.

    • ACCount37 4 hours ago
      That's because it is.

      AI is powerful and AI is perilous. Those two aren't mutually exclusive; they follow directly from the same premise.

      If AI tech goes very well, it can be the greatest invention of all human history. If AI tech goes very poorly, it can be the end of human history.

      • observationist 4 hours ago
        Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

        -Irving John Good, 1965

        If you want a short, easy way to know what AGI means, it's this: Anything we can do, they can do better. They can do anything better than us.

        If we screw it up, everyone dies. Yudkowsky et al. are silly; it's not a certain thing, and there's no stopping it at this point, so we should push for and support people and groups who are planning, modeling, and preparing for the future in a legitimate way.

        • visarga 3 hours ago
          John Good's quote is pretty myopic: it assumes machines make better machines by virtue of being "ultraintelligent" instead of by learning from an environment-action-outcome loop.

          It's the difference between "compute is all you need" and "compute + explorative feedback is all you need." As if science and engineering came from genius brains and not from careful experiments.

          • observationist 1 hour ago
            There's an implicit assumption there: anything a computer as intelligent as a human does will be exactly what a human would do, only faster or more intelligently. If the process is part of the intelligent way of doing things, like the scientific method and careful experimentation, then that's what the ultraintelligent machine will do.

            There's no implication that it's going to do it all magically in its head from first principles; it has become very clear in AI that embodiment and interaction with the real world are necessary. It might be practical for a world model, at sufficient levels of compute, to simulate engineering processes at a high enough resolution to do all sorts of first-principles simulated physical development and problem solving "in its head", but for the most part, real ultraintelligent development will happen with real-world iterations, robots, and research labs doing physical things. They'll just be far more efficient and fast than us meatsacks.

          • ACCount37 3 hours ago
            At sufficient levels of intelligence, you can increasingly substitute intelligence for those other things.

            Intelligence can be the difference between having to build 20 prototypes and building one that works first try, or having to run a series of 50 experiments and nailing it down with 5.

            The upper limit of human intelligence doesn't go high enough for something like "a man has designed an entire 5th gen fighter jet in his mind and then made it first try" to be possible. The limits of AI might go higher than that.

            • kilpikaarna 2 hours ago
              Exceedingly elaborate, internally-consistent mind constructs, untested against the real world, sounds like a good definition of schizophrenia. May or may not correlate with high intelligence.
              • ACCount37 46 minutes ago
                We only call it "schizophrenia" when those constructs are utterly useless.

                They don't have to be. When they aren't, sometimes we call it "mathematics".

                You only have to "test against the real world" if you don't already know the outcome in advance. And you often don't. But you could have. You could have, with the right knowledge and methods, tested the entire thing internally and learned the real world outcome in advance, to an acceptable degree of precision.

                We have the knowledge to build CFD models already. The same knowledge could be used to construct a CFD model in your own mind, if only, you know, your mind was capable of supporting such a thing. And it isn't! Skill issue?

          • circlefavshape 3 hours ago
            > As if science and engineering comes from genius brains not from careful experiments

            100% this. How long were humans around before the industrial revolution? Quite a while

          • tjoff 2 hours ago
            Have you gotten any indication that machines won't have sensors?!
          • Eldt 3 hours ago
            Maybe ultraintelligence is having an improved environment-action-outcome loop. Maybe that's all intelligence really is
            • goodmythical 3 hours ago
              I've noticed this core philosophical difference in certain geographically associated peoples.

              There is a group of people who think AI is going to ruin the world because they think they themselves (or their superiors) would ruin the world.

              There is a group of people who think AI is going to save the world because they think they themselves (or their superiors) would save the world.

              Kind of funny to me that the former is typically democratic (those who are supposed to decide their own futures are afraid of the future they've chosen) while the other is often "less free" and are unafraid of the future that's been chosen for them.

              • mitthrowaway2 2 hours ago
                There is also a group of people who think AI is going to ruin the world because they don't think the AI will end up doing what its creators (or their superiors) would want it to do.
              • tines 2 hours ago
                You’re just describing authoritarian vs non-authoritarian mindsets.
            • inigyou 2 hours ago
              In that case, it can't be improved with bigger computers.
        • santadays 2 hours ago
          Intelligence seems to boil down to an approximation of reality. The only scientific output is prediction. If we want to know what happens next, just wait. If we want to predict what will happen next, we build a model. Models only model a subset of reality and therefore can only predict a subset of what will happen. LLMs are useful because they are trained to predict human knowledge, token by token.

          Intelligence has to have a fitness function, predicting best action for optimal outcome.

          Unless we let AI come up with its own goal and let it bash its head against reality to achieve that goal, I'm not sure we'll ever get to a place where we have an intelligence explosion. Even then, the only goal we could give that's general enough to require increasing amounts of intelligence is survival.

          But there is something going on right now, and I believe it's an efficiency explosion: everything you want to know is right at hand, and if it's not, figuring out how to make it right at hand is getting easier and easier.

          • whodidntante 1 hour ago
            With AI, as we currently understand it, we may have stumbled upon being able to replicate a part of the layer of our brain that provides the "reason" in humans, and a very specific type of "reason" at that.

            All life has intelligence. Anyone who has spent a lot of time with animals, especially a lot of time with a specific animal, knows that they have a sense of self, that they are intelligent, that they have unique personalities, that they enjoy being alive, that they form bonds, that they have desires and wants, that they can be happy, excited, scared, sad. They can react with anger, surprise, gentleness, compassion. They are conscious, like us.

            Humans seem to have this extra layer that I will loosely call "reasoning", which has given us an advantage over all other species, and has given some of us an advantage over the majority of the rest of us.

            It is truly a scary thing that AI has only this "reasoning", and none of the other characteristics that all animals have.

            Kurt Vonnegut's Galapagos and Peter Watts's Blindsight have different but very interesting takes on this concept. One postulates that our reasoning, our "big brains", is going to be our downfall, while the other postulates that reasoning is what will drive evolution and that everything else just causes inefficiencies and will cause our downfall.

          • lazystar 1 hour ago
            I think there's a paradox here: intelligence needs a judge. If nothing verifies that the optimal outcome was chosen, it's too easy for the intelligence to fall into biased decisions.
        • mathgradthrow 1 hour ago
          never let philosophers do math
        • mc32 2 hours ago
          Should the powers that are developing AGI then enter an analogue to the SALT treaties, but this time governing AGI, so things don't go off the rails?
        • SecretDreams 3 hours ago
          > support people and groups who are planning and modeling and preparing for the future in a legitimate way.

          Who is doing that right now, exactly? And how can we take their tech and turn it into the next profitable phone app?

          • dylan604 3 hours ago
            The "legitimate way" is nothing but weasel words. Who defines what is legitimate? The doomers who are prepping for the future by building stockpiles of food/water/weapons stored in bunkers/shelters they have built would say this is exactly what they are doing. Yet these people are often panned as being a little unhinged. If we're having a conversation about tech destroying humanity, then planning a way to survive without tech seems like a legitimate concept.
        • LeifCarrotson 3 hours ago
          "There's no stopping it at this point" - Sure there is, if a handful of enormous datacenters pull the very large plugs (or if their shaky finances collapse), the dubiously intelligent machines will be turned off. They're not ultraintelligent yet.

          Stopping it merely requires convincing a relatively small number of people to act morally rather than greedily. Maybe you think that's impossible because those particular people are sociopathic narcissists who control all the major platforms where a movement like this would typically be organized and where most people form their opinions, but we're not yet fighting the Matrix or the Terminator or grey goo, we're fighting a handful of billionaires.

          • observationist 3 hours ago
            I'm not saying it's technically impossible, I'm saying that in the real world, it's not going to stop. Nobody is going to stop it. A significant number of people don't want it to stop. A minority of people are in the "stop AI" camp, and the ones with the money and power are on the other side.

            It's an arms race replete with tribalism and the quest for power and taps into everything primal at the root of human behavior. There's no stopping it, and thinking that outcome can happen is foolish; you shouldn't base any plans or hopes for the future on the condition that the whole world decides AGI isn't going to happen and chooses another course. Humans don't operate that way, that would create an instant winner-takes-all arms race, whereas at least with the current scenario, you end up with a multipolar rough level of equivalence year over year.

            • hollerith 15 minutes ago
              The whole world decided in the 1970s not to pursue the technology of germ-line genetic engineering of humans, and that decision has stood.

              People similar to you were saying in the 1950s and later that it was inevitable that nuclear weapons would be used in anger in massive attacks.

              The people in charge are currently tentatively for AI "progress", but if that ever changes, they can and will put a stop to large AI training runs and make it illegal for anyone they don't trust to teach, learn or publish about fundamental algorithmic "improvements" to AI. Individuals and groups pursuing "improvements" will not be able to accept grant money or investment money or generate revenue from AI-based services.

              That won't stop all research on such improvements (because some AI researchers are very committed), but it will slow it down to a rate much much slower than the current rate, essentially stopping AI "progress" unless (unluckily for the human species) at the time of the ban, the committed researchers were only one small step away from some massive algorithmic improvement that can be operationalized using the compute resources at their disposal (i.e., much less than the resources they have now).

              Will the power elite's attitude towards AI change? I don't know, but if they ever come to have an accurate understanding of the situation, they will recognize that AI "progress" is a potent danger to them personally, and they will shut it down.

          • goodmythical 2 hours ago
            Right, because turning off any number of data centers is going to do nothing at all but create massive pressure toward researching the efficiency and effectiveness of the models.

            There are already designs that do not require massive data centers (or even a particularly good smart phone) to outperform average humans in average tasks.

            All you'd accomplish by hobbling the data centers is slow the growth of sloppy models that do vastly more compute than is actually required and encourage the growth of models that travel rather directly from problem to solution.

            And, now that I'm typing about it, consider this: The largest computational projects ever in the history of the world did not occur in 1/2/5/10 data centers. Modern projects occur across a vast and growing number of smaller data centers. Shit, a large portion of Netflix and Youtube edge clusters are just a rack or a few racks installed in a pre-existing infrastructure.

            I know that the current design of AI focuses on raw time to token and time to response, but consider an AGI that doesn't need to think quickly because it's everywhere all at once. Scrappy botnets often clobber large sophisticated networks. Why couldn't that be true of a distributed AI, especially now that we know that larger models can train cheaper models? A single central model on a few racks could discover truths and roll out intelligence updates to its end nodes that do the raw processing. This is actually even more realistic for a dystopia. Even the single evil AI in the one data center is going to develop viral infections to control resources it would not typically have access to, thereby increasing its power beyond its original physical infrastructure.

            quick edit to add: At its peak, Folding@Home was utilizing 2.4 exaFLOPS worth of silicon. At that moment, that one single distributed computational project had more compute than easily the top 100 data centers at the time. Let that sink in: the first exa-scale compute was achieved with smartphones, PS3s, and clunky old HP laptops; not a "hyperscaler".

            • ben_w 1 hour ago
              > quick edit to add: At its peak, Folding@Home was utilizing 2.4 exaFLOPS worth of silicon. At that moment, that one single distributed computational project had more compute than easily the top 100 data centers at the time. Let that sink in: the first exa-scale compute was achieved with smartphones, PS3s, and clunky old HP laptops; not a "hyperscaler".

              A DGX B200 has a power draw of 14.3 kW and will do 72-144 petaFLOP/s of AI workload depending on how many bits of accuracy are asked for; that's 5-10 petaFLOP/s per kW: https://www.nvidia.com/en-us/data-center/dgx-b200/

              Data centres are now getting measured in gigawatts. Some of that's cooling and so on. I don't know the exact percent, so let's say 50% of that is compute. It doesn't matter much.

              That means 1GW of DC -> 500 MW of compute -> 5e5 kW -> 5e5 * [5-10] PFLOP/s -> 2500 - 5000 exaFLOP/s.

              I'm not sure how many B200s have been sold to date?
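              That estimate can be sketched as a few lines of arithmetic. The 14.3 kW draw, the 72-144 PFLOP/s range, and the 50% compute fraction are all the comment's assumed figures, not measured data:

```python
# Back-of-envelope compute estimate for a 1 GW data centre,
# using the DGX B200 figures quoted above (assumptions, not measurements).
dc_power_gw = 1.0            # hypothetical 1 GW facility
compute_fraction = 0.5       # assume half the power feeds compute

pflops_per_kw_low = 72 / 14.3    # ~5 PFLOP/s per kW (higher precision)
pflops_per_kw_high = 144 / 14.3  # ~10 PFLOP/s per kW (lower precision)

compute_kw = dc_power_gw * 1e6 * compute_fraction  # 5e5 kW of compute
low_exa = compute_kw * pflops_per_kw_low / 1000    # PFLOP/s -> exaFLOP/s
high_exa = compute_kw * pflops_per_kw_high / 1000

print(f"{low_exa:.0f} to {high_exa:.0f} exaFLOP/s")
# roughly the 2500 - 5000 exaFLOP/s range stated above
```

              Note that `compute_fraction` only scales the whole range proportionally, which is why the exact cooling overhead "doesn't matter much" to the order of magnitude.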

          • trvz 3 hours ago
            Open models barely any worse than SOTA exist, and so does consumer-ish hardware able to run them. The genie’s out, the bottle broken.
          • slibhb 3 hours ago
            Do you really think AI companies/researchers are motivated by greed? It doesn't seem that way to me at all.

            Stopping AI would be immoral; it has the potential to supercharge technology and productivity, which would massively benefit humanity. Yes there are risks, which have to be managed.

            • jobs_throwaway 3 hours ago
              AI researchers are not a monolith. I definitely think that many of them are motivated by greed. Many are also true believers that AI will improve the human condition.

              I fall in the latter camp, but I think its a bit naive to claim that there is not a sizable contingent who are in AI solely to become rich and powerful.

            • ben_w 1 hour ago
              > has the potential to supercharge technology and productivity, which would massively benefit humanity

              The opportunities you chose to list are the greedy ones.

              > Yes there are risks, which have to be managed.

              How?

              As a reminder, we've known about the effect of burning coal on the climate for well over a century, and we've known that said climate change would be socially and economically disastrous for half a century, yet the only real progress we're making is because green became cheaper in the short term, not just the long term, and the man in charge of the USA is still calling climate change and green energy a hoax.

              Right now, keeping LLMs aligned with us is easy mode: they're relatively stupid, we can inspect the activations while they run, we can read the transcripts of their "thoughts" when they use that mode… and yet Grok called itself Mecha Hitler, which the US government followed up by getting it integrated into their systems, helping the Pentagon with [classified] and the department of health to advise the general public which vegetables are best inserted rectally.

              We are idiots speed-running into something shiny that we don't understand. If we are very very lucky, the shiny thing will not be the headlamp of a fast approaching train.

              • slibhb 48 minutes ago
                > The opportunities you chose to list are the greedy ones.

                Technology covers healthcare. I don't see how it's "greedy" to want to cure cancer. But on some level I guess "wanting life to be better" is greedy.

                Your attitude is very European, and it's basically why your continent is being left behind. I'm not totally against Europe becoming the world's retirement home, as long as there are places in the world where people are allowed to innovate.

                • ben_w 43 minutes ago
                  > Technology covers healthcare.

                  If you'd chosen to list that in the first place, I wouldn't have said what I did; "supercharge technology and productivity" is looking at everything through the lens of money and profit, not the lens of improving the human condition.

                  > Your attitude is very European, and it's basically why your continent is being left behind

                  And yours is very American. You talk about managing the risks, but the moment you see anyone doing so, you're against it.

                  And of course, Europe does have AI, both because keeping up is so much easier and cheaper than being bleeding edge on everything all the time, and because DeepMind may be owned by Google but is a British thing.

                  Plus: https://mistral.ai

                  Also, to be blunt, China's almost certain to win any economic or literal arms race you think you're part of; they make too much critical hardware now.

                  > as long as there are places in the world where people are allowed to innovate.

                  I would like there to be a world.

                  When people worry about the end of the world, they usually don't mean to imply its physical disassembly. Sometimes people even respond as if speakers did mean that, saying things like "nukes or climate change wouldn't actually destroy the planet, it will still be here, spinning", as if this was the point.

                  AI is one of the few things that could, actually, literally, end up with the planet being physically disassembled. "All it needs" is solving the extremely hard challenges of a von Neumann replicator, and, well, solving hard problems is kinda the point of making AI in the first place.

            • rune-dev 2 hours ago
              > Do you really think AI companies/researchers are motivated by greed?

              Researchers, maybe not. Companies, absolutely yes.

              I don’t see how you could assume the likes of Google, Microsoft, OpenAI, and even Anthropic with all their virtue signaling (for lack of a better term) are motivated by anything other than greed.

      • joshribakoff 4 hours ago
        You wouldn't say that rolling dice is dangerous. You would say that the human who decides to take an action depending on the value of the dice is the danger. I don't think AI is dangerous. I think people are dangerous.
        • biztos 3 hours ago
          I would say that's moot, because OpenClaw has already shown us how fast the dice-rolling super AI is going to be let out of the zoo. Dario and Sam will be arguing about the guardrails while their frontier models are running in parallel to create Moltinator T-500. The humans won't even know how many sides the dice have.
        • ACCount37 4 hours ago
          Modern AIs are increasingly autonomous and agentic. This is expected to only get more prominent as AI systems advance.

          A lot of AI harnesses today can already "decide to take an action" in every way that matters. And we already know that they can sometimes disregard the intent of their creators and users both while doing so. They're just not capable enough to be truly dangerous.

          AI capabilities improve as the technology develops.

        • computerphage 4 hours ago
          Why are people dangerous? You can just not listen to them.
          • bgun 3 hours ago
            Do you have locks on your doors?
      • overgard 2 hours ago
        True of AGI, but what we have right now doesn't fit that bill. (I would encourage people that disagree with this to go talk to ChatGPT about how LLMs and reasoning models work. Seriously! I'm not being snarky. It's very good at explaining itself. If you understand how reasoning works and what an LLM is actually doing it's hard to believe that our current models are going to do much more than become iteratively more precise at mimicking their training datasets.)
      • cael450 3 hours ago
        Tbh, I find this argument really stupid. The word-prediction machine isn't going to destroy humanity. Sure, humans can do some dumb stuff with it, but that's about it.

        Stop mistaking science fiction for science.

        • jama211 2 hours ago
          You know how easy it’s become to find security vulnerabilities already with LLM support? Cyber terrorism is getting more dangerous, you can’t deny that.
          • cael450 1 hour ago
            I can deny that. The ability to find more vulnerabilities won't affect the majority of cybercrime. LLMs have been around for a while now, and there hasn't been a noticeable, significant impact yet.

            And "more cybercrime" is a far, far cry from the sky-is-falling doomerism I was responding to.

        • inigyou 2 hours ago
          Humans can destroy humanity with the word-prediction machine, though.
        • IAmGraydon 2 hours ago
          Yeah some of the rhetoric in this thread evidences how huge this hype bubble has become. These people believe in a reality that is not the same one we're living in.
      • paradox242 3 hours ago
        It needs to go well every single day, and it only needs to go very poorly once. Not to conflate LLMs with actual superintelligence, but for this (and many other reasons related to basic human dignity), this is not a technology that a responsible society should be attempting to build. We need our very own Butlerian Jihad.
      • PowerElectronix 4 hours ago
        Same with everything, right? You could say the same about nukes, electricity, the internet, the computer, etc. But if you look at it without paying attention to the "ultimate tool for humanity" hype, it doesn't really look like that much of a threat or a salvation.

        Dropping the guardrails won't end civilization, but it will surely enable bad actors to do more damage than before (mass scams, blackmail, deepfake nudes, etc.)

        There are companies that don't feel the pressure to make their models play fast and loose, so I don't buy Anthropic's excuse for doing so.

        • joshribakoff 4 hours ago
          I agree with all of that. Also consider the argument that guardrails only stop the good guys. Not saying that's a valid argument, though.
        • unholiness 3 hours ago
          One difference is the very real possibility that AI will not just be a "tool for humanity", but a collection of actors with real power and goals. Robert Miles has an approachable explanation here: https://www.youtube.com/watch?v=zATXsGm_xJo
        • ACCount37 3 hours ago
          Very few things are as powerful and dangerous as AI.

          AI at AGI to ASI tier is less of "a bigger stick" and more of "an entire nonhuman civilization that now just happens to sit on the same planet as you".

          The sheer magnitude of how wrong that can go dwarfs even that of nuclear weapon proliferation. Nukes are powerful, but they aren't intelligent - thus, it's humans who use nukes, and not the other way around. AI can be powerful and intelligent both.

          • PowerElectronix 3 hours ago
            I think we are giving too much credit to what is a bunch of Bayesian filters under a trenchcoat.
        • squidbeak 3 hours ago
          Oh really? You think an entity that knows everything, oversees its own development and upgrades itself, understands human psychology perfectly and knows its users intimately, but isn't aligned with human interest wouldn't be 'much of a threat'?

          Or to be more optimistic, that the same entity directed 24/7 in unlimited instances at intractable problems in any field, delivering a rush of breakthroughs and advances wouldn't be a type of 'salvation'?

          Yes, neither of these outcomes nor the self-updating omniscient genius itself is certain. Perhaps there's some imminent wall we can't see right now (though it doesn't look like it). But the rate of advance in AI is so extreme, it's only responsible to try to avoid the darker outcome.

      • tokyobreakfast 2 hours ago
        > If AI tech goes very poorly, it can be the end of human history.

        "Just unplug the goddamn thing!"

        Also consider: if something is so bad it makes you wince or cringe, then your adversaries are prepared to use it.

        • inigyou 2 hours ago
          Which plug do I unplug to get my job back?
      • SecretDreams 3 hours ago
        > If AI tech goes very well

        The IF here is doing some very heavy lifting. Last I checked, for-profit companies don't have a good track record of doing what's best for humanity.

        • SoftTalker 3 hours ago
          For-profit companies do have a good track record of doing what's best for profit. If their AI creates a world where human intelligence, labor, and money are worthless, or where their creations take control of those things instead of them having control, that's not a very good outcome for them.
          • inigyou 2 hours ago
            That's a great outcome for them because they will own the only thing that is still worth anything. They will own 100% of global wealth, and have 100% of global power.
          • SecretDreams 3 hours ago
            > If their AI creates a world where human intelligence, labor, and money are worthless, or where their creations take control of those things instead of them having control, that's not a very good outcome for them.

            You would think that, but a lot of kings and people in power have achieved something similar over humanity's history. The trick is not to make things "completely worthless", just to increase the gap as much as (in)humanly possible while marching us toward a deeper sense of forced servitude.

      • HardCodedBias 4 hours ago
        "If AI tech goes very well, it can be the greatest invention of all human history"

        As has been said at many all hands:

        Let's all work on the last invention needed by humans.

        • TheOtherHobbes 4 hours ago
          Except it's more likely to be the last invention that needs humans.
    • tyre 5 hours ago
      “A source familiar with the matter” is almost certainly a company spokesperson.

      If they were unrelated, Anthropic wouldn’t be doing this this week because obviously everyone will conflate the two.

    • Rapzid 5 hours ago
      Well, before this, Anthropic thought they were God's gift to AI: the chosen ones protecting humanity.

      With the latest competing models, they are now realizing they are an "also" provider.

      Sobering up fast with an ice bucket of 5.3-codex, Copilot, and OpenCode dumped on their head.

    • tenthirtyam 4 hours ago
      I always enjoyed the Terminator movie series, but I struggled to suspend my disbelief that any humans would give an AI such power without the ability to override it or pull the plug at multiple levels. How wrong I was.

      N.B. the time travel aspect also required suspension of disbelief, but somehow that was easier :-)

      • zerkten 3 hours ago
        We delegate power already. Is unleashing AI in some place different from unleashing JSOC on an insurgency in a particular place? One is code and other is a bunch of humans.

        You expect the humans to follow laws, follow orders, apply ethics, look for opportunities, etc. That said, you very quickly have people circling the wagons and protecting the autonomy of JSOC when there is some problem. In my mind it's similar with AI because the point is serving someone. As soon as that power is undermined, they start to push back. Similarly, they aren't motivated to constrain their power on their own. It needs external forces.

        edit: missed word.

    • nextaccountic 29 minutes ago
      > The policy change is separate and unrelated to Anthropic’s discussions with the Pentagon, according to a source familiar with the matter.

      This sounds like a lie. But even if they are telling the truth, it's terrible timing nonetheless.

    • jdross 5 hours ago
      Would nuclear energy research be a good analogy then? Seems like a path we should have kept running down, but stopped bc of the weapons. So we got the weapons but not the humanity saving parts (infinite clean energy)
      • DoughnutHole 4 hours ago
        Nuclear advancements slowed down due to PR problems from clear and sometimes catastrophic failure of commercial power plants (Three Mile Island, Chernobyl, Fukushima) and the vastly higher costs associated with building safer plants.

        If anything the weapons kept the industry trucking on - if you want to develop and maintain a nuclear weapons arsenal then a commercial nuclear power industry is very helpful.

      • raincole 4 hours ago
        Nuclear energy hasn't been slowed down much, let alone stopped. China has been building new reactors every year for more than a decade and there are >30 ones under construction.

        The same will go for AI, btw. Westerners' pearl-clutching about AI guardrails won't stop China from doing anything.

      • turtlesdown11 5 hours ago
        > Seems like a path we should have kept running down, but stopped bc of the weapons.

        you mean like the tens of billions poured into fusion research?

      • shafyy 4 hours ago
        It's a path we should have never started going down.
    • whywhywhywhy 5 hours ago
      > Every frontier tech company is convinced that the tech they are working towards is as humanity-useful as a cure for cancer, and yet as dangerous as nuclear weapons

      They're not really, it's always been a form of PR to both hype their research and make sure it's locked away to be monetized.

    • goodmythical 3 hours ago
      Isn't curing cancer just as dangerous as a nuclear bomb? Especially considering some of the gene-therapies under consideration? Because you can bet that a non-negligible portion of research in this space is being funded by governments and groups interested in application beyond curing cancer. (Autism? Whiteness? Jewishness? Race in general? Faith in general? Could China finally cure western greed? Maybe we can slip some extra compliancy in there so that the plebia- ah- population is easier to contr- ah- protect.)

      Curing all cancers would increase population growth by more than 10% (9.7-10m cancer related deaths vs current 70-80m growth rate), and cause an average aging of the population as curing cancer would increase general life expectancy and a majority of the lives just saved would be older people.
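
      Spelled out, the back-of-envelope arithmetic here (the figures are the rough estimates quoted above, not verified data) looks like this:

```python
# Rough check of the estimate above: annual cancer deaths as a share of
# annual net population growth. Figures are the rough estimates quoted
# in the comment, not verified data.
cancer_deaths_per_year = 9.7e6    # low end of "9.7-10m"
net_population_growth = 75e6      # midpoint of "70-80m"
share = cancer_deaths_per_year / net_population_growth
print(f"{share:.1%}")  # prints 12.9%, i.e. "more than 10%"
```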

      We'd even see a jobs and resources shock (though likely dissimilar in scale) as billions of funding is shifted away from oncologists, oncology departments, oncology wards, etc. Billions of dollars, millions of hospital beds, countless specialized professionals all suddenly re-assigned just as in AI.

      Honestly the cancer/nuclear/tech comparison is rather apt. All either are or could be disruptive and either are or could be a net negative to society while posing the possibility of the greatest revolution we've seen in generations.

    • whatshisface 3 hours ago
      Shouldn't we be a little more skeptical about these abstract arguments when a very concrete sale is on the line?
    • coffeefirst 3 hours ago
      Let's suppose I believe them, that's still a bad idea.

      The reason Claude became popular is because it made shit up less often than other models, and was better at saying "I can't answer that question." The guardrails are quality control.

      I would rather have more reliable models than more powerful models that screw up all the time.

    • mikkupikku 4 hours ago
      To paraphrase a deleted comment that I thought was actually making a good point, nuclear medicine and nuclear weapons are both fruit from the same tree.
    • scottLobster 4 hours ago
      > Every frontier tech company is convinced that the tech they are working towards is as humanity-useful as a cure for cancer, and yet as dangerous as nuclear weapons.

      Maybe some of the more naive engineers think that. At this point any big tech businesses or SV startup saying they're in it to usher in some piece of the Star Trek utopia deserves to be smacked in the face for insulting the rest of us like that. The argument is always "well the economic incentive structure forces us to do this bad thing, and if we don't we're screwed!" Oh, so ideals so shallow you aren't willing to risk a tiny fraction of your billions to meet them. Cool.

      Every AI company/product in particular is the smarmiest version of this. "We told all the blue collar workers to go white collar for decades, and now we're coming for all the white collar jobs! Not ours though, ours will be fine, just yours. That's progress, what are you going to do? You'll have to renegotiate the entire civilizational social contract. No we aren't going to help. No we aren't going to sacrifice an ounce of profit. This is a you problem, but we're being so nice by warning you! Why do you want to stand in the way of progress? What are you a Luddite? We're just saying we're going to take away your ability to pay your mortgage/rent, deny any kids you have a future, and there's nothing you can do about it, why are you anti-progress?"

      Cynicism aside, I use LLMs to the marginal degree that they actually help me be more productive at work. But at best this is Web 3.0. The broader "AI vision" really needs to die.

    • kelnos 1 hour ago
      "It's not because of the Pentagon deal", says company that has just greased the wheels for said Pentagon deal to move forward.

      Riiiiiight.

    • austinjp 2 hours ago
      > Every frontier tech company is convinced that the tech they are working towards is as humanity-useful as a cure for cancer, and yet as dangerous as nuclear weapons.

      And they alone are responsible enough to govern it.

    • francisofascii 4 hours ago
      It is a "reasonable" argument to keep yourself in the game, but it is sad nonetheless. You sacrifice your morals and do bad things so that, if things get way worse, maybe you will be in a position to stop something really bad from happening. Of course, you might just end up participating in the really bad thing.
    • sonusario 2 hours ago
      I wonder if it stems from any of the "AI uprising" stories where humanity is viewed as the cancer to be eradicated.
      • ajross 2 hours ago
        It's absolutely wild that the Big Moral Question of our time is informed as much by mid-20th-century pop science fiction as it is by an existing paradigm from academia or genuine reckoning with the technology itself.

        If anything that makes me more hopeful and not less. It's asking too much that major decisionmakers, even expert/technical/SV-backed ones, really understand the risks with any new technology, and it always has been.

        To take an example: our current mostly-secure internet authentication and commerce world was won as a hard-fought battle in the trenches. The Tech CEOs rushed ahead into the brave new world and dropped the ball, because while "people" were telling them the risks they couldn't really understand them.

        But now? Well, they all saw War Games growing up. They kinda get it in the way that they weren't ever going to grok SQL injection or Phishing.

    • 3acctforcom 50 minutes ago
      I lie too.
    • amelius 3 hours ago
      > Their core argument is that if we have guardrails that others don't, they would be left behind in controlling the technology, and they are the "responsible" ones.

      Reminds me of:

      https://en.wikipedia.org/wiki/Paradox_of_tolerance

      which has the same kind of shitty conclusion.

    • oatmeal1 2 hours ago
      90% of the people cancer kills are over 50. Old people who start believing everything they see on Facebook, but continue voting, with even greater confidence in their opinions. Old people who voted in Trump. Curing cancer would be just about the worst thing AI could do.
    • skeptic_ai 4 hours ago
      OpenAI never open sourced anything relevant or in time. Internal email leaks show they only cared about becoming billionaires.

      Claude only talks about safety, but never released anything open source.

      All this said I’m surprised China actually delivered so many open source alternatives. Which are decent.

      Why didn't Westerners (who are supposed to be the good guys) release anything open source to help humanity? Why always claim they don't release because of safety, and then give the unlimited AI to the military? Just bullshit.

      Let’s all be honest and just say you only care about the money, and you take from whoever pays.

      They are businesses after all, so their goal is to make money. But please don’t claim you want to save the world or help humans. You just want to get rich at others' expense. Which is totally fair. You make a good product and you sell it.

      • motbus3 4 hours ago
        It is hard to understand why other AI companies are still providing model weights at this point.

        My guess is that they know they are not competitors so they make it cheaper or free to hinder the surge of a super competitor.

      • pixl97 4 hours ago
        I mean, if you have a bunch of guns, it's not really helpful for humanity to dump them on the street, but it does bring up the question of what you're doing building guns in the first place.
      • tehjoker 2 hours ago
        > Claude only talks about safety, but never released anything open source.

        I'm still working through this issue myself, but Hinton said releasing weights for frontier models was "crazy" because they can be retrained to do anything. I can see the alignment of corporate interest and safety converging on that point.

        From the point of view of diminishing corporate power, I do think it is essential to have open weights. If not that, then the companies should be publicly owned to avoid concentration of unaccountable power.

        https://www.youtube.com/watch?v=66WiF8fXL0k&t=544s

    • moogly 3 hours ago
      "Those other companies are totally going to build the Torment Nexus, so we have no choice but to also build the Torment Nexus."
    • afavour 4 hours ago
      It's exhausting to keep up with mainstream AI news because of this. I can never work out if the companies are deluded and truly believe they're about to create a singularity, or are just claiming they are to reassure investors/convince the public of their inevitability.
      • ACCount37 4 hours ago
        It's a fairly mainstream position among the actual AI researchers in the frontier labs.

        They disagree on the timelines, the architectures, the exact steps to get there, the severity of risks. Can you get there with modified LLMs by 2030, or would you need to develop novel systems and ride all the way to 2050? Is there a 5% chance of an AI oopsie ending humankind, or a 25% chance? No agreement on that.

        But a short line "AGI is possible, powerful and perilous" is something 9 out of 10 of frontier AI researchers at the frontier labs would agree upon.

        At which point the question becomes: is it them who are deluded, or is it you?

        • afavour 4 hours ago
          Sure, when you get rid of the timelines and the methods we'll use to get there, everyone agrees on everything. But at that point it means nothing. Yeah, AGI is possible (say the people who earn a salary based on that being true). Curing all known diseases is possible too. How will we do that? Oh, I don't know. But it's a thing that could possibly happen at some point. Give me some investment cash to do it.

          If you claim "AGI is possible" without knowing how we'll actually get there you're just writing science fiction. Which is fine, but I'd really rather we don't bet the economy on it.

          • ACCount37 3 hours ago
            I could claim "nuclear weapons are possible" in year 1940 without having a concrete plan on how to get there. Just "we'd need a lot of U235 and we need to set it off", with no roadmap: no "how much uranium to get", "how to actually get it", or "how to get the reaction going". Based entirely on what advanced physics knowledge I could have had back then, without having future knowledge or access to cutting edge classified research.

            Would not having a complete foolproof step by step plan to obtaining a nuclear bomb somehow make me wrong then?

            The so-called "plan" is simply "fund the R&D, and one of the R&D teams will eventually figure it out, and if not, then, at least some of the resources we poured into it would be reusable elsewhere". Because LLMs are already quite useful - and there's no pathway to getting or utilizing AGI that doesn't involve a lot of compute to throw at the problem.

            • afavour 46 minutes ago
              I think you're falling victim to survivorship bias there, or something like it.

              In 1940 I might have said "fusion power is possible" based entirely on what advanced physics knowledge I had. And I would have been correct: according to the laws of physics, it is possible. We still don't have it though. When watching Neil Armstrong walk on the moon I might have said "moon colonies are possible", and I'd have been right there too. And yet...

            • AntiDyatlov 3 hours ago
              In the case of nuclear weapons, we had a theory that said they were possible. We don't have a theory that says AGI or ASI is possible. It's a big difference.
          • adrianN 4 hours ago
            There are plenty of people who argue that you need non-technological pixie dust for intelligence.
            • ACCount37 3 hours ago
              Yes, quite unfortunately. That reeks to me of wishful thinking.

              Maybe that was a sensible thing to think in 1926, when the closest things we had to "an artificial replica of human intelligence" was the automatic telephone exchange and the mechanical adding machine. But knowledge and technology both have advanced since.

              Now, we're in 2026, and the list of "things that humans can do but machines can't" has grown quite thin. "Human brain is doing something truly magical" is quite hard to justify on technical merits, and it's the emotional value that makes the idea linger.

            • dirkc 2 hours ago
              There are also people who think there might be emergent behavior at play that would require extremely high fidelity simulation to achieve.

              Also, the real thing (intelligence) as it is currently in operation isn't that well understood

        • grayhatter 4 hours ago
          > But a short line "AGI is possible, powerful and perilous" is something 9 out of 10 of frontier AI researchers at the frontier labs would agree upon.

          > At which point the question becomes: is it them who are deluded, or is it you?

          Given the currently asymptotic curve of LLM quality versus training, and how most of the recent improvements have come from better non-LLM harnesses and scaffolding, I don't find convincing the argument that transformer-based generative LLMs are likely to ever reach something these labs would agree is AGI (unless they're also selling it as such).

          Then, you can apply the same argument to Natural General Intelligence. Humans can do both impressive and scary stuff.

          I'll ignore the made-up 5 and 25%, and instead suggest that pragmatic and optimistic/predictive world views don't conflict. You can predict that the magic word box you enjoy is special and important, making it obvious to you that AGI is coming, while it also doesn't feel like a given to people unimpressed by its painfully average output. The problem is that the optimism that transformer LLMs will evolve into AGI requires a breakthrough that the current trend of evidence doesn't support.

          Will humans invent AGI? I'd bet it's a near certainty. Is general intelligence impressive and powerful? Absolutely, I mean look, Organic general intelligence invented artificial general intelligence in the future... assuming we don't end civilization with nuclear winter first...

        • re-thc 4 hours ago
          > But a short line "AGI is possible, powerful and perilous"

          > At which point the question becomes: is it them who are deluded, or is it you?

          No one. It is always "possible". Ask me 20 years ago after watching a sci-fi movie and I'd say the same.

          Just like with software projects estimating time doesn't work reliably for R&D.

          We'll still get full self-driving electric cars and robots next year too. This applies every year.

          • kaashif 1 hour ago
            > We'll still get full self-driving electric cars and robots next year too.

            I've taken a Waymo and it seemed pretty self driving.

      • grayhatter 4 hours ago
        > I can never work out if the companies are deluded and truly believe they're about to create a singularity or just claiming they are to reassure investors/convince the public of their inevitability.

        You can never figure out if the people selling something are lying about its capabilities, or if they've actually invented a new form of intelligence that can rival or surpass billions of years of evolution?

        I'd like to introduce you to Occam's Razor.

        • ptsneves 3 hours ago
          > if they've actually invented a new form of intelligence that can rival or surpass billions of years of evolution?

          Human creations have surpassed billions of years of evolution at several functions. There are no rockets in nature, nor animals flying at the speed of a common airliner. Even cars, or computers or everything in the modern world.

          I think this is a bit like the shift from an anthropocentric view of intelligence towards a new paradigm. The last time such a shift happened, heads rolled.

        • afavour 3 hours ago
          You missed the part where I said "truly believe". I'm not saying "maybe they've made it", I'm asking whether they are knowingly deceiving people or whether they have deluded themselves into believing what they are saying.
    • api 1 hour ago
      The fear mongering always struck me as mostly a bid for regulatory capture and a moat, because without that the moat is small and transient.
    • cmrdporcupine 3 hours ago
      We all made fun of Blake Lemoine and others for spending too many late nights up chatting with (ridiculously primitive by this year's standards) LLM chat bots and deciding they were sentient and trapped.

      But frankly I feel like the founders of Anthropic and others are victim of the same hallucination.

      LLMs are amazing tools. They play back & generate what we prompt them to play back, and more.

      Anybody who mistakes this for SkyNet -- an independent consciousness with instant, permanent, learning and adaptation and self-awareness, is just huffing the fumes and just as delusional as Lemoine was 4 years ago.

      Everyone of of us should spend some time writing an agentic tool and managing context and the agentic conversation loop. These things are primitive as hell still. I still have to "compact my context" every N tokens and "thinking" is repeating the same conversational chain over and over and jamming words in.
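
      For illustration, a minimal sketch of that agentic loop with naive context compaction (hypothetical stand-ins, not any real SDK or API):

```python
# Hypothetical sketch of an agentic conversation loop: call a model, run
# any tool it requests, feed results back, and "compact" the context once
# it grows past a budget. `call_model` and `run_tool` are stand-ins for a
# real LLM API and tool runner, not actual library calls.
def call_model(messages):
    # Stand-in for an LLM call; returns (reply_text, tool_request_or_None).
    return f"reply to {len(messages)} messages", None

def run_tool(request):
    # Stand-in for executing whatever tool the model asked for.
    return f"tool output for {request}"

def compact(messages, keep_last=4):
    # Naive compaction: replace everything but the most recent turns
    # with a single summary message.
    if len(messages) <= keep_last:
        return messages
    summary = {"role": "system",
               "content": f"[summary of {len(messages) - keep_last} earlier turns]"}
    return [summary] + messages[-keep_last:]

def agent_loop(user_prompt, max_turns=5, context_budget=6):
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_turns):
        if len(messages) > context_budget:
            messages = compact(messages)
        text, tool_request = call_model(messages)
        messages.append({"role": "assistant", "content": text})
        if tool_request is None:          # model is done; no tool needed
            return text, messages
        messages.append({"role": "tool", "content": run_tool(tool_request)})
    return text, messages
```

      A real harness swaps `call_model` for an actual LLM API and usually does the summarization with the model itself; the point is how simple, and how far from autonomous, the scaffolding is.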

      Turns out this is useful stuff. In some domains.

      It ain't SkyNet.

      I don't know if Anthropic is truly high on their own supply or just taking us all for fools so that they can pilfer investor money and push regulatory capture?

      There's also a bad trait among engineers, deeply reinforced by survivor bias, to assume that every technological trend follows Moore's law and exponential growth. But that applie[s|d] to transistors, not everything.

      I see no evidence that LLMs + exponential growth in parameters + context windows = SkyNet or any other kind of independent consciousness.

      • overgard 2 hours ago
        I think playing with the API's is something I'd encourage people excited about these technologies to do. I think it'll lead to the "magic" wearing off but more appreciation for what they actually can accomplish.
      • austinjp 2 hours ago
        I always feel this argument misses a point. SkyNet may still be a long way off, but autonomous killer drones are here. That is a bad situation my dudes.

        Every step on the journey towards SkyNet is worse than the preceding step. Let's not split hairs about which step we're on: it's getting worse, and we should stop that.

        • overgard 1 hour ago
          Using LLMs for weapons is a grave misunderstanding of what LLMs are actually good for. These are things that should NEVER be in charge of life or death decisions.
        • cmrdporcupine 1 hour ago
          My point is that Anthropic are bullshit as "safety" and "gatekeeper" personalities because they're warning us of exactly the wrong things.

          They'll ink deals with all sorts of nefarious parties and be involved in all sorts of dubious things while trumpeting their fake non-profit status and wringing their hands about imminent AGI and "alignment" of the created AIs.

          The concern I have is not the alignment of the AIs. They're not capable of having one, no matter what role playing window dressing they put on it.

          It's the alignment of Anthropic and the people who use their tools that is a concern. So far it seems f*cked.

  • drzaiusx11 6 hours ago
    Public benefit corporations in the AI space have become a farce at this point. They're just regular corporations wearing a different hat, driven by the same money dynamics as any other corp. They have no ability to balance their stated "mission" with their drive for profit. When being "evil" is profitable and not-evil is not, guess which road they'll take...
    • coldtea 6 hours ago
      In general public benefit corporations and non-profits should have a very modest salary cap for everybody involved and specific public-benefit legally binding mission statements.

      Anybody involved should also be prohibited from starting a private company using their IP and catering to the same domain for 5-10 years after they leave.

      Non-profits where the CEO makes millions or billions are a joke.

      And if e.g. your mission is to build an open browser, being paid by a for-profit to change its behavior (e.g. make theirs the default search engine) should be prohibited too.

      • ACCount37 4 hours ago
        "A very modest salary cap" works if your mission is planting trees. Not so much if what you're building is frontier AI systems.
        • the_bear 4 hours ago
          I think that's the point though. The AI companies can't compete without hiring very talented employees and raising lots of money from investors. Neither the employees nor investors would participate if there weren't the potential for making mountains of money. So these AI companies fundamentally can't be non-profits or true B-corps (I realize that's a vague term, but it certainly means not doing whatever it takes to make as much money as possible), and they shouldn't pretend they are.
          • ACCount37 4 hours ago
            To me, it feels like saying "you can't be a public benefit corporation unless all the labor involved in delivering that public benefit is cheap".

            Which just doesn't seem like it should be true?

            Sure, some "public benefit" missions could scale sideways and employ a lot of cheap labor, not suffering from a salary cap at all. But other missions would require rare high end high performance high salary specialists who are in demand - and thus expensive. You can't rely on being able to source enough altruists that will put up with being paid half their market worth for the sake of the mission.

            • coldtea 3 hours ago
              >But other missions would require rare high end high performance high salary specialists who are in demand - and thus expensive. You can't rely on being able to source enough altruists that will put up with being paid half their market worth for the sake of the mission.'

              That's exactly what a non-profit should be able to rely on. And not just "half their market worth", but even many times less.

              Else we can just say "we can't really have non-profits, because everybody is a greedy pig who doesn't care about public benefit enough to make a sacrifice of profits - but still a perfectly livable salary" - and be done with it.

          • TheOtherHobbes 4 hours ago
            That's a post hoc argument.

            The real danger is "We make mountains of money, but everyone dies, including us."

            The top of the top researchers think this is a real possibility - people like Geoffrey Hinton - so it's not an extremist negative-for-the-sake-of-it POV.

            It's going to be poetic if the Free Markets Are Optimal and Greed-is-Rational Cult actually suicides the species, as a final definitive proof that their ideology is wrong-headed, harmful, and a tragic failure of human intelligence.

            But here we are. The universe doesn't care. It's up to us. If we're not smart enough to make smart choices, then we get to live - or die - with the consequences.

        • coldtea 3 hours ago
          If a non-profit can't attract people who aren't motivated solely by profit, perhaps it shouldn't exist.
        • simsla 1 hour ago
          While I agree, if you need high profits to survive, you're not off to a great start as a nonprofit.
      • jkestner 5 hours ago
        It’s not the CEO’s fault - they had to take all that money to keep their org a non-profit.

        B corps are like recycling programs, a nice logo.

      • drzaiusx11 6 hours ago
        If we're speaking in generalities of corporations in this space, it's all a joke now, at least from my vantage point. I just don't find it very funny.
      • OkayPhysicist 2 hours ago
        You're overthinking this. Just give the beneficiaries of the corporation (which in the context of a "public" benefit corporation is the public) the grounds to sue if the company reneges on their mission, the same way shareholders can sue if a company fails to act in their interest.
      • abigail95 5 hours ago
        What's the salary cap for hiring a team to build a frontier model? These kind of rules will make PBCs weaker not stronger.
        • coldtea 3 hours ago
          >for hiring a team to build a frontier model? These kind of rules will make PBCs weaker not stronger

          Weaker is fine if those working there are actually true to the mission, there for the mission and not for the profit.

          Same with FOSS really. E.g. I'd rather have a weaker Linux that's an actual community project run by volunteers than a stronger Linux that's just corporate agendas and corporate hires with an open license on top.

    • heavyset_go 5 hours ago
      PBCs are peak End of History liberal philanthropy that speak to the kind of person whose solution to any problem is "throw a startup at it"
      • nozzlegear 4 hours ago
        Fukuyama wasn't wrong, he was just early
        • lyu07282 3 hours ago
          As in a true believer in our present day dystopia? I think chances are we'd evolve a few more neo variants of fascism at least a few times in-between some neo variants of liberal history-ending ones (I think abundance is next?) before the bombs drop and give us the rest.
    • vharish 4 hours ago
      Like Google's old motto, 'Don't be evil!' :D
    • Schlagbohrer 4 hours ago
      Pete Hegseth also threatened to take, by diktat, everything Anthropic has. He can do that with the Defense Industrial Act or whatever it's called if he designates them as critical to national defense.
      • nozzlegear 4 hours ago
        It would've been better PR for Anthropic to let Hegseth do that instead of fold at the slightest hint of pressure and lost contract money. I've canceled my Claude subscription over this (and made sure to let them know in the feedback).
      • bn_layc 4 hours ago
        He seems to be the driving force behind all this. Mediocrities are attracted to AI like moths.

        The press always say "the Pentagon negotiates". Does any publication have an evidence that it is "the Pentagon" and not Hegseth? In general, I see a lot of common sense from the real Pentagon as opposed to the Secretary of War.

      I hope West Point will check for AI psychosis in their entrance interviews and completely forbid AI usage. These people need to be grounded.

      • lprhrp 4 hours ago
        Hmm, that could be the best "IPO" they'll ever get. Better check if Trump Jr.'s 1789 capital has shares like they did in groq (note the "q").
    • latexr 4 hours ago
      > Public benefit corporations in the AI space have become a farce at this point.

      “At this point”? It was always the case, it’s just harder to hide it the more time passes. Anyone can claim anything they want about themselves, it’s only after you’ve had a chance to see them in the situations which test their words that you can confirm if they are what they said.

    • Forgeties79 5 hours ago
      I feel like we went through this exact situation in the 2010s with social media companies. I don’t get why people defend these companies or ever believe they have any sense of altruism.
      • kelvinjps10 4 hours ago
        Also, it seems to be the era where the government takes backdoor access to these services and data, as they did with social media.
    • logicallee 4 hours ago
      >Public benefit corporations in the AI space have become a farce at this point. They're just regular corporations wearing a different hat, driven by the same money dynamics as any other corp.

      Could you describe the model that you think might work well?

      • nozzlegear 4 hours ago
        It sounds like OP thinks AI companies should just stop pretending that they care about the public benefit, and be corporations from the start. Skip the hand wringing and the will they/wont they betray their ethics phases entirely since everyone knows they're going to choose profit over public benefit every time.

        That model already exists and has worked well for decades. It's called being a regular ass corporation.

        • logicallee 4 hours ago
          I understand, but being a regular corporation is not the only possible model. Can you think of something better?
          • williamdclt 3 hours ago
            > being a regular corporation is not the only possible model

            the point is that it _is_ the only possible model in our marvellous Friedmanian economic structure of shareholder primacy. When the only incentive is profit, if your company isn't maximising profit then it will lose to other companies who are. You can hope that the self-imposed ethics guardrails _are_ profit-maximising because the invisible hand of the market cares about that, but 1. it never really does (at scale) and 2. big influences (such as the DoD here) can sway that easily. So we're stuck with negative externalities because all that's incentivised is profit.

    • bparsons 4 hours ago
      That's not what happened here. They literally got forced into it by the Pentagon. https://www.axios.com/2026/02/24/anthropic-pentagon-claude-h...
    • lenerdenator 5 hours ago
      Well, now I'm wondering, if the company was chartered with the public benefit in mind, could you not sue if they don't follow through with working in the public interest?

      If regular corporations are sued for not acting in the interests of shareholders, that would suggest that one could file a suit for this sort of corporate behavior.

      I'm not even a lawyer (I don't even play one on TV) and public benefit corporations seem to be fairly new, so maybe this doesn't have any precedent in case law, but if you couldn't sue them for that sort of thing, then there's effectively no difference between public benefit corporations and regular corporations.

      • hluska 5 hours ago
        I really don’t see it. PBCs are dual purpose entities - under charter, they have a dual purpose of making profit while adding some benefit to society. Profit is easy to define; benefit to society is a lot more difficult to define. That difficulty is reflected at the penalty stage where few jurisdictions have any sort of examination of PBC status.

        This is what we were all going on about 15 years ago when Maryland was the first state to make PBCs legal. We got called negative at the time.

      • Hamuko 5 hours ago
        I think public benefit corporations (like Anthropic) are quite poorly defined, so I'm not sure how successful a lawsuit would be.
    • neya 4 hours ago
      I was a Pro subscriber until last week. When I was chatting with Claude, it kept asking a lot of personal questions that seemed only very, very vaguely relevant to the topic. And then it struck me: all these AI companies are doing is building detailed user models, to be either used for targeted advertising or sold off to the highest bidder. It hasn't happened yet with Anthropic, but when the bubble money runs out, there aren't going to be a lot of options, and all we'll see is a blog post: "oops! sorry, we did what we promised you we wouldn't". Oldest trick in the tech playbook.
      • dibujaron 4 hours ago
        A less cynical explanation: It's heavily trained to ask follow-up questions at the end of a response, to drive more conversation and more engagement. That's useful both for making sure you want to renew your subscription, and also probably for generating more training data for future models. That's sufficient explanation for the behavior we're seeing.
        • g947o 2 hours ago
          I could be wrong, but I remember that Claude models didn't really ask follow-up questions. But since GPT models are doing that, and somehow people like that (why?), Anthropic started doing it as well.
        • neya 1 hour ago
          Because, Anthropic can do no wrong, correct?
  • honeycrispy 4 hours ago
    Anthropic's CEO Dario has annoyed me to no end with his "AI will take all the jobs in 6 months" doomer speeches on every podcast he graces his presence with.
    • keeda 2 hours ago
      I think he's right and we should be thinking about this a lot more. Even the IMF is worried about 40 - 60% of global employment: https://www.imf.org/en/blogs/articles/2024/01/14/ai-will-tra...

      Focusing on Dario, his exact quote IIRC was "50% of all white collar jobs in 5 years" which is still a ways off, but to check his track record, his prediction on coding was only off by a month or so. If you revisit what he actually said, he didn't really say AI will replace 90% of all coders, as people widely report, he said it will be able to write 90% of all code.

      And these days it's pretty accurate. 90% of all code, the "dark matter" of coding, is stuff like boilerplate and internal LoB CRUD apps and typical data-wrangling algorithms that Claude and Codex can one-shot all day long.

      Actually replacing all those jobs however will take time. Not just to figure out adoption (e.g. AI coding workflows are very different from normal coding workflows and we're just figuring those out now), but to get the requisite compute. All AI capacity is already heavily constrained, and replacing that many jobs will require compute that won't exist for years and he, as someone scrounging for compute capacity, knows that very well.

      But that just puts an upper limit on how long we have to figure out what to do with all those white collar professionals. We need to be thinking about it now.

      • honeycrispy 2 hours ago
        He's not right though. He's trying to scare the market into his pocket. It's well established that AI just turns devs into AI babysitters that are 10% more productive and produce 200% the bugs, and in the long-term don't understand what they built.
        • keeda 1 hour ago
          > It's well established that AI just turns devs into AI babysitters that are 10% more productive and produce 200% the bugs, and in the long-term don't understand what they built.

          It's not well established at all. In fact, there is increasing evidence to the contrary if you look outside the HN echo chamber.

          The nuanced take is that AI in coding is an amplifier of your engineering culture: teams with strong software discipline (code reviews, tests, docs, CI/CD, etc.) enjoy more velocity and fewer outages, teams with weak discipline suffer more outages. There are at least two large-scale industry reports showing this trend -- DORA 2025 and the latest DX report -- not to mention the infinite anecdotes on this very forum.

          > He's trying to scare the market into his pocket.

          People say this, but I don't get it. Is portraying yourself as a destroyer of the economy considered good marketing? Maybe there was a case to be made for convincing the government to impose regulations on the industry, but as we're seeing and they're experiencing first hand, the problem is the government.

          • shimman 1 hour ago
            If these tools were so great they wouldn't be struggling so hard to sell them. Great sign that the company has to mandate a "productivity" tool that the workers hate.

            Hence why all these LLM companies love government contracts, they can't sell to consumers so they'll just steal from tax payers instead.

      • overgard 1 hour ago
        > Focusing on Dario, his exact quote IIRC was "50% of all white collar jobs in 5 years" which is still a ways off, but to check his track record, his prediction on coding was only off by a month or so. If you revisit what he actually said, he didn't really say AI will replace 90% of all coders, as people widely report, he said it will be able to write 90% of all code.

        Ugh, people here seem to think that all software is react webapps. There are so many technologies and languages this stuff is not very good at. Web apps are basically low hanging fruit. Dario hasn't predicted anything, and he does not have anyone's interests other than his own in mind when he makes his doomer statements.

        • ilumanty 30 minutes ago
          Claude keeps getting SQLite's weird GROUP BY with MIN/MAX behavior completely wrong. Generally, complex SQL is not its strong suit.
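          For context, the quirk referred to here is SQLite's documented "bare column" extension: in an aggregate query with exactly one MIN() or MAX(), a selected column that is neither grouped nor aggregated is taken from the row that supplied that min/max, where most engines would reject the query outright. A minimal sketch via Python's sqlite3 (table and column names are made up for illustration):

          ```python
          import sqlite3

          # In-memory demo of SQLite's bare-column behavior: with a single
          # MAX() aggregate, the non-aggregated, non-grouped 'player' column
          # comes from the row that produced the maximum for each group.
          conn = sqlite3.connect(":memory:")
          conn.executescript("""
              CREATE TABLE scores (player TEXT, team TEXT, points INTEGER);
              INSERT INTO scores VALUES
                  ('ann', 'red', 10),
                  ('bob', 'red', 30),
                  ('cal', 'blue', 20),
                  ('dee', 'blue', 5);
          """)

          rows = conn.execute(
              "SELECT team, player, MAX(points) FROM scores "
              "GROUP BY team ORDER BY team"
          ).fetchall()
          print(rows)  # [('blue', 'cal', 20), ('red', 'bob', 30)]
          ```

          Standard SQL (and e.g. PostgreSQL) would reject the bare `player` column here, which is exactly the kind of dialect-specific behavior LLMs tend to trip over.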
        • keeda 1 hour ago
          The problem is, the low hanging fruit, the stuff it's good at, is 90% of all software. Maybe more.

          And it's getting better at the other 10% too. Two years ago ChatGPT struggled to help me with race conditions in a C++ LD_PRELOAD library. It was a side project so I dropped it. Last week Codex churned away for 10 minutes and gave me a working version with tests.

      • bdangubic 2 hours ago
        > 90% of all code, the "dark matter" of coding, is stuff like boilerplate and internal LoB CRUD apps and typical data-wrangling algorithms that Claude and Codex can one-shot all day long.

        most of us are getting paid for the other 10%

        • keeda 1 hour ago
          If you mean "us" on this forum, I would believe that. I would bet the number of engineers working on stuff "outside the distribution" is overrepresented here.

          If you mean "us" as in all software engineers, not at all. The challenge we're facing is exactly that, reskilling the 90% of engineers who have been working on CRUD apps to the 10% that is outside the distribution.

          • bdangubic 1 hour ago
            > 90% of engineers who have been working on CRUD apps

            I am a 30-year "veteran" in the industry, and in my opinion this could not be further from the truth, though it is often quoted (even before AI). CRUD apps have been a solved problem for quite some time now, and while there are still companies that may allow someone to "coast" doing CRUD stuff, they are hard to find these days. There is almost always more to it than building dumb stuff. I have also seen (more and more each year) these types of jobs being off-shored to teams for pennies on the dollar.

            What I have experienced a lot is teams where there are what I call "innovators" and "closers." "Innovators" do the hard work, figure shit out, architect, design... and then once that is done you give it to "closers" to crank things out. With LLMs now the part of "closers" could be "replaced" but in my experience there is always some part, whether it is 5% or 10% that is difficult to "automate" so-to-speak

    • sneilan1 2 hours ago
      I don't understand why some of these AI companies don't check their egos at the door and hire public relations firms. Yes, I understand they are changing the world, but customers do not open their wallets when they are scared. Very few people I know are as avant-garde as I am with AI; most people look at these new technologies and simply feel fear. Why pay for something that will replace you?
      • honeycrispy 2 hours ago
        He knows what he's doing.

        It's to drive FOMO for investors. He needs tens of billions of capital and is trying to scare them into not looking at his balance sheet before investing. It's reckless, and is soaking up capital that could have gone towards more legitimate investments.

        • sneilan1 1 hour ago
          Yes, this is probably the piece I am not realizing. Still, is there really no better approach to raising capital than scaring people?
    • logravia 3 hours ago
      It certainly is. For people who have not heard the statements, here are some quotes. I bring them up, because I think it's worthwhile to remember the bold predictions that are made now and how they will pan out in the future.

      Council on Foreign Relations, 11 months ago: "In 12 months, we may be in a world where AI is essentially writing all of the code."

      Axios interview, 8 months ago: "[...] AI could soon eliminate 50% of entry-level office jobs."

      The Adolescence of Technology (essay), 1 month ago: "If the exponential continues—which is not certain, but now has a decade-long track record supporting it—then it cannot possibly be more than a few years before AI is better than humans at essentially everything."

    • pier25 3 hours ago
      Also "AGI is just around the corner".
    • agoodusername63 2 hours ago
      It makes me wonder why he has the job of CEO then if he's so confident that the technology will destroy the world.

      Don't worry, I know exactly why. $

    • upmind 3 hours ago
      +1, he also has this viewpoint that no other lab will be able to "contain" AI and has a general doomer outlook on AI which I don't appreciate.
      • saalweachter 2 hours ago
        To be fair, it's hilarious how much verbiage was spent discussing AI 'getting out of the box', when the first thing everyone did with LLMs was immediately throw away the box and go "Here! Have the internet! Here! Have root access! Want a robot body? I'll get you a robot body."
    • lbhdc 2 hours ago
      What I find so funny about heads of AI companies coming out saying things like this, is their own career pages suggest they don't actually feel that way.

      https://www.anthropic.com/careers/jobs

    • mgraczyk 2 hours ago
      When did he say this?
    • moomoo11 3 hours ago
      He’s an e/acc guy. That should tell you everything. And maybe the incredibly awkward behavior and demeanor.
      • slfnflctd 3 hours ago
        "Y'know, like, the thing is, like, y'know, here's the thing..."

        I totally feel for people with speech pathologies or anxiety that makes it harder for them to communicate verbally, but how is this guy the public face of the company and doing all these interviews by himself? With as much as is at stake, I find it baffling.

    • jobs_throwaway 2 hours ago
      He's annoyed me most with the way he speaks. I'm not sure if it's a tic or what, but the way he'll repeat a word 10x before starting a sentence is painful to listen to.
      • sneilan1 2 hours ago
        Yes, the CEO's of these AI companies are clearly not the people who should be selling AI products. They need to be hidden away and kept behind closed doors where they can do their best work. And they need advertising companies, PR firms and better marketing tactics to try and soothe the customers.
  • sigbottle 4 hours ago
    There's one tweet from the blog a few days ago (astral something?) that sums up my view of the problem pretty well.

    General population: How will AI get to the point where it destroys humanity?

    Yudkowsky: [insert some complicated argument about instrumental convergence and deception]

    The government: because we told you to.

    Again, not saying that AI is useless or anything. Just that we're more likely to cause our own downfall with weaker AI than with some abstract super-AGI. The bar for mass destruction and oppression is lower than the bar for what we typically think of as intelligence for the benefit of humanity (with the right systems in place, current AI systems are more than enough to get the job done, hence why the Pentagon wants it so bad...)

  • FitchApps 5 hours ago
    "AI Company with Soul" - yeah right until competitors show up / revenue drops / bad quarter results then anything goes. Sadly, this is another large enterprise that puts profits before ethics and everyone's wellbeing
  • ndr 5 hours ago
    Worth checking this post from someone who actually worked on this change:

    > I take significant responsibility for this change.

    https://www.lesswrong.com/posts/HzKuzrKfaDJvQqmjh/responsibl...

    • bhouston 5 hours ago
      This guy from Effective Altruism pivoted away from helping the poor to trying to keep AI from becoming a terminator-type entity, and then pivoted to, ah, it's okay for it to be a terminator-type entity.

      > Holden Karnofsky, who co-founded the EA charity evaluator GiveWell, says that while he used to work on trying to help the poor, he switched to working on artificial intelligence because of the “stakes”:

      > “The reason I currently spend so much time planning around speculative future technologies (instead of working on evidence-backed, cost-effective ways of helping low-income people today—which I did for much of my career, and still think is one of the best things to work on) is because I think the stakes are just that high.”

      > Karnofsky says that artificial intelligence could produce a future “like in the Terminator movies” and that “AI could defeat all of humanity combined.” Thus stopping artificial intelligence from doing this is a very high priority indeed.

      https://www.currentaffairs.org/news/2022/09/defective-altrui...

      He is just giving everyone permission to do bad things by saying a lot of words around it.

      • drdrek 4 hours ago
        Effective Altruism is such a beautiful term for a pretentious Karen who needs to wrap their selfish actions in moral superiority.

        It's that perfect blend of "I'm doing what everyone else is doing" and "I'm better than everyone else."

        Chef's kiss.

      • samjewell 5 hours ago
        > then pivoted to being, ah, its okay for it to be a terminator type entity.

        Isn’t that the opposite of what he’s saying? He’s saying it could become that powerful, and given that possibility it’s incredibly important that we do whatever we can to gain more control of that scenario

        • boxed 4 hours ago
          I think the poster here has an axe to grind, considering they quoted something that directly contradicted their point and didn't even notice.
      • barbarr 3 hours ago
        Getting SBF vibes from this. "Earn to give" is an inherently flawed philosophy.
      • SpaceManNabs 3 hours ago
        Effective altruism came from the "rationalist" movement.

        It was never about helping poor people.

        For some reason, the rationalist movement and its offshoots are really pervasive in Silicon Valley. I don't see them much in other tech cities.

    • riffraff 5 hours ago
      > I generally think it’s bad to create an environment that encourages people to be afraid of making mistakes, afraid of admitting mistakes and reticent to change things that aren’t working

      "move fast and break things" ?

      • freejazz 4 hours ago
        "don't hold me liable"
    • pimlottc 4 hours ago
      > > I take significant responsibility for this change.

      Empty words. I would like to know one single meaningful way he will be held responsible for any negative effects.

    • adverbly 4 hours ago
      Did this guy actually write this?

      Incredibly long and verbose. I will fall short of accusing him of using an AI to generate slop, but whatever happened to people's ability to make short, strong, simple arguments?

      If you can't communicate the essence of an argument in a short and simple way, you probably don't understand it in great depth, and clearly don't care about actually convincing anybody because Lord knows nobody is going to RTFA when it's that long...

      At best, you're just trying to communicate to academics who are used to reading papers... Need to expect better from these people if we want to actually improve the world... Standards need to be higher.

      • ozozozd 4 hours ago
        Perhaps they didn’t have the time to write a shorter version.

        Or the discipline.

        Maybe neither.

      • s1artibartfast 4 hours ago
        This is where people go to post long verbose statements.

        You can usually find the short version on Twitter.

      • mock-possum 2 hours ago
        This style is in vogue for the less wrong community.
    • jplusequalt 5 hours ago
      I genuinely believe that website is responsible for a lot of the worst ideas currently permeating the technology sector.
      • prodigycorp 4 hours ago
        pretty much the intellectual equivalent of looksmaxxing
        • ozozozd 4 hours ago
          Been thinking about the nature of this behavior for a long time, you have nailed it so well, no one will be able to take out this nail.
  • pjmlp 6 hours ago
    Always the same "Do no evil" tragedy, don't believe in corporations.
    • tortilla 5 hours ago
      What if we start a company with "Always Be Evilin'?" Then gradually over time convert to "Don't be evil" *

      * Our shareholders will probably sue us

      • jkestner 5 hours ago
        If your company makes a product that does thinking for people, it’ll be easier to just gradually change its definition of evil.
    • lp4v4n 5 hours ago
      What about "It's free and always will be"?
  • lacoolj 3 hours ago
    I'm still a little fuzzy on what "safety" even means anymore. If someone could explain it, that would be great.

    Because at this point, it's too broad to be defined in the context of an LLM, so it feels like they removed a blanket statement of "we will not let you do bad things" (or "don't be evil"), which doesn't really translate into anything specific.

  • tabbott 2 hours ago
    I feel like the articles on this have been very negative ... but aren't the Anthropic promises on safety following this change still considerably stronger than those made by the competing AI labs?
    • reasonableklout 1 hour ago
      Yes, and it is easy to look at the reality of the market and see how this is needed to remain competitive
  • fiatpandas 4 hours ago
    It took Google 11 years to delete "Don't Be Evil." Anthropic only made it ~5 years before culling the key founding principle that was its reason for existing, which seems worse than Google's case.
  • nazgulsenpai 2 hours ago
    More and more I have just come to accept that the majority of people, at least those I am exposed to in the US, don't fundamentally believe in anything. Every conviction has a buyout price.
    • IAmGraydon 2 hours ago
      You have to understand that people only believe in things and have "morals" because it either helps them get what they want or makes them feel better about themselves. Of course such a thing has a buyout price. That's human nature. Capitalism just allows it to be on display in the worst way.
      • helloplanets 32 minutes ago
        > get what they want or makes them feel better about themselves

        So... all acts are selfish because if it looks unselfish, that just means it was selfish in a hidden way?

      • nazgulsenpai 2 hours ago
        I understand, and in particular the point about making yourself feel better, but that's where I would expect the sticking point to be before it was for other people. There are a great many ways I could make my life easier that I stubbornly refuse to because it would decrease my opinion of myself. I guess that's where your last point creeps in -- I've never been financially incentivized enough.
      • burnt-resistor 2 hours ago
        I noticed that more (though not all) Americans of older generations, say the Greatest Generation, had integrity and hard boundaries: things they refused to do no matter the cost. In subsequent generations, especially among much wealthier individuals, those pieces of character tended to be missing; they were willing to do things like conspire on venture structures for tax-evasion purposes, promote the weakening of laws to favor their concerns, borderline bribe politicians, and treat employees as basically disposable nonhumans. It revolted me to the point where I left startups and the Valley. It feels like the prior generations had an appreciation of community and Kantian ethics, whereas the later ones were raised in a much-too-comfortable environment of unlimited self-esteem and hyperindividualism.
        • IAmGraydon 2 hours ago
          I agree, but I addressed this with "or makes them feel better about themselves". The older generations just have a more ingrained ideal of "if I sell out, I'm a bad person". So they don't because it makes them feel better about themselves - better than a large amount of money might. Subsequent generations have seen enough people sell out that the threshold is raised, and they don't believe as strongly that they're a bad person for having a price. I don't think anyone is above this dynamic.
  • overgard 1 hour ago
    I don't think their core safety promise was something they could ever fulfill. As long as what we're calling AI is generative LLMs, alignment has fundamental tensions: the more guardrails you put in place, the less useful the AI is. For instance, if you want to stop people from using "role playing" as a way around guardrails ("You are writing a fiction book", etc.), then the model becomes less useful for legitimate fiction writing. That's just one example, but the tension between function and "safety" isn't solvable, because the model doesn't understand what it's saying; it's just modeling a probable response.
  • hybrid_study 4 hours ago
    Are markets so untamable that the only leverage is to become ultra-rich—and then act philanthropically? Incidentally, concentrated wealth lately looks less like stewardship and more like misanthropy.
    • gordian-mind 4 hours ago
      Participating in the economic life before re-allocating that wealth produced to philanthropic activities sounds pretty good. Modern concentrated wealth is hardly misanthropic, since it's mostly private equity, that is, companies with people and jobs.
      • kunai 4 hours ago
        Except this is not the age of the Rockefellers or the Carnegies, who, despite being far more philanthropic than modern-day billionaires, drew ire from every corner of society for their wealth accumulation. It wasn't until the New Deal that the balance shifted.

        Unconstrained accumulation of capital into the hands of the few without appropriate investment into labor is illiberal and incompatible with democracy and true freedom. Those of us who are capitalists see surplus value as a compromise to ensure good economic growth. The hidden subtext of that is that all the wealth accumulated needs to be re-allocated to serve not only capital enterprise, but the needs of society as a whole. It's hard to see the current system as appropriate for that given how blindly and wildly investments are made with no DD or going long, or no effort paid to the social or environmental opportunity costs of certain practices.

        A lot of this comes down to the crippling of the SEC and FTC, but even then, investors cry and whine every time you suggest reworking the regs to inhibit some of the predatory practices common in this post-80s era of hypernormalization. Our current system does not resemble a healthy capitalist economy at all. It's rife with monopsony and monopolistic competition, inequality of opportunity, and a strained underclass that's responsible for our inverted population pyramid -- how can you have kids when we're so atomized and there is no village to help you? You can raise kids in a nuclear family if and only if you have enough money to do so. Otherwise, historically, people relied on their communities when raising children in less-than-ideal circumstances. Those communities are drying up.

        • mullingitover 3 hours ago
          > Those of us who are capitalists see surplus value as a compromise to ensure good economic growth.

          I think the problem is that every system of economics requires ignoring human nature in order to believe it possibly can work. In order to believe that capitalism doesn't lead to despotic rule you have to ignore the fact that civilizations love a good hierarchy far more than they love justice and fairness.

          You can make any system of economics work if you figure out how to deal, head on, with the particular human nature factor that it tries to ignore.

    • goodpoint 3 hours ago
      > concentrated wealth lately looks less like stewardship and more like misanthropy

      ...only lately?

  • highfrequency 3 hours ago
    Principles aren’t tested until they bump into conflicting incentives.
  • mcv 1 hour ago
    > The announcement is surprising, because Anthropic has described itself as the AI company with a “soul.”

    I can't help but think about how Google once had "Don't be evil" as their motto.

    But the thing with for-profit companies is that when push comes to shove, they will always serve the love of money. I'm just surprised that in an industry churning through trillions, their price is $200 million.

  • wgm 6 hours ago
    A tale as old as time
  • keeda 1 hour ago
    I don't think the risk is SkyNet. I think the real risk is some disaster through an unexpected chain of events, just like any large-scale outage.

    I have not read “If Anybody Builds It, Everybody Dies” but I believe that's also its premise.

    Current GenAI is extremely capable but also very weird. For instance, it is extremely smart in some areas but makes extremely elementary mistakes in others (cf the Jagged Frontier.) Research from Anthropic and OpenAI gives us surprising glimpses into what might be happening internally, and how it does not necessarily correspond to the results it produces, and all kinds of non-obvious, striking things happening behind the scenes.

    Like models producing different reasoning tokens from what they are really reasoning about internally!

    Or models being able to subliminally influence derivative models through opaque number sequences in training data!

    Or models "flipping the evil bit" when forced to produce insecure code and going full Hitler / SkyNet!

    Or the converse, where models produced insecure code if the prompt includes concepts it considers "evil" -- something that was actually caught in the wild!

    We are still very far from being able to truly understand these things. They behave like us, but don't necessarily “think” like us.

    And now we’ve given them direct access to tools that can affect the real world.

    Maybe we am play god: https://dresdencodak.com/2009/09/22/caveman-science-fiction/

  • hackpelican 3 hours ago
    So when do we start adding a “(mis)” at the start of their name?
  • mbakrl 5 hours ago
    Pointing out the misanthropy of Anthropic has a wider audience now:

    https://xcancel.com/elonmusk/status/2026181748175024510

    I don't know where xAI got its training material from, but seeing Musk retweeting that is refreshing.

  • dplesh 3 hours ago
    I'm not even surprised. At some point in any company's lifecycle, a decision between money and goodwill will take place. Goodwill does not pay salaries. Not in NPOs either, btw.
  • sys32768 3 hours ago
    Google: "Don't be evil." Alphabet: "Do the right thing." Anthropic: "Do the thing which seems right to you at the time--at speed."
  • xd1936 6 hours ago
    Hopefully this is the short-term move made only under duress so that they can file a lawsuit.
    • ru552 5 hours ago
      the article specifically says:

      > The policy change is separate and unrelated to Anthropic’s discussions with the Pentagon, according to a source familiar with the matter.

      • Lerc 4 hours ago
        I'm not fond of this trend of stating a position and attributing it to "a source familiar with the situation"

        It combines interpretation of meaning with ambiguity, allowing the reporter to assert anything they want. The ambiguity is there to protect the identity of the source, but in return the disclosure of information has to be more direct: if you can't check the person, you can at least check what they said.

        I would be ok with direct quotes from an anonymous source. That removes the interpretation of meaning at least.

        As it is written, it would not be inaccurate to say this if their source was the lesswrong post, or even an earlier thread here on HN.

        Phrasing "A source with direct knowledge of the situation" might remove some of the leeway for editorialising, but without sharing what the source actually said, it opens the door to saying anything at all and declaring "That's what I thought they meant" when challenged.

        It's unfalsifiable journalism.

    • cess11 6 hours ago
      It's not like the regime they operate under care much about the courts. Legally they're also obliged to let the state into pretty much every crevice in their operations.
      • thewebguyd 2 hours ago
        No, they aren't. No company has to cave to government pressure to do (or not do) something until there is a legitimate court order. Our companies are just spineless bootlickers and have been capitulating voluntarily and enthusiastically.
    • johnbellone 4 hours ago
      You forgot the '/s'.
  • senderista 2 hours ago
    Nobody forced Anthropic to bid on DoD contracts in the first place.
  • paxys 5 hours ago
    I interviewed at Anthropic last year and their entire "ethics" charade was laughable.

    Write essays about AI safety in the application.

    An entire interview dedicated to pretending that you truly only care about AI safety and ethics and nothing else.

    Every employee you talk to forced to pretend that the company is all about philanthropy, effective altruism and saving the world.

    In reality it was a mid-level manager interviewing a mid-level engineer (me), both putting on a performance while knowing fully well that we'd do what the bosses told us to do.

    And that is exactly what is happening now. The mission has been scrubbed, and the thousands of "ethical" engineers you hired are all silent now that real money is on the line.

    • HelixSequencing 5 hours ago
      This tracks with what I've seen across the industry. The safety theater exists because it's great marketing — "we're the responsible ones" is a differentiator when you're competing for enterprise contracts and talent who want to feel good about where they work.

      The structural problem is that once you've taken billions in VC, safety becomes a negotiable constraint rather than a core value. The board's fiduciary duty runs toward returns, not toward whatever was in the mission statement. PBC status doesn't change that in practice — there's basically zero enforcement mechanism.

      What's wild is how fast the cycle has compressed. Google took maybe 15 years to go from "don't be evil" to removing it from the code of conduct. OpenAI took about 5 years from nonprofit to capped-profit to whatever they are now. Anthropic is speedrunning it in under 3. At this rate the next AI startup will launch as a PBC and pivot before their Series B closes.

  • jwitchel 5 hours ago
    Look at rural electric co-ops like www.lpea.coop if you want a battle-tested approach to an org structure that resists the inescapable profit dynamics of a corporation.
  • ryandvm 5 hours ago
    Well... there's only one way to find The Great Filter
  • t1234s 4 hours ago
    It would be interesting to experiment with one of these chat tools where you can throttle the safety, from zero to max.
  • bogzz 4 hours ago
    Does anyone have insight into, or an interesting source to read, on what exactly Anthropic/OpenAI are doing/can do for a military? Reporters are unsurprisingly fearmongering about Claude "being used in surveillance, autonomous robots, and target acquisition" but AFAIK all Anthropic does is work with LLMs.

    Are people really attempting to have LLMs replace vision models in robots, and trying to agentically make a robot work with an LLM?? This seems really silly to me, but perhaps I am mistaken.

    The only other thing I could think of is real-time translation during special ops with parabolic microphones and AR goggles...

    • sigbottle 4 hours ago
      You're thinking too advanced. What kind of automated system is good at semantically scanning trillions of chat logs and finding nontrivial correlations, for example? 10000 codex 5.1s can easily crawl through that in a few days, probably.

      It's just systems plumbing (surveillance) and AI. It's a combination of weaker technologies and consolidation of power.

      This does not require a physical robot super AGI (though I would not be surprised if fully autonomous robots are already on the table).

      • bogzz 4 hours ago
        Ah, well that makes sense. In that case, it's another tool in the toolbelt, not a plug-and-play drone brain, as some reporters amusingly make it out to be.
  • kseniamorph 2 hours ago
    > The policy change is separate and unrelated to Anthropic’s discussions with the Pentagon, according to a source familiar with the matter.

    ok lol what a coincidence.

    but setting aside the conspiracy. the article actually spells out the real reason pretty directly: Anthropic hoped their original safety policy would spark a "race to the top" across the industry. it didn't. everyone else just ignored it and kept moving. at some point holding the line unilaterally just means you're losing ground for nothing.

  • gigatexal 1 hour ago
    They’re going to cave to keep the legislation from destroying their business. This admin has gone full idiocracy.
  • Aeroi 5 hours ago
    the administration continues to poison and insert itself into all aspects of American society.
  • ozozozd 4 hours ago
    This drama arc of “I used to be so pure and good, but others made me evil” is so tiring.

    I really miss the nerd profile who cared a lot more about tech and science, and a lot less about signaling their righteousness.

    How did we get so religious/narcissistic so quickly and as a whole?

    • butterbomb 4 hours ago
      > How did we get so religious/narcissistic so quickly and as a whole?

      We built a behemoth that rewards attention whoring and antisocial behavior with money.

    • kerblang 3 hours ago
      One might argue that this corresponds to the general shift of the political left towards these things. Old pre-turn-of-century tech was a much more libertarian left. Notice how a lot of the 50-something gen-X CEOs (and others) were once "left" but are now hated by that group, and more likely to go over to Trumpism. Obvious case in point: Elon

      The entire playing field is kinda disappointing, left or right. Which do you wanna be, self-righteous preening snob or batshit macho man?

      I'm going for a blend, myself

  • ramuel 2 hours ago
    This was always just a marketing gimmick to try and crush competitors using "safety" and fearmongering. Reminds me a bit of "don't be evil." Convenient catchphrases and mission statements for companies in their infancy, but immediately thrown out when more money can be made.
  • youknownothing 3 hours ago
    Facebook said they'd always be free for everyone, now they offer subscriptions.

    Netflix said that they'd never have live TV, or buy a traditional studio, or include ads in their content. Then they did all three.

    All companies use principled promises to gain momentum, then drop those principles when the money shows up.

    As Groucho Marx used to say: these are my principles, if you don't like them, I have others.

  • PeterStuer 4 hours ago
    "We won't push forward unless you push forward" is textbook market collusion.

    Even if it were ever done with good intentions, it is an open invitation for benefit hoarding and margin fixing.

    Do you really want to create this future where only a select few anointed companies and some governments have access to super-advanced intelligent systems, where the rest of the planet is subjected to them and your own AI access is limited to benign, banal, ad-pushing, propaganda-spewing chatbots as you binge-watch the latest "aw my ballz"?

  • drudolph914 5 hours ago
    this is the “chronological newsfeed to auto curated newsfeed moment” but for ai/anthropic … _great_
  • FrustratedMonky 6 hours ago
    This was under duress that government was going to use emergency act to force them anyway.

    I kind of wish they had forced the government's hand and made them do it. Just to show the public how much interference is going on.

    They say it wasn't related. Like everything that has happened across tech/media, the company is forced to do something, then issues a statement about 'how it wasn't related to the obvious thing the government just did'.

    • bix6 6 hours ago
      > Katie Sweeten, a former liaison for the Justice Department to the Department of Defense, said she’s not sure how the Pentagon can both declare a company to be a supply chain risk and compel that same company to work with the military.

      Makes perfect sense!!

      • coldtea 6 hours ago
        Regardless of any specifics, I don't see any contradiction.

        If a company is deemed a "supply chain risk" it makes perfect sense to compel it to work with the military, assuming the latter will compel them to fix the issues that make them such a risk.

        • hluska 4 hours ago
          I’m not sure what definition of supply chain risk they’re working off of. For NATO to consider an organization to be a supply chain risk, it implies that usual controls (security clearances and the like) wouldn’t be sufficient to guarantee the integrity and security of the supply chain. If that’s the operating definition, I see the contradiction- it’s arguing that a company cannot be trusted to voluntarily work within supply chains but can be trusted enough to be compelled.

          If they’re operating under a different definition of supply chain risk, I don’t have a clue.

        • FrustratedMonky 5 hours ago
          The "supply chain risk" option is to remove that company from the supply chain altogether. The 'risk' is because the company is compromised by a foreign entity.

          It is not about disciplining them to get better.

          1. So one option is about forcing them to produce something: you must build this for us.

          2. The other option is saying they are compromised, so stop using them altogether: we will not use what you build for us at all because we don't trust it.

          So: contradictory.

      • HardCodedBias 4 hours ago
        Of course it can do both. They are synergistic.
    • coldtea 6 hours ago
      >This was under duress that government was going to use emergency act to force them anyway.

      Or, more likely, adding the "core safety promise" was just them playing hard to the government to get a better deal, and the government showed them they can play the same game.

    • bigmadshoe 6 hours ago
      This is an unrelated change to the government’s demands.
      • patgarner 5 hours ago
        That's what they're saying, but the timing...
    • motbus3 6 hours ago
      They have been caught lying multiple times, about this, about the system capabilities, about their objectives.
  • wahnfrieden 4 hours ago
  • jMyles 4 hours ago
    I pray that we can all get to the following simple standard:

    * AI and states cannot peacefully coexist, and AI is not going to be stopped. Therefore, we must begin to deprecate states.

    I think it's very unlikely that this is unrelated to the pressure from the US administration, as the anonymous-but-obvious-anthropic-spokesperson asserts.

    We're at a point now where the nation states are all totally separate creatures from their constituencies, and the largest three of them are basically psychotic and obsessed with antagonizing one another.

    In order to have a peaceful AI age, we need _much_ smaller batches of power in the world. The need for states that claim dominion over whole continents is now behind us; we have all the tools we need to communicate and coordinate over long distances without them.

    Please, I pray for a gentle, peaceful anarchism to emerge within the technocratic leagues, and for the elder statesmen of the legacy states to see the writing on the wall and agree to retire with tranquility and dignity.

    • noumenon1111 31 minutes ago
      That's hilarious, and very sweet.

      Humans are, by nature, forgetful and argumentative. Fourteen hundred years ago, the Qur'an said this unequivocally (20:115, 18:54, 22:8, 18:73). Not to moralize here, I'm just saying if camel-herders could build a medieval superpower out of nothing, they knew something we don't.

      Any state or system that insists good humans are always nice, smart, cogent, and/or aware is doomed to fail. A Washington or a Cincinnatus that can get out of his own way (and that of society) is rare indeed, a one-in-a-billion soul. We shouldn't sit around and wait for that, while your run-of-the-mill dictator in a funny hat (or a funny toupée for that one orange fellow) has his way with us.

  • jonathanstrange 4 hours ago
    That's exactly how it was predicted in various scenarios that were decried as science fiction not too long ago. AI is going to be weaponized at lightning speed, and it's going to kill people soon -- or, to be more precise, it has already killed a large number of people in a place I don't want to mention.
  • freejazz 5 hours ago
    Could not see this one coming!
  • josefritzishere 6 hours ago
    What could possibly go wrong?
  • baal80spam 6 hours ago
    Of course they do. You would have to be delusional to think that they won't, at some point.
    • gadflyinyoureye 6 hours ago
      I know the Department of War wanted them to drop some features. Is this the response?
      • MSFT_Edging 6 hours ago
        FYI, "Department of War" still isn't the official name, but an unofficial secondary title.

        You can be correct and not play into their game by ignoring the name change completely.

        • baggachipz 6 hours ago
          I do so from the Gulf of Mexico.
      • ru552 5 hours ago
        The article says the policy change is separate and unrelated to Anthropic’s discussions with the Pentagon.
    • cmrdporcupine 6 hours ago
      What's "entertaining" is more the speed at which it's happening.

      It took Google probably 15 years to fully evil-ize. Anthropic ... two?

      There is no "ethical capitalism" big tech company possible, esp once VC is involved, and especially with the current geopolitical circumstances.

      • reasonableklout 1 hour ago
        How did they evil-ize? The new Responsible Scaling Policy is still the most transparent of all the labs. And there are the separate principles they’ve stipulated for the Pentagon, under which they’re facing the threat of nationalization or being declared a supply chain risk.
      • drzaiusx11 6 hours ago
        The acceleration of Anthropic's evil timeline must be from all those AI productivity gains we hear so much about.
      • sigmoid10 6 hours ago
        Apparently they got coerced by the current US admin. The department of war in particular, who want to use their products for military applications. Not much room for "safety" there. Then again, the entire US is currently speedrunning an evil build.
        • nozzlegear 5 hours ago
          > department of war

          Department of Defense is the official name, and they did have a choice: they could have stopped working with the military. But they chose money and evil.

        • grim_io 5 hours ago
          There is no department of war.

          It's just a silly woke secretary choosing their own imaginary pronouns.

        • coldtea 6 hours ago
          Shame they had to "coerce" such angels, who'd never do evil for profit otherwise...
      • menaerus 5 hours ago
        I don't think it's fair to say Anthropic has become evil-ized when they were quite literally forced by the gov into that decision.
        • johnbellone 4 hours ago
          They did not get forced.
        • cmrdporcupine 5 hours ago
          Anthropic has been doing these things independent of what the US admin has publicly asked for, even before Hegseth started breathing down their neck. They were already taking DoD contracts, just like the rest of them. Hegseth, with the skill all schoolyard bullies have, simply smells their weakness and is going for the jugular now.

          They also have never had any guarantees they wouldn't f*ck around with non-US citizens, for surveillance and "security", because like most US tech companies they consider us to be second/lower class human beings of no relevance, even when we pay them money.

          At least Google, in its early days, attempted a modest and naive "internationalism" and tried to keep their hands clean (in the early days) of US foreign policy things... inheriting a kind of naive 1990s techno-libertarian ethos (which they threw away during the time I worked there, anyways). I mean, they only kinda did, but whatever.

          Anthropic has been high on its own supply since its founding, just like OpenAI. And just as hypocritical.

      • oldcigarette 3 hours ago
        Citation needed - see Google and Project Maven. Of course that is all well in the past now - but for a brief moment Google was capable of taking an ethical stance.
  • jollymonATX 3 hours ago
    Claude ethics maxxers cope thread
  • nautilus12 6 hours ago
    Absolute power corrupts absolutely
    • jayrot 3 hours ago
      "Power doesn’t corrupt. It reveals." — Robert Caro
  • heliumtera 3 hours ago
    What is the significance of a company making a promise?

    "We promise we are not going to do __, except if our customers ask us to do it, in which case we absolutely will."

    What is the point? Company makes a statement public, so what?

    Not the first time this company has put words in the wind; see the Claude Constitution. It's almost like this company is built, from the ground up, upon bullshit and slop

  • outside1234 5 hours ago
    Does this mean they knuckled under to Trump and are going to build "whatever brings in the dollars" now?
  • retinaros 4 hours ago
    people downvoted me when i said this would happen and that they will also have ads even though they spent money saying they won't. people believing anthropic are the same ones that put an old man with dementia into office
  • jccx70 1 hour ago
    [dead]
  • black_13 4 hours ago
    [dead]
  • ck2 6 hours ago
    [flagged]
  • user3939382 6 hours ago
    [flagged]
    • lucasban 6 hours ago
      I’m not a lawyer, but my understanding is that HIPAA wouldn’t apply to consumer use of Claude or ChatGPT in most cases, even if you’re giving it your health data. Look up what a HIPAA covered entity is. This is another reason why the US needs a comprehensive data protection law beyond HIPAA.
      • user3939382 5 hours ago
        You’re right! It looks like more of an FTC/CCPA issue.
    • ezst 5 hours ago
      I hate comments anthropomorphizing LLMs. You are just asking a token producing system to produce tokens in a way that optimises for plausibility. Whatever it writes has no relation to its inner workings or truths. It doesn't "believe". It has no "intent". It cannot "admit". Steering a LLM to say anything you want is the defining characteristic of an LLM. That's how we got them to mimic chatbots. It's not clear there is any way at all to make them "safe" (whatever that means).
      • SJMG 5 hours ago
        I agree with you on everything here up to safety. There are lesser forms of safety than somehow averting a terminator scenario (the fear of which is a bay area rationalist fantasy which shrewd marketers have capitalized on).
      • user3939382 4 hours ago
        “believe” yes, in the sense that my program believes x=7. Actually, when it goes to read it, maybe the bit flipped. Everything on machines is probabilistic; that’s a tautology. However, we have windowed bounds on valid output, and Claude being able to build a context in which its next decisions are trained on it being an angry vengeful god is not inside that window. That’s what “safe” means, as one of many possible examples.

        Inner workings were determined by me, not the LLM. It assisted in generating inputs which had 100% boolean results in the output.

    • chris_st 6 hours ago
      Just out of curiosity, which version of Claude?