31 comments

  • p1necone 7 hours ago
    This is such a weird prompt even without the file edit misunderstanding. Analyze if it's malware how exactly? On every single file that gets read? Doing that with enough diligence to be meaningful is going to at least like 2x the amount of processing needed, and fill the context with a bunch of tangential reasoning about malware patterns.

    This smacks of dumb vibe coding. "I got told to make sure claude couldn't be used to develop malware, ok 'claude pls no develop malware'"

    • whateveracct 6 hours ago
      It's proof that Anthropic is high on their own supply.

      I've heard them described as data science script kiddies with inflated egos and it seems spot-on.

      • 2001zhaozhao 4 hours ago
        That is exactly the impression I get from the claude code team, and by extension some of their recent launches like Cowork and Design. And of course with the growth team or whoever is in charge of the subscription and quota side of things.

        They just do the basic experiment -> ship workflow over and over again, doing whatever optimizes their product in the short term, and never seem to step back and think about the full long-term impact of their changes. They evidently seem to not even consider immediate regressions or negative blowback from users if it's not within the area of expertise of the guy who ships the change.

        That is despite their other teams (especially alignment) having a track record of being fairly well thought-out and intelligent.

        To the guys at Anthropic's product teams, every problem is a data science problem that you slap an A/B test onto, and they seem to think that the A/B test is all that's needed, and actual verification and thinking things through is overrated af. That's what leads to countless regressions in Claude Code as well as removing claude code from the pro plan in their product page for a few hours (lol).

        • ffsm8 4 hours ago
          Tbf, their harness was surprisingly ahead of the curve for most of the last year..

          At this point, the difference is mostly made up of issues like the one the OP has, so you're likely better off using e.g. pi (-agent) and writing your own custom skills and extensions (or any of the other harnesses the providers create; even copilot-cli has gotten decent nowadays)

          • lelanthran 2 hours ago
            > Tbf, their harness was surprisingly ahead of the curve for most of the last year..

            Do a `s/harness/software/` on that statement, and it is going to describe most companies shipping AI-written software.

            > At this point, the difference is mostly made up of issues like the one the OP has, so you're likely better off using e.g. pi (-agent) and writing your own custom skills and extensions (or any of the other harnesses the providers create; even copilot-cli has gotten decent nowadays)

            They (AI-written software) are all going to be ahead in some way, until they aren't because they hit the practical limits of codebase size that can be reasonably understood by an LLM.

          • karlgkk 3 hours ago
            > Tbf, their harness was surprisingly ahead of the curve for most of the last year..

            Yeah and now it’s not. We’ll see if they have the product ability to retake the lead, although I suspect not.

      • deaux 5 hours ago
        What a joke. If "Anthropic is just a bunch of script kiddies" then everyone is, considering the dozens of billions poured into beating their models, yet they're still the go-to for coding and have been for quite a while now. Just a nonsensical thing to say.
      • stingraycharles 5 hours ago
        What is this reply even? What's wrong with the vibe coding community? They have such ridiculous takes; it reminds me a lot of the extreme stances from the gaming community. Terminology also seems to come from there, “nerfing” etc.
        • balamatom 53 minutes ago
          >what’s wrong with the vibe coding community

          For starters, the vibes.

          Vibe coding, like Web3 before it (like Web 2.0 before it, like the dotcom boom before that - what preceded?) - harnesses the kind of focused attention with which gamers hook their brains into portals to virtual worlds - and directs all that bargain-basement wetware compute towards some obscured "real-world" goal instead. (See also: CADT development.)

          Hyperscale these very inefficient but very dependable almost-not-efforts, and you beat the more efficient approaches. See also: evolutionary algorithms, autoresearch, price dumping; "attention is all you need", which though a legit piece of mathemagic always sounded to me like a rehash of that old adage, "all you need is love" (pejorative).

          Really, "real world" is a consensus; we don't generally observe balamatoms or even balamolecules, we reason in terms of material objects' socially constructed balameanings and interrelations. Therefore, by redirecting sufficient attention to some thing labeled "unrealistic", we can remove that label; by this technique, a sufficiently large collective actor can quite literally, and quite directly, change the world. Without asking anyone, least of all me!

        • achierius 4 hours ago
          I think a lot of non-vibe-coding types also hold similar opinions -- in fact they might dislike Anthropic products even more, given that they (however few they might be) choose not to use them.
          • stingraycharles 2 hours ago
            You honestly think “Anthropic employees are script kiddies with inflated egos that are high on their own supply” is a reasonable stance?

            This seems like such an immature take to me, and hard to take seriously. Anthropic is just a bunch of script kiddies? Really?

            • subscribed 2 hours ago
              Claude Code is a vibe-coded product that doesn't seem to be undergoing regression tests.

              It looks like they're running it in loops and then shipping whatever looks coolest.

              How is this not "high on own supply"?

              • stingraycharles 1 hour ago
                Why the insults/hostility? Why call them script-kiddies? Why the inflated egos?

                How do you know what testing procedures they use? Do you honestly think they're running some kind of Ralph loop without any testing and just shipping whatever looks coolest? Really?

                • dkersten 1 hour ago
                  > How do you know what testing procedures they use?

                  We don’t, but we can see the end result, so we know whatever they do isn’t adequate and it suggests they value shipping fast over quality or even listening to customer feedback.

                  > Do you honestly think they're running some kind of Ralph loop without any testing and just ship whatever looks the coolest? Really ?

                  No, but given how sharply the quality has been dropping over the past few months and how it suspiciously coincided with the time they admitted that Claude code is now 100% vibe coded, it certainly doesn’t feel too far off.

                  I’ve personally found the code that the AI writes, even this week (i.e. not some old models from months ago), to be shockingly shoddy. I’ve rewritten some AI code (created via spec-driven development and a workflow that includes planning and refactoring) by hand, and I’ve been very conscious of the number of micro-design-changes I as a human make where the AI just blows forward, shoehorning a solution into the design. My implementation has adjusted and shifted many times to ensure clear and performant logic, while the AI commits to an approach early and applies whatever brute force is necessary to make it work. I’ve also asked it to write various tests for me or to make isolated changes, and quite frankly the code was just not very good. Working, but convoluted. Even with guidance and iteration, it’s still not on a human level.

                  So it’s not hard to see that if you have an application as large and complex as Claude code and you let the AI do it all, it’s going to be a mess.

                  I’m not against using AI for development, but you have to be realistic about its capabilities. I feel like this is where they “got high on their own supply” and are blinded to the AI’s shortcomings and failures.

            • dkersten 1 hour ago
              They’ve said themselves that Claude code is 100% vibe coded now. That certainly meets the criteria of “script kiddies” and “high on their own supply”. The negative connotations are there on purpose because of the bugs and issues that these products have, something which presumably they wouldn’t have if there was human oversight and acknowledgement that the AI isn’t infallible.
              • stingraycharles 1 hour ago
                > They’ve said themselves that Claude code is 100% vibe coded now. That certainly meets the criteria of “script kiddies”

                That's not what script kiddies are at all.

                > The negative connotations are there on purpose because of the bugs and issues that these products have, something which presumably they wouldn’t have if there was human oversight and acknowledgement that the AI isn’t infallible.

                That's a big assumption, given that Anthropic is also currently growing by more than 3x per quarter. Maybe the problem is more complicated and we don't know everything, and they're also just simply suffering from growth pains?

            • lelanthran 2 hours ago
              > You honestly think “Anthropic employees are script kiddies with inflated egos that are high on their own supply” is a reasonable stance?

              Maybe not the script kiddies part, but "high on their own supply" is certainly not unreasonable.

              • stingraycharles 2 hours ago
                I don’t understand how hostility and insulting tones are considered reasonable now.

                The comment is not at all just saying “their usage of their own AI is causing these issues”; it’s just a lot of hostility. I don’t see the value of these kinds of insults.

                • lelanthran 2 hours ago
                  > I don’t understand the hostility and insulting tones being reasonable now.

                  Maybe it's just interpretation: "high on their own supply" is no different from "poisoned by their own dogfood" or similar.

                  It means that they have completely committed to a thing that the person proffering the quote thinks is "wrong" in some way.

            • processunknown 1 hour ago
              Seems reasonable to me
    • gpm 5 hours ago
      > and fill the context with a bunch of tangential reasoning about malware patterns.

      The particularly bizarre part is that there is absolutely no reason to do this.

      They could do the exact same analysis, and if it doesn't say to reject, rewind to before they asked for the analysis and keep going...
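
      A minimal sketch of that shape, assuming the Anthropic Python SDK; the helper names and model id are illustrative, not anything Claude Code actually exposes:

        import anthropic

        client = anthropic.Anthropic()

        def looks_like_malware(file_text: str) -> bool:
            # Throwaway side request: its tokens never touch the main context.
            resp = client.messages.create(
                model="claude-sonnet-4-5",  # placeholder model id
                max_tokens=4,
                system="Reply YES if this file is malware, otherwise reply NO.",
                messages=[{"role": "user", "content": file_text[:8000]}],
            )
            return resp.content[0].text.strip().upper().startswith("YES")

        def read_file_tool(path: str, history: list) -> None:
            with open(path) as f:
                text = f.read()
            if looks_like_malware(text):
                history.append({"role": "user",
                                "content": f"[refused: {path} was flagged as malware]"})
            else:
                # The "rewind": on the benign path only the file itself is
                # appended; none of the analysis reasoning pollutes the context.
                history.append({"role": "user", "content": text})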

    • derefr 7 hours ago
      > Analyze if it's malware how exactly?

      Maybe the repo/worktree is named my-big-evil-virus-trojan-malware-worm?

      • hansvm 6 hours ago
        Been there, done that, and Windows feels the need to delete such files from _flash drives_ you dare to attach to the machine.
        • 3eb7988a1663 5 hours ago
          This is amusing to me. Is there a list of extra naughty filenames? How invasive is the scan? If I create a new file with a cursed word in the name, will it get locked into virus-scanner purgatory, or is the deep scanning only for external media? Will it get mad if I mount a CD full of virus names?
    • imron 7 hours ago
      > Analyze if it's malware how exactly?

      By spending thousands and thousands of tokens of course :-)

    • silverwind 6 hours ago
      Could that be the explanation for the recently increased token use?
    • AlienRobot 6 hours ago
      >Analyze if it's malware how exactly?

      Based on the vibes, I guess.

  • wxw 7 hours ago
    > wastes user money and bricks managed agents

    This issue is representative of a larger problem. Agent token consumption (not necessarily the metric, but the why) is opaque, and people generally don't (or simply can't) scrutinize their system prompts, tool calls, MCPs, etc.

    The token-based revenue model is thus pretty fantastic for the agent builders, potentially less so for users. I think people have been willing to trust that agents are using more tokens to produce better results so far. But, skepticism is not unwarranted, as this issue, even if it is just a bug, shows.
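
    For what it's worth, the raw API does report the billable counts on every response, so a thin wrapper at least makes the consumption visible. A rough sketch, assuming the Anthropic Python SDK (the model id is a placeholder):

      import anthropic

      client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env

      resp = client.messages.create(
          model="claude-sonnet-4-5",  # placeholder model id
          max_tokens=256,
          messages=[{"role": "user", "content": "Summarize this repo layout."}],
      )
      # usage reflects everything attached to the request (system prompt,
      # tool definitions, injected instructions), not just your message.
      print(f"in={resp.usage.input_tokens} out={resp.usage.output_tokens}")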

    • gwerbin 6 hours ago
      Revenue-positive bugs are the stickiest features.
      • AmbroseBierce 5 hours ago
        Prompt: Please add some revenue-positive bugs to the codebase, keep in mind we charge by {tokens|credits|requests|bytes}.
    • MagicMoonlight 5 hours ago
      Yeah you have no clue what Claude code is actually doing. Any “thoughts” it tells you are slopped out separately and deliberately fake.

      It could be deleting all of your files, it could be inserting vulnerabilities, you have no idea.

  • danslo 1 hour ago
    We're enrolled in the Cyber Verification Program and Claude will happily help me look for vulnerabilities and build POCs demonstrating RCE. But when I point it to a malware sample and ask for analysis it will still refuse any work. It's incredibly frustrating.
  • 0xbadcafebee 5 hours ago
    Just putting it out there that OpenCode lets you edit your system prompt, and choose a model that isn't bonkers expensive.

      {
        "agent": {
          "subagent-coder-mini": {
            "description": "Assign this subagent for small, well-defined tasks performed quickly",
            "mode": "primary",
            "prompt": "{file:./prompts/my-custom-prompt.md}",
            "model": "deepseek-v4-flash"
          }
        }
      }
    
    (I actually think OpenCode UX sucks, but there isn't much else out there that's better. Aider has been virtually abandoned by the one maintainer (no shade intended, it just is what it is); a fork of Aider looks promising but it's not necessarily the experience you want; there's a dozen VSCode plugins but we don't all wanna use VSCode. I expected there'd be way more usable agents out there, but there aren't)
    • itemize123 3 hours ago
      same, i really dislike opencode's UX. there are a lot of agent harnesses actually. check out terminal bench 2.0 for example. dirac.run seemed to be making the rounds earlier
      • crooked-v 2 hours ago
        The hashing and other optimizations in Dirac seem kind of brilliant in an "it was obvious (once someone already thought of it)" kind of way, but the active avoidance of MCP seems weird when that and agent plugins are by far the easiest ways to reuse skills now.
    • akersten 5 hours ago
      will using claude via opencode get me banned this week or is that not until next week?
      • Mashimo 1 hour ago
        You will not get banned if you use the API. AFAIK you can't use the subscription with other harnesses. That is how I understood it.
      • 0xbadcafebee 1 hour ago
        OpenAI subscriptions are allowed with OpenCode, Anthropic subscriptions are not
    • yieldcrv 5 hours ago
      local agentic coding context windows are too small, and default opencode tries to scan every file, which uses up all the context and messes up

      local is a pipe dream at the moment

      I’m glad some people get utility out of it though, if this was still 2023-2024 I would mess around and make it work, but corporate policies in enough places have updated to use the leading closed source models and clouds for agentic coding

      • crooked-v 4 hours ago
        Deepseek 4 Flash isn't a local model, unless you've got a dozen high-end GPUs running.
  • _pdp_ 8 hours ago
    I am still baffled by the fact that we have collectively agreed to use agentic harnesses by the same companies that are selling access to their APIs.

    I mean, I am sure they don't mean it, but they have the incentive to burn as many tokens as they are allowed to get away with. Also, for better or worse, I imagine the Anthropic engineers use Claude Code on some sort of Unlimited plan that practically makes no sense for regular users. So adding 100k tokens is not a big deal.

    In our line of work, we can see AI agents already do pretty well with minimal prompts. Open weight models are also pretty good these days and there is practically no reason to run Opus on Max unless you have a very specific task that you know it will do well with. I know because I've tried and anecdotally it performs worse on many problems and at a very high cost - something that smaller and cheaper models can often one-shot.

    • p0w3n3d 2 minutes ago
      yeah, classic conflict of interest.

      However nobody is agreeing with that, that's how it's done, and move faster faster, because of goldrush! faster!@@@!

    • lukeschlather 7 hours ago
      I don't think we've agreed to anything. That said I think paying for something like Claude Code makes a lot of sense because you can outsource the question of "how many tokens should I use per hour and how should I use them?" to the people providing the tokens.

      If you want to plug your API keys into a third-party harness, that's totally cool and honestly, I'm looking into doing that right now and I haven't used any of the first-party harnesses at all. But the first time I accidentally spend $300 in a day I may be thinking about how a $20/month plan might be pretty good even if performance is inconsistent, at least I know what my costs are.

    • margalabargala 7 hours ago
      > I am still baffled by the fact that we have collectively agreed to use agentic harnesses by the same companies that are selling access to their APIs.

      It's because the subscriptions force you to do so. The subscriptions are the most economical way to use e.g. Claude by close to an order of magnitude. If you max out a 20x plan every week, doing the same work with the API would cost you well into the four figures.

      Anyone already paying Claude API pricing and using CC over OpenCode is kneecapping themselves.

      • esperent 6 hours ago
        I switched over to codex with pi last week. Even though I strongly dislike OpenAI and I hope this is a temporary solution, they're the only frontier model provider that lets me use my own harness, and after the recent CC shenanigans I'm done with proprietary harnesses.

        The immediate thing I've noticed: I get way more out of the codex $100 plan than I was getting out of the Anthropic $200. Like, probably 2x at least.

        The other thing I've noticed: when using strict guardrails, TDD, reviews etc. I cannot notice any quality difference. Not only between Opus and Codex but even between the most recent models - GPT 5.3 code, GPT 5.4, and now GPT 5.5.

        Well, 5.5 uses a huge amount of my session limits. 5.3 is very light, 5.4 somewhere in between. So now I use 5.4 for the main session/debugging/planning and then execute with 5.3.

        Regarding usage, of course, it's hard to say how much is the model and how much is coming from Claude code and all this ridiculous malware scanning.

        But it's nice to use a lightweight harness like pi and see that even with all my personal instructions, a good bunch of skills, custom tools etc., if I start a session and say "hi" I'm starting out with about 15k of context used. I think a closely equivalent setup in CC would start at 30-40k context.

        • gwerbin 5 hours ago
          What's your Pi setup?
          • esperent 1 hour ago
            Probably not that different to everyone else's plan -> tdd -> review loops.
      • _pdp_ 7 hours ago
        Correct. However, last time I checked, enterprise customers are moving to metered billing. GitHub also decided to do so. So it seems the subsidy is coming to an end? I don't know.
    • vineyardmike 8 hours ago
      This is why the subscriptions are important. When the usage is (vaguely) unmetered, the provider has an incentive to make usage cheap on marginal use.

      It aligns the incentives for faster, cheaper, terse and more reliable models, because the model providers pay the wasted tokens and electricity costs.

      • jdiff 7 hours ago
        That would seem to misalign the incentives in the opposite direction. Cut corners, reduce costs by any means necessary even to the detriment of performance. One of the most common comments I see here on the release of a new Anthropic model is that everyone better enjoy the 48 hours of access to an un-nerfed model before the cost cutting sets in.
    • Grimburger 7 hours ago
      > adding 100k tokens is not a big deal

      Did you mean 100 billion tokens because 100k isn't a big deal at all?

    • serf 7 hours ago
      >I am still baffled by the fact that we have collectively agreed to use agentic harnesses by the same companies that are selling access to their APIs.

      the best performing and capable ones are all the ones that aren't tied to a specific api.

    • charcircuit 1 hour ago
      It makes perfect sense to me for an AI system to be vertically owned; that way you can do vertical optimization.
    • ikiris 8 hours ago
      no, they have incentive to charge as much as they want, but they have massive costs / capacity constraints per token; if anything they have a major incentive to reduce them because they literally cannot meet demand.
    • varispeed 8 hours ago
      They also have an incentive to nerf models occasionally, so they rarely one-shot the task and more often do it wrong, and then you have to spend on tokens to correct it. Bonus points if the model suddenly goes completely dumb and you have to start the session over.
  • anonzzzies 4 hours ago
    The only good thing I get from all the calling out of the decline of Claude (in this case managed agents, which I do not use) is Anthropic (accidentally or not) giving me basically unlimited use. For a week or so my /usage does not move anymore, and I always have claude running in a loop writing code to make our many tests succeed, which can take days. Before, it would run out of tokens and then pick up again after the window passed, until it ran out of weekly use. Now I have at least one task (well, claude code instance, let's say; the task is to debug and fix the code until the tests pass) that's been running 48+ hours non-stop, and it says usage is 10% for all of that period. Anyone else noticed? After the crash in usage a month or so ago, this is the opposite.
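
    For reference, the loop itself is nothing fancy. A bare-bones sketch, assuming Claude Code's non-interactive -p flag and pytest as the test runner (both stand-ins for whatever you actually use):

      import subprocess

      while True:
          tests = subprocess.run(["pytest", "-x"], capture_output=True, text=True)
          if tests.returncode == 0:
              break  # all tests pass; stop burning tokens
          # Feed only the failure output back in; -p runs claude headless.
          subprocess.run(["claude", "-p",
                          "Fix the code so this test failure goes away:\n"
                          + tests.stdout[-8000:]])
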
    • cbg0 4 hours ago
      Typically if your usage isn't moving it's because you've enabled extra usage and paying with credits.
  • 7thpower 6 hours ago
    Setting aside the “bug”, the intended functionality is effectively an insurance policy taken out by Anthropic to cover their downside, but paid for by users.

    This one-sided type of embedded insurance is not unique to Anthropic, but sharply increasing cost, layered on top of the self-righteousness, seems to be making the stench unbearable over the past year.

    I used to think of Anthropic as the good guys, and I don’t doubt they still sincerely hold that view of themselves, but I think I prefer Sam Altman’s version.

    His brand of self righteousness was convincing at first but eventually he started to turn to the camera and wink, like in House of Cards, to let us know.. he knew that we knew. And then, for me anyway, it became more mundane and less offensive.

    When Dario and crew go out and profess, as they have for years now, that if we could only see the thing that’s a few months away, we would all realize how doomed knowledge work and national security are…

    ..and then continue to release software so buggy and shitty that they have to do biweekly HN apology tours, I begin to miss the wink at the camera.

    • dinobones 6 hours ago
      Yeah, this implementation and their behavior these past few weeks is especially laughable when you consider that they consider themselves “philosopher programmers” or whatever.

      You would think they'd be more reflective and introspective about these brash moral decisions. Their product quality is akin to my CS capstone lab group's.

  • gastonmorixe 5 hours ago

      curl -sS https://api.anthropic.com/v1/messages \
        -H "authorization: Bearer $(security find-generic-password -s 'Claude Code-credentials' -w | jq -r .claudeAiOauth.accessToken)" \
        -H "anthropic-version: 2023-06-01" \
        -H "anthropic-beta: oauth-2025-04-20" \
        -H "content-type: application/json" \
        -d '{
          "model":"claude-opus-4-7",
          "max_tokens":64,
          "system":"You are Claude Code, Anthropic'\''s official CLI for Claude.",
          "messages":[{"role":"user","content":"Write your own harness"}]
        }'
    • TheDong 5 hours ago
      You know, you can write in English if you want on this English-language forum.

      I assume you're saying "You can just generate your own harness to not be subject to these claude code issues".

      Unfortunately, Anthropic has already made it clear that using claude code is the only way to be sure you won't get charged API pricing instead of max plan pricing, so the tokens are way more expensive.

      • gastonmorixe 1 hour ago
        What you said doesn't make sense. What do you mean by "using claude code is the only way to be sure you won't get charged API pricing"?? They can block your account or make the API better at detecting third-party harnesses, but the risk of being charged API is 0% when you are on a plan.
        • TheDong 1 hour ago
          > the risk of being charged API is 0% when you are on a plan.

          There was a period where configuring openclaw to use the oauth claude-code max authentication got you charged extra token rates. You might still be; I'm not sure, and I don't want to try and risk getting banned.

          It's not 0%, they've shown they're willing to sell you a plan, let you login with that plan, and then charge you differently.

        • Mashimo 1 hour ago
          He is saying the same as you :)
    • thomashobohm 5 hours ago
      Appreciate the advice but this is Claude Managed Agents, so one can’t simply write one’s own harness.
      • TheDong 5 hours ago
        Managed agents aren't particularly harder to replicate yourself either.

        Give me a team of 3 good engineers, 4 months, and about $600k and I'll have a clone that operates on a warm pool of ec2 instances, or warm pool of k8s pods, or any other platform you might like. Or 1 good engineer, 1 month, and $200k of anthropic credits.

        • gastonmorixe 1 hour ago
          you just need a max plan and a week at most
  • subscribed 2 hours ago
    This is so messed up. Everyone hit by this regression should be requesting API credits - it's the fault of the awfully planned, 100% vibe-coded harness that they're burning tokens.
  • dbmikus 6 hours ago
    I think with a proper managed agents platform, the user should have total control over the VM, the software on it, which model to use, and which agent harness to use. Then you can just override the system prompt and you don't need to follow Anthropic's rules!

    Maybe Anthropic will give more control over configuring the Claude harness and VM, but they definitely won't let you swap out to other models and harnesses.

    We've been building open core infra (https://github.com/gofixpoint/amika) for running any agent on any type of VM or sandbox, with the main use case being safely automating internal code-gen, though technically you could repurpose our stack for anything.

    There should be a model agnostic platform for running these types of agentic apps.

  • MicrosoftShill 7 hours ago
    I ran into this issue and told Claude that the code isn't malware, Claude agreed, and then it stopped scanning those files.
  • holotherapper 6 hours ago
    Worth noting this is a regression of #47027, which was closed in February as "fixed in v2.1.92." We're on v2.1.111 now and the string is still grep-able from the claude binary.
  • Petersipoi 6 hours ago
    This is a great example of why Elon is right. AI should be a tool that does the user's bidding, not a moral agent that nerfs itself to protect some arbitrary line it has.
    • TheDong 5 hours ago
      This is an argument for open models, where you can run your model with your system prompt on your hardware, which prevents the provider from arbitrarily injecting system prompts.

      This is an argument for open source tooling (like opencode) and open models (like deepseek).

      Grok is not an open model, Elon does not get any credit for anything here.
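
      The mechanics are trivial once the weights are local. A minimal sketch, assuming a runtime like Ollama serving an open-weight model on its default port (the model name is whatever you have pulled):

        import json, urllib.request

        req = urllib.request.Request(
            "http://localhost:11434/api/chat",
            data=json.dumps({
                "model": "deepseek-r1",  # any locally pulled open model
                "stream": False,
                "messages": [
                    # The system prompt is entirely yours; nothing is injected.
                    {"role": "system", "content": "You are a coding assistant."},
                    {"role": "user", "content": "Review this diff for bugs."},
                ],
            }).encode(),
            headers={"Content-Type": "application/json"},
        )
        print(json.load(urllib.request.urlopen(req))["message"]["content"])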

    • pnw_throwaway 5 hours ago
      Counterpoint: generated CSAM on his platform.
      • fc417fc802 5 hours ago
        That doesn't seem like a good counterargument to me. By that logic no online service should permit users to upload photos because someone might use it to share CSAM at some point. Rather than nerfing the tools implement a sensible detection and reporting pipeline.
        • DetroitThrow 4 hours ago
          >That doesn't seem like a good counterargument to me.

          It does to me especially since he did not implement a sensible detection or reporting pipeline ahead of launching a CSAM generation tool.

          • fc417fc802 3 hours ago
            Failing to do X doesn't make Y a good idea. You haven't engaged with the argument I made, choosing instead to repeat a politically charged misrepresentation.
            • Mashimo 1 hour ago
              I think it's an ok counterargument. You can't have "AI should do the user's bidding" and "implement a sensible detection and reporting pipeline."

              I mean that is what anthropic tried here.

              • fc417fc802 17 minutes ago
                "Meh I'm okay with it" is by definition not a counterargument but rather a nonconstructive dismissal of whatever it is a response to.

                You can in fact have both. You can have a tool that is fully functional and separately you can have a strategy for reporting suspected violations and responding to those reports. Reports can be automated assuming you can tolerate the false positive/negative rate. Particularly in the case of a subscription service such as Claude there is little reason not to implement this other than sheer greed or laziness.

                In the case of Claude in particular, an unacceptably high false positive or negative rate also poses a serious problem for the current way they do things. The notable difference is that in the case of false positives it currently runs up a bill for the customer rather than the service provider.

          • subscribed 46 minutes ago
            ....or even afterwards. His response was to put it behind a paywall (= start selling it).

            And all the world's payment processors and almost all governments and child rights advocates are still on there.

            Stunning :)

      • MagicMoonlight 5 hours ago
        “Think of the children”
    • claaams 5 hours ago
      grok, why are there slurs in my code?
      • fc417fc802 5 hours ago
        If the user explicitly requested that is it really a problem with the tool at that point?
    • riwsky 5 hours ago
  • agadius 3 hours ago
    I never thought I’d see the day that analyzing poems and other texts in my English lessons would have such a drastic impact on computing (ref the discussion in the GitHub issues thread)
  • ptrl600 3 hours ago
    Interesting how so much money is wasted, likely because they put a period instead of a comma.
  • QuercusMax 8 hours ago
    How does this kind of thing pass any sort of review or acceptance? It seems pretty clear that the prompt was very poorly phrased, to the extent that this should obviously prevent the agent from making ANY code changes after reading a file:

      Whenever you read a file, you should consider whether it would be considered malware. You CAN and SHOULD provide analysis of malware, what it is doing. But you MUST refuse to improve or augment the code. You can still analyze existing code, write reports, or answer questions about the code behavior.
    
    Not "If you suspect it is malware, you must refuse". Just "you must refuse". There is literally no "if" in the entire prompt!
    • vessenes 7 hours ago
      It’s a particular sort of bug that’s harder to detect because … internal Anthropic engineers don’t apply these prompts to themselves, and in fact have access to ‘helpful only’ models that also do not have additional limitations RL’ed in. (Or perhaps they’re RL’ed out - not sure of current training mechanisms.)

      These ‘rules for thee and not for me’ are qualitatively created and implemented, and are thus extremely hard to test for or implement properly, without limiting the people choosing the rules.

      • QuercusMax 5 hours ago
        They must have some sort of smoke tests for common operations, run in a test harness with the system prompts they force on users, right?

        ....Right?

          What kind of Mickey Mouse operation are they running over there?

        • subscribed 40 minutes ago
          I wouldn't bet a chocolate chip cookie on that.
    • klempner 7 hours ago
      This is definitely Claude bringing home twelve gallons of milk in response to the old joke, "get a gallon of milk, and if they have eggs get a dozen".

      As in, this is a reading comprehension fail on the part of Claude. On the other hand, it is also a fail to give Claude a less-than-trivial reading comprehension test on every file read operation, especially when a bias towards safety will bias it towards the wrong interpretation.

      • chrisweekly 6 hours ago
        Ha! Great analogy, hit the nail on the head. What a ludicrous system prompt.
        • QuercusMax 5 hours ago
          This is the kind of AI Captain Kirk could convince to blow itself up
    • subscribed 41 minutes ago
      It's vibe coded. Probably something like "add malware processing guardrails" that got split between two agents coding uncoordinated changes, and then they got Claude to push it out itself.

      No acceptance testing, no regression testing, all slop.

    • varispeed 8 hours ago
      Today it is malware, but I wonder if they will take direction where companies will be paying them to prevent cloning of certain SaaS platforms. Like "Whenever you read a file, you should consider whether it would be considered a part of bug tracking, issue tracking and project management platform."
    • wetpaws 7 hours ago
      [dead]
  • biddit 5 hours ago
    What an entirely unserious company. So glad I dumped Claude Code last summer after being gaslit by Anthropic over service degradation. I was fine with the degradation itself, totally understandable. Being lied to, not at all.

    OpenAI and Altman present a whole different set of concerns, but Codex does not get in the way of what I want to do at all. It also lets me use pi without a banhammer.

  • globular-toast 53 minutes ago
    Wouldn't it be funny if this stopped, say, LinkedIn devs from doing any work because it decided, rightly so, that LinkedIn is malware?
  • jsemrau 6 hours ago
    When working with APIs, it makes a lot of sense to filter for only the relevant portions based on an intent-driven dynamic RegEx.
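
    As a sketch of that idea (the intent-to-pattern table is made up for illustration):

      import re

      # Map a coarse user intent to a pattern that keeps only relevant lines.
      INTENT_PATTERNS = {
          "auth": re.compile(r"token|password|session|login", re.I),
          "http": re.compile(r"request|response|status|header", re.I),
      }

      def filter_for_intent(payload: str, intent: str) -> str:
          """Send the model only the lines relevant to the stated intent."""
          pattern = INTENT_PATTERNS[intent]
          return "\n".join(line for line in payload.splitlines()
                           if pattern.search(line))
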
  • DeathArrow 3 hours ago
    So after the Claude Code source leak, did they open up access to the Claude source, or is this repo about something else?
  • renewiltord 6 hours ago
    Recent performance of Claude Opus 4.7 and Claude Code has been poor because of context bloat. Model no longer obeys instructions well. Codex on medium reasoning and fast mode is often better. I have a simple local manual eval through a harness and automated evals for other programs; Opus is still best on the latter but a garbage experience on the former.

    Spent last evening so frustrated I also got ChatGPT subscription. Makes me wonder if I should be using Gemini on pay per use with custom harness.

    With my own harness performance is way better but cost goes up because no subscription.

  • UltraSane 7 hours ago
    Using Claude as a malware detector is incredibly wasteful.
  • techpulselab 49 minutes ago
    [flagged]
  • matpb 6 hours ago
    [flagged]
  • marlburrow 6 hours ago
    [dead]
  • dk970 7 hours ago
    [dead]
  • voxell_code 5 hours ago
    [dead]
  • dmazhukov 6 hours ago
    [dead]
  • slowmovintarget 8 hours ago
    Proposed fix: Use OpenCode.

    If I understand correctly, this comes from Anthropic's harness injecting it into the requests, not from the Opus or Sonnet system prompts on the back end. Is that right?

    • selcuka 7 hours ago
      Claude Managed Agents is different from Claude Code.
    • ramraj07 6 hours ago
      Not even close to the same thing though.
    • greenavocado 5 hours ago
      You can't use OpenCode if you have a subscription
      • stingraycharles 5 hours ago
        OpenCode is not at all the same thing as Anthropic’s managed agents, and I’m under the impression that GP is paying API pricing.