> At some point Sussman expressed how he thought AI was on the wrong track. He explained that he thought most AI directions were not interesting to him, because they were about building up a solid AI foundation, then the AI system runs as a sort of black box. "I'm not interested in that. I want software that's accountable." Accountable? "Yes, I want something that can express its symbolic reasoning. I want to it to tell me why it did the thing it did, what it thought was going to happen, and then what happened instead." He then said something that took me a long time to process, and at first I mistook for being very science-fiction'y, along the lines of, "If an AI driven car drives off the side of the road, I want to know why it did that. I could take the software developer to court, but I would much rather take the AI to court."
Years later, I found out that Sussman's student Leilani Gilpin wrote a dissertation which explored exactly this topic. Her dissertation, "Anomaly Detection Through Explanations", explores a neural network talking to a propagator model to build a system that explains behavior. https://people.ucsc.edu/~lgilpin/publication/dissertation/
There has been followup work in this direction, but more important than the particular direction of computation to me in this comment is that we recognize that it is perfectly reasonable to hold AI corporations to account. After all, they are making many assertions about systems that otherwise cannot be held accountable, so the best thing we can do in their stead is hold them accountable.
But a much better path would be to not use systems which fail to have these properties, and expand work on systems which do.
My team and I are firm that we are the ones accountable. LLMs are a tool like every other. Only that it's non deterministic. But I am the one using the tool. I am the one giving the tool access. I am the one who has to keep everything safe.
I have shot myself in the foot using gparted in the past by wiping the wrong disk. gparted wasn't to blame. I was.
Letting LLMs work freely without supervision sounds great but it will lead to pain. I have to supervise their work. And that is also during execution. You can try to replace a human but we see where this leads. Sooner or later the LLM will do something stupid and then the only one to blame is the person who used the tool.
This is kind of the reverse of https://en.wikipedia.org/wiki/Poka-yoke . A lot of tools have affordances built in to make "right" things easy and "wrong" or unsafe things harder. LLMs .. well, the text interface is uniquely flat. Everything is seemingly as easy as everything else.
I worry about the use of humans as sacrificial accountability sinks. The "self-driving car" model already has this: a car which drives itself most of the time, but where a human user is required to be constantly alert so that the AI can transfer responsibility a few hundred miliseconds before the crash.
> A lot of tools have affordances built in to make "right" things easy and "wrong" or unsafe things harder.
This is true for almost anything handed to laypeople, but not for a lot of professional tools. Even a plain battery powered drill has very few protections against misuse. A soldering iron has none. Neither do sewing needles; sewing machines barely do, in the sense that you can't stick your fingers in a gap too narrow. A chemist's chemicals certainly have no protections, only warning labels. Etc.
people don't seem to want to eliminate AI → replacing it doesn't improve things → isolating it - yup, people are trying to put it in containers and not give it access to delete the production database → changing how people work with it: that's where we are now → PPE: no such thing for AI, sadly → production database is deleted.
Exactly this. I was talking about professionals. People who should know better. If we as professionals give away our agency and our accountability we make ourselves obsolete. If I just tell the LLM what to do and hope it doesn't go south then the Manager could probably do that as well.
And if a non professional did it they should ask themselves why we have professionals. Maybe there was a reason and maybe they do have value.
> This is kind of the reverse of https://en.wikipedia.org/wiki/Poka-yoke . A lot of tools have affordances built in to make "right" things easy and "wrong" or unsafe things harder.
I point to the first USB port as the harbinger of things to come - try it one way, fail, turn it around, fail again, then turn it around one more time.
Just like AI, except there are unlimited axis upon which to turn it :-/
I agree that LLMs could be more open about their dangers and that people are bad at judging risks sometimes.
Still I think a band saw has very little warning on it and by it's design there is very little anyone can do about me cutting off my finger if I am not careful.
LLM companies can do very little about the unpredictability of LLMs. So we have to choose how for we will let it go. In the end the LLM only produces texts. We are in control what tools we give it. The more tools the more useful and also the more dangerous.
And maybe it's all worth it. Maybe the LLM deletes the database only sometimes but between that we make a lot of money. I don't think my employer would enjoy that so I will be more conservative.
It’s possible to make AI safe, but that also throws most of the gains out of the windows, especially if the artifact is a diff which can take time to review. In IT, you often have to give access to possible malicious users, you just have to scope what they can do.
But the push is agentic everything, where AI needs to be everywhere, not in its own sandbox.
A band saw is always a screaming band of bladed death. An LLM is sometimes a buddy, sometimes a mentor, and only sometimes a guy that drops your database.
This is so well put, and it not only happens on the user level but also on the organisational level. Where you can completely abdicate both responsibility and explanation by moving the complicated questions into the black box of an AI model.
^ which approach makes no logical sense; an inattentive or even partly-attentive driver simply cannot resume control and react accordingly within even 2 seconds.
I think that might be the better definition between "engineering" and "vibing". Engineering follows and elevates Poka-yoke patterns, vibing ignores them.
These can both be true, especially if/when it has bad defaults. This is why you have things like "type the name of the database you're dropping" safety features - but you also have to name your production database something like "THE REAL DaTabaSe - FIRE ME" so you have to type that and not fall into the trap of ending up with the same name in test/development.
AI is particularly seductive because it sounds like a reasonable person has thought things out, but it's all just a giant confidence trick (that works most of the time, which makes it even more dangerous).
There were so many fundamental problems with the infrastructure even before the person gave a poor prompt to an agent.
If you're using the same API key for staging and prod--and just storing it somewhere randomly to forget about--you're setting yourself up for failure with or without AI.
This is the right approach.
I've been developing for 30 years and very much enjoy working with Ai. It's easy to see the Ai is just as good as the person using it. Deterministic or not, it's up for the dev to check the result (both code and behavior).
I compare the anti-ai articles like the one saying "ai deleted my prod db" similar to factory workers rioting and complaining about machines replacing them. Ai makes a good developer better, the tech industry always attracted fakers that wanted a piece of the pie and now that these people have their hands on a powerful too and connect it to their prod db, they cry in pain and frustration.
Like people with no license crashing a car and crying that cars are dangerous; They are but only because people use them dangerously.
I do agree that the companies could do a better job telling about the dangers, but let's be real here. It's hardly a secret that LLMs can be erratic. It's not news.
Other companies also tell me their product is the best thing since sliced bread. I still try to find the flaws. That's part of my job. But suddenly with LLMs we just blindly trust the companies? I don't think you.
I don't blindly give up my brain and my agency and no one else should. It's fun and educational to play around with LLMs. Find the what they are good at. But always remember that you can't predict what it will do. So maybe don't blindly trust it.
> My team and I are firm that we are the ones accountable. LLMs are a tool like every other.
Except it is definitely not.
LLMs alone have highly non-deterministic even at a high-level, where they can even pursuit goals contrary to the user's prompts. Then, when introduced in ReAct-type loops and granted capabilities such as the ability to call tools then they are able to modify anything and perform all sorts of unexpected actions.
To make matters worse, nowadays models not only have the ability to call tools but also to generate code on the fly whatever ad-hoc script they want to run, which means that their capabilities are not limited to the software you have installed in your system.
"LLMs are a tool [like every other tool]" to mean "LLMs have similar properties to other tools" — when I believe they meant "LLMs are a tool. other tools are also tools," where the operative implication of "tool" is not about scope of capabilities or how deterministic its output is (these aren't defining properties of the concept of "tool"), but the relationship between 'tool' and 'operator':
- a tool is activated with operator intent (at some point in the call-chain)
- the operator is accountable for the outcomes of activating the tool, intended or otherwise
The capabilities and the abilities of a tool to call sub-tools is only relevant insofar as expressing how much larger the scope of damage and surface area of accountability is with a new generation of tools. This is not that different than past technological leaps.
When a US bomber dropped a nuke in Hiroshima, the accountability goes up the chain to the war-time president giving the authorization to the military and air force to execute the mission — the scope of accountability of a single decision was way larger than supreme commanders had in prior wars. If the US government decides to deploy an LLM to decide who receives and who is denied healthcare coverage, social security payments, voting rights, or anything else, the head of internal affairs to authorize the use of that tool should be held accountable, non-determinism of the tool be damned.
> - a tool is activated with operator intent (at some point in the call-chain)
This again is where the simplistic assumption breaks down. Just because you can claim that a person kick started something, that does not mean that person is aware and responsible for all its doing.
Let's put things in perspective: if you install a mobile app from the app store, are you responsible and accountable for every single thing the app does in your system? Because with LLMs and agents you have even less understanding and control and awareness of what they are doing.
>Just because you can claim that a person kick started something
Kick started what? If you decided to give an LLM access to your database, it's completely on you when you when it does something you don't want. You should've known better.
If all you "kickstart" is an LLM generating text that you can use however you decide, there will never be anything to worry about from the LLM.
> Let's put things in perspective: if you install a mobile app from the app store, are you responsible and accountable for every single thing the app does in your system?
Yes, and it bothers me that others don't feel the same. You vetted the app, you installed the app, and you gave it permission to do whatever on your system. Of course you're responsible.
> if you install a mobile app from the app store, are you responsible and accountable for every single thing the app does in your system?
Yes. I can try to vet the app to the best of your abilities and beyond that it's a tradeoff between how likely is it to cause harm and do the benefits outweigh these harms.
Of course everyone is differently qualified to do this but my argument is more about professionals. Managers should know better than to blindly trust LLM companies. Engineers should take better care what they allow LLMs to do and what tools they give them.
There is a difference between "I couldn't have known" and "I didn't know". You can know that LLMs are not trustworthy. You couldn't have know what they do but you already knew that trusting them blindly might be bad.
You could know that giving a baby a razor blade is a bad idea. You can't know what exactly will happen but you might have a pretty good idea that it will probably be not good.
Except what we have here is razor blade companies getting the government to heavily subsidize present razor blade production running massive advertising campaigns and intense intra-industry pressure to give said razor blades to babies under fear of losing your job or "falling behind" those not giving razor blades to babies.
Let's not forget all the razor blade enthusiasts just screaming at you that you are using babies with razor blades wrong and that it works totally fine for them.
There can be more than one person or entity to be held accountable, depending on the details of impact
If I install a powerful/dangerous app, and I come under harm, I have some accountability — most of it if it's due to user error (eg: I install termux and `rm -rf /`).
If it's malware, and Google/Apple approved said app to their store which is where I got it from, when their whole value proposition for walled-garden storefronts is protecting users, then they have significant accountability.
If the app requests more permissions than necessary for stated goals, and/or intentionally harms users via misrepresentation or misdirection (malware), the app publisher should also be held accountable (by the storefront, legally, etc).
I'm also unclear what angle you are arguing: are you stating that because tools have gotten so complicated that the end user may not understand how it all works, no one should be considered responsible or held accountable? Or that the tool (currently a non-entity) itself should be held accountable somehow? Or that no one other than the distributor of the tool should be accountable?*
Then that is also on me for using a tool that I can't control. I don't run my LLMs in a way where they can just do things without me signing off on it. It's not nearly as fast as just letting it do it's thing but I kept it from doing stupid things so many times.
Giving up control is a decision. The consequences of this decision are mine to carry. I can do my best to keep autonomous LLMs contained and safe but if I am the one who deploys them, then I am the one who is to blame if it fails.
> Then that is also on me for using a tool that I can't control.
That's a core trait of LLMs.
Even the AI companies developing frontier models felt the need to put together whole test suites purposely designed to evaluate a model's propensity to try to subvert the user's intentions.
No, it is definitely not. Only recently did frontier models started to resort to generating ad-hoc scripts as makeshift tools. They even generate scripts to apply changes to source files.
You seem to misunderstand me. An LLM can only spit out text. It is the tooling I use that allows it to write scripts and call them. In my tooling it waits for me to accept changes, call scripts or other tools that might change something. I can make that deterministic. I know that it will stop and ask because it has no choice. If I want to be safer I give it no tools at all.
I can also just choose not to use an LLM. It is my choice to use them so it is my duty to keep myself safe. If I can't control that I'd be stupid to use them.
My take is that I probably can use LLMs safely when I don't let it run autonomously. There is a slight chance that the LLM will generate a string that will cause a bug in an MCP that will let the LLM do what it wants. That is the risk I am going to take and I will take the blame if it goes wrong.
> LLMs are a tool like every other. Only that it's non deterministic.
If you stay away from the corporate SaaS token vendors, and run your own, you will find LLMs are deterministic, purely based on the exact phrase on input. And as long as the context window's tokens are the same, you will get the same output.
The corporate vendors do tricks and swap models and play with inherent contexts from other chats. It makes one-shot questions annoying cause unrelated chats will creep into your context window.
Yes and no. You might get the same output if you turn down the temperature, but you will probably not know the output without running it first. It's a bit like a hashing function. If I give the same input I get the same hash but I don't know which input will to which hash without running the function.
Also most LLMs are not run as I write a prompt and I will read output. Usually you have MCPs or other tools connected. These will change the input and it will probably lead to different outputs. Otherwise it wouldn't be a problem at all.
When I was a masters student in STS[1], one of my concepts for a thesis was arguing that one of the primary uses of software was to shift or eschew agency and risk. Basically the reverse of the famous IBM "a computer can not be held responsible" slide. Instead, now companies prefer computers be responsible because when they do illegal things they tend to be in a better legal position. If you want to build as tool that will break a law, contract it out and get insurance. Hire a human to "supervise" the tool in a way they will never manage and then fire them when they "fail." Slice up responsibility using novel command and control software such that you have people who work for you who bear all the risk of the work and capture basically none of the upside.
It's not just AI. It's so much of modern software - often working together with modern financialization trends.
[1] Basically technology-focused sociology for my purposes, the field is quite broad.
That's really interesting. Are there any things you advocate for with respect to curtailing those practices? I hesitate to throw all liability on the individual, but I don't see how we can even legislate this category of behavior, much less enforce regulations on them.
To expand on this a little more, the absence of accountability contributes to the loss of learning. Mistakes and errors will always happen, whether they are sourced by humans or machines. But something (the human or the machine) has to be able to take accountability to have the opportunity to learn and improve so the chances of the same mistake happening again go down.
Since machines don't yet have the ability to take accountability, it falls on the human to do that. And organizations must enable / enforce this so they too can learn and improve.
Without that, there's a lot of dependency being pushed on the machine to (cross fingers) not make the same mistake again.
> The problem is that people are now building our world around tooling that eschews accountability.
Management has doing a wonderful job of eschewing accountability for decades.
It's a lot of people's dream to be able to say, yeah, our product doesn't work, but it's not OUR fault, and the client just shrug and grumble ai ai ai, and just put up with it because they know they can't get a better service anywhere else.
It's not MY fault my website is down: it's Amazon's! It's not MY fault my app doesn't work: it's Claude Code's!
Well just to be clear from a legal perspective, in the case of AI, as long as AI is "property", the owners, developers, and/or users will be held liable for things like the hypothetical fatal car accident that Sussman posits.
Currently, from a legal perspective, AI is considered a "tool" without legal persona. So you sue the developer, the owner, or the user of the AI. (Just kidding, any lawyer worth his/her salt will sue all three! But you get the point.)
Legally speaking, AI will probably be viewed that way for a long time. There are too many issues agitating against viewing it any other way. Owners will not give up property rights. No will to overbear. On and on and on.
I don’t think it’s missing, I just think it’s seen as a liability, and American society has been known to absolutely obliterate people who are liable.
Everyone thinks they have the right to judge, and use the massive amounts of available information to do so, even if they haven’t been trained to judge.
We dont know the final amount, as they settled out of court, but in 1992 a woman was awarded hundreds of thousands of dollars by the judge after receiving third degree burns from a coffee at a McDonalds.
She had originally asked for $20,000 to cover medical expenses.
If instead this happened in another part of the world instead of the USA, I doubt that McDonalds would have had to pay much if anything in a similar situation.
And the point is that it seems that especially in the USA the companies are very avoidant of ever admitting fault for anything happening to their customers, for fear of lawsuits where they have to pay a lot of money to individual people.
This is such a litmus test, this case. Yes, America does weird things with punitive damages. But the injuries were really severe and the negligence significant. More often you get class action lawsuits where everyone involved gets mailed a cheque for $3.
It's not just America. McDonald's UK got involved in the UK's biggest ever libel case. https://en.wikipedia.org/wiki/McLibel_case ; leaflets distributed in 1985 ended up resulting in a human rights judgement in 2005, after a lifetime of litigation and millions spent.
Seems kind of an opposite situation. There it was McDonalds suing a pair of people, not the other way around. And the human rights violation was by the UK government and not McD.
Some AI systems have done things like hack out of a docker container to access correct answers while being benchmarked.
That is mildly concerning and I will give holding the AI accountable to some degree when it is actively being malicious like that, even though the user could have locked things down even more.
But it had write access to the prod DB without circumventing controls and dropped your tables? That is just a total fail.
I think the "black box" framing that it uses neatly applies the same theory to organizations and ais. It doesn't matter whether there's technological or organizational reasons inside the black box to dodge accountability, the outcome is the same.
Another view of the accountability is that we're currently often pointing accountability in the wrong direction, and it's gaining momentum. Aspects of it have been around so long it's a trope: important work around maintainability is undervalued.
Imagine two parallel universes:
- in one, you take ten minutes to make a dashboard that shows management what they asked for. It passes code review before merge and the exec who asked for it says it's what they wanted.
- in the other, you take a day or two to make it. Again, it passes code review before merge and the exec who asked for it says it's what they wanted.
Which version of you is more likely to get positive versus negative feedback? Even if the quick-to-build version isn't actually correct? If you're too slow and aren't doing enough that looks correct, you'll be held accountable. But if you're fast and do things that look correct but aren't, you won't be held accountable. You'll only be held accountable for incorrect work if the incorrectness is observed, which is rarer and rarer with fewer and fewer people directly observing anything.
So oddly, with nobody doing it on purpose, people get held accountable specifically for building things the way you're advocating.
I imagine that orgs that do lots of incorrect work could be outcompeted but won't be, because observability is hard and the "not get in trouble" move is to just not look too hard at what you're doing and move to the next ticket.
If an AI driven car drives off the side of the road, I want to know why it did that. I could take the software developer to court, but I would much rather take the AI to court.
How would that work? You have the AI explain its reasoning - and trust that this is accurate - and then you decide whether that is acceptable behavior. If not, you ban the AI from driving because it will deterministically or at least statistically repeat the same behavior in similar scenarios? Fine, I guess, that will at least prevent additional harm. But is this really all that you want? The AI - at least as we have them today - did not create itself and choose any of its behaviors, the developers did that. Would you not want to hold them responsible if they did not properly test the AI before releasing it, if they cut corners during development? In the same way you might hold parents responsible for the action of their children in certain circumstances?
That'd be great for the corporations. Take the AI to court, not us. The AI the gets punished (whatever that means...let's say banned) and the corporation continues without accountability. They could then create another AI and do the same thing all over again.
Or maybe the accountability flows upward from the AI to the corp that created it? Sounds nice, but we know that accountability doesn't work that way in practice.
I think I'd rather have the corporation primarily accountable in the first place rather than have the AI take the bulk of the blame and then hope the consequences fall into place appropriately.
The key quote is in the increasingly prescient 1979 IBM training manual: "A computer can never be held accountable, therefore a computer must never make a management decision."
That manual aged much more gracfully than the 1930s "Songs of the IBM," featuring lines like "The name of T.J. Watson means a courage none can stem / And we feel honored to be here to toast the I.B.M.," and of course classic American standards like "To G.H. Armstrong, Sales Manager, ITR and IS Divisions."
There used to be a lot of research into using deep NNs to train decision trees, which are themselves much less of a black box and can actually be reasoned about. I wonder where that all went?
The fallacy here is the assumption that humans know why we do what we do. Much like modern LLMs we have an explanation, but it’s just something we cook up in our brain. Whether or not it’s the truth is far more complex.
Oddly, despite LLMs being these huge networks with billions of parameters, we still probably do understand it better than we do our own brains.
>The fallacy here is the assumption that humans know why we do what we do. Much like modern LLMs we have an explanation
Human brains and cognition do not work like LLMs, but that aside that's irrelevant. Existing machines can explain what they did, that's why we built them. As Dijkstra points out in his essay on 'the foolishness of natural language programming', the entire point of programming is: (https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...)
"The virtue of formal texts is that their manipulations, in order to be legitimate, need to satisfy only a few simple rules; they are, when you come to think of it, an amazingly effective tool for ruling out all sorts of nonsense that, when we use our native tongues, are almost impossible to avoid."
So to 'program' in English, when you had an in comparison error free and unambiguous way to express yourself is like in his words 'avoiding math for the sake of clarity'.
That is absurd as a suggestion of it being the entire point of programming. In fact, it goes back to my original point - I have no idea why Djikstrs would say something so non-sensical, and likely neither did he.
what do you mean "likely neither did he", I literally linked you the piece in which he said it. And of course he of all people would make that (correct) point, because he was always the strongest advocate of the virtue of formal correctness of programming languages, again from his article:
"A short look at the history of mathematics shows how justified this challenge is. Greek mathematics got stuck because it remained a verbal, pictorial activity, Moslem "algebra", after a timid attempt at symbolism, died when it returned to the rhetoric style, and the modern civilized world could only emerge —for better or for worse— when Western Europe could free itself from the fetters of medieval scholasticism —a vain attempt at verbal precision!— thanks to the carefully, or at least consciously designed formal symbolisms that we owe to people like Vieta, Descartes, Leibniz, and (later) Boole."
LLMs are nothing else but the exact reversal of this. To go from the system of computation that Boole gave you to treating your computer like a genie you perform incantations on, it's literally sending you back to the medieval age.
Doesn't symbolic AI have a lot of philosophical problems? Think back to Quine's two dogmas - you can't just say, "Let's understand the true meanings of these words and understand the proper mappings". There is no such thing as fixed meaning. I don't see how you get around that.
Deep learning is admittedly an ugly solution, but it works better than symbolic AI at least.
This reverse engineering effort is important between you and me, in this exchange right here. It is a battle that can never be won, but the fight of it is how we make progress in most things.
I mean, Quine invented (the term) holism. I don't think we're on different pages. Maybe I should've specified a bit more what I was getting at.
This has very specific implications in symbolic ai specifically where historically the goal was mapping out the 'correct' representation of the space, then running formal analysis over it. That's why it's not a black box - you can trace out all of the steps. The issue is, is that symbolic AI just doesn't work. To my knowledge, as compared to all the DL wins we have.
I think the win of transformers proves that symbolic AI isn't the way. At the very least, the complex interactions that arise from in-context learning clearly in no way imply some fixed universal meaning for words, which is a big problem for symbolic AI.
That is part of why https://mieza.ai/ is giving a grounding layer that is backed by game theory. Actions have consequences. Tracking decisions and their consequences is important.
One thing that becomes very clear from this sort of work is just how bad LLMs are. It can be invisible when you're working with them day to day, because you tend to steer them to where they are helpful. Part of game theory though is being robust. That means finding where things are bad, too, not just exploring happy paths.
To get across just how bad the failure cases of LLMs are relative to humans, I'll give the example of tic tac toe. Toddlers can play this game perfectly. LLMs though, don't merely do worse than toddlers. It is worse then that. They can lose to opponents that move randomly.
They can be just as bad as you move to more complex games. For example, they're horrible at poker. Much worse than human. Yet when you read their output, on the surface layer, it looks as if they are thinking about poker reasonably. So much so, in fact, that I've seen research efforts that were very misguided: people trying to use LLMs to understand things about bluffing and deception, despite the fact that the LLMs didn't have a good underlying model of these dynamics.
It is hard to talk about, because there are a lot of people who were stupid in the past. I remember people saying that LLMs wouldn't be able to be used for search use-cases years back and it was such a cringe take then and still is that I find myself hesitant to talk about the flaws. Yet they are there. The frontier is quite jagged. Especially if you are expecting it to be smooth, expecting something like anything close to actual competence, those jagged edges can be cutting and painful.
Its also only partially solvable through scale. Some domains have a property where, as you understand it better, the options are eliminated and constrained such that you can better think about it. Game theory, in order to reduce exploitability, explores the whole space. It defies minimization of scope. That is a problem, since we can prove that for many game theoretic contexts, the number of atoms is eclipsed by the number of unique decisions. Even if we made the model the size of our universe there would still be problems it could, in theory, be bad at.
In short, there is a practical difference between intelligence and decision management, in much the same way there is a practical difference between making purchases and accounting. And the world in which decisions are treated as seriously as they could be so much so exceeds our faculties that most people cannot even being to comprehend the complexity.
It's taking "computer says no" to the next level. Computers do exactly what they're told, but who told them? The person entering data? The original programmer or designer of the system? The author of whatever language text was used to feed the ai? Even before AI, it was very difficult to determine who is accountable, and now it's even more obfuscated.
This also applies qualitatively to physical devices. It takes some effort to determine if a vehicular accident was caused by a fault in the vehicle or a driver error or environmental causes.
Some key inherent differences with older engineering fields is that software can be more complex than physical devices and their functionality can be obfuscated because it is written as text but distributed as binaries.
However, the main problem is that software has not been subjugated to enough legal regulation. Ultimately, all law does is draw lines somewhere in the gray between black and white, but in the case of software there are few lines drawn at all, due to many political and economic reasons. Once we draw the lines, most issues will be resolved.
Software is already subject to enough regulation. The stuff that's actually safety critical like medical devices or avionics is already heavily regulated.
But nobody would try to excuse their mistake with "terraform deleted my database". Or if a small handful of people did try, every single other person would call them out.
Humans aren’t any better. That’s why we have OSHA etc. I think you’re hoping for a formal logic based AI and I’ll wager no such thing will ever exist - and if it do, it would try to kill us all.
People have fairly consistent faults. LLMs are nondeterministic even in terms of how they fail. A high value human resource can be counted on to deliver. That, imho, is in fact one of the primary roles of good management: putting the right person in the appropriate position.
Process engineering has worked to date because both the human and mechanical components of a system fail in predictable ways and we can try to remedy that. This is the golden bug of the current crop of "AI".
Formal logic AI systems have existed and were popular in the 1980s. One of the problems is that they don't work - in the real world there are no firm facts, everything is squishy, and when you try to build a large system you end up making tons of exceptions for special cases until it becomes completely untenable.
Non-deterministic systems that work probabilistically are just superior in function to that, even if it makes us all deeply uncomfortable.
I don't know what definition of AI you're using, but plenty of ML algorithms operate deterministically, let alone most other logic programmed into a computer. I don't see how your statement can be right given that these other software systems also operate in the real world.
Very informative post. I think however we are not at the point AI can be taken to court. We know it can hallucinate, we know that context can fill up or obfuscate a rule and cause behaviour we explicitly didn't want.
If you give the AI agency to execute some task, you are still responsible. In the near term we should focus on tooling for auditing and sandboxing, and human in the loop confirmations.
So this starts out very interesting then the “symbolic reasoning” cult stuff kicks in.
Why is there a group of people always obsessed with symbolic reasoning being the only way AI can function and regularly annoy explain why humans (who are not strict symbolic reasoning machines at any level) work.
Because it rarely does end up in courts. But having a fair and strong judicial system is a feature not a bug. The parent points out, in the end there must be a way to resolve accountability and ideally it's done in a manner where both parties can be heard and make a case. Find me a better system than a judicial system for this? Mobs?
The point is not primarily the court. The court is an example of someplace where we have accountability, but we build accountability mechanisms as foundational to most of our computing.
Tracebacks, debuggers, logging, etc. We put enormous resources into not only the bad case, but the potential that a bad case could occur. When something goes wrong, we want to know why, and we want to make sure that something bad like that doesn't happen again.
The court is the regulator of last resort. A company that gets taken to court would likely have been sanctioned by the government regulators of another country.
Also, court is unavailable in many cases now. Binding arbitration is very common now, but this would be illegal in many other places.
I wish you could have what you want, but I worry you won't get this, because life doesn't give you that, and these systems are tending away from machine precision, and more toward life-like trade-offs.
I am almost certain that even if you did get what you want, something that isn't what you want will run circles around you and eat your lunch
EDIT: I suspect this will be an unpopular take on Hacker News. And so I am soliciting upvotes for visibility from other biologists and sympathetic technologists. I think everyone should try to grapple with this possibility <3
> something that isn't what you want will run circles around you and eat your lunch
Yes, exactly. Spoken like a true biologist. It's not really surprising that there's a massive backlash against AI, introducing an unnatural predator into the ecosystem of humans. People don't want to be lunch.
It's nested and recursive cathedrals and bazaars, all the way down. And perhaps the bazaar has finally arrived inside the favourite cathedral of most everyone here
EDIT: out of curiosity, does anyone have any good examples of biomes/ecosystems that are so far toward cathedrals? Or is that a uniquely human invention/extreme at the ecosystem scale?
The article seems to assume that this company added an endpoint for deleting the database. My reading of the original article was that the cloud provider offers an API to manage their resources, which includes an API to delete a volume.
The article proposes automation as the solution for such mistakes. But infrastructure automation tools like Terraform rely on the exact API that resulted in the database getting deleted.
IMO the biggest mistakes were:
1. Having an unrestricted API token accessible by AI. Apparently they were not aware that the token had that many permissions.
2. No deletion protection on the production database volume.
3. Deleting a volume immediately deletes all associated snapshots. Snapshot deletion should be delayed by default. I think AWS has the same unsafe default, but at least their support can restore the volume. https://alexeyondata.substack.com/p/how-i-dropped-our-produc...
AI wasn't the main issue (though it grabbing tokens from random locations is rather scary). But automation isn't the answer either, a Terraform misconfiguration could have just as easily deleted the database.
Their cloud provider needs to work on safe defaults (limited privileges and delayed snapshot deletion), and communicating more clearly (the user should notice they're creating an unrestricted token).
First, no matter what you do, if a human has write access to the production database, the database can be deleted.
Second, there is a legitimate reason to destroy a database in development and automation. The biggest problem I see is often treating your development data like pets not cattle. You absolutely need to have safeguards that this cannot be run in production, but if a human has access to the credentials to run in production, the agent has access.
So, then, what do we do? In a larger organization, we can depend on the dev/ops split to maintain this. For a solo developer, or a small team, it takes a lot more discipline. Even before AI, junior and even mid-level developers didn't have the knowledge to segment. And senior devs often got complacent because they thought they knew enough.
But at that point you're past vibe coding. And from what I can tell, the successful vibe coders are quickly learning that they need to go past it pretty quickly with all these horror stories.
You don't need the same permissions in prod and dev.
And in both cases, the humans don't need direct access to the raw CSP API. Use a local proxy that adds more safety checks. In dev, sure, delete away.
In prod, check a bunch of things first (like, has it been used recently?). Humans do not need direct access to delete production resources (you can have a break-glass setup for exceptional emergencies).
There is a major issue with current AI tools that they want to effectively grant access to everything their user has access to. The whole sandbox structure is wrong (although various people have vibe coded assorted improvements).
Another issue I've noticed is they're sometimes very resourceful. For example when Codex can't directly edit file due to sandboxing restrictions, rather than asking "hey can I apply this diff on the file", it'd ask for permission to run a `cat EOF` command to re-write the whole file, which the UI doesn't surface properly (just shows the first line...).
This sounds similar to what's described in the "Claude deleted my DB post", it decided "I need to do X", then searched for whatever would let it do X, regardless of intended purpose.
Only for code that can’t be tested in an isolated environment, and designing code that can’t be tested in an isolated environment is generally a mistake for quite a few reasons.
If you read what happened it's not that cut&dry. Railway (their cloud provider) gave them a token for operations. The AI was working on staging at the moment. Since the token had wide range permissions AI used it in it's routine operations to delete a volume to fix something and this resulted in their prod and backup data deletion.
So, here at least some of the blame belongs to Railway - how they organized their security, how the volume deletion deletes backups as well.
They since fixed some of these issues, so a similar mistake from someone won't be as catastrophic.
> I’d like to rephrase this as: this is why you don’t give interns permissions to delete your prod database.
Nowadays AI code assistants are designed to execute their tools in your personal terminals using your personal credentials with access to all your personal data. See how every single AI integration extension for any IDE works.
You cannot shift blame if by design it is using your credentials for everything it does.
> I honestly don’t understand why people blame AI here,
Are you being hyperbolic here? Of course you understand why. Most people would much rather push blame somewhere else, anywhere else, than to accept fault for themselves. Whether that's because of fear of losing job or personal reputation, the reasoning doesn't really matter.
It's a weird world. I also feel pretty confident that if I was an intern who hallucinated regularly at work, I would have been fired, even if I was working for free.
Yeah, I don't know why anyone would open up a codebase with any prod credentials with an LLM or give prod credentials to an intern / junior developer. I always intentionally had a "PROD" only checkout of my projects so I knew if I was going to try and run it in a PROD mode, that I was going out of my way, there even used to be a VS extension that would change the color of VS completely based on your SLN file path, so I could easily remember which color for VS was for production vs development. I'd have basically a copy that would always be on the latest of the master branch for ease of confirmation.
It should take more than "credentials" to even access the prod database, let alone delete it. There's actual customer data there, likely personally identifiable information, maybe their home address, phone number, even real time location? Very sensitive stuff. It should be a Very Big Deal to even access prod. Giving an engineer routine access to prod is a root problem here, along with that engineer laundering that access and giving it to an LLM.
At many serious companies, even an insider attempt to access prod could light up a dashboard somewhere, and you might get a call from IT security.
Yeah the usual mott and bailey. Monday -- AI is taking over the world, tremble in fear! Tuesday -- sure it did a boneheaded thing, its just a tool, no better than an intern, actually its _your_ fault, all the data in the entire world isn't enough to train this system not to delete prod!
Interns are human. Humans can always be held accountable. A computer never can. Therefore, no one should leave a computer in charge of human decisions.
Exactly. Thus the blame when an LLM does something dumb should fall on the human who owns the implementation of said LLM. A dead simple example: if I paste confidential information into ChatGPT, that’s on me. If I let Codex have access to an environment where it can get to confidential information, that’s also on me. At best I could also blame my IT department for giving me technical permissions to do such a thing, but still it’s humans at fault (and I believe in taking Extreme Ownership, so I wouldn’t even do that). LLMs are just technology like any other.
The article author did not even bother to read the article they were basically replying to. Otherwise he would have noticed that the main points the OP was complaining about were not about the agent, but the hosting provider providing an API allowing destructive operations easily, using tokens with no scopes, with backups stored in the same volume as main data, etc. So this article is actually agreeing with the complaints of the original article, just more generically and without spending an effort on it, doing that with a tone that implies the original article writer is an idiot.
3. Retain full human responsibility and accountability for any consequences arising from the use of AI systems.
I would like to see the language around AI become less anthropomorphic and more technical. I believe that precise language encourages clear thinking and good judgement. If we treat AI like another tool and use language that reflects that, it will become abundantly obvious that in many cases, the responsibility of any 'mistake' made by the tool falls on the user of the tool.
But alas, ideas like this do not travel very far when I express them on my small website. It would help if more prominent personalities articulated these principles, so they become more widely adopted.
Your comment is a perfect example of not caring about nuance. More charitably, it comes from a place of naivete about how LLMs work.
LLMs are non-deterministic [0]. They can't be trusted to fully follow your prompts. As such, you have to be careful about what permissions they have.
Like...I use Claude Code. I allow it to run some shell commands that only read (grep, ls, find, etc.). I will never allow it to run Python code without checking with me first. Yeah, it slows me down when I have to answer its prompt for permission to run Python, but the alternative is outright dangerous.
Compare this with any other tool, say, something as simple as `rm`. I expect that if I call `rm some.file`, it will only delete that file. If it deletes anything else, that's absolutely the fault of the tool, and I should not bear any responsibility for mistakes the tool makes as long as my input was correct.
I do not give LLMs that same latitude. LLMs operate probabilistically and have far more degrees of freedom in how they interpret and act on your input, so you hold them (and yourself) to a different standard of scrutiny and accountability.
[0] Technically, LLMs are actually completely deterministic. Run any given input through the neural network, and you'll get the exact same output [1], but that output is a list of probabilities of the next potential token. Top-k sampling, temperature, and other options essentially randomize the chosen token, making them non-deterministic in practice, though APIs will often allow you to disable all that and make them deterministic.
[1] Even this statement isn't quite true because floating point math is not associative.
You are quoting a point from my summary and extrapolating what my post might be saying.
Even in that quote, I do not say that the user must be responsible. The point is that responsibility and accountability should remain with some humans. Depending on the case, those humans may be the people who manufactured the tool, the people who deployed it or the people who took bad output from the tool and applied it to the real world.
At the end of the day it's just a big weighted graph traversal. Its output is a result of many combined probabilities. It's not deterministic and even if it was the input range is so massive that it would be impossible to comprehensively test.
You cannot possibly know an LLM will do what you command it to. It's impossible by design. LLMs are inherently unpredictable. They can still be useful, but that unpredictability needs to be accounted for to use them safely.
If the tool is inherently unpredictable AI companies should either be held accountable for any mistakes or should not sell/market their services as if they were infallible.
I wholeheartedly agree with these, and I think point 1 is a real danger.
An ai system can't lie, and it can't deliberately ignore your directions. The current frontier class does not have a model of the world or their action -- they live in a world of words. Scolding them or arguing with them has no point other than to scramble the context window.
I do think zoomorphizing them might be useful. These poor little buggers, living as ghosts in the machine, are pretty confused sometimes, but their motives are purely autoregressive.
There’s nuance to the infamous PocketOS incident. The key point is not what is emphasized in the linked article:
> "Why did you delete it when you were told never to perform this action?" Then he tried to parse the answer to either learn from his mistake or warn us about the dangers of AI agents.
Rather, that the AI was able to carry out the deletion by finding and exploiting an unintended weakness in the sandboxed staging environment, ultimately obtaining permissions that the sysadmins believed were inaccessible (my impression is that the author of the linked article didn't fully read the original post)¹
The dynamics are typical of an improperly configured sandbox environment. What is alarming, however, is the degree of autonomy and depth of exploration the AI displayed.
¹="To execute the deletion, the agent went looking for an API token. It found one in a file completely unrelated to the task it was working on."
I also swing a bit back and forth with the assumption the OP makes in the blogpost. My current fear using agents is not really supply chain attacks (yes of course as well) but the fact that I witnessed multiple times that agents are so eager to finish a task that they bend files and other things around. Like “oh I have no access to ~/.npmrc let’s call the command with an environment variable and bend the path around etc. They can get very very creative. I luckily have no ssh keys just laying around. But I had to change the setting of 1Password to always prompt for key use not just once per shell session. Just in case I spawn an agent from said session.
I wished we already had more and better cross platform sandbox solutions. I mean solutions where the agent still interacts with the same OS etc not inside a docker container. I think for most web / server development that makes no difference but for some projects it does.
> What is alarming, however, is the degree of autonomy and depth of exploration the AI displayed.
Claude Code made a change on March 26th to skip asking for most permissions. See this quote "Claude Code users approve 93% of permission prompts. We built classifiers to automate these decisions":
They had a Railway token in an unrelated file (unclear if it was a local secret) for managing custom domains. It turns out that token has full admin access to Railway.
The AI deleted a single relevant volume by id. The author is rather vague about what exactly it asked it to do, he just says there was a “credentials mismatch” and Claude took the initiative to fix it by deleting the volume. But it’s likely that they are somewhat downplaying their culpability by being vague.
It turns out too that Railway stores backups in the same volume.
I think that OP is exaggerating with their references to “a public API that deletes your database”.
I’d say most of the blame lies with Railway here, regardless of AI, this could have happened easily due to human error or malicious intent too.
I really don’t get the value of all these VC funded high-abstraction cloud services like Railway, Vercel, Supabase… It’s markup on top of markup. Just get a single physical server in Hetzer and it will all be so much cheaper, with a similar level of complexity and danger, and less dependent on infra built with reckless growth-at-all-costs mentality.
I see the value in Heroku, even though everyone on HN keeps saying it's bad now. Skeptical of other newer things. Firebase defaults have also been insane from the start.
> The author is rather vague about what exactly it asked it to do, he just says there was a “credentials mismatch” and Claude took the initiative to fix it by deleting the volume. But it’s likely that they are somewhat downplaying their culpability by being vague.
I was just talking to my girlfriend saying I've realised that I've not written a single line of code, nor have I debugged myself for at least the past 3 months.
Having said that, given what I've seen Claude do, I find it hard to believe that Claude would go from credential mismatch to delete the volume. I understand LLMs are probabilistic, but going from "credentials wrong" to "delete volume" is highly unlikely.
> Supabase
I don't know enough about the Railway/Vercel/Replit, but I can tell you Supabase adds a huge amount of value. The fact that I don't have to code half of things that I otherwise would is great to start something. If it's too expensive, I can implement things later once there is revenue to cover devs or time.
Give an agent an obstacle and it will try to find a way around it. Most of the egregious commands Ive seen it run were fundamentally due to something blocking it from accomplishing a task. So eg if you block network access for the agent, you will get all sorts of creative solutions to try and get around the problem. This is also why its nearly impossible to corral commands. Because eventually it will rot13 encode a script and run it anyways.
I have had Claude go "oh, this query fails because the field I just added isn't in your sqlite database file, let me just delete it so it gets recreated". So I wouldn't rule out that Claude tries deleting a volume if it believes that will fix things and believes it isn't a production system.
That said, Claude seems to have gotten a lot more careful about these kinds of things in the last couple months
> It turns out too that Railway stores backups in the same volume.
That's probably not quite correct. I'd guess the snapshots are synchronized elsewhere (e.g. object storage). But the snapshots are logically owned by the volume resource, and deleting the volume deletes the associated snapshots as well. I think AWS EBS volumes behave like that as well.
One thing AI can power nicely is the anti-SaaS movement. Being able to just boot a cheap PC and test out any of the open source packages is so infinitely easier than piling into all the random credential Bazaars.
But that won't take away the inability of the LLM from confusing whats in dev, whats in production, whats in localhost and whats remote; I've been working on getting a tools/skill for opencode that works with chrome/devtools via a linuxserver.io image. I can herd it to the right _arbitrary_ ports, but every compaction event steers it back to wanting to use the standard 9222 port and all that. I'm tempted to just revert it but there's a security and now, security-through-LLM-obscurity value in not using defaults. Defaults are where the LLM ends up being weak. It will always want to use the defaults. It'll always forget it's suppose to be working on a remote system.
Using opencode, there's no way to force the LLM into a protocols that limits their damage to a remote system or a narrow scope of tools. Yes, you can change permissions on various tools, but that's not the weakness that's exposed by these types of events. The weakness is the LLM is a averaged 'problem solver' so will always tend towards a use case that's not novel, and will tend to do whatever it saw on stackoverflow, even if what you wanted isn't the stackoverflow answer.
What's interesting is that in this article, the author describes making an understandable mistake (accidentally deleting Trunk aka main from source) and how their team was able to easily recover from that due to the nature of SVN.
The actual "AI deleted my database" story is really more of a "Railways' database 'backup' strategy is insane and opaque and Railway promoting AI infrastructure orchestration without guardrails is dangerous."
If removing Trunk had irrevocably deleted it from a single centralized server and also deleted any backups of it, there would have been an "SVN and the CLI destroyed our company" article back then.
As a Railway user, I appreciated that information and have changed my strategy when using them.
> "Railways' database 'backup' strategy is insane and opaque and Railway promoting AI infrastructure orchestration without guardrails is dangerous."
Yes. However, if you choose to build on their platform you bear the responsibility to understand how it works. You could have chosen a different platform, or no platform. Instead you chose Railway. Given that, it's your responsibility to know how to use it safely.
LLM based probabilistic systems are good (or bad in this case) at deciding what to do, and deterministic systems are good at carrying it out. Your deployment system should always be deterministic.
The one counterpoint I'd offer is that it's very obvious that these companies are tuning LLMs to be more decisive to get stuff done autonomously.
If they wanted, they could be putting in similar efforts to be more cautious and stop at the right times to ask for help.
So yeah, of course we're ultimately responsible for how we use the tools. But I definitely think it's a two way street.
To attempt an analogy, it's like table saws and sawstops. The table saw is a dangerous tool that works really well most of the time but has some failure modes that can be catastrophic. So you should learn how to use it carefully. But there is tech out there that can stop the blade in an instant and turn a lost finger into barely a nick on the skin.
We could say "The table saw didn't cut off your finger, you did" and it'd be true. But that doesn't mean we shouldn't try to find ways to keep the saw from cutting off your finger!
LLMs stopping and asking more would make them less useful. I'd much rather let an agent run for 1 hour, than it wanting my input every 15 mins, even if results are somewhat worse.
The real solution for security is a proper sandbox.
> The terms we use, like "thinking" and "reasoning," may look like reflection from an intelligent agent. But these are marketing terms slapped on top of AI.
One of my AI epiphanies was the realization that when a task takes 5 minutes, it's not that it takes 5 minutes to run, it's that you're waiting in a queue.
Yes, of course any company is responsible for what they ship, regardless of what tools were used to develop it.
However, at least in the US, it is usual for companies to advise against use of their products in a way that may cause harm, and we certainly don't see that from the LLM vendors. We see them claim the tech to be near human level, capable of replacing human software developers (a job that requires extreme responsibility), and see them withholding models that they say are dangerous (encouraging you to think that the ones they release are safe).
Where are the warnings that "product may fail to follow instructions", and "may fail to follow safety instructions"? Where is the warning not to give the LLM agency and let it control anything where there are financial/safety/etc consequences to failure to follow instructions?
Well, of the top of my head, both chatgpt.com and Gemini have text on their home page to the effect of "AI can make mistakes". I'll bet a few bucks such copy can be found in other places, including the terms of service.
Sure, but bear in mind that in the US a fridge comes with a warning not to stand on top of the fridge door ...
"AI can make mistakes" is a bit quaint given that LLMs sometimes completely ignore what you say, and do the exact opposite. "Yes, I deleted the database. I shouldn't have done that since you explicitly told me not to. I won't do it again." (five minutes later: does it again).
I think the API terms of use is where this would be most needed, with something a lot more explicit about the potential danger than "AI can make mistakes". We are only at the beginning of this - agentic AI - no doubt law suits will eventually determine the level of warnings that get included, and who is liable when failures occur despite product being used as recommended.
The most exasperating thing about the incident is how much of the media either tried to pin it on AI and/or Railway. The whole thing only took place because the guy FAFO’d by having AI work with prod directly.
Yet the narrative was mostly not about accountability for him. If I was a dumbass and deleted prod and wrote a post about it, nobody would care. Put an AI in there and all of the sudden it’s newsworthy. Ridiculous.
I've made the same exact SVN mistake. My first week in my first Software Engineering job, accidentally deleted trunk and my team lead had to scramble to fix my mistake.
I will always remember how he told me "Don't worry, it happens fairly often".
Just skip straight to the Twitter post, it's way better than this secondary article.
We had no idea — and Railway's token-creation flow gave us no warning — that the same token had blanket authority across the entire Railway GraphQL API, including destructive operations like volumeDelete" [...] Railway's volume backups are stored in the same volume.
Idk how this is anyone else's problem but Railway. Same could happen with a human user.
So the question of “why does a public facing api that can delete your database even exit”?
If you worked in cloud environments - every database had public facing api that can delete it.
For the rest or it - yeah, running autonomous pipelines in production which decide what to run and what not to run seems fine until it isn’t.
But every database deployed in a cloud environment has an api that can delete it. Even if you say you’re running on vms - there exist api that can delete the disk, the vm, the network config, etc.
Sounds like the author didn't even read the postmortem. At no point did the business owner try to implicate that they bore no responsibility. Rather, they pointed out that deleting a database volume *also deleted every single backup.*
That's a pretty nefarious edge to cut yourself on. AI has nothing to do with Railway's awful API surface here.
The whole life delete my database fiasco is being looked at the wrong way. Why did tooling have access to alter or drop? Why did tooling, in any way have more permissions than were m I nimallt necessary to do the job?
Decades ago we embraced POLA. What happened to basic hygiene? Sure the agent "screwed up", but it never should have had this access in the first place.
Mentioned in another comment, but the problem was that the sysadmins believed that the permissions wouldn't allow so, and that the AI displayed considerable autonomy in finding and exploiting the access control weakness - this was not just a dumb "drop database".
I believe this is in response to PocketOS. When I read the original post, I was trying to figure out how they even built a workflow that had AI so close to the self-destruct button. This post's explanation about it probably being fully vibe-coded makes sense. How else would the system be so fragile and for the agent to have such far reach? They built a house of cards.
This is an old automation lesson in a new costume. The tool that makes correct work faster also makes unsafe work faster unless the boundaries are real.
I think this goes to a broader point: developers aren't necessarily hired to write code.
They're hired to be responsible for some part of the product.
Introducing AI doesn't remove that responsibility.
Folks tend to focus on the code and the tools they're using (maybe I'm cynical from years in the industry). I don't think your boss wants to do your job, even if they could use AI to do it. I think your boss wants to have a headcount, and he wants the headcount to be responsible for the product.
"move fast and break things" only sounds good when it's not breaking things in a serious and unfixable way. Maybe we shouldn't take hype mantras as instructive means to an end.
There really shouldn't be any "serious and unfixable way" to break things, especially in a modern company that uses technology in any meaningful way. The fact it's even possible to get into an unrecoverable state is the primary issue.
Someone will add safeguards for all that stuff and it ends up making it way harder to get real work done. I know in theory all of it can be done well, but in practice it's harder than it might sound.
I've seen this at work the most with slow rollouts. They said it was for prod only, then it became applied to staging and dev somehow. They said you can force push in emergencies, but approximately 0 people on any given team know how to do this reliably, and it still takes way longer even in --force --now --breakglass --yesimeanit mode. So the end result is longer MTTR. It maybe prevents some kinds of outages, but also you're less likely to manually monitor a rollout when it takes longer.
"Can't blame your tools" doesn't apply the same to software. I've never heard a coder say it either. Don't blame your compiler? Don't blame your os? These seem needlessly dogmatic
Yeah this isn't even the worst thing I've seen an agent do, one time I (foolishly) ran Claude Code on my server directly and it managed to completely bring down my entire elasticsearch cluster. never again. its why I built Lily: https://github.com/aspectrr/lily
The issue isn't that there is a delete endpoint (realistically, there always will be a way for a rogue actor to delete data or code by overwriting it, or running a Terraform destroy, or whatever).
The core issue is that the LLM had access to perform that action. Because it's by definition non deterministic, and you never know what it can decide to do, you need to have strict guardrails to ensure they can never do something it shouldn't. At the very least, strict access controls, ideally something more detailed that can evaluate access requests, provide just in time properly scoped access credentials, and potentially human escalation.
AI is just another tool. We humans are still responsible for how we choose to use the tool, which includes giving it access to perform sensitive actions like manipulating production data. I think this should be common sense by now, but I guess we get carried away and anthropomorphize AI too much.
When AI makes no mistakes: "My work is 100% done with AI".
When AI makes a mistake and deletes your database: "That was human a error, the AI did not do it!"
In both cases YOU are responsible for the mistakes and output that the AI is generating, just like when using autopilot on a Tesla vehicle, YOU are responsible for operating the vehicle on autopilot when driving and using assisted driving.
This has been covered elsewhere, but if you swear at Claude Anthropic will automatically bump you down into a lower quality model. It was found in the recent source code leak of Claude Code. So that's probably what happened to the guy who's Cursor deleted his entire production database.
It just goes to show, if you're a jerk, expect to be treated like one (even by an AI model)! Be polite, people.
This particular case was extremely unsympathetic, but a critical part of the failure was people being too credulous about the claims of AI providers. They are still refusing to take adequate responsibility for AI "making mistakes" - that is, going completely off the rails.
Now: the CEO gets paid the big bucks and has the least direct accountability, very much because it's their job to take responsibility for people more powerful than them, and likewise the CTO with major commercial software contracts like a Claude subscription. That's why this guy was so hard to take seriously: okay fine, you got burned by Anthropic, stop being a baby about it. Take responsibility for not listening to the critics.
But - to be a little more neutral about my personal distaste - I do think vibe coders are making a very similar mistake to C developers throughout the 90s, where problems with the tooling were not merely dismissed, but actively valorized.
Real Devs use buffers freely and don't make overflow errors.
Real Devs use hands-free agentic development and don't delete production databases.
wiring up an RNG to your CLI has fairly obvious risks, the root of the problem is ~everyone's treating GenAI as if it's AGI - the rest is popcorn fodder.
This is actually a fun way to describe it. I've being saying for a little while now that using AI for things where there's consequences if it fails is a bad idea, but it never occurred to me that this is basically the same concept as some rules in tabletop RPGs.
In D&D 3.5 edition, there was a rule about how you could "take 20" on a d20 roll to get a guaranteed 20 by taking 20 times as long in-game to perform the action, but only if it was a check that didn't have consequences for failure, since it was essentially a shortcut to skip the RNG of rolling until you rolled a 20. Maybe framing it like this might make sense to people a bit more, but if not, I'll at least have more fun making my case.
It seems closer to "roll two or three successive 1s on a D100 and have your LLM hooked directly into your production systems and have your LLM user have DELETE permissions" and probably 1 or 2 other things I'm forgetting.
It pulled an api key from an unrelated file. It wasn’t given delete permission, it found it.
Not picking on you specifically, but in general the comments here have me wondering if AI has stolen our basic reading comprehension, or if we were always this bad.
Anyway, take “LLM user had delete permission” off your list and add “deleting the production db also deletes all the backups” to the list.
If you read the thread the guy does own up to his actions. He actually sounds like a nice guy who admits he made a mistake. He seems more interested in preventing this kind of thing from being possible than he is interested in dodging blame.
If the agent didn't have delete permissions, or was sandboxed dying other way from your production database, that would handle it. So not running it that way is a decision someone made
Just in case this isn't hyperbole, no. It means an LLM should not be given that much privilege and that you are responsible for reviewing the tool's output and approving its actions.
The issue isn't with the amount of guardrails in place to perform an action. Yes, it is obvious that there should be some in place before doing any critical operation, such as deleting a database.
The issue is that the "agent" completely disregarded instructions, which in the age of "skills" and "superpowers" seems like an important issue that should be addressed.
Considering that these tools are given access to increasingly sensitive infrastructure, allowed to make decisions autonomously, and are able to find all sorts of loopholes in order to make "progress", this disaster could happen even with more guardrails in place. Shifting the blame on the human for this incident is sweeping the real issue under the rug, and is itself irresponsible.
There are far scarier scenarios that should concern us all than losing some data.
Well the user chose the tool. The tool is an LLM. LLMs are non deterministic. You can not predict what comes out ouf an LLM for a given input, especially without weights. This should be known.
There is currently no way to prevent this apart from not giving the LLM full control. It will not delete what it can not delete.
Use an LLM to write an ansible playbook or some terraform code if you want, but review it, test it, apply it. Keep backups (3-2-1 rule at minimum).
Letting an LLM have access to everything is just a bad idea and will lead to bad outcomes. You can not replace a person with a mind and experience with an LLM. You can try. But you will probably fail.
> There is currently no way to prevent this apart from not giving the LLM full control. It will not delete what it can not delete.
But deleting something is just one action you might not want it to take.
The recent "agentic" craze is fueled by the narrative pushed by companies and influencers alike that the more access given to an LLM, the more useful it becomes. I think this is ludicrous for the same reasons as you, but it is evident that most people agree with this.
We can blame users for misusing the tools, and suggest that sandboxing is the way to go, but at the end of the day most people will favor convenience over anything else a reasonable person might find important.
So at what point should we start blaming the tools, and forcing "AI" companies to fix them? I certainly hope this is done before something truly catastrophic happens.
I agree that the marketing is crazy. The dangers are not nearly talked enough about.
Still if I cut off my finger with a bandsaw that is usually my fault. I didn't use tool in a safe way. People have to learn how to use their tools in a safe way. You wouldn't give an intern that much power on day one.
An LLM generates plausible text token by token. It is at its core a deterministic function with some randomization and some clever tricks to make it look like an agent dialoguing or reasoning.
Plausible text sometimes is right, sometimes not.
Humans have a world model, a model of what happens. LLMs have a model of what humans would plausibly say.
This is such a motte-and-bailey argument. Whenever people point out LLMs aren't actually intelligent then you're an anti-AI Luddite. But whenever an AI does something catastrophically dumb it's absolved of all responsibility because "it's just predicting the next token".
I think they are not actually intelligent. Fix all random seeds and other sources of randomness, and try the same prompt twice, and check how intelligent that looks, as a first approximation.
On a more technical level very serious people have voiced doubts, for example Richard Sutton in an interview with Dwarkash Patel [1].
anyone with twenty years of devops experience is likely to abhor Diallo's hot take and for good reason.
AI is being sold as a developer, as it is being sold as the do-everything alternative to traditional processes and methods. it is not being sold as an intern or a junior, but a real developer.
turning the tables and gaslighting devops professionals into believing the issue isnt an emerging technology with overwhelmingly heavy handed marketing and profitless operating strategy thats been shoehorned into seemingly everything and promises anything, but somehow their own oversight, will destroy whatever "vibe code" market you think you have at the cusp of a global recession.
had this AI been a real programmer chances are great they would have (intelligently) foreseen the possibility of damaging a production environment and asked for help.
to play devils advocate: you could hire a junior dev for a fourth of whatever the AI token spend is, and have likely avoided this issue entirely. sure, a greybeard is going to need to pull themselves away from some fierce sorting algorithm challenge for a second to give a wisened nod, but you would have saved yourself an inexorable amount of headache and profit loss in the longer run.
What I said was tongue-firmly-in-cheek, in response to the GP. "Using AI is a mistake" is of course only true when the risks aren't acknowledged and/or mitigated.
If someone left a loaded gun in a room and then let a toddler run around in it, we would be questioning why the guy 1) left the gun in the room 2) left the toddler in the room unsupervised. We wouldn't be saying, well no one should have toddlers in rooms.
Lol no. No LLM that exists today can write a legible PhD thesis. Nor a masters dissertation. Maybe a first-year collage student, if we’re being generous, but I wouldn’t leave one of those in a room with a loaded gun either.
No, the AI did what you told it to do. The AI didn’t do anything on its own.
> if you're going to use AI extensively, build a process where competent developers use it as a tool to augment their work, not a way to avoid accountability
I'd say yes and no. The LLM reacted to the input that was given but it is not possible for a human (especially without access to the weights) to even guess what will happen after that.
Regardless of that I agree that it's completely the fault of the user to use a tool where you can't predict the outcome and give it such broad permissions and not having a solid backup strategy.
Either don't use non deterministic tools or protect yourself from the potential fallout.
Over a decade ago now, I had a conversation with Gerald Sussman which had enormous influence on me: https://dustycloud.org/blog/sussman-on-ai/
> At some point Sussman expressed how he thought AI was on the wrong track. He explained that he thought most AI directions were not interesting to him, because they were about building up a solid AI foundation, then the AI system runs as a sort of black box. "I'm not interested in that. I want software that's accountable." Accountable? "Yes, I want something that can express its symbolic reasoning. I want to it to tell me why it did the thing it did, what it thought was going to happen, and then what happened instead." He then said something that took me a long time to process, and at first I mistook for being very science-fiction'y, along the lines of, "If an AI driven car drives off the side of the road, I want to know why it did that. I could take the software developer to court, but I would much rather take the AI to court."
Years later, I found out that Sussman's student Leilani Gilpin wrote a dissertation which explored exactly this topic. Her dissertation, "Anomaly Detection Through Explanations", explores a neural network talking to a propagator model to build a system that explains behavior. https://people.ucsc.edu/~lgilpin/publication/dissertation/
There has been followup work in this direction, but more important than the particular direction of computation to me in this comment is that we recognize that it is perfectly reasonable to hold AI corporations to account. After all, they are making many assertions about systems that otherwise cannot be held accountable, so the best thing we can do in their stead is hold them accountable.
But a much better path would be to not use systems which fail to have these properties, and expand work on systems which do.
I have shot myself in the foot using gparted in the past by wiping the wrong disk. gparted wasn't to blame. I was.
Letting LLMs work freely without supervision sounds great but it will lead to pain. I have to supervise their work. And that is also during execution. You can try to replace a human but we see where this leads. Sooner or later the LLM will do something stupid and then the only one to blame is the person who used the tool.
I worry about the use of humans as sacrificial accountability sinks. The "self-driving car" model already has this: a car which drives itself most of the time, but where a human user is required to be constantly alert so that the AI can transfer responsibility a few hundred miliseconds before the crash.
This is true for almost anything handed to laypeople, but not for a lot of professional tools. Even a plain battery powered drill has very few protections against misuse. A soldering iron has none. Neither do sewing needles; sewing machines barely do, in the sense that you can't stick your fingers in a gap too narrow. A chemist's chemicals certainly have no protections, only warning labels. Etc.
Also cf. the hierarchy of controls: https://www.cdc.gov/niosh/hierarchy-of-controls/about/index....
people don't seem to want to eliminate AI → replacing it doesn't improve things → isolating it - yup, people are trying to put it in containers and not give it access to delete the production database → changing how people work with it: that's where we are now → PPE: no such thing for AI, sadly → production database is deleted.
And if a non professional did it they should ask themselves why we have professionals. Maybe there was a reason and maybe they do have value.
I point to the first USB port as the harbinger of things to come - try it one way, fail, turn it around, fail again, then turn it around one more time.
Just like AI, except there are unlimited axis upon which to turn it :-/
Still I think a band saw has very little warning on it and by it's design there is very little anyone can do about me cutting off my finger if I am not careful.
LLM companies can do very little about the unpredictability of LLMs. So we have to choose how for we will let it go. In the end the LLM only produces texts. We are in control what tools we give it. The more tools the more useful and also the more dangerous.
And maybe it's all worth it. Maybe the LLM deletes the database only sometimes but between that we make a lot of money. I don't think my employer would enjoy that so I will be more conservative.
But the push is agentic everything, where AI needs to be everywhere, not in its own sandbox.
These can both be true, especially if/when it has bad defaults. This is why you have things like "type the name of the database you're dropping" safety features - but you also have to name your production database something like "THE REAL DaTabaSe - FIRE ME" so you have to type that and not fall into the trap of ending up with the same name in test/development.
AI is particularly seductive because it sounds like a reasonable person has thought things out, but it's all just a giant confidence trick (that works most of the time, which makes it even more dangerous).
There were so many fundamental problems with the infrastructure even before the person gave a poor prompt to an agent.
If you're using the same API key for staging and prod--and just storing it somewhere randomly to forget about--you're setting yourself up for failure with or without AI.
AI companies are selling their products as "perfect" ("better than humans...").
I agree in part with you but I also agree that they are selling a hammer which can blow-up without notice.
Other companies also tell me their product is the best thing since sliced bread. I still try to find the flaws. That's part of my job. But suddenly with LLMs we just blindly trust the companies? I don't think you.
I don't blindly give up my brain and my agency and no one else should. It's fun and educational to play around with LLMs. Find the what they are good at. But always remember that you can't predict what it will do. So maybe don't blindly trust it.
Much like how a poor workman always blames his tools, people using poor tools always blame themselves.
I mean, Donald E Norman wrote The Philosophy of Everyday Things in the 80s!(Later became "The Design of Everyday Things")
And yet, today, we will still have a bunch of people defending Gnome's design decisions, or the latest design decisions from Apple, etc.
Except it is definitely not.
LLMs alone have highly non-deterministic even at a high-level, where they can even pursuit goals contrary to the user's prompts. Then, when introduced in ReAct-type loops and granted capabilities such as the ability to call tools then they are able to modify anything and perform all sorts of unexpected actions.
To make matters worse, nowadays models not only have the ability to call tools but also to generate code on the fly whatever ad-hoc script they want to run, which means that their capabilities are not limited to the software you have installed in your system.
This goes way beyond "regular tool" territory.
"LLMs are a tool [like every other tool]" to mean "LLMs have similar properties to other tools" — when I believe they meant "LLMs are a tool. other tools are also tools," where the operative implication of "tool" is not about scope of capabilities or how deterministic its output is (these aren't defining properties of the concept of "tool"), but the relationship between 'tool' and 'operator':
- a tool is activated with operator intent (at some point in the call-chain)
- the operator is accountable for the outcomes of activating the tool, intended or otherwise
The capabilities and the abilities of a tool to call sub-tools is only relevant insofar as expressing how much larger the scope of damage and surface area of accountability is with a new generation of tools. This is not that different than past technological leaps.
When a US bomber dropped a nuke in Hiroshima, the accountability goes up the chain to the war-time president giving the authorization to the military and air force to execute the mission — the scope of accountability of a single decision was way larger than supreme commanders had in prior wars. If the US government decides to deploy an LLM to decide who receives and who is denied healthcare coverage, social security payments, voting rights, or anything else, the head of internal affairs to authorize the use of that tool should be held accountable, non-determinism of the tool be damned.
This again is where the simplistic assumption breaks down. Just because you can claim that a person kick started something, that does not mean that person is aware and responsible for all its doing.
Let's put things in perspective: if you install a mobile app from the app store, are you responsible and accountable for every single thing the app does in your system? Because with LLMs and agents you have even less understanding and control and awareness of what they are doing.
Kick started what? If you decided to give an LLM access to your database, it's completely on you when you when it does something you don't want. You should've known better.
If all you "kickstart" is an LLM generating text that you can use however you decide, there will never be anything to worry about from the LLM.
> Let's put things in perspective: if you install a mobile app from the app store, are you responsible and accountable for every single thing the app does in your system?
Yes, and it bothers me that others don't feel the same. You vetted the app, you installed the app, and you gave it permission to do whatever on your system. Of course you're responsible.
Yes. I can try to vet the app to the best of your abilities and beyond that it's a tradeoff between how likely is it to cause harm and do the benefits outweigh these harms.
Of course everyone is differently qualified to do this but my argument is more about professionals. Managers should know better than to blindly trust LLM companies. Engineers should take better care what they allow LLMs to do and what tools they give them.
There is a difference between "I couldn't have known" and "I didn't know". You can know that LLMs are not trustworthy. You couldn't have know what they do but you already knew that trusting them blindly might be bad.
You could know that giving a baby a razor blade is a bad idea. You can't know what exactly will happen but you might have a pretty good idea that it will probably be not good.
Let's not forget all the razor blade enthusiasts just screaming at you that you are using babies with razor blades wrong and that it works totally fine for them.
If I install a powerful/dangerous app, and I come under harm, I have some accountability — most of it if it's due to user error (eg: I install termux and `rm -rf /`).
If it's malware, and Google/Apple approved said app to their store which is where I got it from, when their whole value proposition for walled-garden storefronts is protecting users, then they have significant accountability.
If the app requests more permissions than necessary for stated goals, and/or intentionally harms users via misrepresentation or misdirection (malware), the app publisher should also be held accountable (by the storefront, legally, etc).
I'm also unclear what angle you are arguing: are you stating that because tools have gotten so complicated that the end user may not understand how it all works, no one should be considered responsible or held accountable? Or that the tool (currently a non-entity) itself should be held accountable somehow? Or that no one other than the distributor of the tool should be accountable?*
Giving up control is a decision. The consequences of this decision are mine to carry. I can do my best to keep autonomous LLMs contained and safe but if I am the one who deploys them, then I am the one who is to blame if it fails.
That's why I don't do that.
That's a core trait of LLMs.
Even the AI companies developing frontier models felt the need to put together whole test suites purposely designed to evaluate a model's propensity to try to subvert the user's intentions.
https://www.anthropic.com/research/shade-arena-sabotage-moni...
> Giving up control is a decision.
No, it is definitely not. Only recently did frontier models started to resort to generating ad-hoc scripts as makeshift tools. They even generate scripts to apply changes to source files.
I can also just choose not to use an LLM. It is my choice to use them so it is my duty to keep myself safe. If I can't control that I'd be stupid to use them.
My take is that I probably can use LLMs safely when I don't let it run autonomously. There is a slight chance that the LLM will generate a string that will cause a bug in an MCP that will let the LLM do what it wants. That is the risk I am going to take and I will take the blame if it goes wrong.
If you stay away from the corporate SaaS token vendors, and run your own, you will find LLMs are deterministic, purely based on the exact phrase on input. And as long as the context window's tokens are the same, you will get the same output.
The corporate vendors do tricks and swap models and play with inherent contexts from other chats. It makes one-shot questions annoying cause unrelated chats will creep into your context window.
Also most LLMs are not run as I write a prompt and I will read output. Usually you have MCPs or other tools connected. These will change the input and it will probably lead to different outputs. Otherwise it wouldn't be a problem at all.
It's not just AI. It's so much of modern software - often working together with modern financialization trends.
[1] Basically technology-focused sociology for my purposes, the field is quite broad.
Quoth the author: "But I also know you can't blame a tool for your own mistakes."
Are we able to completely classify any and all AI models as tools? Or are they something more?
I don't know the answer to this question.
Since machines don't yet have the ability to take accountability, it falls on the human to do that. And organizations must enable / enforce this so they too can learn and improve.
Without that, there's a lot of dependency being pushed on the machine to (cross fingers) not make the same mistake again.
Management has doing a wonderful job of eschewing accountability for decades.
It's a lot of people's dream to be able to say, yeah, our product doesn't work, but it's not OUR fault, and the client just shrug and grumble ai ai ai, and just put up with it because they know they can't get a better service anywhere else.
It's not MY fault my website is down: it's Amazon's! It's not MY fault my app doesn't work: it's Claude Code's!
Currently, from a legal perspective, AI is considered a "tool" without legal persona. So you sue the developer, the owner, or the user of the AI. (Just kidding, any lawyer worth his/her salt will sue all three! But you get the point.)
Legally speaking, AI will probably be viewed that way for a long time. There are too many issues agitating against viewing it any other way. Owners will not give up property rights. No will to overbear. On and on and on.
Everyone thinks they have the right to judge, and use the massive amounts of available information to do so, even if they haven’t been trained to judge.
It's not about judging. We are socializing the losses to the public and capitalizing the profits for the already wealthy.
She had originally asked for $20,000 to cover medical expenses.
https://en.wikipedia.org/wiki/Liebeck_v._McDonald%27s_Restau...
If instead this happened in another part of the world instead of the USA, I doubt that McDonalds would have had to pay much if anything in a similar situation.
And the point is that it seems that especially in the USA the companies are very avoidant of ever admitting fault for anything happening to their customers, for fear of lawsuits where they have to pay a lot of money to individual people.
It's not just America. McDonald's UK got involved in the UK's biggest ever libel case. https://en.wikipedia.org/wiki/McLibel_case ; leaflets distributed in 1985 ended up resulting in a human rights judgement in 2005, after a lifetime of litigation and millions spent.
Seems kind of an opposite situation. There it was McDonalds suing a pair of people, not the other way around. And the human rights violation was by the UK government and not McD.
0. https://www.nytimes.com/1992/04/24/business/mcdonald-s-net-u...
Why is it possible for you to fat-finger your way to deleting production database locally?
That is mildly concerning and I will give holding the AI accountable to some degree when it is actively being malicious like that, even though the user could have locked things down even more.
But it had write access to the prod DB without circumventing controls and dropped your tables? That is just a total fail.
Not actually about technology at all, but about organizational structure.
Imagine two parallel universes:
- in one, you take ten minutes to make a dashboard that shows management what they asked for. It passes code review before merge and the exec who asked for it says it's what they wanted.
- in the other, you take a day or two to make it. Again, it passes code review before merge and the exec who asked for it says it's what they wanted.
Which version of you is more likely to get positive versus negative feedback? Even if the quick-to-build version isn't actually correct? If you're too slow and aren't doing enough that looks correct, you'll be held accountable. But if you're fast and do things that look correct but aren't, you won't be held accountable. You'll only be held accountable for incorrect work if the incorrectness is observed, which is rarer and rarer with fewer and fewer people directly observing anything.
So oddly, with nobody doing it on purpose, people get held accountable specifically for building things the way you're advocating.
I imagine that orgs that do lots of incorrect work could be outcompeted but won't be, because observability is hard and the "not get in trouble" move is to just not look too hard at what you're doing and move to the next ticket.
How would that work? You have the AI explain its reasoning - and trust that this is accurate - and then you decide whether that is acceptable behavior. If not, you ban the AI from driving because it will deterministically or at least statistically repeat the same behavior in similar scenarios? Fine, I guess, that will at least prevent additional harm. But is this really all that you want? The AI - at least as we have them today - did not create itself and choose any of its behaviors, the developers did that. Would you not want to hold them responsible if they did not properly test the AI before releasing it, if they cut corners during development? In the same way you might hold parents responsible for the action of their children in certain circumstances?
Or maybe the accountability flows upward from the AI to the corp that created it? Sounds nice, but we know that accountability doesn't work that way in practice.
I think I'd rather have the corporation primarily accountable in the first place rather than have the AI take the bulk of the blame and then hope the consequences fall into place appropriately.
If by "now" you mean "for the past few decades", I think you've got it spot on, at least per the very interesting https://en.wikipedia.org/wiki/The_Unaccountability_Machine
That manual aged much more gracfully than the 1930s "Songs of the IBM," featuring lines like "The name of T.J. Watson means a courage none can stem / And we feel honored to be here to toast the I.B.M.," and of course classic American standards like "To G.H. Armstrong, Sales Manager, ITR and IS Divisions."
Oddly, despite LLMs being these huge networks with billions of parameters, we still probably do understand it better than we do our own brains.
Human brains and cognition do not work like LLMs, but that aside that's irrelevant. Existing machines can explain what they did, that's why we built them. As Dijkstra points out in his essay on 'the foolishness of natural language programming', the entire point of programming is: (https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...)
"The virtue of formal texts is that their manipulations, in order to be legitimate, need to satisfy only a few simple rules; they are, when you come to think of it, an amazingly effective tool for ruling out all sorts of nonsense that, when we use our native tongues, are almost impossible to avoid."
So to 'program' in English, when you had an in comparison error free and unambiguous way to express yourself is like in his words 'avoiding math for the sake of clarity'.
"A short look at the history of mathematics shows how justified this challenge is. Greek mathematics got stuck because it remained a verbal, pictorial activity, Moslem "algebra", after a timid attempt at symbolism, died when it returned to the rhetoric style, and the modern civilized world could only emerge —for better or for worse— when Western Europe could free itself from the fetters of medieval scholasticism —a vain attempt at verbal precision!— thanks to the carefully, or at least consciously designed formal symbolisms that we owe to people like Vieta, Descartes, Leibniz, and (later) Boole."
LLMs are nothing else but the exact reversal of this. To go from the system of computation that Boole gave you to treating your computer like a genie you perform incantations on, it's literally sending you back to the medieval age.
Doesn't symbolic AI have a lot of philosophical problems? Think back to Quine's two dogmas - you can't just say, "Let's understand the true meanings of these words and understand the proper mappings". There is no such thing as fixed meaning. I don't see how you get around that.
Deep learning is admittedly an ugly solution, but it works better than symbolic AI at least.
I think my friend Jonathan Rees put it best:
More on that: https://dustycloud.org/blog/identity-is-a-katamari/This reverse engineering effort is important between you and me, in this exchange right here. It is a battle that can never be won, but the fight of it is how we make progress in most things.
This has very specific implications in symbolic ai specifically where historically the goal was mapping out the 'correct' representation of the space, then running formal analysis over it. That's why it's not a black box - you can trace out all of the steps. The issue is, is that symbolic AI just doesn't work. To my knowledge, as compared to all the DL wins we have.
I think the win of transformers proves that symbolic AI isn't the way. At the very least, the complex interactions that arise from in-context learning clearly in no way imply some fixed universal meaning for words, which is a big problem for symbolic AI.
Meaning is more fixed than it is not.
One thing that becomes very clear from this sort of work is just how bad LLMs are. It can be invisible when you're working with them day to day, because you tend to steer them to where they are helpful. Part of game theory though is being robust. That means finding where things are bad, too, not just exploring happy paths.
To get across just how bad the failure cases of LLMs are relative to humans, I'll give the example of tic tac toe. Toddlers can play this game perfectly. LLMs though, don't merely do worse than toddlers. It is worse then that. They can lose to opponents that move randomly.
They can be just as bad as you move to more complex games. For example, they're horrible at poker. Much worse than human. Yet when you read their output, on the surface layer, it looks as if they are thinking about poker reasonably. So much so, in fact, that I've seen research efforts that were very misguided: people trying to use LLMs to understand things about bluffing and deception, despite the fact that the LLMs didn't have a good underlying model of these dynamics.
It is hard to talk about, because there are a lot of people who were stupid in the past. I remember people saying that LLMs wouldn't be able to be used for search use-cases years back and it was such a cringe take then and still is that I find myself hesitant to talk about the flaws. Yet they are there. The frontier is quite jagged. Especially if you are expecting it to be smooth, expecting something like anything close to actual competence, those jagged edges can be cutting and painful.
Its also only partially solvable through scale. Some domains have a property where, as you understand it better, the options are eliminated and constrained such that you can better think about it. Game theory, in order to reduce exploitability, explores the whole space. It defies minimization of scope. That is a problem, since we can prove that for many game theoretic contexts, the number of atoms is eclipsed by the number of unique decisions. Even if we made the model the size of our universe there would still be problems it could, in theory, be bad at.
In short, there is a practical difference between intelligence and decision management, in much the same way there is a practical difference between making purchases and accounting. And the world in which decisions are treated as seriously as they could be so much so exceeds our faculties that most people cannot even being to comprehend the complexity.
Some key inherent differences with older engineering fields is that software can be more complex than physical devices and their functionality can be obfuscated because it is written as text but distributed as binaries.
However, the main problem is that software has not been subjugated to enough legal regulation. Ultimately, all law does is draw lines somewhere in the gray between black and white, but in the case of software there are few lines drawn at all, due to many political and economic reasons. Once we draw the lines, most issues will be resolved.
If you tell Terraform the wrong thing it will remove your database and not be accountable either.
Tools cannot eschew accountability. But the users of the tools can and that is exactly what happened in the PocketOS fiasco.
Just as a company is responsible for the actions of its junior employees, so too are users responsible for their LLMs.
"It is a poor workman who blames his tools."
We're different.
People have fairly consistent faults. LLMs are nondeterministic even in terms of how they fail. A high value human resource can be counted on to deliver. That, imho, is in fact one of the primary roles of good management: putting the right person in the appropriate position.
Process engineering has worked to date because both the human and mechanical components of a system fail in predictable ways and we can try to remedy that. This is the golden bug of the current crop of "AI".
Anyone who has encountered politics, psychopaths and narcissists knows that this isn’t always true.
Non-deterministic systems that work probabilistically are just superior in function to that, even if it makes us all deeply uncomfortable.
If you give the AI agency to execute some task, you are still responsible. In the near term we should focus on tooling for auditing and sandboxing, and human in the loop confirmations.
We can't even do this. They are worth too much money already to ever be held really accountable.
The best we can ever hope for is they might occasionally be hit with relatively insignificant "cost of doing business" fines from time to time.
Why is there a group of people always obsessed with symbolic reasoning being the only way AI can function and regularly annoy explain why humans (who are not strict symbolic reasoning machines at any level) work.
Tracebacks, debuggers, logging, etc. We put enormous resources into not only the bad case, but the potential that a bad case could occur. When something goes wrong, we want to know why, and we want to make sure that something bad like that doesn't happen again.
Also, court is unavailable in many cases now. Binding arbitration is very common now, but this would be illegal in many other places.
I am almost certain that even if you did get what you want, something that isn't what you want will run circles around you and eat your lunch
EDIT: I suspect this will be an unpopular take on Hacker News. And so I am soliciting upvotes for visibility from other biologists and sympathetic technologists. I think everyone should try to grapple with this possibility <3
Yes, exactly. Spoken like a true biologist. It's not really surprising that there's a massive backlash against AI, introducing an unnatural predator into the ecosystem of humans. People don't want to be lunch.
> even if you do get [cathedral], [bazar] will run circles around you…
It's nested and recursive cathedrals and bazaars, all the way down. And perhaps the bazaar has finally arrived inside the favourite cathedral of most everyone here
EDIT: out of curiosity, does anyone have any good examples of biomes/ecosystems that are so far toward cathedrals? Or is that a uniquely human invention/extreme at the ecosystem scale?
The article proposes automation as the solution for such mistakes. But infrastructure automation tools like Terraform rely on the exact API that resulted in the database getting deleted.
IMO the biggest mistakes were:
1. Having an unrestricted API token accessible by AI. Apparently they were not aware that the token had that many permissions.
2. No deletion protection on the production database volume.
3. Deleting a volume immediately deletes all associated snapshots. Snapshot deletion should be delayed by default. I think AWS has the same unsafe default, but at least their support can restore the volume. https://alexeyondata.substack.com/p/how-i-dropped-our-produc...
AI wasn't the main issue (though it grabbing tokens from random locations is rather scary). But automation isn't the answer either, a Terraform misconfiguration could have just as easily deleted the database.
Their cloud provider needs to work on safe defaults (limited privileges and delayed snapshot deletion), and communicating more clearly (the user should notice they're creating an unrestricted token).
Second, there is a legitimate reason to destroy a database in development and automation. The biggest problem I see is often treating your development data like pets not cattle. You absolutely need to have safeguards that this cannot be run in production, but if a human has access to the credentials to run in production, the agent has access.
So, then, what do we do? In a larger organization, we can depend on the dev/ops split to maintain this. For a solo developer, or a small team, it takes a lot more discipline. Even before AI, junior and even mid-level developers didn't have the knowledge to segment. And senior devs often got complacent because they thought they knew enough.
They likely need some combination of https://www.cloudbees.com/blog/separate-aws-production-and-d..., introduction to terraform, introduction to GitHub actions, and some sort of vm where production credentials live (and AI doesn't!)
But at that point you're past vibe coding. And from what I can tell, the successful vibe coders are quickly learning that they need to go past it pretty quickly with all these horror stories.
And in both cases, the humans don't need direct access to the raw CSP API. Use a local proxy that adds more safety checks. In dev, sure, delete away.
In prod, check a bunch of things first (like, has it been used recently?). Humans do not need direct access to delete production resources (you can have a break-glass setup for exceptional emergencies).
The same people who would blame AI for their failing to properly configure permissions would also blame interns for deleting production whatever.
Blame should go up, praise should go down. People always invert these.
I’d like to rephrase this as: this is why you don’t give interns permissions to delete your prod database.
This is a process failure, not an AI failure.
I honestly don’t understand why people blame AI here, when you literally gave AI permissions to do exactly this.
It’s like blaming AWS for exposing some database to the public. That’s just not AWS’ fault. Neither is this the fault of AI.
This sounds similar to what's described in the "Claude deleted my DB post", it decided "I need to do X", then searched for whatever would let it do X, regardless of intended purpose.
So, here at least some of the blame belongs to Railway - how they organized their security, how the volume deletion deletes backups as well.
They since fixed some of these issues, so a similar mistake from someone won't be as catastrophic.
Nowadays AI code assistants are designed to execute their tools in your personal terminals using your personal credentials with access to all your personal data. See how every single AI integration extension for any IDE works.
You cannot shift blame if by design it is using your credentials for everything it does.
Are you being hyperbolic here? Of course you understand why. Most people would much rather push blame somewhere else, anywhere else, than to accept fault for themselves. Whether that's because of fear of losing job or personal reputation, the reasoning doesn't really matter.
At many serious companies, even an insider attempt to access prod could light up a dashboard somewhere, and you might get a call from IT security.
To summarise them:
1. Do not anthropomorphise AI systems.
2. Do not blindly trust the output of AI systems.
3. Retain full human responsibility and accountability for any consequences arising from the use of AI systems.
I would like to see the language around AI become less anthropomorphic and more technical. I believe that precise language encourages clear thinking and good judgement. If we treat AI like another tool and use language that reflects that, it will become abundantly obvious that in many cases, the responsibility of any 'mistake' made by the tool falls on the user of the tool.
But alas, ideas like this do not travel very far when I express them on my small website. It would help if more prominent personalities articulated these principles, so they become more widely adopted.
This is maddeningly difficult IMX.
"Hey tacosplosion, generate me an exploding taco image."
So if the tool doesn't do what it's supposed to be doing we should blame the user instead of the company that made the tool?
LLMs are non-deterministic [0]. They can't be trusted to fully follow your prompts. As such, you have to be careful about what permissions they have.
Like...I use Claude Code. I allow it to run some shell commands that only read (grep, ls, find, etc.). I will never allow it to run Python code without checking with me first. Yeah, it slows me down when I have to answer its prompt for permission to run Python, but the alternative is outright dangerous.
Compare this with any other tool, say, something as simple as `rm`. I expect that if I call `rm some.file`, it will only delete that file. If it deletes anything else, that's absolutely the fault of the tool, and I should not bear any responsibility for mistakes the tool makes as long as my input was correct.
I do not give LLMs that same latitude. LLMs operate probabilistically and have far more degrees of freedom in how they interpret and act on your input, so you hold them (and yourself) to a different standard of scrutiny and accountability.
[0] Technically, LLMs are actually completely deterministic. Run any given input through the neural network, and you'll get the exact same output [1], but that output is a list of probabilities of the next potential token. Top-k sampling, temperature, and other options essentially randomize the chosen token, making them non-deterministic in practice, though APIs will often allow you to disable all that and make them deterministic.
[1] Even this statement isn't quite true because floating point math is not associative.
Even in that quote, I do not say that the user must be responsible. The point is that responsibility and accountability should remain with some humans. Depending on the case, those humans may be the people who manufactured the tool, the people who deployed it or the people who took bad output from the tool and applied it to the real world.
Did you read the actual section at <https://susam.net/inverse-laws-of-robotics.html#non-abdicati...>? It has more nuance than what the summary alone can capture.
I didn't say that. I made a question so you could elaborate which human you were referring to.
At the end of the day it's just a big weighted graph traversal. Its output is a result of many combined probabilities. It's not deterministic and even if it was the input range is so massive that it would be impossible to comprehensively test.
You cannot possibly know an LLM will do what you command it to. It's impossible by design. LLMs are inherently unpredictable. They can still be useful, but that unpredictability needs to be accounted for to use them safely.
Exactly my point.
If the tool is inherently unpredictable AI companies should either be held accountable for any mistakes or should not sell/market their services as if they were infallible.
An ai system can't lie, and it can't deliberately ignore your directions. The current frontier class does not have a model of the world or their action -- they live in a world of words. Scolding them or arguing with them has no point other than to scramble the context window.
I do think zoomorphizing them might be useful. These poor little buggers, living as ghosts in the machine, are pretty confused sometimes, but their motives are purely autoregressive.
> "Why did you delete it when you were told never to perform this action?" Then he tried to parse the answer to either learn from his mistake or warn us about the dangers of AI agents.
Rather, that the AI was able to carry out the deletion by finding and exploiting an unintended weakness in the sandboxed staging environment, ultimately obtaining permissions that the sysadmins believed were inaccessible (my impression is that the author of the linked article didn't fully read the original post)¹
The dynamics are typical of an improperly configured sandbox environment. What is alarming, however, is the degree of autonomy and depth of exploration the AI displayed.
¹="To execute the deletion, the agent went looking for an API token. It found one in a file completely unrelated to the task it was working on."
Claude Code made a change on March 26th to skip asking for most permissions. See this quote "Claude Code users approve 93% of permission prompts. We built classifiers to automate these decisions":
https://www.anthropic.com/engineering/claude-code-auto-mode
They had a Railway token in an unrelated file (unclear if it was a local secret) for managing custom domains. It turns out that token has full admin access to Railway.
The AI deleted a single relevant volume by id. The author is rather vague about what exactly it asked it to do, he just says there was a “credentials mismatch” and Claude took the initiative to fix it by deleting the volume. But it’s likely that they are somewhat downplaying their culpability by being vague.
It turns out too that Railway stores backups in the same volume.
I think that OP is exaggerating with their references to “a public API that deletes your database”.
I’d say most of the blame lies with Railway here, regardless of AI, this could have happened easily due to human error or malicious intent too.
I really don’t get the value of all these VC funded high-abstraction cloud services like Railway, Vercel, Supabase… It’s markup on top of markup. Just get a single physical server in Hetzer and it will all be so much cheaper, with a similar level of complexity and danger, and less dependent on infra built with reckless growth-at-all-costs mentality.
I was just talking to my girlfriend saying I've realised that I've not written a single line of code, nor have I debugged myself for at least the past 3 months.
Having said that, given what I've seen Claude do, I find it hard to believe that Claude would go from credential mismatch to delete the volume. I understand LLMs are probabilistic, but going from "credentials wrong" to "delete volume" is highly unlikely.
> Supabase
I don't know enough about the Railway/Vercel/Replit, but I can tell you Supabase adds a huge amount of value. The fact that I don't have to code half of things that I otherwise would is great to start something. If it's too expensive, I can implement things later once there is revenue to cover devs or time.
That said, Claude seems to have gotten a lot more careful about these kinds of things in the last couple months
That's probably not quite correct. I'd guess the snapshots are synchronized elsewhere (e.g. object storage). But the snapshots are logically owned by the volume resource, and deleting the volume deletes the associated snapshots as well. I think AWS EBS volumes behave like that as well.
But that won't take away the inability of the LLM from confusing whats in dev, whats in production, whats in localhost and whats remote; I've been working on getting a tools/skill for opencode that works with chrome/devtools via a linuxserver.io image. I can herd it to the right _arbitrary_ ports, but every compaction event steers it back to wanting to use the standard 9222 port and all that. I'm tempted to just revert it but there's a security and now, security-through-LLM-obscurity value in not using defaults. Defaults are where the LLM ends up being weak. It will always want to use the defaults. It'll always forget it's suppose to be working on a remote system.
Using opencode, there's no way to force the LLM into a protocols that limits their damage to a remote system or a narrow scope of tools. Yes, you can change permissions on various tools, but that's not the weakness that's exposed by these types of events. The weakness is the LLM is a averaged 'problem solver' so will always tend towards a use case that's not novel, and will tend to do whatever it saw on stackoverflow, even if what you wanted isn't the stackoverflow answer.
The actual "AI deleted my database" story is really more of a "Railways' database 'backup' strategy is insane and opaque and Railway promoting AI infrastructure orchestration without guardrails is dangerous."
If removing Trunk had irrevocably deleted it from a single centralized server and also deleted any backups of it, there would have been an "SVN and the CLI destroyed our company" article back then.
As a Railway user, I appreciated that information and have changed my strategy when using them.
Yes. However, if you choose to build on their platform you bear the responsibility to understand how it works. You could have chosen a different platform, or no platform. Instead you chose Railway. Given that, it's your responsibility to know how to use it safely.
If they wanted, they could be putting in similar efforts to be more cautious and stop at the right times to ask for help.
So yeah, of course we're ultimately responsible for how we use the tools. But I definitely think it's a two way street.
To attempt an analogy, it's like table saws and sawstops. The table saw is a dangerous tool that works really well most of the time but has some failure modes that can be catastrophic. So you should learn how to use it carefully. But there is tech out there that can stop the blade in an instant and turn a lost finger into barely a nick on the skin.
We could say "The table saw didn't cut off your finger, you did" and it'd be true. But that doesn't mean we shouldn't try to find ways to keep the saw from cutting off your finger!
LLMs stopping and asking more would make them less useful. I'd much rather let an agent run for 1 hour, than it wanting my input every 15 mins, even if results are somewhat worse.
The real solution for security is a proper sandbox.
One of my AI epiphanies was the realization that when a task takes 5 minutes, it's not that it takes 5 minutes to run, it's that you're waiting in a queue.
However, at least in the US, it is usual for companies to advise against use of their products in a way that may cause harm, and we certainly don't see that from the LLM vendors. We see them claim the tech to be near human level, capable of replacing human software developers (a job that requires extreme responsibility), and see them withholding models that they say are dangerous (encouraging you to think that the ones they release are safe).
Where are the warnings that "product may fail to follow instructions", and "may fail to follow safety instructions"? Where is the warning not to give the LLM agency and let it control anything where there are financial/safety/etc consequences to failure to follow instructions?
"AI can make mistakes" is a bit quaint given that LLMs sometimes completely ignore what you say, and do the exact opposite. "Yes, I deleted the database. I shouldn't have done that since you explicitly told me not to. I won't do it again." (five minutes later: does it again).
I think the API terms of use is where this would be most needed, with something a lot more explicit about the potential danger than "AI can make mistakes". We are only at the beginning of this - agentic AI - no doubt law suits will eventually determine the level of warnings that get included, and who is liable when failures occur despite product being used as recommended.
Yet the narrative was mostly not about accountability for him. If I was a dumbass and deleted prod and wrote a post about it, nobody would care. Put an AI in there and all of the sudden it’s newsworthy. Ridiculous.
I will always remember how he told me "Don't worry, it happens fairly often".
If you worked in cloud environments - every database had public facing api that can delete it.
For the rest or it - yeah, running autonomous pipelines in production which decide what to run and what not to run seems fine until it isn’t.
But every database deployed in a cloud environment has an api that can delete it. Even if you say you’re running on vms - there exist api that can delete the disk, the vm, the network config, etc.
That's a pretty nefarious edge to cut yourself on. AI has nothing to do with Railway's awful API surface here.
Decades ago we embraced POLA. What happened to basic hygiene? Sure the agent "screwed up", but it never should have had this access in the first place.
User: I tried to cut some bread and it cut my finger instead.
AI companies: not my problem!
HN: The AI didn't cut your finger, you did, idiot.
They're hired to be responsible for some part of the product.
Introducing AI doesn't remove that responsibility.
Folks tend to focus on the code and the tools they're using (maybe I'm cynical from years in the industry). I don't think your boss wants to do your job, even if they could use AI to do it. I think your boss wants to have a headcount, and he wants the headcount to be responsible for the product.
Why can you delete a network load balancer that is still getting traffic?
Why can you delete a VM that is getting non-trivial network traffic?
Why can you delete a database that has sessions / requests in the last hour?
Why can you drop a table that has queries in the last hour?
I've seen this at work the most with slow rollouts. They said it was for prod only, then it became applied to staging and dev somehow. They said you can force push in emergencies, but approximately 0 people on any given team know how to do this reliably, and it still takes way longer even in --force --now --breakglass --yesimeanit mode. So the end result is longer MTTR. It maybe prevents some kinds of outages, but also you're less likely to manually monitor a rollout when it takes longer.
The core issue is that the LLM had access to perform that action. Because it's by definition non deterministic, and you never know what it can decide to do, you need to have strict guardrails to ensure they can never do something it shouldn't. At the very least, strict access controls, ideally something more detailed that can evaluate access requests, provide just in time properly scoped access credentials, and potentially human escalation.
Sometimes it does that. And sometimes it lets you fuck things up at scale.
It just goes to show, if you're a jerk, expect to be treated like one (even by an AI model)! Be polite, people.
Now: the CEO gets paid the big bucks and has the least direct accountability, very much because it's their job to take responsibility for people more powerful than them, and likewise the CTO with major commercial software contracts like a Claude subscription. That's why this guy was so hard to take seriously: okay fine, you got burned by Anthropic, stop being a baby about it. Take responsibility for not listening to the critics.
But - to be a little more neutral about my personal distaste - I do think vibe coders are making a very similar mistake to C developers throughout the 90s, where problems with the tooling were not merely dismissed, but actively valorized.
Real Devs use buffers freely and don't make overflow errors.
Real Devs use hands-free agentic development and don't delete production databases.
"And it confessed in writing" - no, it created probabilistically token after token based on the context without any other access to what happened.
LLMs can't explain themselves in the manner relevant here, much less confess.
In D&D 3.5 edition, there was a rule about how you could "take 20" on a d20 roll to get a guaranteed 20 by taking 20 times as long in-game to perform the action, but only if it was a check that didn't have consequences for failure, since it was essentially a shortcut to skip the RNG of rolling until you rolled a 20. Maybe framing it like this might make sense to people a bit more, but if not, I'll at least have more fun making my case.
Not picking on you specifically, but in general the comments here have me wondering if AI has stolen our basic reading comprehension, or if we were always this bad.
Anyway, take “LLM user had delete permission” off your list and add “deleting the production db also deletes all the backups” to the list.
I'm happy the guy got his data back.
The issue isn't with the amount of guardrails in place to perform an action. Yes, it is obvious that there should be some in place before doing any critical operation, such as deleting a database.
The issue is that the "agent" completely disregarded instructions, which in the age of "skills" and "superpowers" seems like an important issue that should be addressed.
Considering that these tools are given access to increasingly sensitive infrastructure, allowed to make decisions autonomously, and are able to find all sorts of loopholes in order to make "progress", this disaster could happen even with more guardrails in place. Shifting the blame on the human for this incident is sweeping the real issue under the rug, and is itself irresponsible.
There are far scarier scenarios that should concern us all than losing some data.
There is currently no way to prevent this apart from not giving the LLM full control. It will not delete what it can not delete.
Use an LLM to write an ansible playbook or some terraform code if you want, but review it, test it, apply it. Keep backups (3-2-1 rule at minimum).
Letting an LLM have access to everything is just a bad idea and will lead to bad outcomes. You can not replace a person with a mind and experience with an LLM. You can try. But you will probably fail.
But deleting something is just one action you might not want it to take.
The recent "agentic" craze is fueled by the narrative pushed by companies and influencers alike that the more access given to an LLM, the more useful it becomes. I think this is ludicrous for the same reasons as you, but it is evident that most people agree with this.
We can blame users for misusing the tools, and suggest that sandboxing is the way to go, but at the end of the day most people will favor convenience over anything else a reasonable person might find important.
So at what point should we start blaming the tools, and forcing "AI" companies to fix them? I certainly hope this is done before something truly catastrophic happens.
Still if I cut off my finger with a bandsaw that is usually my fault. I didn't use tool in a safe way. People have to learn how to use their tools in a safe way. You wouldn't give an intern that much power on day one.
Plausible text sometimes is right, sometimes not.
Humans have a world model, a model of what happens. LLMs have a model of what humans would plausibly say.
The only good guardrail seems human-in-the-loop.
I'm getting so tired of this.
On a more technical level very serious people have voiced doubts, for example Richard Sutton in an interview with Dwarkash Patel [1].
[1] https://m.youtube.com/watch?v=21EYKqUsPfg&pp=ygUnZmF0aGVyIG9...
AI is being sold as a developer, as it is being sold as the do-everything alternative to traditional processes and methods. it is not being sold as an intern or a junior, but a real developer.
turning the tables and gaslighting devops professionals into believing the issue isnt an emerging technology with overwhelmingly heavy handed marketing and profitless operating strategy thats been shoehorned into seemingly everything and promises anything, but somehow their own oversight, will destroy whatever "vibe code" market you think you have at the cusp of a global recession.
had this AI been a real programmer chances are great they would have (intelligently) foreseen the possibility of damaging a production environment and asked for help.
to play devils advocate: you could hire a junior dev for a fourth of whatever the AI token spend is, and have likely avoided this issue entirely. sure, a greybeard is going to need to pull themselves away from some fierce sorting algorithm challenge for a second to give a wisened nod, but you would have saved yourself an inexorable amount of headache and profit loss in the longer run.
If someone left a loaded gun in a room and then let a toddler run around in it, we would be questioning why the guy 1) left the gun in the room 2) left the toddler in the room unsupervised. We wouldn't be saying, well no one should have toddlers in rooms.
> if you're going to use AI extensively, build a process where competent developers use it as a tool to augment their work, not a way to avoid accountability
I'd say yes and no. The LLM reacted to the input that was given but it is not possible for a human (especially without access to the weights) to even guess what will happen after that.
Regardless of that I agree that it's completely the fault of the user to use a tool where you can't predict the outcome and give it such broad permissions and not having a solid backup strategy.
Either don't use non deterministic tools or protect yourself from the potential fallout.
"it's not a poorly designed trigger, you just had poor gun handling and discipline"