I'm not sure a prompt-injection-secure LLM is even possible, any more than a human who isn't susceptible to social engineering can exist. The issues right now are that LLMs are much more trusting than humans, and that one strategy works across a whole host of instances of the same model.
A big part of the problem is that prompt injections are "meta" to the models, so model-based detection is potentially getting scrambled by the injection as well. You need an analytic pass to flag/redact potential injections; a well-aligned model should be robust at that point.
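To make "analytic pass" concrete, here's a minimal sketch of the kind of thing I mean (the patterns and the function name are made up for illustration, not a real detector): plain code scans and redacts the untrusted text before the model ever sees it, so the injection never gets a chance to argue with its own detector.

    import re

    # Toy heuristics for illustration only; a real pass would be much broader.
    SUSPICIOUS_PATTERNS = [
        r"ignore (all )?(previous|prior) instructions",
        r"disregard .{0,40} and instead",
        r"you are now [a-z ]+",
        r"system prompt",
    ]

    def flag_or_redact(untrusted_text: str) -> tuple[str, bool]:
        """Scan untrusted content with plain code, outside the model."""
        flagged = False
        cleaned = untrusted_text
        for pattern in SUSPICIOUS_PATTERNS:
            if re.search(pattern, cleaned, flags=re.IGNORECASE):
                flagged = True
                cleaned = re.sub(pattern, "[possible injection removed]",
                                 cleaned, flags=re.IGNORECASE)
        return cleaned, flagged

Obviously regexes won't stop a determined attacker; the point is only that the flagging happens outside the model's context, where the injected text can't rewrite the rules of its own review.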
I would hope that anyone with the knowledge and interest to run OpenClaw is already mostly aware of the risks and potential solutions canvassed in this article, but I suspect I'd be shocked and disappointed if I checked.
Yeah, that. I had an external "security consultant" (trained monkey) tell me the other day that something fucking stupid we were doing was fine. There are many, many people who should not be allowed near keyboards these days.
> Despite all advances:
> * No large language model can reliably detect prompt injections
Interesting, isn't it, that we'd never say "no database manager can reliably detect SQL injections", and that the fact that it's true is no problem at all.
The difference isn't that SQL is secure by design; it's that chatbot agents are insecure by design.
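For anyone who hasn't seen it spelled out, parameterisation is roughly the whole trick (sqlite3 here just as a stand-in): the query structure and the untrusted value travel through separate channels, so the database never needs to "detect" anything. An LLM agent has no equivalent second channel; instructions and data arrive as one token stream.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    user_input = "alice' OR '1'='1"  # attacker-controlled

    # Injectable: splicing the input into the query text turns data into code.
    #   query = f"SELECT * FROM users WHERE name = '{user_input}'"

    # Parameterised: the value goes through a separate channel and is never
    # interpreted as SQL, so no "injection detection" is needed at all.
    rows = conn.execute("SELECT * FROM users WHERE name = ?",
                        (user_input,)).fetchall()
    print(rows)  # [] -- the weird string simply doesn't match any name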
I can't see chatbots getting parameterised querying soon. :)
Because then the malicious web page or whatever just contains skills-formatted instructions telling the agent to hand over your bank account password, or whatever.