> The incident also prompted LiteLLM to make changes to its compliance processes, including shifting from controversial startup Delve to Vanta for compliance certifications.
This is pretty funny.
The leaked Excel sheet with Delve's customers is basically a shortlist of targets for hackers to try now. Not that they necessarily have bad security, but you can play the odds.
I am not defending Delve or anything, and I hope they get what they deserve, but there is no correlation between SOC2 certification and the actual cyber capability of a company. SOC2 and ISO27001 are just compliance, and frankly most of it is BS.
Sure, it's certainly not perfect, and a lot of the documentation is something you just write for the audit and never look at again, but that's why I am saying play the odds. The average Delve customer startup might be less secure than the average startup that has to justify its processes to a real auditor.
Some of it is, but things like "your stage/dev and production environments should be completely isolated from each other" are valid, and most tech companies get lazy on this front.
Personally, I use them as frameworks to justify management processes.
A) I tie the cybersecurity activities first to business-revenue-enabling outcomes (unblocked contracts), and second to reduced risk (people react less to risk reduction when deciding where to spend the buck).
B) With the political capital from point A), I actually operate a cybersecurity program and justify DevSecOps artefacts, threat modeling, incident response exercises, etc.
What these SOC2 reports and ISO27k certificates really are is a standardization for communicating the org's activities to outside people, plus getting an external person to vet that the org doesn't bulls*t too much. But at the end of the day, the organization is responsible for keeping its house in order.
Yes, they may be BS in certain cases, but it's still better than nothing. They do at least get companies to consider the questions instead of claiming unawareness, and most importantly they facilitate incremental improvement.
It might feel like BS, and I'm inclined to agree with you because of the security theater aspect. (For example, Mercor had their verification done by what appears to be a legitimate audit firm.)
But it's not useless. It still forces you to go through a very useful exercise of risk modeling and preparation that you most likely won't do without a formal program.
If your goal is to maximize your posture against cyber threats, spending your time on SOC 2 compliance with Vanta (or similar) is a waste of time if you consider the amount of time spent compared to security gained.
It's incredibly easy to get SOC 2 audited and still have terrible security.
> forces you to go through a very useful exercise of risk modeling
Have you actually done this in Vanta, though? You would have to go out of your way to do it in a manner that actually adds significant value to your security posture.
(I don't think SOC/ISO are a waste of time. We do it at our company, but for reasons that have nothing to do with security)
Probably the most useful aspect of SOC2 is that it gives the technical side of the business an easy excuse for spending time and money on security, which in a startup environment is not always easy otherwise (i.e. "we have to dedicate time to update our out-of-date dependencies, otherwise we'll fail SOC2").
If you do it well, a startup can go through SOC2 and use it as an opportunity to put together a reasonable cybersecurity practice. Though, yeah, one does not actually beget the other: you can also very easily get a SOC2 report with minimal findings while having a really bad cybersecurity practice.
That's exactly what I've done in the past. We had to be SOC2 and PCI DSS compliant (high volume, so it couldn't be through an SAQ). I wouldn't say the auditor helped much in improving our security posture, but it allowed me to justify some changes and improvements that did help a lot.
It doesn't force you to go through risk modelling, because by now most SOC2 platforms have templates where you just fill in the blanks and sign off. Conversely, the auditors are paid by the company, so their incentive is to pass the audit so the client can get what it wants.
Because there's no adversarial pressure as a check and balance to the security, and AICPA is clearly just happy to take the fees, it's a hollow shirt. It's like this scene from The Big Short. https://youtu.be/mwdo17GT6sg?si=Hzada9JcdIPfdyFN&t=140
As usual, it's only people that care that force positive change. The companies that want good security will have good security. Customers who want good security will demand good security.
The malicious LiteLLM versions were live for 40 minutes. Wiz estimates 500,000 machines were affected. LiteLLM is present in 36% of cloud environments. Forty minutes was enough.
This is a good reminder that any tool handling sensitive data — even internal ones — needs to be transparent about where data goes. The assumption that SaaS tools protect your data is getting harder to defend.
I'll add this: containers aren't as strong a security boundary as VMs, but even so, a successful attack now requires infection of the container AND a concurrent container-escape vulnerability. That's a really high bar; someone would need to burn a 0-day on that.
The bar right now is really, really low - blocking post-install scripts seems to be treated as "good enough" by most. Using a container-based sandbox is going to be infinitely better than not using one at all, and container-based solutions have a much easier time integrating with other tools and IDEs which is important for adoption. The usability and resource consumption trade-off that comes with VMs is pretty bad.
Just don't commit any mortal sins of container misconfigurations - don't mount the Docker socket inside the container (tempting when you're trying to build container images inside a container!), don't use --privileged, don't mount any host paths other than the project folder.
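Following those rules, a minimal sketch of a hardened invocation (the image name, mount paths, and extra flags here are my own assumptions, not from this thread):

```shell
# Hypothetical hardened `docker run`: only the project folder is mounted,
# no --privileged, no Docker socket, and all optional capabilities dropped.
docker run --rm -it \
  --cap-drop=ALL \
  --security-opt no-new-privileges \
  -v "$PWD":/workspace \
  -w /workspace \
  node:20 bash
```

Note that `--cap-drop=ALL` goes further than Docker's defaults; anything inside that legitimately needs a capability (e.g. chown-ing files) would have to get it back explicitly via `--cap-add`.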
I don't think it's crazy to imagine a misconfigured production environment. I always see these same examples of how "containers aren't really secure" and they're very amateur sins to commit though, as you mention.
AFAIK a comprehensive SELinux policy (like Red Hat ships) set to enforce will also prevent quite a few file accesses or modifications from escapes.
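For instance, on a Red Hat-family host you can check that the policy is both active and actually confining container processes (the exact context shown is typical, not guaranteed):

```shell
# Verify SELinux is enforcing, not merely loaded in permissive mode.
getenforce

# Show the SELinux context a container process runs under; with the stock
# policy this is a confined type such as container_t, which limits what an
# escaped process can read or modify on the host.
docker run --rm alpine cat /proc/self/attr/current
```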
Confusingly, Docker now has a product called "Docker Sandboxes" [1] which claims to use "microVMs" for sandboxing (separate VM per "agent"), so it's unclear to me if those rely on the same trust boundaries that traditional docker containers do (namespaces, seccomp, capabilities, etc), or if they expect the VM to be the trust boundary.
https://cloud.google.com/blog/products/gcp/exploring-contain...
Running npm on your dev machine? Or running npm inside Docker?
I would always prefer the latter but would love to know what your approach to security is that's better than running npm inside Docker.
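For what it's worth, a minimal sketch of the latter (image tag and user mapping are my own choices):

```shell
# Run npm install in a throwaway container: only the project directory is
# visible, and the container is discarded afterwards (--rm). Running as the
# host UID keeps node_modules owned by you instead of root.
docker run --rm \
  -u "$(id -u):$(id -g)" \
  -v "$PWD":/app \
  -w /app \
  node:20-alpine npm install
```

Post-install scripts still execute, but they can only touch /app and whatever network access the container has, not the rest of your home directory.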
I definitely want to know how it is worse than running npm directly on the host.
Escapes:
- privileged mode (misconfiguration, not default or common)
- excessive capabilities (same)
- CAP_SYS_ADMIN (same)
- CAP_SYS_PTRACE (same)
- DAC_READ_SEARCH (same)
- Docker socket exposure (same)
- sensitive host path mounts (same)
- CVE-2022-0847 (valid. https://www.docker.com/blog/vulnerability-alert-avoiding-dir...)
- CVE-2022-0185 (mitigated by default Docker config, requires misconfiguration of capabilities)
- CVE-2021-22555 (mitigated by default Docker config, requires misconfiguration of seccomp filters)
default seccomp filters in docker: https://docs.docker.com/engine/security/seccomp/#significant...
privileges that are dropped: https://docs.docker.com/engine/containers/run/#runtime-privi...
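Most of the misconfigurations in the list above show up directly in `docker inspect`; a rough check (the container name here is a placeholder):

```shell
# Surface the risky settings from the escapes list for a running container:
# privileged mode, added capabilities, and host path mounts.
docker inspect my-container \
  --format 'privileged={{.HostConfig.Privileged}} capadd={{.HostConfig.CapAdd}} binds={{.HostConfig.Binds}}'
```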
[1]: https://www.docker.com/products/docker-sandboxes/
You can see from the commit history that ~10% of the code was written by agents.
The rest was all written by me.
Unlike other criticisms of the project, this one feels personal as it is objectively incorrect.