10 comments

  • nope1000 7 hours ago
    > The incident also prompted LiteLLM to make changes to its compliance processes, including shifting from controversial startup Delve to Vanta for compliance certifications.

    This is pretty funny.

    The leaked Excel sheet of Delve's customers is basically a shortlist of targets for hackers to try now. Not that they necessarily have bad security, but you can play the odds.

    • _pdp_ 5 hours ago
      I am not defending Delve or anything, and I hope they get what they deserve, but there is no correlation between SOC2 certification and the actual cyber capability of a company. SOC2 and ISO27001 are just compliance, and frankly most of it is BS.
      • nope1000 1 hour ago
        Sure, it's certainly not perfect, and a lot of the documentation is something you just write for the audit and never look at again, but that's why I am saying play the odds. The average Delve customer startup might be less secure than the average startup that has to justify its processes to a real auditor.
      • snapcaster 31 minutes ago
        Some of it is, but things like "your stage/dev and production environments should be completely isolated from each other" are valid, and most tech companies get lazy on this front.
      • coldstartops 2 hours ago
        Personally, I use them as frameworks to justify management processes.

        A) First, I tie cybersecurity activities to revenue-enabling business outcomes (unblocked contracts), and second to reduced risk (since people react less to risk when deciding where to spend the buck).

        B) With the political capital from point A), I actually operate a cybersecurity program: justifying DevSecOps artefacts, threat modeling, incident response exercises, etc.

        What SOC2 reports and ISO27k certificates really are is more like a standardization for communicating the activities of the org to outside people, and a way to get an external party to vet that the org doesn't bulls*t too much. But at the end of the day, the organization is responsible for keeping its own house in order.

      • aitchnyu 4 hours ago
        Delve and Emdash. Are there more products or companies with similar names?
        • edgineer 3 hours ago
          Polsia (AI slop backwards)
      • sandeepkd 27 minutes ago
        Yes, they may be BS in certain cases; however, it's still better than nothing. They do at least make companies consider the questions instead of claiming unawareness, and most importantly they facilitate incremental improvement.
      • sebmellen 5 hours ago
        It might feel like BS, and I'm inclined to agree with you because of the security theater aspect. (For example, Mercor had their verification done by what appears to be a legitimate audit firm.)

        But it's not useless. It still forces you to go through a very useful exercise of risk modeling and preparation that you most likely won't do without a formal program.

        • cj 4 hours ago
          If your goal is to maximize your posture against cyber threats, spending your time on SOC 2 compliance with Vanta (or similar) is a poor investment when you weigh the time spent against the security gained.

          It's incredibly easy to get SOC 2 audited and still have terrible security.

          > forces you to go through a very useful exercise of risk modeling

          Have you actually done this in Vanta, though? You would have to go out of your way to do it in a manner that actually adds significant value to your security posture.

          (I don't think SOC/ISO are a waste of time. We do it at our company, but for reasons that have nothing to do with security)

          • mikeocool 4 hours ago
            Probably the most useful aspect of SOC2 is that it gives the technical side of the business an easy excuse for spending time and money on security, which, in a startup environment, is not always easy otherwise (i.e. “we have to dedicate time to updating our out-of-date dependencies, otherwise we’ll fail SOC2”).

            If you do it well, a startup can go through SOC2 and use it as an opportunity to put together a reasonable cybersecurity practice. Though, yeah, one does not actually beget the other: you can also very easily get a SOC2 report with minimal findings while having a really bad cybersecurity practice.

            • sersi 54 minutes ago
              That's exactly what I've done in the past. We had to be SOC2 and PCI DSS compliant (high volume, so it couldn't be done through an SAQ). I wouldn't say the auditor helped much in improving our security posture, but it allowed me to justify some changes and improvements that did help a lot.
        • sunir 3 hours ago
          It doesn't force you to go through risk modelling, because by now most SOC2 platforms have templates where you just fill in the blanks and sign off. Conversely, the auditors are paid by the company, so their incentive is to pass the audit so the client can get what it wants.

          Because there's no adversarial pressure acting as a check and balance on the security, and AICPA is clearly just happy to take the fees, it's a hollow shell. It's like this scene from The Big Short. https://youtu.be/mwdo17GT6sg?si=Hzada9JcdIPfdyFN&t=140

          As usual, it's only people that care that force positive change. The companies that want good security will have good security. Customers who want good security will demand good security.

          • gibolt 2 hours ago
            Having been through SOC2, I can say it doesn't mean a company is rock solid, but it definitely makes the company button up loose ends, if taken seriously.
        • jacquesm 4 hours ago
          The main use of these certs is to give people that actually want to do their job a stick to hit their bosses with.
  • CafeRacer 3 hours ago
    I genuinely wonder if anyone has had success landing gigs at Mercor.
    • tankenmate 1 hour ago
      Given their AI "hiring / onboarding" process, all I can say is: couldn't have happened to a nicer company.
    • bombcar 2 hours ago
      The way to get a gig at Mercor is to hack their LLM so that it inserts you as already hired.
  • n1tro_lab 1 hour ago
    The malicious LiteLLM versions were live for 40 minutes. Wiz estimates 500,000 machines were affected. LiteLLM is present in 36% of cloud environments. Forty minutes was enough.
  • aservus 7 hours ago
    This is a good reminder that any tool handling sensitive data — even internal ones — needs to be transparent about where data goes. The assumption that SaaS tools protect your data is getting harder to defend.
    • lukewarm707 6 hours ago
      I use LLMs to read the privacy policies that are too long to read. They guarantee almost nothing, unless you go out of your way to get an SLA.
    • susupro1 1 hour ago
      [dead]
    • Serberus 4 hours ago
      [dead]
  • Adam_cipher 2 hours ago
    [dead]
  • Chepko932 5 hours ago
    [dead]
  • tazsat0512 7 hours ago
    [dead]
  • devcraft_ai 7 hours ago
    [dead]
  • techpulselab 7 hours ago
    [dead]
  • ashishb 9 hours ago
    [flagged]
    • lmc 8 hours ago
      Docker is not a strong security boundary and shouldn't be used to sandbox like this

      https://cloud.google.com/blog/products/gcp/exploring-contain...

      • ashishb 7 hours ago
        Compared to what? Which one is superior?

        Running npm on your dev machine? Or running npm inside Docker?

        I would always prefer the latter but would love to know what your approach to security is that's better than running npm inside Docker.

        • lmc 7 hours ago
          By all means, run your npm in docker, but please stop telling others it's a secure way to do so.
          • ashishb 6 hours ago
            I only said it is a defense-in-depth measure.

            I definitely want to know how it is worse than running npm directly on the host.

            • habinero 6 hours ago
              Those aren't the only options, my dude.
              • ashishb 6 hours ago
                And what are good options that you use, and that work on Linux as well as macOS?
        • lmc 7 hours ago
          • ashishb 6 hours ago
            So the worst case is that you are back to running npm on your host. Right?
          • dns_snek 5 hours ago
            99% of this is inapplicable to this discussion because it's about misconfigurations.

            Escapes:

            - privileged mode (misconfiguration, not default or common)

            - excessive capabilities (same)

            - CAP_SYS_ADMIN (same)

            - CAP_SYS_PTRACE (same)

            - DAC_READ_SEARCH (same)

            - Docker socket exposure (same)

            - sensitive host path mounts (same)

            - CVE-2022-0847 (valid. https://www.docker.com/blog/vulnerability-alert-avoiding-dir...)

            - CVE-2022-0185 (mitigated by default Docker config; requires misconfiguration of capabilities)

            - CVE-2021-22555 (mitigated by default Docker config; requires misconfiguration of seccomp filters)

            default seccomp filters in docker: https://docs.docker.com/engine/security/seccomp/#significant...

            privileges that are dropped: https://docs.docker.com/engine/containers/run/#runtime-privi...

            ---

            I'll add this: containers aren't as strong a security boundary as VMs. However, this means a successful attack now requires infection of the container AND a concurrent container-escape vulnerability. That's a really high bar; someone would need to burn a 0-day on it.

            The bar right now is really, really low - blocking post-install scripts seems to be treated as "good enough" by most. Using a container-based sandbox is going to be infinitely better than not using one at all, and container-based solutions have a much easier time integrating with other tools and IDEs which is important for adoption. The usability and resource consumption trade-off that comes with VMs is pretty bad.

            Just don't commit any of the mortal sins of container misconfiguration: don't mount the Docker socket inside the container (tempting when you're trying to build container images inside a container!), don't use --privileged, and don't mount any host paths other than the project folder.
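
            To make that checklist concrete, here is a minimal sketch of what a locked-down `docker run` for an npm install might look like. The image name, flags, and paths are my assumptions, not something from the thread; the snippet only builds and prints the command, since actually executing it requires a Docker daemon.

            ```shell
            # Hedged sketch: a docker run that avoids the misconfigurations above.
            # We assemble the command as a bash array and print it for inspection.
            cmd=(docker run --rm
              --cap-drop=ALL                    # drop every Linux capability
              --security-opt no-new-privileges  # block setuid/sgid privilege escalation
              -v "$PWD":/app -w /app            # mount ONLY the project folder; no docker.sock
              node:20-slim                      # placeholder image
              npm install --ignore-scripts)     # also skip package lifecycle scripts
            printf '%s ' "${cmd[@]}"
            echo
            ```

            Note what is absent: no --privileged, no --cap-add, no /var/run/docker.sock mount, and no host paths beyond the project directory.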

            • kajman 23 minutes ago
              I don't think it's crazy to imagine a misconfigured production environment. That said, I always see these same examples of how "containers aren't really secure", and they're very amateur sins to commit, as you mention.

              AFAIK a comprehensive SELinux policy (like Red Hat ships) set to enforce will also prevent quite a few file accesses or modifications from escapes.

      • EE84M3i 7 hours ago
        Confusingly, Docker now has a product called "Docker Sandboxes" [1] which claims to use "microVMs" for sandboxing (separate VM per "agent"), so it's unclear to me if those rely on the same trust boundaries that traditional docker containers do (namespaces, seccomp, capabilities, etc), or if they expect the VM to be the trust boundary.

        [1]: https://www.docker.com/products/docker-sandboxes/

    • notachatbot123 8 hours ago
      [flagged]
      • ashishb 8 hours ago
        What makes you think that?

        You can see in the commit history that ~10% of the code was written by agents.

        The rest was all written by me.

        Unlike other criticisms of the project, this one feels personal as it is objectively incorrect.

        • bengale 8 hours ago
          All these commenters just yell AI about every post and comment on here now. They have a worse hit rate than a blind marksman.