Don't forget about entropy! You've just created two identical copies of all of your random number generators, which could be very very bad for security.
The firecracker team wrote a very good paper about addressing this when they added snapshot support.
Good callout. We seed entropy before snapshot to unblock getrandom(), but forks still share CSPRNG state. The proper fix per Firecracker’s docs is RNDADDENTROPY + RNDRESEEDCRNG after each fork, plus reseeding userspace PRNGs like numpy separately. On the roadmap. https://github.com/firecracker-microvm/firecracker/blob/main...
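For the curious, the kernel side of that reseed is two ioctls on the random device. A minimal sketch, assuming `fresh`/`len` hold entropy gathered after the fork (e.g. handed in from the host) and eliding most error handling:

```c
/* Sketch: credit fresh entropy to the kernel pool, then force an
 * immediate CRNG reseed (RNDRESEEDCRNG, Linux >= 4.17).
 * Requires CAP_SYS_ADMIN. */
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/random.h>

int reseed_after_fork(const void *fresh, size_t len)
{
    struct rand_pool_info *pool = malloc(sizeof(*pool) + len);
    if (!pool)
        return -1;
    pool->entropy_count = len * 8;   /* bits credited to the pool */
    pool->buf_size = len;            /* bytes in buf */
    memcpy(pool->buf, fresh, len);

    int fd = open("/dev/urandom", O_RDWR);
    int rc = -1;
    if (fd >= 0 &&
        ioctl(fd, RNDADDENTROPY, pool) == 0 &&   /* mix + credit */
        ioctl(fd, RNDRESEEDCRNG) == 0)           /* reseed now */
        rc = 0;
    if (fd >= 0)
        close(fd);
    free(pool);
    return rc;
}
```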
Re-seeding is easy. The hard parts are (a) finding everything which needs to be reseeded -- not just explicit RNGs but also things like keys used to pick outgoing port numbers in a pseudorandom order -- and (b) making sure that all the relevant code becomes aware that it was just forked -- not necessarily trivial given that there's no standard "you just got restarted from a snapshot" signal in UNIX.
Off the cuff, the first step to keeping ASLR meaningful is to not publish your images and to rotate your snapshots regularly.
The old FastCGI trick is to buffer the forking by keeping half a dozen or ten idle copies of the process around and initializing new instances in the background while the existing pool services requests. By my count we are reinventing FastCGI for at least the fourth time.
Long-running tasks are less sensitive to startup delays: we care a lot about a 4-second task taking an extra 5 seconds, and much less about a 1-minute task taking 1:05. It amortizes out even under Little's law.
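For reference, a rough sketch of that pre-fork pattern (the Apache/FastCGI shape, where warm workers block on a shared listening socket; the port number and trivial accept loop are just for illustration):

```c
/* Pre-fork pool sketch: fork POOL_SIZE children up front, each
 * idling in accept() on a shared listening socket; the parent reaps
 * and re-forks so the pool of warm workers never drains. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

#define POOL_SIZE 8

static void worker(int listen_fd)
{
    for (;;) {
        int c = accept(listen_fd, NULL, NULL);  /* idle until work */
        if (c < 0)
            continue;
        /* ... service the request ... */
        close(c);
    }
}

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in a = { .sin_family = AF_INET,
                             .sin_port = htons(9000) };
    bind(fd, (struct sockaddr *)&a, sizeof a);
    listen(fd, 128);

    for (int i = 0; i < POOL_SIZE; i++)
        if (fork() == 0)
            worker(fd);          /* child never returns */

    for (;;) {                   /* replace any worker that dies */
        wait(NULL);
        if (fork() == 0)
            worker(fd);
    }
}
```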
Niiiiiice, I've been working on something like this, but reducing Linux boot time instead of snapshot restore time; obviously my solution doesn't work for heavy runtimes.
It's so frustrating seeing all this sandbox tooling pop up for Linux while Windows is soooooo far behind. I mean, Windows Sandbox ( https://learn.microsoft.com/en-us/windows/security/applicati... ) doesn't even have customizable networking whitelists. You can turn networking on or off, but that's about as fine-grained as it gets. So all of us still having to write desktop Windows stuff are left without a good way to easily put our agents in a blast-proof box.
Web browsers don't even work properly in Windows Sandbox. There's a bug, unpatched for over a year, whereby web browsers can't use the GPU to render a page, so all they display is a white page. Users have to create a configuration file that turns off vGPU and launch Windows Sandbox from that.
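For anyone hitting this: the workaround is a small .wsb configuration file with vGPU disabled, roughly:

```xml
<!-- no-vgpu.wsb: double-click to launch Windows Sandbox with GPU
     virtualization off, working around the white-page rendering bug -->
<Configuration>
  <VGpu>Disable</VGpu>
</Configuration>
```

Networking can be toggled the same way with a Networking element, which is about the full extent of the knobs, per the parent's complaint.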
Nice to see this work! I experimented with this for exe.dev before we launched. The VM itself worked really well, but there was a lot of setup to get the networking functioning. And in the end, our target use cases don't mind a ~1-second startup time, which meant doing a clean systemd start each time was easier.
That said, I have seen several use cases where people want a VM for something minimal, like a python interpreter, and this is absolutely the sort of approach they should be using. Lot of promise here, excited to see how far you can push it!
I’ve been a big fan of “what’s the thinnest this could be” interpretations of sandboxes. This is a great example of that. On the other end of the spectrum there’s just-bash from the Vercel folks.
The tricky part of doing this in production is cloning sandboxes across nodes. You would have to snapshot the resident memory, file system (or a CoW layer on top of the rootfs), move the data across nodes, etc.
Agreed, cross-node is the hard next step. For now single-node density gets you surprisingly far. 1000 concurrent sandboxes on one $50 box. When we need multi-node, userfaultfd with remote page fetch is the likely path.
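The shape of the userfaultfd approach, as a minimal sketch: register the guest memory range, then serve each missing-page fault by fetching the page (here via a hypothetical fetch_remote_page()) and installing it with UFFDIO_COPY. Error handling trimmed for brevity.

```c
#include <fcntl.h>
#include <poll.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/userfaultfd.h>

extern void fetch_remote_page(unsigned long addr, void *dst); /* assumed */

void serve_faults(void *base, size_t len, size_t page)
{
    int uffd = syscall(SYS_userfaultfd, O_CLOEXEC | O_NONBLOCK);

    struct uffdio_api api = { .api = UFFD_API };
    ioctl(uffd, UFFDIO_API, &api);

    /* Ask the kernel to route missing-page faults in [base, base+len)
     * to us instead of zero-filling them. */
    struct uffdio_register reg = {
        .range = { .start = (unsigned long)base, .len = len },
        .mode  = UFFDIO_REGISTER_MODE_MISSING,
    };
    ioctl(uffd, UFFDIO_REGISTER, &reg);

    char *buf = mmap(NULL, page, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    for (;;) {
        struct pollfd pfd = { .fd = uffd, .events = POLLIN };
        poll(&pfd, 1, -1);

        struct uffd_msg msg;
        if (read(uffd, &msg, sizeof msg) != sizeof msg ||
            msg.event != UFFD_EVENT_PAGEFAULT)
            continue;

        unsigned long addr = msg.arg.pagefault.address & ~(page - 1);
        fetch_remote_page(addr, buf);        /* pull page across nodes */

        struct uffdio_copy cp = {
            .dst = addr, .src = (unsigned long)buf, .len = page,
        };
        ioctl(uffd, UFFDIO_COPY, &cp);       /* install + wake faulter */
    }
}
```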
I noticed that you implemented a high-performance VM fork. However, to me, it seems like a general-purpose KVM project. Is there a reason why you say it is specialized for running AI agents?
Fair question. The fork engine itself is general purpose -- you could use it for anything that needs fast isolated execution. We say 'AI agents' because that's where the demand is right now. Every agent framework (LangChain, CrewAI, OpenAI Assistants) needs sandboxed code execution as a tool call, and the existing options (E2B, Daytona, Modal) all boot or restore a VM/container per execution. At sub-millisecond fork times, you can do things that aren't practical with 100-200ms startup: speculative parallel execution (fork 10 VMs, try 10 approaches, keep the best), treating code execution like a function call instead of an infrastructure decision, etc.
The API in the readme is live right now -- you can curl it. Plan is multi-region, custom templates with your own dependencies, and usage-based pricing. Email in my profile if you want early access.
I keep so so so many opencode windows going. I wish I had bought a better SSD, because I have so much swap space to support it all.
I keep thinking I need to see if CRIU (checkpoint restore in userspace) is going to work here. So I can put work down for longer time, be able to close & restore instances sort of on-demand.
I don't really love the idea of using VMs more, but I super love this project. Heck yes forking our processes/VMs.
CRIU is great for save/restore. The nice thing about CoW forking is it's cheap branching, not just checkpointing. You can clone a running state thousands of times at a few hundred KB each.
On tail latency: KVM VM creation is 99.5% of the fork cost - create_vm, create_irq_chip, create_vcpu, and restoring CPU state. The CoW mmap is ~4 microseconds regardless of load. P99 at 1000 concurrent is 1.3ms. The mmap CoW page faults during execution are handled transparently by the host kernel and don't contribute to fork latency.
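For reference, the expensive part is roughly this ioctl sequence; a bare sketch of what those names suggest, since the real fork path presumably restores far more state (sregs, MSRs, FPU, LAPIC, etc.):

```c
/* Sketch of the per-fork KVM setup that dominates fork cost:
 * create a VM, its in-kernel irqchip, a vCPU, then restore the
 * snapshot's register state into the fresh vCPU. */
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int fork_vcpu(const struct kvm_regs *saved_regs)
{
    int kvm  = open("/dev/kvm", O_RDWR);
    int vm   = ioctl(kvm, KVM_CREATE_VM, 0);    /* bulk of the cost */
    ioctl(vm, KVM_CREATE_IRQCHIP, 0);           /* in-kernel PIC/IOAPIC */
    int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);   /* vCPU 0 */

    ioctl(vcpu, KVM_SET_REGS, saved_regs);      /* restore CPU state */
    return vcpu;
}
```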
On snapshot staleness: yes, forks inherit all internal state including RNG seeds. For dependency updates you rebuild the template (~15s). No incremental update - full re-snapshot, similar to rebuilding a Docker image.
On the memory number: 265KB is the fork overhead before any code runs. Under real workloads we measured 3.5MB for a trivial print(), ~27MB for numpy operations. But 93% of pages stay shared across forks via CoW. We measured 100 VMs each running numpy sharing 2.4GB of read-only pages with only 1.75MB private per VM. So the real comparison to E2B's ~128MB is more like 3-27MB depending on workload, with most of the runtime memory shared.
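The sharing comes from private file mappings: every fork maps the same snapshot file MAP_PRIVATE, so pages stay shared read-only in the page cache until a guest writes them. A minimal sketch of that mapping:

```c
/* Sketch: map a guest-memory snapshot copy-on-write. All forks
 * mapping the same file share read-only pages; only pages a guest
 * writes get private copies charged to that fork. */
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

void *map_snapshot_cow(const char *path, size_t *out_len)
{
    int fd = open(path, O_RDONLY);
    struct stat st;
    fstat(fd, &st);

    /* MAP_PRIVATE = copy-on-write: writes fault in a private page
     * instead of touching the file or other forks' mappings. */
    void *mem = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_NORESERVE, fd, 0);
    close(fd);            /* mapping stays valid after close */
    *out_len = st.st_size;
    return mem;
}
```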
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...
So that just (!) leaves userspace PRNGs.
More than the sub-ms startup time, the 258KB of RAM per VM is huge.
On Firecracker version: tested with v1.12, but the vmstate parser auto-detects offsets rather than hardcoding them, so it should work across versions.
https://codesandbox.io/blog/how-we-clone-a-running-vm-in-2-s...
Are there parallels?
https://GitHub.com/jgbrwn/vibebin