5 comments

  • nasretdinov 4 hours ago
    Generally when I want to run something with so much parallelism I just write a small Go program instead, and let Go's runtime handle the scheduling. It works remarkably well and there's no execve() overhead too
    • jkool702 35 minutes ago
      So, there are a few reasons why forkrun might work better than this, depending on the situation:

      1. If what you want to run is built to be called from a shell (including multi-step shell functions) and not Go. This is the main appeal of forkrun in my opinion - extreme performance without needing to rewrite anything.

      2. If you are running on NUMA hardware. forkrun deals with NUMA hardware remarkably well - it distributes work between nodes almost perfectly with almost zero cross-node traffic.

    • cbsmith 1 hour ago
      AFAIK, the Go runtime is pretty NUMA-oblivious. The mcache helps a bit with locality of small allocations, but otherwise you aren't going to get the same benefits (though I absolutely hear you about avoiding execve overhead).
      • jkool702 30 minutes ago
        So...yes, the execve overhead is real. BUT there's still a lot you can accomplish with pure bash builtins (which don't have the execve overhead). And, if you're open to rewriting things (which would probably be required to some extent if you were to make something intended for shell to run in Go) you can port whatever you need to run into a bash builtin and bypass the execve overhead that way. In fact, doing that is EXACTLY what forkrun does, and is a big part of why it is so fast.
    • beanjuiceII 3 hours ago
      dang and u did all that without a 10 year journey
  • jkool702 4 days ago
    Hi HN,

    Have you ever run GNU Parallel on a powerful machine just to find one core pegged at 100% while the rest sit mostly idle?

    I hit that wall...so I built forkrun.

    forkrun is a self-tuning, drop-in replacement for GNU Parallel (and xargs -P) designed for high-frequency, low-latency shell workloads on modern and NUMA hardware (e.g., log processing, text transforms, HPC data prep pipelines).

    On my 14-core/28-thread i9-7940x it achieves:

    - 200,000+ batch dispatches/sec (vs ~500 for GNU Parallel)

    - ~95–99% CPU utilization across all 28 logical cores (vs ~6% for GNU Parallel)

    - Typically 50×–400× faster on real high-frequency low-latency workloads (vs GNU Parallel)

    These benchmarks are intentionally worst-case (near-zero work per task), where dispatch overhead dominates. This is exactly the regime where GNU Parallel and similar tools struggle — and where forkrun is designed to perform.

    A few of the techniques that make this possible:

    - Born-local NUMA: stdin is splice()'d into a shared memfd, then pages are placed on the target NUMA node via set_mempolicy(MPOL_BIND) before any worker touches them, so each chunk of data is born local to the node that will consume it.

    - SIMD scanning: per-node indexers use AVX2/NEON to find line boundaries at memory bandwidth and publish byte-offsets and line-counts into per-node lock-free rings.

    - Lock-free claiming: workers claim batches with a single atomic_fetch_add — no locks, no CAS retry loops; contention is reduced to a single atomic on one cache line.

    - Memory management: a background thread uses fallocate(PUNCH_HOLE) to reclaim space without breaking the logical offset system.

    …and that’s just the surface. The implementation uses many additional systems-level techniques (phase-aware tail handling, adaptive batching, early-flush detection, etc.) to eliminate overhead at every stage.

    In its fastest (-b) mode (fixed-size batches, minimal processing), it can exceed 1B lines/sec. In typical streaming workloads it's often 50×–400× faster than GNU Parallel.

    forkrun ships as a single bash file with an embedded, self-extracting C extension — no Perl, no Python, no install, full native support for parallelizing arbitrary shell functions. The binary is built in public GitHub Actions so you can trace it back to CI (see the GitHub "Blame" on the line containing the base64 embeddings).

    - Benchmarking scripts and raw results: https://github.com/jkool702/forkrun/blob/main/BENCHMARKS

    - Architecture deep-dive: https://github.com/jkool702/forkrun/blob/main/DOCS

    - Repo: https://github.com/jkool702/forkrun

    Trying it is literally two commands:

        . frun.bash    # OR  `. <(curl https://raw.githubusercontent.com/jkool702/forkrun/main/frun.bash)`
        frun shell_func_or_cmd < inputs
    
    Happy to answer questions.
    • wood_spirit 3 hours ago
      Thanks for making and thanks for sharing :)

      I’m not a parallel kind of user but I can appreciate your craft and know how rewarding these odysseys can be :)

      What was the biggest “aha” moment when you worked out how things interlock, or where you needed to change both A and B at the same time because either on its own slowed things down? Etc. And what is the single biggest-impact design choice?

      And if you’re objective, what could be done to other tools to make them competitive?

      • jkool702 14 minutes ago
        So, in forkrun's development there have been a few "AHA!" moments. Most of them were accompanied by a full re-write (current forkrun is v3).

        The 1st AHA, and the basis for the original forkrun, was that you could eliminate a HUGE amount of the overhead of parallelizing things in shell if you use persistent workers, have them run things for you in a loop, and distribute data to them. This is why the project is called "forkrun" - it's short for "first you FORK, then you RUN".

        The 2nd AHA, which spawned forkrun v2, was that you could distribute work without a central coordinator thread (which inevitably becomes the bottleneck). forkrun v2 did this by having 1 process dump data into a tmpfile on a ramdisk, while all the workers read from this file using a shared file descriptor and a lightweight pipe-based lock: seed a shared anonymous pipe with a newline, read from the pipe to acquire the lock, write the newline back to release it. The pipe's FIFO semantics naturally queue up waiters. This version actually worked really well, but it was a "serial read, parallel execute" design, and the time it took to acquire and release the lock meant it topped out at ~7 million lines per second. Nothing would make it faster, since that was pure locking overhead.

        The 3rd AHA was that I could make a very fast (SIMD-accelerated) delimiter scanner, post the byte offsets where lines (or batches of lines) started in the global data file, and then workers could claim batches and read data in parallel, making the design fully "parallel read + parallel execute".

        The 4th AHA was regarding NUMA: instead of reactively re-shuffling data between nodes, just put it on the right node to begin with. The "right node" is determined using real-time backpressure from the nodes, with a 3-chunk buffer to ensure the nodes are always fed with data. This one didn't need a rewrite, but it is why forkrun scales SO WELL with NUMA.

    • brightmood 5 hours ago
      I am using a 9950x3D processor and didn't see any slow-down nor cpu sitting idle, I suggest you read the man-pages more clearly :P
      • DetroitThrow 1 hour ago
        I know this is obviously sarcasm and it made me laugh but I'm pretty sad HN couldn't catch it.
    • DetroitThrow 5 hours ago
      >Have you ever run GNU Parallel on a powerful machine just to find one core pegged at 100% while the rest sit mostly idle?

      Yes, to my extreme frustration. Thank you, I'm installing this right now while I read the rest of your comment.

      • jkool702 0 minutes ago
        How did it work for you?
    • esafak 5 hours ago
      Please don't support only curl for installation. There are many package registries you can use; e.g., https://github.com/aquaproj/aqua-registry
      • jkool702 5 minutes ago
        There's no "install" - you just need to source the `frun.bash` file. Downloading frun.bash and sourcing it works just fine. Directly sourcing a curl stream that grabs frun.bash from the git repo is just an alternate approach; it is not "required" by any means.
    • brightmood 5 hours ago
      Why the hell do you curl? Additionally, why do you advertise it when you just uploaded it? Nobody should install something this new...
  • tombert 3 hours ago
    I guess I've never really used parallel for anything that was bound by the dispatch speed of parallel itself. I've always used parallel for running stuff like ffmpeg on a folder of 200+ videos, and the speed at which parallel decides to queue up the jobs is going to be very thoroughly eaten by the cost of ffmpeg itself.

    Still, worth a shot.

    I have to ask, was this vibe-coded though? I ask because I see multiple em dashes in your description here, and a lot of no X, no Y... notation that Codex seems to be fond of.

    ETA: Not vibe coded, I see stuff from four years ago...my mistake!

  • pjoubert 3 hours ago
    [flagged]