This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely.
I’m curious to see what a working version of this looks like, what it feels like, how it performs, and if/how hard it’d be to get it to pass Bun’s test suite and be maintainable. I’d like to be able to compare a viable Rust version and a Zig version side by side.
It is a pity that you can't make an experimental commit on an experimental branch without igniting a fire of delirium through some people who -- if they were able to put their emotional response aside for a minute and could weigh this up on the basis of merit -- would probably agree with the motivations for researching this approach.
> if/how hard it’d be to get it to pass Bun’s test suite and be maintainable
Every month brings new opportunities to abstract away whole processes, like porting code with agents, purely through natural language. What an exciting time.
For those looking for a similarly interesting (and interestingly similar) example, see Cloudflare's port of Next.js[0], "vinext", from a couple of months ago. It had some teething problems at the start but I'm using it in a few production projects now with minimal issues.
It’s annoying for the team members I suppose, but to be fair, if you’re working on a high-profile open source project, owned by one of the most hyped companies in the world, and your branches are public, it’s probably a good idea to be clear in the branch naming and supplemental files if you’re just “experimenting”.
By working in public on a popular open source project, you are communicating intent and purpose to your users and the general public through your commit messages, branch names, and documentation. You’ll save yourself a lot of grief if you act accordingly.
I am the topic starter, and I had no emotional response; I was just curious. I never expected it would land at HN #1. I specifically posted the link to the first commit and not to the whole branch, because right now the prompt is the most interesting part.
The original topic starter? I'm pretty sure this was originally posted on X by someone else, as I commented there, and minutes later it was copied here to HN with a twisted title; the original had more of a "question, surprise" tone.
Topic starter here. I saw a post on Twitter in the "For you" feed, verified it, found an interesting bit (the rewriting prompt), and started a topic on HN. Like I said, I never expected it to hit #1.
The branch name is "claude/phase-a-port"; there was zero indication this was an experiment until Jarred commented. A more accurate title might have simply been "there is a branch in the official repo of Bun describing a port from Zig to Rust". No amount of soft titles would have prevented the discussion. People have their opinions about Bun, about Zig, about Rust, and it's all going to come out on a discussion board.
Can’t every branch be considered an experiment? I have a ton of experimental branches that I don’t label «experimental». One of the reasons you use git…
Sure, but then how does it change anything around the discussion? You are still running an experiment to port to Rust, it still gets posted, the Rust-heads and Zig-heads still make their comments.
> there was zero indication this was an experiment
> The goal of Phase A is a **draft** `.rs` next to the `.zig` that captures the logic faithfully — it does **not** need to compile. Phase B makes it compile crate-by-crate.
I mean, it would be hard to spell it out any clearer than that! Code that fails to compile is just not very useful for real work.
Phase B clearly says compilation is the next goal. The first goal is to get like-for-like logic, the second goal is to get it to compile. Can you guess what the third goal will be? Throw out the code?
Yes, but that would require people to read past the title. You can't get a proper knee-jerk first post in if you do that! Completely unfair to expect people to make that sacrifice/effort.
[there was some sarcasm there, BTW, if anyone has a faulty detector that didn't pick up on it]
I couldn't use that title because I didn't know whether it was an experiment at the time. Even now, the correct title would be "Bun author says he is entertaining the idea of porting it from Zig to Rust, creates an experimental branch".
The fact someone who works on Bun is willing to create and even push a branch generated by a stochastic parrot is very telling of the direction the project is going.
Doesn't matter if it's "experimental", it's a dumb experiment that shouldn't exist.
Why are you treating branches as if they are holy? This is all OSS, people work on this in their free time, git is git, and people can use branches as they like to experiment and share their experiments with others. If you don't like the code, don't use it, you damn leech.
Underplaying AI, overselling what an experimental branch is, and suggesting it's representative of the entire project, all while suggesting people shouldn't even consider new tools and methodologies. Where to start.
I think that was a very constructive comment about the unconstructive way people are shoehorning other concerns about Bun into this thread, which is about a specific aspect that itself turns out to be just an experiment someone knee-jerk reacted to, despite several active threads already discussing those matters, one of which only just fell off the front page.
While the concerns many have about Bun's potential future direction are valid IMO, of the posts on this thread the one you are criticising is one of the more constructive.
I love your work on Bun. How do you feel about all the constant concerns being raised about the quality of the project lately? I understand some of them might just be typical Twitter hate, but some of them are real. And I think people are right to question why you are adding image processing or web views inside a JavaScript runtime when there are bugs affecting production that sit unaddressed. For example, one of our biggest blockers right now is https://github.com/oven-sh/bun/issues/6608 which was reported in 2023 and is still affecting us 3 years later.
When you start getting hate, you’ve made it. Up until then you’re a hypothetical that people like. Maybe they’ve built a side project with you or read the docs. You only get hate when people have used your tool and butted up against limitations. We saw this with Deno too, where they went from beloved potential savior to realistic, limited tool. Hate is good. It means people rely on you.
Do you know which project gets the most hate? Node.js. So in that sense, Node.js has made it and is widely deployed, but this hate is the reason two separate alternatives to Node have emerged: Deno and Bun.
Recently, Bun's latest version had memory leaks which, from my understanding, crashed production code. Add to that their attitude[0] of saying OSS will eventually allow no human contribution, now these Zig-to-Rust ports, the years-old questions about the decision to use Zig in the first place, and this code basically being vibed (there is no way they are reviewing it) while being VC-funded and bought by Anthropic.
These are all genuine issues which cause hate. You can say people hate because they rely on it, but it also looks like a bait and switch: people switched from Node.js to Bun (maybe even getting locked into Bun), only for the team to make these highly questionable decisions. That is why people are starting to hate on Bun.
At least that's my interpretation right now, reading this whole thread.
Well yeah, it's in Zig, not a memory-safe language, so of course I'd expect memory bugs. That's why I haven't seriously used Bun and instead use a runtime that actually is written in a memory-safe language: Deno, in Rust. It's like wearing roller skates without brakes and wondering why you keep running into things.
This is getting stupid. Now one can’t even ask a reasonable, polite question with praise without being asked if they pay.
Bun raised millions of dollars and was acquired by a commercial entity which bragged in the same blog post of reaching $1B. They’re not a guy with an eyepatch and a tin can out on the street.
Open-source developers should be compensated, but they don’t have to be. You can’t reasonably offer your work for free then complain someone isn’t paying you. If you want to be paid, charge for it.
Signed: A long time open-source developer who has dedicated years of full-time work to useful projects without compensation or raising VC money or being acquired.
Come on, whenever a project is discussed on hackernews, there is always one comment of "why are you working on X, when you should be fixing bug Y?!".
We are all software engineers on here (or at least many of us are), we all know how project management and prioritisation works right? We can't work on everything all at once.
> Come on, whenever a project is discussed on hackernews, there is always one comment of "why are you working on X, when you should be fixing bug Y?!".
That is not what the question is about, which you’ll see if you engage with it properly in good faith. There is a single question in the comment (indicated, as one does in English, by a question mark):
> How do you feel about all the constant concerns being raised about the quality of the project lately?
Everything else is context and opinion to explain the question.
Given the alleged context (X being something "reported in 2023, still affecting us 3 years later"), is this not a reasonable PM/priority decision to question?
Are you being ironic or serious? I can see both pros (encourage people to see themselves as customers) and cons (less initial adoption) to the licensing, although I'd maybe leave bug issues open for everybody.
With AI agents and how good they are at "language translation" tasks against an identical target with a comprehensive test suite, you end up doing these things out of curiosity. The AI agent has the originals to test its assumptions against, too.
I've had surprisingly good results from getting AI agents to take a script in shell, python or typescript and have it translate it into those other programming languages, including rust versions. Or swapping from one build system to another.
Totally agreed... It enables you to try swapping out dependencies you might not otherwise even consider, because of the cognitive load of trying to do so as an individual, and get it done and working in a few hours, with a few days to follow for review.
Or take on an additional/related feature (like Redis grepping over the new array data types). Because you can be relatively sure the borders are stable and you can limit the surface/scope.
Thank you, Jarred, for your work. It’s unfortunate to see so much backlash toward legitimate research. Bun is often seen by some as “the flagship project for zig” - especially among those frustrated with rust who want zig to "win over rust" for whatever reasons. At the end of the day, you should do what makes the most sense for your project and your circumstances, regardless of the language or tools involved.
Personally, I find this experiment interesting and I’m curious to see how it develops. Writing idiomatic rust requires a shift in mindset, so it’ll be worth watching how well LLMs adapt to that over time.
I can only speak for myself... but I've found at least Claude Opus to handle Rust very well, and in my own use cases WebAssembly (wasm) and FFI for interoperation with TS/JS has been pretty smooth.
This is lovely; how admirable that you have the space to do this. It's very rare that we as a community take the time to actually implement a non-trivial system in both X and Y and look at the differences. So much discussion around these things is based on pointless tribalism.
I'm sure recasting Bun in a new mold is going to be hugely informative about the structure of Bun itself, regardless of the outcome.
While you are here, can you elaborate on the method chosen? For example, why not write a conversion script for Phase A? I mean, the same Anthropic model would produce one in no time, prompting for it carries the same cognitive load, but you would have a deterministic result.
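For a sense of what "deterministic" means here, a toy sketch (purely illustrative, nothing from the Bun repo): a rule-based rewriter produces the same output for the same input on every run, unlike a sampled LLM response.

```rust
// Toy illustration of the "conversion script" idea: a rule-based rewriter
// for two surface-level Zig-to-Rust correspondences. A real converter would
// transform the AST, but even this toy is fully deterministic.
fn rewrite_line(line: &str) -> String {
    line.replace("var ", "let mut ") // Zig `var` -> mutable Rust binding
        .replace("const ", "let ")   // Zig `const` -> immutable Rust binding
}

fn main() {
    println!("{}", rewrite_line("const answer: u8 = 42;")); // let answer: u8 = 42;
    println!("{}", rewrite_line("var count: u32 = 0;"));    // let mut count: u32 = 0;
}
```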
Will you have a way to measure the ecological impact of making such a throwaway attempt?
Not actually pointing at you or anyone in particular here, to be clear. And if the answer turns out to be "not much more than forgetting the light on when leaving the toilets", then certainly "go have fun" cheerleading on my part.
But otherwise, we collectively have to keep in mind that the prompts we can throw around mindlessly, without perceiving any direct negative feedback, are possibly not harmless.
So if you can measure it, come back with those numbers too, so we can all take them into consideration next time the thrill of running something just to see what happens rises in our minds. Thanks.
Less than the impact of people who can't be bothered to remember basic historical facts or directions in terms of hitting Google services dozens of times a day across the population.
Probably less than the impact of having dozens/hundreds of actual developers, each with a dedicated computer running for months/years in what it would take for a similar effort.
If you want to go live in the woods and farm/hunt for yourself, feel free. I'd suggest you stay away from the museums with paint and not glue yourself to a car mfg.
> Showing 1,808 changed files with 790,916 additions and 151 deletions.
Just looking at the git diff [0].
I looked at one of these Rust port files [1]. It's 827 LOC and apparently 7,576 tokens. That gives you a first-order guess that the full ~790k additions are around 7-8 million output tokens. Obviously there are some tool calls, reasoning, reads of the Zig version, and fixing of compile errors as overhead, so I would guess maybe this is like 40 million tokens, multiplying by 5?
If we guess that is around $200 to $500 in token spend, we can probably guess it emits around the same as burning $100 in gas? Or like 50 or so kg of CO2?
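Spelling out that back-of-the-envelope estimate (every number here is the commenter's guess, not a measurement):

```rust
// Reproduces the estimate above: scale one sampled file's tokens-per-line
// up to the whole diff, then multiply by 5x for agent overhead.
fn main() {
    let tokens_per_line = 7_576.0 / 827.0;         // sampled file: ~9.2 tokens/LOC
    let diff_tokens = 790_916.0 * tokens_per_line; // ~7.2M output tokens for the diff
    let total_tokens = diff_tokens * 5.0;          // x5 for reads, reasoning, retries
    println!(
        "diff ~{:.1}M tokens, total ~{:.0}M tokens",
        diff_tokens / 1e6,
        total_tokens / 1e6
    );
}
```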
Advice for the future: experiments should be explicitly tagged as such. The commit message "docs: add Phase-A porting guide" says nothing about this being an experiment and looks like a planned move to Rust. That message certainly looks very official to me.
> This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely.
Trying to pass off a blunder like this as no big deal is an insult to your users. You made a dumb mistake. Own it, be transparent, and correct the problem that started this; namely, put some form of experimental tag in the commit message. Then say you made a simple mistake, sorry, and move on. Being dismissive is a defense mechanism that can arouse suspicion, as in: are you now lying about the experimental state to quench the flame war? Not that I believe that, but it can certainly now become conspiracy fodder. Again, you can avoid all that with transparency.
I didn't get the impression that anyone cares about the source or destination language. I think the concern is centered around the long history of failure with large-scale rewrites like this -- see Netscape 5, Perl 6, etc. Joel Spolsky wrote a legendary article about this [0]. I think the Next.js app router might be slowly joining this conversation as well.
It could get even worse if they get Second System Syndrome[1] and try to add features as they rewrite it. Considering Bun's rapid development cycle, this seems likely.
Or we can stop being toxic to open source maintainers and acting like we own them or they owe us anything.
A commit message on a random branch is not an obligation. Not telling random internet users what side projects they're working on is not a blunder. It quite frankly doesn't matter what you think looks official, it doesn't give you the right to treat people like this.
It's so embarrassing to be a programmer sometimes, so many of my peers behaving like spoiled rotten brats.
> Or we can stop being toxic to open source maintainers and acting like we own them or they owe us anything.
The majority of the community feels this way, which says something. The author's reaction is to publicly display being upset and dismissive of the community's reaction. That is just making it worse.
When you work on a project this big, more care is needed. The commit was an innocent mistake. The blunder is blowing off the community's response as overblown, which it would be had the commit been tagged experimental. But it wasn't. And the author did themselves no favor blowing it off.
If the author was smart, their reply would simply have been:
Hello. To clarify, this is an experimental branch only. There are no plans to port, only to experiment. I will tag the repo as such to ensure people understand its intention and to avoid future misunderstandings.
I think the criticism is still valid to an extent, because I don't see how this would give you a good way to evaluate Zig vs. Rust. Maybe a better approach is to migrate a particularly problematic space and bench that on its own?
It's not like OP asked for any criticism to start with, right? This whole thread is a pretty good example of why the saying "Fools and children should never see half-finished work" exists. ¯\_(ツ)_/¯
I can say from experience that vibing a full move of any project from one language to another is probably not a great way to evaluate whether the decision is a good one. I got downvoted; maybe I said it too authoritatively. But hey, that is just, like, my experienced opinion, man.
Most of Bun’s code is already written by LLMs. If you feel that way, it’s already been too late for a while. Furthermore, we’re talking about a million line port done in a couple of days. The question of whether it’s worth the time looks extremely different if done by hand. It would take a year.
The "too late" argument isn't gonna fly with someone like me who has both the time and energy to own a Javascript runtime. Heck, I'm quickly becoming the most prolific author of the ES spec too.
From what I'm reading, it's too late for Bun. I hear the whole dev stream is slop now. It was nice while it lasted, but that's not a foundation to build rock-solid stuff on top of. Not for me, not for them, not for anyone.
Interesting to see this when the current top post on HN is someone worrying about Bun after it was acquired by Anthropic. The top comment there says, in effect, “Anthropic does experiments on their own codebase; the Bun team is not gonna do the same vibe coding experiments”.
Yet here we are, what looks like a massive undertaking for vibe coding.
Time will tell how this will turn out. Would be nice if the Bun maintainers could give some clarification about what they’re doing here, and why they’re doing this.
They recently tried to upstream an improvement to zig, but were prevented from doing so because zig has a hard and fast "no AI code" rule. Whether you think this response is trying to put pressure on zig or whether they're just moving for practical reasons is up to you.
I don't see why they think it would work, when the reason their patch set was rejected was that it was not correct, did not go in a direction the Zig authors were interested in, and touched an area where they are already working hard on improvements. It would have been much better if the Bun team had joined forces and helped out instead of vibe coding a broken PoC patch that can never get merged. Compilation speed is one of the current main focuses of Zig, and changing the type system to make that possible was a big part of 0.16.
Anyone can hack up a quick PoC, even without LLMs, the hard part is writing code that is correct and maintainable.
Side note, but I think using LLMs like this to write PoCs in existing projects is actually a good idea to prove whatever you had in mind is feasible and worth it to pour time into. Obviously you need to not vibecode the entire thing once you're past that point though...
I think they do. Building Bun is a complex task, and engineers who can do that should also be able to figure out how to help out with a compiler. It is just a matter of immersing yourself in the code and being willing to put in the hours and hard work. Sure, they may not be able to help design the type resolution, but there is other work which needs to be done that any skilled engineer can do.
Submitting patches that are correct and match the project's desired standards¹ is joining forces and helping out.
--------
[1] And align with the project's direction. This part is of course much more subjective so could very easily be an honest misunderstanding of the situation.
Compiling Rust is actually quite fast in my experience. The problem with many Rust projects is that they pull in dependencies left, right, and center. Pulling in Tokio makes your project compile an entire thread management system even if you're just compiling Hello World, and simple oneliners containing macros can easily spread out into dozens of lines of code each.
Linking is also slow, and the extreme amount of metadata produced for LLVM almost serves as a benchmark for LLVM's throughput, but that's all in an effort to produce faster, better binaries in the end.
On godbolt.org, Hello World compiles and runs in about 250ms. Zig's Hello World compiles and runs in 600ms. Of course Zig is still an unfinished language so optimisations like these are probably hardly a priority, but when it comes to lines of code per second, the difference isn't as big as people make it out to be.
What will make the most difference is how many crates the rewrite will pull in. The PORTING.md file specifies "No `tokio`, `rayon`, `hyper`, `async-trait`, `futures`" for the second phase, which should definitely get rid of the excessive compile time many people associate with Rust projects.
>Compiling Rust is actually quite fast in my experience
I guess it's all relative.
I find Rust's compile times abhorrent and it's objectively slower than many many other languages that also pull in dependencies left, right, and center. I guess that just means Rust scales very badly with amount of code.
I'd put it at a bit better than Haskell, but honestly not by much.
I really wish Rust would focus much more on compile times, or on making smaller parallel compilation units. It's quite a chore to have to keep splitting your program into smaller and smaller crates just to not sit and wait for an eternity.
As a comparison my CI job for Rust takes 14m running on a 16vCPU machine while my much larger TypeScript project compiles in 1m on a 2vCPU machine. I know people that have to spend quite a lot of work on keeping compile times manageable for Rust (nix, smaller crates, aggressive caching, etc etc).
Rust still brings me enough value that I'll stick with it, but one can still dream of a better future :)
That's true, but then there's also the case of working on the Zig compiler, which is roughly a million LOC, and with `--watch -fincremental` you can get a 200ms recompile even if you change one of the most-called functions. Meanwhile, even a 5k-10k LOC Rust project can take 30s to recompile on minor changes. So the impact on velocity can be quite high. I love both languages, but the Zig compiler is undeniably faster than the Rust compiler, by multiple orders of magnitude.
Rust also has incremental compiling and is pretty fast, I haven't experienced 30 second compile times when using cargo watch. See also, cranelift, which is supposed to make compile times even faster.
Makes me wonder why zig announced the strict LLM rule recently. I'm afraid one reason could be that zig doesn't want to accept code from the bun fork in the first place (because of LLM usage, deviation and other reasons)
One non-obvious reason is that an important aspect of their community is to shepherd new contributors [1]. LLMs crushing everything would reduce that. More obvious is all the toil for maintainers dealing with LLM PRs (broadly, it's an issue). The Zig maintainers prefer to put their energy into improving people and fostering those relationships.
It's important that developers have an accurate mental model of how things work, are structured and why.
LLMs promote a decoupling of mental models and the actual codebase.
As much as some may want to believe, just reviewing what the LLM outputs is not equivalent to thinking about implementation details, motivations, exactly how and why things are, and how and why they work the way they do, and then writing it yourself. The process itself is what instills that knowledge in you.
Exactly. This is what many ai-sloppers ignore. Mental models are crucial. Nothing substitutes for having the program itself in your brain and being able to "mentally debug" it when something breaks.
Well said! I don't think either party is really at fault here, but if Anthropic wanted to contribute non-negligible amounts of code over time then it's an absolute dealbreaker.
Sucks for people who were invested in contributing to Bun and don't like working with AI tools to be sure, but I think the writing was on the wall for them pretty much immediately post-acquisition. You must admit, it's hard to predict that 100% of source lines will be written by AI if you're not walking the walk!
Yeah, I remember when the lazy bastards started writing programs using compilers instead of learning assembly language. Now I don’t have a single colleague who can write assembly. There’s whole generations now who can’t code assembly. Most don’t even know what a register is. Hope Zig holds against this latest attempt to make everyone stupid.
To add to the other commenters, loads of people don’t know assembly, which speaks to the quality of the average developer. The ones that still understand assembly to this day tend to be better developers, writing faster and more efficient code.
I'd be very surprised if the "average" developer across the board was in fact not just a JavaScript / TypeScript only developer. I have no expectations or really even hope that the average developer I work with has ever written a line of assembly.
>The ones that still understand assembly to this day tend to be better developers, writing faster and more efficient code.
That is, if you use something like C, C++, Java, .NET, or Go. With JavaScript and Python I don't think knowing assembly would make much difference, because it's hard to optimize code in those languages for how the CPU and memory work.
Knowing assembly in this day and age is the result of being curious and wanting to understand how computers work, which means knowledge of algorithms, data structures, etc.
The same applies to vibe coding: the best "vibe coder" will paradoxically be the person with enough knowledge and curiosity to understand programming, how computers work, and the subject at hand; one that could write the whole thing from scratch, and so has enough judgement to review generated code.
Of course the vast majority will be mediocre vibe coders, and even worse programmers; at least that's the direction we're going.
> wanting to understand how computers work, which means knowledge of algorithms, data structures, etc.
It's possible to know in general terms, how computers work, and what assembly is without "knowing assembly" in the sense of being familiar with using/debugging it as a programming language.
Knowing assembly doesn’t mean you would spend your time writing assembly (aka being familiar with opcodes and architecture optimizations). But in the process, you get familiar with the working of the computer hardware and the OS that sits on top of it. That is always useful knowledge especially when needing to deal with binary format and protocols or FFI.
Then it's sufficient to know assembly, but not necessary.
This is compatible with "[developers] that still understand assembly to this day tend to be better developers", but not with "[on developers who] don’t know assembly, which speaks to [their] quality".
That’s funny because it’s exactly, literally the same. The difference is it’s not deterministic. That may be a problem but it’s still a higher level language, just a much higher level language than anything before.
I assume you're some sort of programmer and I genuinely wonder how in the world can someone in good faith downplay non-determinism and ambiguity when talking about a programming language.
High-level languages can certainly yield inefficient code when compiled, or maybe different code among different compilers, but they're always meant to allow their users to know exactly what to expect from what they put together in their programs. I've always considered this a hard fact, I simply cannot wrap my head around working in a way that forces me to abandon this basic assumption.
The language spec may be, but an implementation is never ambiguous. When you encounter undefined behavior in the spec, that’s when you look at your compiler/interpreter docs.
So by your logic all the PMs, managers and customers are programmers, right? After all, there’s a human compiler that takes their input and produces a program?
They are programmers when they write a prompt and get runnable code as a result, yes… but not if they ask a human to write the code, because if you have an intermediate, manual step between the text and the running code, you don’t have an automated process, and hence it’s no longer even an application, let alone a “compiler”.
Why does it matter if a human or a machine is responsible for turning the prompt into code?
If there's a black box which I can send C code into one side of and get faithful machine code out the other, I'd call that box a "compiler". I wouldn't rename it if I later find out that there are little elves inside doing the translation.
The JavaScript developers are checking in JavaScript code that they ostensibly understand. That is not the same as prompting an LLM to generate Zig that they don't understand, and expecting someone to merge it.
That's a solid reason to keep LLMs away from the kind of tasks that help with onboarding. But a patch series from a competent team that changes 3000 lines should probably be evaluated on its own merits. Or at least, the collaboration-based reasons to reject AI don't apply and the real reason would be something else.
(Though I don't know if this particular patch series would get accepted on its own merits.)
The recent article explained that the Bun patch would have been refused on technical merits, as it's intrinsically incorrect; to work properly it required some language changes.
I don't understand your suggestion. If you take an ugly patch series that changes 3000 lines and organize it into small quality changes, it's still a patch series that changes 3000 lines.
There's no reason to assume my generic statement was talking about the ugly version rather than the nicely organized version.
There are other reasons why a project like Zig might not want to accept LLM generated contributions.
Zig, as a programming language, has a multiplier codebase. A bug may affect a significantly larger portion of users than most libraries or binaries will, as it's a fundamental building block of everything that uses Zig. Just that could be worth the extra scrutiny on every individual commit.
There's also the usual arguments: copyright ethics, environmental ethics and maintainer burden.
It might be one of the reasons they want to migrate to Rust, i.e. to have the compiler handle many of these memory-related issues.
Personally, I've only used Bun in a few personal projects. But if you check issue reports, you will see more memory bugs being reported than for, say, Deno.
> I guess Linux and FreeBSD kernels are also not accepting LLM based contributions yet.
PostgreSQL, a famously slow-moving and rock-solid project, accepts LLM-based contributions. But they are held to the same high standard: if you cannot explain the patch you submitted, it will likely get rejected.
> move fast and break things and move at a pace that guarantees everything is rock solid.
Zig is famous for taking the former path! Anyone using Zig for a few years knows every release breaks things, and they are still making huge changes which I would classify as “moving fast”, like the recent IO changes!
It's a combination of pragmatism (not wanting to wade through slop, not wanting to shove out newbie developers) and politics (usual contemporary techie progressive stuff that's now oddly anti-technology).
The Zig maintainers did a pretty in-depth review of the PR, and laid out multiple technical reasons for why it would not get merged. They did not reject it simply for being vibe-coded (though that is likely the cause of it sucking).
Rust is a significantly more mature language. Adopting Zig has to be done on the assumption that the language will significantly improve as your project evolves, and if those improvements don't agree with your project's goals, you're in something of a lurch. Rust is basically finished, and adopting it has to be done on the assumption it won't change very much. I don't know what their initial logic for adopting Zig was, but I think porting to a more mature language was inevitable, unless by some miracle Zig happened to rapidly mature in exactly the direction they wanted.
I was hoping for Bash, because why not. It's the AI that has to work on and maintain it anyway, and Anthropic employees aren't limited by the 5-hour/7-day limits anyway, I suppose.
You missed the part where everyone is going to run their own vibe-coded assembly tools[1].
So the next step will be that Bun is directly rewritten from scratch at every iteration, and the repository will only contain the specs for the LLMs.
Caching the generated code locally will be authorized for some transition period, but as it's obviously very dangerous to let people tweak what exactly computers are doing, forbidding such a practice via a mandatory secure-boot mode is already planned. Only nazi pedophiles would do otherwise anyway, thus the enactment of the companion law is an obvious go-to.
Rust is legit one of the best languages to "vibe code" in.
The emitted code has a lower defect rate since the language has strong types and built-in error handling. Other pros include native code and portability; the downside is compile time.
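A minimal sketch of the built-in error handling point (`read_config` and "app.toml" are made-up names): the `Result` in the signature forces every generated call site to deal with failure, so forgetting it is a compile error rather than a latent runtime bug.

```rust
use std::fs;
use std::io;

// The error lives in the type; a caller that ignores it gets a compiler warning
// or error instead of a silent failure at runtime.
fn read_config(path: &str) -> Result<String, io::Error> {
    fs::read_to_string(path)
}

fn main() {
    match read_config("app.toml") {
        Ok(text) => println!("loaded {} bytes", text.len()),
        Err(e) => eprintln!("config error: {e}"),
    }
}
```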
This could be a subjective feeling with no real data to back it up.
People say the same about Go: that its type system and limited feature set make it the most AI-friendly language. But there too, it seems like a hunch rather than a proven fact.
The thing is that this argument doesn't work with Go because its type system (and the whole language, really) is much less expressive and compiler gives a lot less feedback to the LLM. So it tends to have to write more unit tests and do more cycles of testing (and spend more tokens) to get it right.
The argument about type systems is absurd anyway. The types in a program aren't a universal vocabulary that the LLM would already know, like the words of the English language. They are unique to each program and domain, so an LLM can't be inherently better at them.
Let me elaborate further: it's like the proficiency of LLMs in writing English vs. writing Swahili or Kurdish.
The types of a program are like Swahili or Kurdish, or even worse, because those languages still have a sizeable chunk of the Internet and digital archives behind them, while the types of a program are specific to it.
Studies have shown that natural human languages are all more or less equally expressive in terms of bits per second while speaking. There's lots of different ways they can be structured but they tend to follow common rules that have been well-characterized by linguists. They can be used to describe formal mathematical statements, but are not rigorously formal languages themselves.
Programming languages, in contrast, are constructed and vary much more in their designs. They are formal languages, making them closer to math than spoken language. LLMs being able to describe concepts more thoroughly and precisely through more expressive semantics obviously makes some languages more suitable than others.
The type system of a language is just one aspect of it that allows the language to provide guarantees to the LLM (and the user) about correctness of the code it's writing.
I am not speaking about specific types in specific programs. I am talking about the ability to describe complex constraints that LLMs (and humans) end up using to make writing correct code easier and more productive. Some programming languages absolutely are more effective at this than others, and that's always been true even before LLMs.
How good are LLMs at understanding Haskell errors and then dealing with them?
The last time I had a go with Haskell, the errors reminded me so much of hellish terminal compilers from the 80s and 90s that I quickly gave up. Been there, not doing that again.
The compile-time downside is somewhat offset once you're using agents (and especially parallel agents) anyway. Since every edit already costs a round-trip API call to a third-party server, you can accept a slightly slower compile step.
Wow. That xkcd was written in 2007, and part of the dialog is "didn't that [meme] die like five years ago?" Which means All Your Base, as a meme, was already getting somewhat stale by around 2002. It's hard to believe it's been that long.
Yeah, now that I think about it, having a major project written in a language that doesn't accept AI contributions now owned by a major AI company was a recipe for dis... er, conflict.
I'm not a huge fan of Rust, but I guess having a project like Bun in an actually memory safe language is probably a win? Guess it depends on how good Claude is at writing Rust code...
Read the previous discussions on the topic. Your summary is a sensationalist lie, since their change was apparently a smoking pile of hot garbage, and Zig already had similar performance gains in a newer release.
No. The Rust project developers are more lenient when it comes to developing patches with AI assistance, but the amount of leniency one receives is proportional to the amount of pre-existing trust a contributor has with the project, and every PR still has to be reviewed by an independent human. A stranger dumping a zillion lines of slop in a PR is a one-way ticket to having your PR politely closed.
Probably more about going with the native language that is reliable and battle-tested. Rust runs in Firefox, and in production systems across major orgs; this is not surprising.
> what looks like a massive undertaking for vibe coding
fwiw, I suspect it's less of an undertaking than you may think. I've been playing with AI to rewrite Postgres in Rust[0] over the past couple of weeks and I found the AI to be exceptional at doing rewrites. Having an existing codebase you can reference prevents a lot of the problems you have with vibecoding. You have an existing architecture that works well and have a test suite that you can test against
Over the course of a month I've gone from nothing to passing over 95% of the Postgres test suite. Given Jarred built Bun, I bet he'll be able to go much faster
> I suspect it's less of an undertaking than you may think... having an existing codebase you can reference prevents a lot of the problems you have with vibecoding.
Yeah, it's a distinction worth making, and the language for making it kind of sucks. Vibe coding means "AI does the whole thing", or "I use tab autocomplete" depending on who you ask. It's not a very useful term anymore, we need better ones.
My benchmark is basically, "are you letting the AI drive."
In this case, an AI appears to have written the migration guide...
It was and is a perfectly good term, but people started using it without regard for its definition. I don't know why people wouldn't misuse a "better" term the same way.
In this case I think the current zeitgeist (at least among zoomers and younger millennials) really loves the word "vibe". Once they hear of the term "vibe coding", they just want to be able to say it, even if what they're doing isn't really vibe coding.
And then that leaks outside their social and age groups, because other people hear the incorrect usage, get confused, and incorporate that confusion into their own use of the term.
You are right but recently, vibe coding has become a demeaning term for AI assisted code by anti-AI people. It’s interesting seeing how words evolve very quickly on the internet as they spread to different demographics.
Just going off vibes and not even looking at the code was the original definition. But "different people say the same thing but mean different things" is kind of the problem I was getting at.
I do not know if there's any overlap between these teams, but it seems like Anthropic itself is fairly invested in the Rust ecosystem.
They recently proposed some of their internal tools to be the official Rust implementation[0] of Connect RPC[1]. As a protobuf based library set, this includes a new Rust-based protobuf compiler, Buffa[2].
Zig is a moving target. 0.15 -> 0.16 includes some massive structural changes concerning IO and async/threading.
Claude has absolutely no idea what it's doing with bleeding edge zig unless you feed it source and guide it closely (in which case it's useful for focused work) - I'm building a game engine & tcp/udp servers with it and it requires a hands-on approach and actually understanding what's being built.
I imagine these are not really concerns with rust at this point.
In my ideal world the team behind bun would be putting in the work to keep up with modern zig, but it's starting to look like they are running mostly on vibes in which case rust might be a better choice.
> it requires a hands-on approach and actually understanding what's being built.
I think this is true regardless of what language you’re using.
I’ve built a lot in Zig and there’s no difference between vibing stuff in it versus TypeScript/React. Claude can “one-shot” them both, and will mimic existing code or grep the standard library to figure everything out.
The code may run but it's rarely idiomatic. For example they almost never define functions inside the struct/union/enum namespace unless it already exists and follows that style, i.e. I expect "foo.bar()" but they make it "FooMod.bar(foo)".
Which isn't particularly difficult - the language docs and std source come with the installation, so all you need to do is tell Claude where those directories are in your skill/plugin/CLAUDE.md.
> and guide it closely (in which case it's useful for focused work)
It does struggle sometimes with writing code that compiles and uses the APIs correctly. My approach to that so far has been to write test blocks describing the desired interface + semantics, and asking Claude to (`zig test` -> fix errors) in a loop until all the tests pass.
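The same loop translates to any compiled language. Here is a minimal sketch in Rust terms, with a made-up `parse_semver` standing in for the desired interface; the agent runs `cargo test`, fixes errors, and repeats until green.

```rust
// Stub implementation the agent would iterate on until the test passes.
fn parse_semver(s: &str) -> Option<(u64, u64, u64)> {
    let mut parts = s.splitn(3, '.');
    let major = parts.next()?.parse().ok()?;
    let minor = parts.next()?.parse().ok()?;
    let patch = parts.next()?.parse().ok()?;
    Some((major, minor, patch))
}

// The test block written first, describing the desired interface + semantics.
#[test]
fn parses_basic_semver() {
    assert_eq!(parse_semver("1.2.3"), Some((1, 2, 3)));
    assert_eq!(parse_semver("nope"), None);
}
```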
You're already at a disadvantage having to stuff the context and spend extra tokens coercing the model in the correct direction compared to it already knowing what to do (rust, ts, go, etc.)
Here, I just did a quick test with claude.
1. "make a simple tcp echo server that uses rust"
compiles and runs - took a few seconds to generate.
2. "make a simple tcp echo server that uses zig"
result: compile error, took literal minutes of spinning and thinking to generate
response: "ziglang.org isn't in the allowed domains. Let me check if there's another way, or just verify the code compiles conceptually and present it clean."
/opt/homebrew/Cellar/zig/0.15.2/lib/zig/std/Io/Writer.zig:1200:9: error: ambiguous format string; specify {f} to call format method, or {any} to skip it
@compileError("ambiguous format string; specify {f} to call format method, or {any} to skip it");
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
3. "make a simple tcp echo server that uses zig 0.16"
result: compile error:
zig build-exe main.zig
main.zig:30:21: error: no field named 'io' in struct 'process.Init.Minimal'
const io = init.io;
^~
4. "make a simple tcp echo server that uses zig 0.15"
result: compile error
zig build-exe main.zig
/nix/store/as1zlvrrwwh69ii56xg6yd7f6xyjx8mv-zig-0.15.2/lib/std/Io/Writer.zig:1200:9: error: ambiguous format string; specify {f} to call format method, or {any} to skip it
@compileError("ambiguous format string; specify {f} to call format method, or {any} to skip it");
Rust took seconds and just works. Zig examples took minutes and don't work out of the box. The DX & velocity isn't even close.
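For reference, a working std-only Rust echo server is tiny and squarely in-distribution for a model. The one-shot answer looks something like this (a sketch, not Claude's verbatim output; the port number is arbitrary):

```rust
use std::io::{Read, Write};
use std::net::TcpListener;
use std::thread;

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:4000")?;
    // Thread-per-connection: fine for a toy echo server.
    for stream in listener.incoming() {
        let mut stream = stream?;
        thread::spawn(move || {
            let mut buf = [0u8; 1024];
            loop {
                match stream.read(&mut buf) {
                    Ok(0) | Err(_) => break, // connection closed or failed
                    Ok(n) => {
                        if stream.write_all(&buf[..n]).is_err() {
                            break; // peer went away mid-write
                        }
                    }
                }
            }
        });
    }
    Ok(())
}
```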
i mean, if zig is doing its best (inadvertently) at shooing off slop jockeys, then i already have more confidence that:
1. the language and stdlib are written by people who know what they're doing
2. packages in the ecosystem, at the barest level, are written by those who didn't leave after a few compile errors they couldn't reason about
The agents will churn their way through the errors. The new users whose learning material is out of date, as well as the existing users that have an insurmountable task in updating their code, will give up instead.
I think the changes are improvements, but there's a real cost to language churn, and every time it happens, the graveyard of projects grows just that little bit larger.
If you don’t want to use obsolete versions of dependencies, you need to explicitly tell the model that. Then you have to hope it can adopt new APIs it wasn’t trained on, rewrite existing code to handle the breaking changes, and keep your fingers crossed that nothing else breaks in the process.
LLMs perform much better with Go, not only because of the lack of hidden control flow (LLMs can deal with that, but it costs a lot of tokens) but mainly because both the language and its dependencies introduce very few breaking changes.
This hasn’t been true for some months. Claude has gotten better about adding latest versions of crates, and when it does encounter a breaking change from what it expects it is usually very quick about finding the change in the docs or crate source code.
What you are talking about used to be a pain point, but is now pretty much gone.
Rust can be a real superpower for AI-assisted dev work, because the compiler outputs very good errors, and the type system catches most safety bugs.
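A classic example of what that catching looks like in practice; this sketch is deliberately rejected by rustc, whether a human or a model wrote it:

```rust
fn main() {
    let mut v = vec![1, 2, 3];
    let first = &v[0]; // immutable borrow into the Vec's buffer
    v.push(4);         // error[E0502]: cannot borrow `v` as mutable
                       // because it is also borrowed as immutable
    println!("{first}");
}
```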
I wouldn't call any port "prudent". In general, taking mature software and doing any major rewrite is one of the riskiest things you can do. It is a large-scale attempt to fix what isn't broken.
Sometimes it is worth it, but it may also kill projects. A risky move. And AI doesn't help its cause. AI can save a lot of time when making ports, it is one of the things it does best, but it doesn't protect from regressions.
I am not using Bun in production, but if I was, I would consider it a risk. Not because of Rust vs Zig, but for changing things that work.
There are like 1,713 open PRs on the Bun repo. I'm assuming all are from Claude or robobun? I guess this gives us an insight into what the claude-code workflow looks like. Crazy times.
> Most are created autonomously by @robobun, checked for duplicates with a GitHub action (powered by Claude), reviewed by @coderabbitai and @claude. Meanwhile the CI is broken and @robobun finally closes a portion of its own PRs because they duplicate other PRs it has written. (Merging into main is still done by a human.)
What a weird take. I do a ton of OSS, and the act of writing code is what makes it fun for me. If I were forced to use an LLM to write all my OSS code, I would just not do it anymore.
Jesus Christ, this is the thing which should be talked about more. What an abysmally bad take. This actually makes me worry about the fate of Bun more than anything else discussed here.
> I expect OSS to go the opposite direction: no human contribution allowed.
How is it an incorrect interpretation? Jarred is indeed pitching/suggesting/predicting that human contribution will not be allowed in the near future, i.e. banned.
"Pitching" generally means that the person making the pitch is endorsing and pushing for it. (This might also be a regional word meaning/usage difference type thing.)
The person upthread should have said "predicting".
Zig has some advantages for such projects, especially in the beginning.
Among them:
- much easier to iterate on (due to the language being simpler and compilation much faster)
- native C/C++ interops (Zig can compile C and C++ and mix it with Zig) which is crucial for a node-replacement runtime that runs an open source JS engine
- fewer dependencies and trivial static linking
I guess that now that they've been acquired by Anthropic there's this combination of having both in-house Rust talent, AI which does better on Rust, and the funding and resources necessary to undertake such a migration.
Also: Anthropic bought Bun so as not to depend on Node.js. But now they are dependent on Zig, which is a moving target and is hostile to them in that it's not accepting their contributions.
Anthropic makes claude, claude can write Rust like a champ and struggles at Zig. It's a straightforward "training data" argument.
I think there are even longer term plays that Anthropic should be looking at, in this space, but it seems like they've decided rust is the right thing, so fair play. I would be (am!) thinking about making an LLM optimized high level language that you can generate / train on intensively because you control the language spec.
Claude doesn’t write Rust like a champ. It’s still miles ahead at JS and Python compared to Rust. It can do macros and single-file optimizations, but it’s gotten really stuck in type hell and tried to `dyn` everything on multiple occasions for me.
Claude struggling at Rust: not getting types correct, using the wrong abstractions, not implementing things correctly
Claude struggling at Zig: the above + memory safety issues if you run “fast” mode.
It is generally true that Rust code tends to be written in a way that the compiler catches the issue at compile time. The same is not as true for Zig, Python or JS
I'm reminded of the old joke "how to shoot yourself in the foot in 25 different languages". The first one was "C - you shoot yourself in the foot." Zig remains very close to that philosophy.
So the difference is not in writing new stuff but in maintaining the existing codebase. Rust's rigidity makes it potentially harder to break stuff compared to Zig's general flexibility. As a project grows and matures, different types of contributors naturally come in and it's unreasonable to expect everyone to learn about historical footguns that may have accumulated.
100%.
For many people, Bun is the only reason they've even heard of Zig. I'm not in a position to comment intelligently on comparative language features per se, but when it comes to mindshare and community size, Rust is a clear winner.
alt runtimes are still pretty niche, but deno and bun do have some degree of adoption. For Bun, the runtime is actually sometimes perceived as unwanted baggage, (eg a consulting client of mine wanted to pursue bun for its build tooling but had no interest in changing the runtime). IMHO, node (with Vite and PNPM) is the right call for the vast majority.
I would expect all LLMs are going to be better at Rust than Zig - a strong, thorough compiler will simply prevent more mistakes, and the benefits of a "simple" language decreases the larger the code base gets. The more abstractions exist, the less valuable "no hidden control flow" or "no hidden allocations" from the standard library get, and that's before you add the mother of all abstractions of vibe coding.
They do work well. But I still see the occasional type related issue or bug from refactoring that claude will introduce into javascript and python code. It seems to be happening less and less frequently as the models get better. But, the rust compiler catches real bugs in LLM code. I consider that a win.
Has anyone made any cross language benchmarks for LLMs? I wonder if rust's conceptual complexity makes it harder for LLMs to write? If all you care about is working software, which language is best for LLMs? Python, because there's more example code? Go or Java, because they're simpler languages? Ruby because its terse? Rust because of the compiler? I'd love to see a comparison!
But why should they? This just seems like the groundwork for an initial refactor and moving from one language to another. They haven't actually committed to switching from Zig to Rust yet. I mean, I get if you are an investor and you want to see if they are using their time effectively, but why would it matter to anyone else?
They’re not required to do so, but like I said, it would be nice, because it removes a lot of speculation. And development is in the open, so people notice what they’re doing.
Lots of people, me included, heavily invested their time and expertise into Bun, using it as a daily driver, to bundle production code or even using it in production as a JS/TS runtime. Of course, we are interested in Bun to stay a useful tool. The Anthropic acquisition was worrying enough on its own.
But nothing about your expertise in Bun has changed, currently; only the development has. Why would they have to drag you into a daily stand-up about their development process?
Bun may become unusable after Anthropic's meddling with it. In that case the expertise would be wasted. It's not a big deal for most users, but still.
To be fair, this seems to be Bun's original creator themselves experimenting. It's unclear if there's any relation to the Anthropic acquisition, but I think it's best we refrain from prematurely speculating when we just don't know.
Anthropic just wanted Codex-like bragging rights, Codex being developed in Rust. So they are now going to write Bun in Rust, and then Claude Code can claim to be built on Rust.
Honestly, this kind of thing seems to work quite well with vibe coding. If I remember correctly, the Ladybird JS engine was "vibe-ported" to Rust as well, and it passed 100% of the original test suite, in addition to new Rust tests.
The definition is at https://x.com/karpathy/status/1886192184808149383 and no, that does not match what is in the branch. Systematically migrating a code base using an LLM does not match the definition of vibe coding.
> I’m seeing people apply the term “vibe coding” to all forms of code written with the assistance of AI. I think that both dilutes the term and gives a false impression of what’s possible with responsible AI-assisted programming.
Then "vibe coding" is a useless term, if it just means "LLM-assisted coding". We might as well just say "LLM-assisted coding" or "AI coding" or whatever.
As much as I find the word "vibe" generally annoying (in all contexts), I actually really like "vibe coding" as "LLM did everything and I didn't even look at it". It's a succinct, useful way to describe that mode of doing things. Diluting it down to "LLM-assisted coding" makes it useless.
Nah, I'm not big on these "it either matches the way ___ used it or it's useless" binaries. The term is the term, it's recent, and people are using various forms of the others you mentioned. People use it loosely, people use it specifically, this is the way for many colloquial terms, and definitions form around them and expand over time or change.
It sort of surprises me how uptight people are getting about a term that was mentioned on X last year and has since been tossed around to loosely imply that a machine did between zero and all of the work. Just because it doesn't match exactly does not mean it's useless, it maps to a concept, if the details are important and ambiguous, then elaborate.
All language is "coined terms". The point is that if you dilute the definition of a term, you make the term useless. Evolution of a term isn't automatic; correcting usages like these pushes the evolution in a more useful direction. Also, the evolution of language is not a magic spell that automatically forgives people for making language mistakes.
I think the definition of vibe coding is a bit fluid, in this case I just meant it to be “code fully generated by AI, possibly not fully reviewed by human eyes”. I agree that this definitely not “coding based purely off vibes”, and the approach looks legit.
The question isn't whether or not you'd get the same line count with a non-LLM tool. The question of whether or not it's vibe-coded depends on whether or not the committer actually reviewed and understood the new code. And with a 75k line difference, that seems unlikely.
It depends on what you mean by "vibe coding". Is AI coding based on an existing implementation vibe coding? What about only from a natural-language spec? How does manual reviewing affect whether or not it's vibe coding?
> How does manual reviewing affect whether or not it's vibe coding?
I think the most commonly-accepted definition of "vibe coding" is when you "forget that the (generated) code even exists"[0]. So vibe-ness entirely hinges upon whether you're manually reviewing. If you make/prompt changes based on what you observe in the generated code (rather than only based on runtime behavior), then you're not "vibe coding".
I think the other things you mentioned are orthogonal to vibe-ness.
In practice all use of AI rapidly becomes vibe coding. Even if someone says they're going to carefully manually review everything that's generated, within a couple of days they get bored and just click approve.
This is just a matter of priorities - I use LLMs to write code every day and I have never put a single line of code up for review that I didn’t read and understand.
I used to do this, and then test manually to validate everything worked as expected in my small open source project. But over time I saw that some bugs crept in which I was unable to track down, since I was only testing manually. So I wrote some e2e tests with Playwright, and I think that gives a bit of relief (at least).
Porting from one typed language to another seems like a perfect use for LLMs. I can see the appeal of both languages and why one would consider such a move (e.g., Rust being a mainstream PL vs Zig's cult status (no slight intended)).
I think the big difficulty here is that Rust's ownership model in particular tends to require certain kinds of control flow to avoid a bunch of weird churning/copying, which makes it not as straightforward of a port target from other imperative languages.
Like maybe you get the LLM to try _really hard_ to churn through everything, but this feels like a big case of "perils of the lack of laziness".
Of course if you have a good idea for how to deal with allocations etc "idiomatically" already maybe that works out well. And to the credit of the port guide writer bun seems to have its explicit allocations that are already mapping pretty well to Rust.
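To make the "churn" concrete, here is a minimal hypothetical sketch (all names made up): a Zig-style pattern of holding a pointer into a container while mutating it gets rejected by the borrow checker, and the mechanical fix is an extra copy.

    // A line-by-line port would want to keep a reference into `lines`
    // while also appending -- the borrow checker rejects that:
    //     let first = &lines[0];
    //     lines.push(format!("first: {first}")); // E0502: mutable borrow
    // The mechanical fix is a clone, i.e. the copying/churn in question:
    fn append_summary(lines: &mut Vec<String>) {
        let first = lines[0].clone(); // copying ends the borrow
        lines.push(format!("first: {first}"));
    }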
This is all wild conjecture, but I'd assume that teaching the LLM to do that mapping is an achievable goal, and then it gets close to automatic -- effectively slurp the source AST into a Rust AST and render.
My only experience with ports so far is Python to Go, and it's been near flawless (just enough stupid shit to make me feel justified to be in the loop).
It really isn't if you don't have the right abstractions.
Especially for memory management, the right and wrong abstractions in Rust can mean a factor of 5 or 10 difference in difficulty. With the right memory management abstraction your code can be a straight-line port (or even cleaner!); with the wrong one you're going to spend a lot of tokens watching a machine spin around in circles trying to untie itself.
GC'd languages don't have this problem, though obviously you can still generate a stupid amount of pain for yourself by doing something wrong.
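For what it's worth, one "right abstraction" that often makes such ports straight-line is an index-based arena. A sketch of how Zig's explicit-allocator style can map onto safe Rust (everything here is illustrative):

    // Nodes refer to each other by index instead of by reference, so there
    // are no lifetimes to fight; dropping the arena frees everything at
    // once, much like deinit-ing a Zig arena allocator.
    struct Node { value: u32, next: Option<usize> }
    struct Arena { nodes: Vec<Node> }

    impl Arena {
        fn alloc(&mut self, value: u32, next: Option<usize>) -> usize {
            self.nodes.push(Node { value, next });
            self.nodes.len() - 1
        }
    }

    fn main() {
        let mut arena = Arena { nodes: Vec::new() };
        let tail = arena.alloc(2, None);
        let head = arena.alloc(1, Some(tail));
        assert_eq!(arena.nodes[head].next, Some(tail));
    }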
I'm porting a large-ish Delphi application to C#. It's been pretty hands-off except for converting to async and some language capability mismatches.
Interesting how times have changed. Back in 2015, the entire Go runtime (already a mature codebase) was rewritten from C to Go semi-automatically: one of the maintainers wrote a C-to-Go conversion tool (for a subset of C they used) so that it compiled and produced identical output, and then the resulting code was manually refactored to make the Go code more idiomatic and optimized. And now you can just ask a language model.
The big difference here is that the C-to-Go tool was presumably deterministic: running it over and over again should produce the exact same result. You can trust that result because the human wrote the conversion tool, understood it, tested it, and worked the bugs out.
The LLM is non-deterministic. You could have it independently do the conversion 10 times, and you'd get 10 different results, and some of them might even be wildly different. There's no way to validate that without reviewing it fully, in its entirety, each time.
That's not to say the human-written deterministic conversion tool is going to be perfect or infallible. But you can certainly build much more confidence with it than you can with the LLM.
Perhaps a viable approach might be to vibe code the translation tool itself and observe that for every input it gives the expected output. Then once the translation is done, the translation tool can be discarded.
This would require a robust test suite though.
One of the cases where vibe coding might actually be useful, writing a throwaway tool.
Why does the deterministic nature matter? The interesting part is having oracle tests, not determinism. If something is deterministic and wrong, you use oracle tests to catch that.
People keep saying "deterministic" when they mean "probabilistic". For illustration, a bloom filter is deterministic, but it's also probabilistic. LLMs are the same.
So far the wonders of claude/codex have been mostly constrained to applications that are built within the boundary conditions of existing libraries -- the models make direct use of the good work that humans have done to date to build Python, `requests`, `ffmpeg`, you name it.
But I'm excited for the (I think inevitable) stage where the shoggoth starts to reach outside those constraints -- rewriting, patching, renaming, rebuilding libraries, DLLs, binaries -- and we move into a regime where the libraries dissolve, the application floats on top of the shifting sands of an ever more efficient, secure, unified and totally inhuman technology stack.
Obviously this is a horrifying idea in some ways (interpretability, security etc), but it's also not obvious to me that it can't work, especially if there are dedicated, centralized efforts to do this. It's also not clear that interpretability is necessarily mutually exclusive with a full slopification/machine rewrite of decades of foundational, incremental development.
Linked commit is probably not the most convincing for this tagline. Here's a branch[0] of Claude mass rewriting Zig code into Rust which is currently at 773,950 additions and 151 deletions:
Yikes. When Jarred left Stripe for the first time, he left behind multiple 10k+ line PRs rewriting code in the dashboard (this is before LLMs). It took months to work through those. A three quarter million line diff is essentially unreviewable.
I wonder if a successful, albeit slower, approach would be to walk the git commit history in lockstep, applying the behavioral intent behind each commit. If they did this, I would be interested in knowing if they were able to skip certain bug fix commits because the Rust implementation sidestepped the problem.
This is an interesting idea and I might try it with something smaller. There are more than 15,000 commits to Bun, so you'd have to have some way to operate on groups of commits in one prompt (sketched below) to get that done without thousands and thousands of API requests.
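A rough sketch of that batching (branch name and group size are arbitrary): walk the history oldest-first and hand the agent one group of commits at a time.

    use std::process::Command;

    fn main() {
        // List commits oldest-first; ~15,000 for bun.
        let out = Command::new("git")
            .args(["rev-list", "--reverse", "main"])
            .output()
            .expect("git rev-list failed");
        let commits: Vec<String> = String::from_utf8_lossy(&out.stdout)
            .lines().map(str::to_owned).collect();
        // Groups of 50 -> ~300 prompts instead of ~15,000.
        for group in commits.chunks(50) {
            println!("port intent of {}..{}", group[0], group[group.len() - 1]);
        }
    }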
Most unsafe-language-to-Rust transpilations produce not just pretty terrible Rust code but also use unsafe everywhere,
which is needed, as making things safe often requires refactoring that isn't localized to a single function/code block, and doing that while transpiling isn't the best idea. In general I would recommend a non-LLM-based transpilation (if possible), then using an LLM to do bit-by-bit, as-localized-as-possible, bottom-up refactoring to get rid of unsafe code, potentially at some runtime performance cost, followed by another top-down refactoring pass to make things nice and fast. Plus human supervision to spot parts where the paradigms clash so hard that you have to make some larger changes already during the bottom-up step.
Anyway, that means segfaults would likely stay segfaults in the initial transpiled version.
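As a hypothetical before/after of that bottom-up step: the first function is what a mechanical unsafe translation tends to look like (a Zig out-of-bounds bug stays an out-of-bounds bug), and the second is the localized refactor that moves the bounds into the type.

    // Mechanical port of pointer arithmetic: unsafe everywhere, segfaults intact.
    unsafe fn sum_raw(ptr: *const u32, len: usize) -> u32 {
        let mut total = 0;
        for i in 0..len {
            total += *ptr.add(i);
        }
        total
    }

    // Localized bottom-up refactor: same behavior, bounds are now the slice's job.
    fn sum_slice(xs: &[u32]) -> u32 {
        xs.iter().sum()
    }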
Given Bun/Anthropic's recent gripe about compile times with Zig (i.e. that their vibe-coded 4x compilation-speedup PR wasn't accepted), it strikes me as an "interesting" move to switch to a language that probably delivers compilations 4x longer than even vanilla Zig's.
I am very sceptical that Zig actually compiles faster than Rust.
I had similar code written in zig and c++ and cold compilation was many times faster in c++ and incremental compilation was instant in c++.
I think the reason most Rust projects compile slowly is the excessive use of dependencies, and also the excessive use of metaprogramming in code.
Zig doesn’t have multiple compilation units so it doesn’t parallelize compilation
So, Anthropic acquires the Bun team because claude-code uses Bun. They port Bun from Zig to Rust, presumably because Rust "is better" (imagine big air quotes here). Again presumably, they want to make claude-code "better". Why make it so complicated? With all the power of the LLMs they have, surely they could make claude-code the best it can be by writing it in Rust directly.
Presumably they aren't falling for their (extremely obvious) "grassroots" marketing, and know, like any good engineer, that LLMs are not the right tool for this.
It's easy to just see Bun as a marketing stunt, as well.
Are you replying to the wrong comment? I clearly quoted which part I was replying to. I didn't attempt to answer the question "why write Bun in Rust when CC itself can be written in Rust?"
What I said is that "they know that LLMs are not the right tool for this" is not the answer, as CC is already vibecoded, so it'd be very weird to believe you can't vibecode a port of CC.
The actual answer is, of course, that this whole discussion is making a mountain out of a molehill. Bun is not committed to a Rust rewrite, vibed or not.
I'll be very interested in how this AI port turns out. I am involved in a number of active projects that are being held back by their language/framework, but where a rewrite would be too big a project to undertake using only human power.
I've had more success vibe coding Rust than I have in more dynamic languages. I suspect the strictness of the Rust compiler forces the AI agent to produce better code. Not sure. It could be just that I am less familiar with Rust so it feels like it's doing a better job.
Rust is a good choice to let LLMs run without a ton of supervision. In my experience you need to monitor the progress heavily and take ownership of the design of the thing you're building or porting. Test harness is a must. Each iteration should run the test and ensure it doesn't break things in other places.
I am in the middle of porting TypeScript to Rust and learned a ton doing this. You can check out the work in progress here https://github.com/mohsen1/tsz/
I've been targeting Go instead of Rust for a few things. But same deal, I'm not really a Go programmer and it seems to work well enough. I do have a few decades of engineering all sorts of code bases; so I'm not coming at this completely naively.
My way of compensating for my own inability to do detailed code reviews is making sure the tests (integration tests, end-to-end tests) cover everything I care about. Without that, you can't be sure it is not skipping detail work. I've also made it do some benchmarking and stress testing and then analyze the code base for potential bottlenecks. After it found and fixed a few issues, it got better. Finally, prompting it to do critical reviews, look for refactoring opportunities, etc. can give you a nice list of stuff to fix next. Having it run memory-leak checkers and static code analysis tools is also a good strategy. Once you start running low on issues you find this way, the code is probably not horrible. Or at least you've hit some sort of local optimum.
The lack of code reviews sounds pretty horrible. But it is now quickly becoming the biggest bottleneck in AI assisted coding. Eliminating that bottleneck is scary but it enables a few step changes in volume of code that becomes possible. Using strict compilers and strict memory management helps eliminate a few categories of bugs and issues.
I was previously doing this with languages I do understand. Once you start routinely dealing with larger and larger commits, reviews become a problem.
I expect working with larger code bases like this will get a lot easier and better over time. I noticed that the main headaches I face with this type of engineering are the tendency of models to keep deliberately cutting corners, only doing happy path testing, or deferring essential work for later. I suspect a lot of the models are simply biased to conserving token usage. Pretty annoying but also easy to compensate for with follow up prompts and testing. And probably something that becomes less of an issue as the models get tuned to behave better without additional prompting.
I want zig to succeed but given that zig is not yet 1.x I'd imagine a large code base like bun would have difficulties addressing major breaking changes. Also given the fact that bun is using a fork of zig https://x.com/bunjavascript/status/2048427636414923250?s=20
I am also porting TypeScript to Rust. With a different design I managed to make it faster than tsgo port. I've made a lot of progress in the last 4 months but needs more work. Contributions are welcome!
Picking a pre 1.0 language to build your product always seemed like a bad choice to me. Purely on that basis and ignoring the recent drama this seems like a reasonable idea for tech debt pay down to me. Assuming automated conversion can work without making things worse, which is not exactly a given.
React Native is only an application framework. Using a tool with an unstable API a level down the stack seems much worse. Foundations of sand is the phrase that springs to mind.
That seems totally reasonable but I wonder if there was some head butting in non-public channels given Bun is one of the biggest players in Zig and planned to push through a change like that on their own.
And also great reasons for Bun to port themselves elsewhere. If they aren’t allowed to contribute to Zig, there’s very little reason to select Zig moving forward.
Zig is a moving target that has breaking changes in every release (which is fine as they are sub-1.0). But that means that AI tools have been trained on outdated syntax/etc. Zig isn't that common, so there is even less training data to begin with.
Rust on the other hand is pretty established by now and has less breaking changes. It also has more compile-time safety-guarantees that makes vibe-coding a bit more confident.
On top of that, Zig has rejected their upstream contributions. So they'd have to maintain their own compiler fork in the long run, which is probably just technical debt.
Most of my vibe coding is in Zig, and it has been my experience that Claude and Codex both keep up with Zig changes just fine. Every now and then I catch them writing outdated code that they burn some tokens on, but my experience says your local codebase's idioms will influence what gets generated enough to stop this from being a problem.
Probably an experiment due to Bun's PRs to Zig being rejected (Zig does not allow AI use). If Rust works well enough, and the alternative is maintaining a fork of Zig, I'd guess they'd go with Rust.
The anti-AI policy had nothing to do with Bun's PRs being rejected. This post[0] by a core zig maintainer explains why the PRs were low quality and subsequently rejected.
Was there even a PR? The post from Bun [1] says they have no plan to upstream it, and that ziggit post says the changes are undesirable. It sounds like there never was anything to reject.
I can't find any evidence that the creators of Zig hold the views GP seems to suggest, but I think your assertion is wrong.
Normal, emotionally stable people do sometimes make decisions about what businesses to patronize based on the political leanings of the business owners. Same thing happens with art appreciation, movie/TV watching, and plenty of other things. Zig might not be a business, but the same rules apply.
You may think that's foolish, and not make your decisions that way, but it's a perfectly valid way to make decisions.
> Normal, emotionally stable people do sometimes make decisions about what businesses to patronize based on the political leanings of the business owners.
Maybe with issues like abortion or racial discrimination, but not tariffs.
The problem with vibe coded re-writes is that you basically sign off on understanding the generated codebase at that point. Any historical knowledge of the codebase is gone.
the rust they've written (so far) is highly unidiomatic (and with a ton of unsafe). I can't speak to the zig part, but it seems plausible to me it is line-by-line, horrendous rust.
Whether or not they can clean it up is an interesting question.
Zig can do some things w.r.t. compile-time compute that sit somewhere in between Rust const expressions and proc-macro usage. This isn't something Rust (or most languages) has. So even if we are generous and interpret line-by-line as expression-by-expression, this isn't fully doable.
But also, telling an LLM to do a line-by-line translation of a file is guaranteed to never truly produce a line-by-line translation, due to how LLMs work. That's fine: you don't say "line-by-line" to actually make it work line by line, but to "convince" it not to do the opposite (moving things around wholesale, completely rewriting components based on guesses about what they're supposed to do, etc.). In other words, it makes the result more likely to be behavior-compatible (including logic bugs) even though it doesn't literally go line by line. And that then allows you to fuzz the behavior for discrepancies in the initial step, before doing any larger refactoring that may include bug fixes.
Though tbh I would prefer if any zig -> terrible-rust step were done with a deterministic, reproducible, debuggable program instead of an LLM. The LLM can then be used to support incremental refactoring. But the initial "bad" transpilation is so much code that using an LLM there seems like a horror story w.r.t. subtle hallucinations and similar.
Wouldn’t call myself an expert in either, but I think 2 things stand out far more than anything else:
1. Rust is effectively as strict as can be in terms of ownership. In Zig you can just allocate some memory and then start slinging pointers (or slices) all over. If you’re doing this then you’re presumably doing it for mutability and you don’t strictly know where that pointer ends up once you’ve passed it on.
2. Rust’s metaprogramming is split among a couple different things (e.g. traits, macros), whereas Zig’s is unified (comptime). comptime is (at least advertised as) “just normal Zig code” and Rust macros are a great example of “this doesn’t work at all like the base language”.
#1 boils down to “can the LLM solve the pointer aliasing here?” and #2 is translating between metaprogramming paradigms. Could work but a line-by-line translation is a pipe dream.
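On point 2, a hedged illustration of why that translation is structural rather than line-by-line: a Zig comptime function taking `anytype` effectively duck-types its argument, while the Rust rendition has to invent an explicit trait first (everything below is made up for illustration).

    // The trait the translator has to conjure out of thin air:
    trait HasLen {
        fn len(&self) -> usize;
    }

    impl<T> HasLen for Vec<T> {
        fn len(&self) -> usize { Vec::len(self) }
    }

    impl HasLen for String {
        fn len(&self) -> usize { self.as_str().len() }
    }

    // Rough equivalent of a Zig `fn isEmpty(x: anytype) bool`:
    fn is_empty<T: HasLen>(x: &T) -> bool {
        x.len() == 0
    }

    fn main() {
        assert!(is_empty(&Vec::<u8>::new()));
        assert!(!is_empty(&String::from("hi")));
    }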
Zig doesn't have a borrow checker. It's basically C, if C had been much better designed.
Line-by-line ports to idiomatic Rust are usually not possible because of the borrow checker and Rust's ownership rules. That's the reason the Typescript compiler was ported to Go instead of Rust.
It makes the git history a bit more confusing to follow if you want to see old changes, but I'm sure a simple wrapper to check for the zig equivalent files as well wouldn't be very difficult.
So I can't tell if the linked commit is an actual attempt or just an experiment but it did always strike me as odd to make a JS runtime in Zig when my impression was there were a lot of work-stopping compiler bugs at the time.
I wonder if something like Haxe, a language that was able to transpile to several languages would be the best target for LLMs. They could always generate haxe and then transpile it to whatever language the user wants.
Probably not for an already ongoing project like this but for a greenfield one.
When I first heard that bun was written in zig, I thought that was an odd choice for such a large project, mostly because the language is "unstable" and is still making significant breaking changes.
I would guess dealing with breaking changes is a big motivation for this.
The only Bun shipped product I've used in anger is OpenCode and I regularly run into segfaults on it. I doubt this is the reason for migration but every time it happens, it reminds me the real cost of unsafe code. That being said, Zig is an absolute pleasure to write and I can't wait until it has a real library ecosystem, Rust's greatest boon.
That's completely normal at the first step of the language transformation. Actually it's required if you do a file by file transformation first while wanting to maintain interface compatibility.
I'm not sure I would take this kind of path, I would much more focus on refactoring the project to small and easily translatable components with small boundaries, but it's cheap to try things.
If nothing else, it'll be good marketing material aimed at non-technical enterprise executives, so they can pressure their engineering teams in meetings: look, people are porting complicated things like this from one language to a totally different one, so why are we not using AI effectively?!
Both their AI policy and their rejection of Bun's performance PR were level-headed and well-reasoned. And the link seems more like a proof-of-concept than anything else.
It's true corporate sponsors are a big help with language development, but not at the expense of conceptual integrity.
Bun is the largest project written in zig. And it isn't close. Bun is bigger than zig itself. Seems like zig isn't mature enough to handle Bun's needs, so I don't blame them at all for looking for off ramps. Only time will tell if rigidity from the zig team is worth the cost of losing Bun. It might be.
Zig won't be affected by Bun potentially moving to Rust; the language has been growing rapidly, and one of Zig's main propositions is "maintain it with Zig". Its ability to integrate with existing C code bases, as well as to be a drop-in build replacement, sees widespread use.
In addition, the link in the comment you replied to explains why the PRs Bun opened to Zig would have lowered the quality of the compiler and how Zig has achieved even greater speedups, with more widely applicable features like incremental compilation and the self-hosted backend.
Tell me you've never worked with system languages without telling me you've never worked with system languages (telling claude to "write it in Rust" does not count).
It seems there was an issue where the image API ignored the ICC profile (now fixed).
Any developer with experience implementing image formats would almost certainly avoid this mistake. This is a problem that cannot be solved with vibe coding. In this situation, the user is merely a guinea pig for bug fixes.
Having written a JavaScript runtime in Rust in the past: Rust is an excellent choice. Not just for the development experience, but also for embedders who want to consume the project as a library (rather than a binary, e.g. node).
Not sure about vibe-coding it. While they aren't using v8, LLMs made it easier to understand v8 quirks and update v8 as they make weird changes every now and then. It couldn't write the runtime without help though.
This feels more like a reaction to Zig's anti-LLM policy than anything. Anthropic would probably like to contribute something back to Zig at some point, but I doubt anyone would ever believe their PRs were not written by Claude.
Exactly, this is a direct response to Zig refusing to accept pull requests from Bun (and Anthropic). That situation forced Bun to maintain a fork of Zig, and it makes sense in the long term that they'd rather port their entire project to Rust.
I've really enjoyed Bun the past year or so, but the acquisition by Anthropic, Bun's codebase and documentation increasingly becoming AI slop, and this impulsive complete rewrite - all of it has ruined it for me and I'm actively moving off of Bun. I don't feel comfortable relying on it any longer.
I was hopeful for this project, and I've reported crashes & bugs in the bundler with the hope that it will stabilize over time, but this is just silly - I'm not going to risk them pulling the rug under me and replacing the runtime with 1 million lines of vibecoded rust.
I can't imagine going from reviewing code in Zig to letting Claude code handle it in Rust. Seems like a lot of change to deal with in a short amount of time. Wonder how much the bun team culture will change? We've been really liking bun so far
I am not a fan of AI, but my limited experience running small local LLMs did show me that rewriting some scripts into a different language worked really well. So my guess is this will turn out just fine.
It's not really shunned - it's the standard solution for async in Rust - but it's not the right solution for every project, especially if you have specific requirements for how your project's computation should be scheduled. I would guess that Bun is one of those projects, especially as it needs to be able to schedule JS async work itself.
The answer is in the next sentence: "Bun owns its event loop and syscalls." They clearly want to manage their use of threads explicitly, which is not _unusual_ for systems programming but probably less common. Note that `rayon` is different from most of these in that it has nothing to do with async Rust - it's a tool for spreading computation over a thread pool, very popular in non-async projects, but it would also go against their goals here.
tokio is great and it's pretty performant, but you pay an allocation for every future unless you do some complex organization of your futures.
Source: I worked on Deno, competed directly with Bun on HTTP performance (and won on some metrics).
Edit: and of course I typed future instead of task (aka "spawned future"). Thanks, child commenters below. Much of Deno was built on spawning futures that mapped to promises and doing it as fast as possible. I spent ages writing a future arena to optimize this stuff..
You only allocate on box futures, which are much more rare than naked futures - generally only used where object safety (essentially dyn support) is required. Even then some workarounds exist.
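To spell out the distinction (standard tokio APIs, though the framing is mine): awaiting a future inline compiles it into the parent's state machine with no allocation, while `tokio::spawn` moves the future into the runtime, which allocates a task for it -- the per-promise cost being discussed here.

    // Assumes the `tokio` crate with the "full" feature set; illustrative only.
    async fn fetch(n: u32) -> u32 { n }

    #[tokio::main]
    async fn main() {
        // Inline await: part of this task's state machine, no extra allocation.
        let a = fetch(1).await;

        // Spawned: becomes its own heap-allocated task inside the runtime.
        let b = tokio::spawn(fetch(2)).await.unwrap();

        assert_eq!(a + b, 3);
    }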
It's an async runtime. The whole async-await flow removes a little bit of scheduling control and adds some forced memory management in order to give you some nicer code in an application case, but if you're trying to build a runtime yourself I think you'd much rather retain control in this case. It's just hard to reason about.
You'd much rather have the runtime you're building manage task scheduling and allocation and all that. It's the most natural design choice to make.
You shouldn't have to pull in big complex dependencies to do what should be primitive things. Zig is putting a strong and thought-out effort into getting async & parallelism "right" inside the stdlib. I'm honestly not up to speed with where rust is at with it at the moment, but last time I checked it was a bit of a mess.
Tokio is a general purpose async runtime. Much the same could probably be said for async-std (except IIRC they do have a barebones reactor for you to build your own on). In general, a general-purpose async runtime will do worse for highly specific tasks than a purpose-built one (especially e.g. NUMA).
I think avoiding async entirely might be a mistake, and I'm not entirely convinced anything better than a general-purpose async runtime might exist for a JS runtime (it itself is general purpose after all).
Avoiding std::fs is fucking bizarre to me: it's completely sync and is a really lightweight abstraction over syscalls.
My guess is they want to do all I/O as part of their event loop explicitly, and blocking a thread in a syscall waiting for an IOP (a la std::fs) isn't the vibe.
`tokio`, and Rust `futures` in general, are perfectly fine for typical applications.
But as soon as you need something that doesn’t fit neatly into the abstractions they provide, even something as seemingly simple as proactively reusing or cancelling sessions, things quickly become extremely complicated, inefficient, and unreliable.
For high-performance servers, where you really care about raw performance, DoS resistance, and taking advantage of modern kernel features, these abstractions can become a major limitation.
It’s a bit like using an ORM that gives you no easy way to send raw SQL queries. It works fine for common cases, even if it’s not always optimal. But when you really want to take advantage of what the database can do, you usually avoid the ORM.
Async is much harder to work with than sync+threading is. And while threads have more overhead in theory, in practice almost nobody is writing applications at such a scale where that overhead actually matters. So I don't blame them for eschewing async, there's likely no benefit for the project in it.
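For a sense of scale, the sync+threading style being defended here is about this much code (a thread-per-connection echo server sketch; the port number is arbitrary):

    use std::io::{Read, Write};
    use std::net::TcpListener;
    use std::thread;

    fn main() -> std::io::Result<()> {
        let listener = TcpListener::bind("127.0.0.1:4000")?;
        for stream in listener.incoming() {
            let mut stream = stream?;
            // One thread per connection: simple to reason about, and the
            // per-thread overhead only matters at very large scale.
            thread::spawn(move || {
                let mut buf = [0u8; 1024];
                while let Ok(n) = stream.read(&mut buf) {
                    if n == 0 { break; }
                    if stream.write_all(&buf[..n]).is_err() { break; }
                }
            });
        }
        Ok(())
    }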
April 26th - Bun announces they used AI to fork Zig so they could make an optimization for a 4x improvement
April 27th - Zig contributor mlugg clarifies why the specific optimizations Bun did were ill advised and wouldn't have been accepted in Zig, regardless of AI use [1]
May 4 - Bun is looking into Rust as an alternative.
This, to me, seems like total whiplash. Has anyone at Bun made a statement on why they're making such dramatic changes? It seems like the lesson to internalize from mlugg is not "switch to Rust"
I would assume that Zig was a risky choice to start with, and Rust was always lurking as a sensible option behind the corner. This probably just broke the camel's back.
Interesting. When I thought of Zig, I thought of Bun. In my mind it was the flagship application for that language. Is there another? I wonder how the Zig team feels about this. To me it seems like Rust has definitively won now.
Yeah, it's not clear. Especially the rise of LLMs is going to chip away Zig's strong points (simplicity at the cost of lesser safety) as time goes on. Which might be a part of why they're so stressed about it.
We can even use all PLs in a single project. The starting question should be something like: "which parts do we code in Brainfuck and which in Whitespace?"
[0] - https://github.com/cloudflare/vinext
If people get worked up about experimentation, that's their problem, not yours.
I don't think the tone was the problem.
[there was some sarcasm there, BTW, if anyone has a faulty detector that didn't pick up on it]
Doesn't matter if it's "experimental", it's a dumb experiment that shouldn't exist.
Do you think the same about bitcoin? Where do you draw the line as to what programs are allowed to be written?
While the concerns many have about Bun's potential future direction are valid IMO, of the posts on this thread the one you are criticising is one of the more constructive.
Recently Bun's latest version had memory leaks which, from my understanding, crashed production code. Add to that their attitude[0] of saying OSS will have no human contribution allowed, now these ports from Zig to Rust, the second-guessing of years of decision-making around using Zig, and this code basically being vibed (there is no way they are reviewing it) while being VC funded/bought by Anthropic.
These are all genuine issues which cause the hate. You can say people are hating because they rely on it, but it also seems like a bait and switch: people switched from Node.js to Bun (maybe even getting locked into Bun), only for Bun to make these highly questionable decisions, which is why people are starting to hate on it.
At least that's my interpretation right now, reading this whole thread.
[0]:https://x.com/jarredsumner/status/2048434628248359284: "I expect OSS to go the opposite direction: no human contribution allowed. Slop will be a nostalgic relic of 2025 & 2026."
- Jarred Sumner
e.g. `Box::leak(Box::new( ... ))`
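For context, that pattern deliberately leaks a heap allocation to get a `'static` reference, i.e. it opts out of Rust's ownership tracking entirely. A sketch of what it does (the variable is made up):

    fn main() {
        // Box::leak never frees the allocation; the reference lives forever.
        let config: &'static mut Vec<u32> = Box::leak(Box::new(vec![1, 2, 3]));
        config.push(4);
        println!("{config:?}");
        // Fine for process-lifetime data; a smell when a machine translation
        // reaches for it everywhere just to silence the borrow checker.
    }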
Who is to say that it’s wrong?
Bun raised millions of dollars and was acquired by a commercial entity which bragged in the same blog post of reaching $1B. They’re not a guy with an eyepatch and a tin can out on the street.
Open-source developers should be compensated, but they don’t have to be. You can’t reasonably offer your work for free then complain someone isn’t paying you. If you want to be paid, charge for it.
Signed: A long time open-source developer who has dedicated years of full-time work to useful projects without compensation or raising VC money or being acquired.
We are all software engineers on here (or at least many of us are), we all know how project management and prioritisation works right? We can't work on everything all at once.
That is not what the question is about, which you’ll see if you engage with it properly in good faith. There is a single question in the comment (indicated, as one does in English, by a question mark):
> How do you feel about all the constant concerns being raised about the quality of the project lately?
Everything else is context and opinion to explain the question.
At some point it needs to be made clear; it's not a legal obligation, but a reputational challenge.
What aspect do you think dominates?
For what it's worth, in my last experience with Bun[0] I ran into a couple of bugs where it seemed Rust could have helped, e.g. using Bun.write
[0]: https://mastrojs.github.io/blog/2025-10-29-what-struggled-wi...
I've had surprisingly good results from getting AI agents to take a script in shell, python or typescript and have it translate it into those other programming languages, including rust versions. Or swapping from one build system to another.
Or take on an additional/related feature (like Redis grepping over the new array data types). Because you can be relatively sure the borders are stable and you can limit the surface/scope.
Personally, I find this experiment interesting and I’m curious to see how it develops. Writing idiomatic rust requires a shift in mindset, so it’ll be worth watching how well LLMs adapt to that over time.
I don't understand why this mentality is so common. Zig and Rust are both fine languages with markedly different design goals and they can coexist.
I hope you get the code elegant and not only maintainable but future friendly and performant.
I'm sure recasting Bun in a new mold is going to be hugely informative about the structure of Bun itself, regardless of the outcome.
would love to read a postmortem
While you are here, can you elaborate on the method chosen? For example, why not write a conversion script for phase A? The same Anthropic model would produce one in no time, prompting it is the same cognitive load, but you would get a deterministic result.
Props for the effort man, but people have already picked up on Zig-to-Rust transition.
Poor Zig folks ...
Not actually pointing at you or anyone in particular here, to be clear. And if the answer is "not much more than forgetting to turn off the light when leaving the toilet", then certainly a "go have fun" cheerleading on my part.
But otherwise we collectively have to keep in mind that the prompts we can throw around mindlessly, without perceiving any direct negative feedback, are possibly not anodyne.
So if you can measure it, come back with those numbers too, so we can all take them into consideration next time the thrill to run it just to see what happens rises in our minds. Thanks.
Probably less than the impact of dozens/hundreds of actual developers, each with a dedicated computer, running for the months/years a similar effort would take.
If you want to go live in the woods and farm/hunt for yourself, feel free. I'd suggest you stay away from the museums with paint and not glue yourself to a car mfg.
> Showing 1,808 changed files with 790,916 additions and 151 deletions.
Just looking at the git diff [0].
I looked at one of these Rust port files [1]. It's 827 LOC and apparently 7,576 tokens. That gives a first-order guess that the full ~790k additions come to around 7 million output tokens. Obviously there is overhead: tool calls, reasoning, reads of the Zig version, and fixing compile errors. So multiplying by 5, I would guess maybe 40 million tokens total.
If we guess, that is around $200 to $500 in token spend, which probably emits about the same as burning $100 of gas? Or 50 or so kg of CO2?
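Spelling out that back-of-envelope math (every number below is a guess):

    fn main() {
        let tokens_per_line = 7_576.0_f64 / 827.0;       // ~9.2 tokens/line in the sampled file
        let output_tokens = 790_916.0 * tokens_per_line; // ~7.2M output tokens for the diff
        let total_tokens = output_tokens * 5.0;          // x5 for reads, reasoning, retries
        println!("~{:.0}M tokens", total_tokens / 1e6);  // ~36M, i.e. "maybe 40M"
    }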
[0] https://github.com/oven-sh/bun/compare/main...claude/phase-a...
[1] https://github.com/oven-sh/bun/blob/dacc59c62a8f93eabe6d9998...
> This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely.
Trying to pass off a blunder like this as if it's no big deal is an insult to your users. You made a dumb mistake. Own it, be transparent, and correct the problem that started this; namely, put some form of experimental tag in the commit message. Then say you made a simple mistake, sorry, and move on. Being dismissive is a defense mechanism that can arouse suspicion, as in: are you now lying about the experimental state to quench the flame war? Not that I believe that, but it can certainly become conspiracy fuel. Again, you can avoid all that with transparency.
It’s their repo, let them do what they want lol
It could get even worse if they get Second System Syndrome[1] and try to add features as they rewrite it. Considering Bun's rapid development cycle, this seems likely.
[0] https://www.joelonsoftware.com/2000/04/06/things-you-should-...
[1] https://en.wikipedia.org/wiki/Second-system_effect
A commit message on a random branch is not an obligation. Not telling random internet users what side projects they're working on is not a blunder. It quite frankly doesn't matter what you think looks official, it doesn't give you the right to treat people like this.
It's so embarrassing to be a programmer sometimes; so many of my peers behave like spoiled rotten brats.
The majority of the community feels this way, which says something. The author's reaction is to publicly display being upset and dismissive of the community's reaction. That just makes it worse.
When you work on a project this big, more care is needed. The commit was an innocent mistake. The blunder is blowing off the community's response as overblown, which it would have been had the commit been tagged experimental. But it wasn't. And the author did themselves no favors by blowing it off.
If the author was smart, their reply would simply have been:
Hello, To clarify, this is an experimental branch only. There are no plans to port, only experiment. I will tag the repo as such to ensure people understand its intention and avoid future misunderstandings.
Nothing difficult to understand here.
You may even be an OK programmer, but IF YOU AREN'T ABLE TO DO THE WORK I DON'T WANT TO USE IT.
Not worth your time? Not worth my time.
Yet here we are, what looks like a massive undertaking for vibe coding.
Time will tell how this will turn out. Would be nice if the Bun maintainers could give some clarification about what they’re doing here, and why they’re doing this.
It's probably a bit of both.
Anyone can hack up a quick PoC, even without LLMs, the hard part is writing code that is correct and maintainable.
Bold of you to assume they have the expertise.
Submitting patches is joining forces and helping out.
--------
[1] And align with the project's direction. This part is of course much more subjective so could very easily be an honest misunderstanding of the situation.
[0] https://ziggit.dev/t/bun-s-zig-fork-got-4x-faster-compilatio...
I love Rust, but you couldn't pick a language with slower compile times... XD
Linking is also slow, and the extreme amount of metadata produced for LLVM almost serves as a benchmark for LLVM's throughput, but that's all in an effort to produce faster, better binaries in the end.
On godbolt.org, Hello World compiles and runs in about 250ms. Zig's Hello World compiles and runs in 600ms. Of course Zig is still an unfinished language so optimisations like these are probably hardly a priority, but when it comes to lines of code per second, the difference isn't as big as people make it out to be.
What will make the most difference is how many crates the rewrite will pull in. The PORTING.md file specifies "No `tokio`, `rayon`, `hyper`, `async-trait`, `futures`" for the second phase, which should definitely get rid of the excessive compile time many people associate with Rust projects.
I guess it's all relative.
I find Rust's compile times abhorrent, and it's objectively slower than many, many other languages that also pull in dependencies left, right, and center. I guess that just means Rust scales very badly with the amount of code.
I'd put it at a bit better than Haskell, but honestly not by much.
I really wish Rust would focus much more on compile times, or on making smaller parallel compilation units. It's quite a chore to have to keep splitting your program into smaller and smaller crates just to not sit and wait for an eternity.
As a comparison my CI job for Rust takes 14m running on a 16vCPU machine while my much larger TypeScript project compiles in 1m on a 2vCPU machine. I know people that have to spend quite a lot of work on keeping compile times manageable for Rust (nix, smaller crates, aggressive caching, etc etc).
Rust still brings me enough value that I'll stick with it, but one can still dream of a better future :)
The patch would have been rejected either way because it was out of date and conflicted with other work going on.
[1] https://kristoff.it/blog/contributor-poker-and-ai/
LLMs promote a decoupling of mental models and the actual codebase.
As much as some may want to believe, just reviewing what the LLM outputs is not equivalent to thinking about implementation details, motivations, exactly how and why things are, and how and why they work the way they do, and then writing it yourself. The process itself is what instills that knowledge in you.
Sucks for people who were invested in contributing to Bun and don't like working with AI tools to be sure, but I think the writing was on the wall for them pretty much immediately post-acquisition. You must admit, it's hard to predict that 100% of source lines will be written by AI if you're not walking the walk!
That is, if you use something like C, C++, Java, .NET, or Go. With JavaScript and Python I don't think knowing assembly would make much difference, because it's hard to optimize code in those languages for how the CPU and memory work.
The same applies to vibe coding: the best "vibe coder" will paradoxically be the person with enough knowledge and curiosity to understand programming, how computer works and the subject at hand; one that could write the whole thing from scratch so they have enough judgement to review generated code.
Of course the vast majority will be mediocre vibe coders, and even worse programmers; at least that's the direction we're going.
It's possible to know in general terms, how computers work, and what assembly is without "knowing assembly" in the sense of being familiar with using/debugging it as a programming language.
Then it's sufficient to know assembly, but not necessary.
This is compatible with "[developers] that still understand assembly to this day tend to be better developers", but not with "[on developers who] don’t know assembly, which speaks to [their] quality".
- the scale of how much and how fast you can generate code with AI vs how fast can you write code for compiler
- the mental model of what is being generated and how much the contributor understands and owns the generated code
High-level languages can certainly yield inefficient code when compiled, or maybe different code among different compilers, but they're always meant to allow their users to know exactly what to expect from what they put together in their programs. I've always considered this a hard fact, I simply cannot wrap my head around working in a way that forces me to abandon this basic assumption.
If there's a black box which I can send C code into one side of and get faithful machine code out the other, I'd call that box a "compiler". I wouldn't rename it if I later find out that there are little elves inside doing the translation.
So it is not, by your own admission, "exactly, literally the same".
Vibe-coders often don't read, let alone understand, the code they send as PRs.
(Though I don't know if this particular patch series would get accepted on its own merits.)
split into a bunch of much smaller changes?
There's no reason to assume my generic statement was talking about the ugly version rather than the nicely organized version.
Zig, as a programming language, has a multiplier codebase: a bug may affect a significantly larger portion of users than most libraries or binaries will, since it's a fundamental building block of everything that uses Zig. Just that could be worth the extra scrutiny on every individual commit.
There's also the usual arguments: copyright ethics, environmental ethics and maintainer burden.
Couldn't you say exactly the same about bun?
I guess there are two philosophies in software development: move fast and break things, or move at a pace that guarantees everything is rock solid.
Most commercial software, Anthropic included, takes the former path, while most infrastructure teams take the latter.
I guess Linux and FreeBSD kernels are also not accepting LLM based contributions yet.
Both appear to be[1][2]. FreeBSD doesn't have a formal policy yet, but they appear to be leaning towards admitting some degree of LLM contribution.
[1]: https://docs.kernel.org/process/coding-assistants.html
[2]: https://forums.freebsd.org/threads/will-freebsd-adopt-a-no-a...
PostgreSQL, a famously slow-moving and rock-solid project, accepts LLM-based contributions. But they are held to the same high standard: if you cannot explain the patch you submitted, it will likely get rejected.
Zig is famous for taking the former path! Anyone using Zig for a few years knows every release breaks things, and they are still making huge changes which I would classify as “moving fast”, like the recent IO changes!
You can be against a particular technology without being "anti-technology".
See DRM/surveillance/bad self driving implementations.
Just because a thing exists doesn’t mean you have to use it for everything. You don’t use asbestos blanket? Why are you so against asbestos?
No, they were prevented from doing so because the Zig devs didn't like the proposed changes and are preparing a more comprehensive improvement.
So the next step will be that Bun is directly re-written from scratch at every iteration, and the repository will only contain the specs for the LLMs.
Caching the generated code locally will be authorized for some transition period, but as it's obviously very dangerous to let people tweak what exactly computers are doing, forbidding such a practice via a mandatory secure-boot mode is already planned. Only nazi pedophiles would do otherwise anyway, thus the enactment of the companion law is an obvious go-to.
[1] https://news.ycombinator.com/item?id=47997947
The emitted AST has a lower defect rate since it incorporates strong types and built-in error handling. Other pros include native code and portability, but the downside is the compile time.
People say the same about Go: that its type system and limited feature set make it the most AI-friendly language. But there too, it seems like a hunch rather than a proven fact.
Let me elaborate further: it's like the proficiency of LLMs in writing English vs writing Swahili or Kurdish.
The types of a program are like Swahili or Kurdish, or even worse, because those languages still have a sizeable chunk of the Internet and digital archives, while the types of a program are very specific to it.
Programming languages, in contrast, are constructed and vary much more in their designs. They are formal languages, making them closer to math than spoken language. LLMs being able to describe concepts more thoroughly and precisely through more expressive semantics obviously makes some languages more suitable than others.
The type system of a language is just one aspect of it that allows the language to provide guarantees to the LLM (and the user) about correctness of the code it's writing.
I am not speaking about specific types in specific programs. I am talking about the ability to describe complex constraints that LLMs (and humans) end up using to make writing correct code easier and more productive. Some programming languages absolutely are more effective at this than others, and that's always been true even before LLMs.
The last time I had a go with Haskell, the errors reminded me so much of hellish terminal compilers from the 80s and 90s that I quickly gave up. Been there, not doing that again.
As for the downside, the compile time is somewhat offset once you're using agents (and especially parallel agents) anyway. Since every edit costs a round-trip API call to a third-party server, you can accept a slightly slower compile step.
Lock the syntax/api together for a couple of years. Allow AI code in Zag.
Review after a few years, see which is better.
https://xkcd.com/286/
* https://xkcd.com/647/
* https://xkcd.com/1477/
I'm not a huge fan of Rust, but I guess having a project like Bun in an actually memory safe language is probably a win? Guess it depends on how good Claude is at writing Rust code...
They didn't.
And will Rust team accept their vibe coded patches?
fwiw, I suspect it's less of an undertaking than you may think. I've been playing with AI to rewrite Postgres in Rust[0] over the past couple of weeks and I found the AI to be exceptional at doing rewrites. Having an existing codebase you can reference prevents a lot of the problems you have with vibecoding. You have an existing architecture that works well and have a test suite that you can test against
Over the course of a month I've gone from nothing to passing over 95% of the Postgres test suite. Given Jarred built Bun, I bet he'll be able to go much faster
[0] https://github.com/malisper/pgrust
That's because it's not vibe coding - stingraycharles doesn't seem to understand what vibe coding is. Vibe coding was defined here https://x.com/karpathy/status/1886192184808149383
> There's a new kind of coding I call “vibe coding”, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.
This is very far from Anthropic's migration plans.
My benchmark is basically, "are you letting the AI drive."
In this case, an AI appears to have written the migration guide...
And then that leaks outside their social and age groups, because other people hear the incorrect usage, get confused, and incorporate that confusion into their own use of the term.
With superpowers, I see a lot of specs -> impl plan -> execute plan.
Inventing a term doesn't give you exclusive rights to provide the definition.
They recently proposed some of their internal tools to be the official Rust implementation[0] of Connect RPC[1]. As a protobuf based library set, this includes a new Rust-based protobuf compiler, Buffa[2].
[0]: https://github.com/orgs/connectrpc/discussions/7#discussionc...
[1]: https://connectrpc.com/
[2]: https://github.com/anthropics/buffa
Claude has absolutely no idea what it's doing with bleeding edge zig unless you feed it source and guide it closely (in which case it's useful for focused work) - I'm building a game engine & tcp/udp servers with it and it requires a hands-on approach and actually understanding what's being built.
I imagine these are not really concerns with rust at this point.
In my ideal world the team behind bun would be putting in the work to keep up with modern zig, but it's starting to look like they are running mostly on vibes in which case rust might be a better choice.
I think this is true regardless of what language you’re using.
I’ve built a lot in Zig and there’s no difference between vibing stuff in it versus TypeScript/React. Claude can “one-shot” them both, and will mimic existing code or grep the standard library to figure everything out.
Which isn't particularly difficult - the language docs and std source come with the installation, so all you need to do is tell Claude where those directories are in your skill/plugin/CLAUDE.md.
> and guide it closely (in which case it's useful for focused work)
It does struggle sometimes with writing code that compiles and uses the APIs correctly. My approach to that so far has been to write test blocks describing the desired interface + semantics, and asking Claude to (`zig test` -> fix errors) in a loop until all the tests pass.
Here, I just did a quick test with claude.
1. "make a simple tcp echo server that uses rust"
compiles and runs - took a few seconds to generate.
2. "make a simple tcp echo server that uses zig"
result: compile error, took literal minutes of spinning and thinking to generate
response: "ziglang.org isn't in the allowed domains. Let me check if there's another way, or just verify the code compiles conceptually and present it clean."
    /opt/homebrew/Cellar/zig/0.15.2/lib/zig/std/Io/Writer.zig:1200:9: error: ambiguous format string; specify {f} to call format method, or {any} to skip it
        @compileError("ambiguous format string; specify {f} to call format method, or {any} to skip it");
        ^
3. "make a simple tcp echo server that uses zig 0.16"
result: compile error:
    zig build-exe main.zig
    main.zig:30:21: error: no field named 'io' in struct 'process.Init.Minimal'
        const io = init.io;
                   ^~
4. "make a simple tcp echo server that uses zig 0.15"
result: compile error
    zig build-exe main.zig
    /nix/store/as1zlvrrwwh69ii56xg6yd7f6xyjx8mv-zig-0.15.2/lib/std/Io/Writer.zig:1200:9: error: ambiguous format string; specify {f} to call format method, or {any} to skip it
        @compileError("ambiguous format string; specify {f} to call format method, or {any} to skip it");
Rust took seconds and just works. Zig examples took minutes and don't work out of the box. The DX & velocity isn't even close.
1. The language and stdlib are written by people who know what they're doing.
2. Packages in the ecosystem, at the barest level, are written by those who didn't leave after a few compile errors they couldn't reason about.
I think the changes are improvements, but there's a real cost to language churn, and every time it happens, the graveyard of projects grows just that little bit larger.
Virtually all crates are still at version 0.x and introduce constant breaking changes: https://00f.net/2025/10/17/state-of-the-rust-ecosystem/
If you don’t want to use obsolete versions of dependencies, you need to explicitly tell the model that. Then you have to hope it can adopt new APIs it wasn’t trained on, rewrite existing code to handle the breaking changes, and keep your fingers crossed that nothing else breaks in the process.
LLMs perform much better with Go, not only because of the lack of hidden control flow (LLMs can deal with that, but it costs a lot of tokens) but mainly because both the language and its dependencies introduce very few breaking changes.
What you are talking about used to be a pain point, but is now pretty much gone.
Rust can be a real superpower for AI-assisted dev work, because the compiler outputs very good errors, and the type system catches most safety bugs.
Zig is a great language and I want to see it succeed, but this is a prudent move for Bun.
Sometimes it is worth it, but it may also kill projects. A risky move, and AI doesn't change that: AI can save a lot of time when making ports (it is one of the things it does best), but it doesn't protect against regressions.
I am not using Bun in production, but if I was, I would consider it a risk. Not because of Rust vs Zig, but for changing things that work.
> The regular pull requests for bun are wild too: https://github.com/oven-sh/bun/pulls?q=is%3Apr+
> Most are created autonomously by @robobun, checked for duplicates with a GitHub action (powered by Claude), reviewed by @coderabbitai and @claude. Meanwhile the CI is broken and @robobun finally closes a portion of its own PRs because they duplicate other PRs it has written. (Merging into main is still done by a human.)
How is it an incorrect interpretation? Jarred is indeed pitching/suggesting/predicting that human contribution will not be allowed in the near future, i.e. banned.
The person upthread should have said "predicting".
Among them:
- much easier to iterate on (due to the language being simpler and compilation much faster)
- native C/C++ interops (Zig can compile C and C++ and mix it with Zig) which is crucial for a node-replacement runtime that runs an open source JS engine
- fewer dependencies and trivial static linking
I guess now that they've been acquired by Anthropic, there's this combination of having in-house Rust talent, AI which does better on Rust, and the funding and resources necessary to undertake such a migration.
I'm struggling to figure out how to even start interrogating this notion. What does this mean?
I think there are even longer-term plays that Anthropic should be looking at in this space, but it seems like they've decided Rust is the right thing, so fair play. I would be (am!) thinking about making an LLM-optimized high-level language that you can generate / train on intensively, because you control the language spec.
Claude struggling at Zig: the above + memory safety issues if you run “fast” mode.
It is generally true that Rust code tends to be written in a way that lets the compiler catch issues at compile time. The same is not as true for Zig, Python, or JS.
So the difference is not in writing new stuff but in maintaining the existing codebase. Rust's rigidity makes it potentially harder to break stuff compared to Zig's general flexibility. As a project grows and matures, different types of contributors naturally come in and it's unreasonable to expect everyone to learn about historical footguns that may have accumulated.
Something JS-adjacent could certainly be better known than an obscure language, but are that many people using drop-in Node replacements?
But I can’t reconcile the reasoning about “strong, thorough compiler” with the fact that LLMs are also fantastic at Ruby.
They also write really great posix shell (including very sophisticated scripts) and python.
Something more subtle is going on.
Has anyone made any cross language benchmarks for LLMs? I wonder if rust's conceptual complexity makes it harder for LLMs to write? If all you care about is working software, which language is best for LLMs? Python, because there's more example code? Go or Java, because they're simpler languages? Ruby because its terse? Rust because of the compiler? I'd love to see a comparison!
Sorry if I’m being pedantic, but I’m not aware of Bun having made any statements about AI assisted coding before.
I believe we now have them all, but we fail at choosing.
It doesn’t look like that at all. Do you think that all use of AI is vibe coding?
https://github.com/oven-sh/bun/compare/claude/phase-a-port
This single commit is 65k lines of additions
https://github.com/oven-sh/bun/commit/ffa6ce211a0267161ae48b...
There's a decent article by Simon Willison that talks about this: https://simonwillison.net/2025/Mar/19/vibe-coding/
> I’m seeing people apply the term “vibe coding” to all forms of code written with the assistance of AI. I think that both dilutes the term and gives a false impression of what’s possible with responsible AI-assisted programming.
But pointing your AI at an entire codebase to transpile pretty much entirely by itself? Yeah vibe coding is a fitting term.
Even if you wrote it a small essay on how to Rust. That improves the situation but doesn't change the core autonomy/hope of the task.
> (programming, neologism) A method of programming in which a developer generates code by repeatedly prompting a large language model.
https://en.wiktionary.org/wiki/vibe_coding
As much as I find the word "vibe" generally annoying (in all contexts), I actually really like "vibe coding" as "LLM did everything and I didn't even look at it". It's a succinct, useful way to describe that mode of doing things. Diluting it down to "LLM-assisted coding" makes it useless.
It sort of surprises me how uptight people are getting about a term that was coined on X last year and has since been tossed around to loosely imply that a machine did between zero and all of the work. Just because it doesn't match exactly does not mean it's useless: it maps to a concept, and if the details are important and ambiguous, then elaborate.
You're absolutely right.
"+27,939Lines changed: 27939 additions & 0 deletions"
of new rust code
This is obviously very different from that, but the way the commit looks doesn't make it so.
Why? Do you think large changes not made by LLMs are also reviewed line by line?
I think the most commonly-accepted definition of "vibe coding" is when you "forget that the (generated) code even exists"[0]. So vibe-ness entirely hinges upon whether you're manually reviewing. If you make/prompt changes based on what you observe in the generated code (rather than only based on runtime behavior), then you're not "vibe coding".
I think the other things you mentioned are orthogonal to vibe-ness.
[0]: https://en.wikipedia.org/wiki/Vibe_coding#Definition
Like maybe you get the LLM to try _really hard_ to churn through everything, but this feels like a big case of "perils of the lack of laziness".
Of course, if you already have a good idea for how to deal with allocations etc. "idiomatically", maybe that works out well. And to the credit of the port guide's author, Bun seems to have explicit allocations that already map pretty well to Rust.
My only experience with ports so far is Python to Go, and it's been near flawless (just enough stupid shit to make me feel justified to be in the loop).
Especially for memory management, the right and wrong abstractions in Rust can mean a factor of 5 or 10 difference in difficulty. With the right memory-management abstraction, your code can be a straight-line port (or even cleaner!); with the wrong one, you're going to spend a lot of tokens watching a machine spin around in circles trying to untie itself.
GC'd languages don't have this problem, though obviously you can still generate a stupid amount of pain for yourself by doing something wrong.
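To make the "right abstraction" point concrete: Zig-style pointer-heavy structures often port as a straight line if borrowed pointers become plain index handles into a vector, so the borrow checker never sees a graph of references. A toy sketch (hypothetical types, nothing from Bun):

```rust
// An index-based arena: nodes hold Copy handles instead of references,
// which sidesteps ownership fights entirely.
struct Arena<T> {
    items: Vec<T>,
}

#[derive(Clone, Copy)]
struct Id(usize);

impl<T> Arena<T> {
    fn new() -> Self {
        Arena { items: Vec::new() }
    }
    fn alloc(&mut self, value: T) -> Id {
        self.items.push(value);
        Id(self.items.len() - 1)
    }
    fn get(&self, id: Id) -> &T {
        &self.items[id.0]
    }
}

struct Node {
    value: i64,
    next: Option<Id>, // a handle, not a borrowed pointer
}

fn main() {
    let mut arena = Arena::new();
    let tail = arena.alloc(Node { value: 2, next: None });
    let head = arena.alloc(Node { value: 1, next: Some(tail) });
    // walk the "list" through the arena rather than through references
    let mut cur = Some(head);
    while let Some(id) = cur {
        let node = arena.get(id);
        println!("{}", node.value);
        cur = node.next;
    }
}
```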
The slides: https://go.dev/talks/2015/gogo.slide#3
An interesting similarity:
>We had our own C compiler just to compile the runtime.
The Bun team maintain their own fork of Zig too
The LLM is non-deterministic. You could have it independently do the conversion 10 times, and you'd get 10 different results, and some of them might even be wildly different. There's no way to validate that without reviewing it fully, in its entirety, each time.
That's not to say the human-written deterministic conversion tool is going to be perfect or infallible. But you can certainly build much more confidence with it than you can with the LLM.
This would require a robust test suite though.
One of the cases where vibe coding might actually be useful, writing a throwaway tool.
Should you use the LLM to do the thing directly, or use the LLM to implement a tool that does the thing?
I tend to reach for the latter; it's easier to reason about.
But none of these properties are what let you perform a successful port. The port is going to rely entirely on oracle testing.
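Concretely, oracle testing means feeding the same inputs to the old implementation (the oracle) and the new one, and asserting agreement. A hedged sketch: `reference_normalize` and `ported_normalize` are hypothetical stand-ins, and in a real port the oracle might be the original Zig binary driven as a subprocess.

```rust
fn reference_normalize(path: &str) -> String {
    path.replace("//", "/") // stand-in for the battle-tested original
}

fn ported_normalize(path: &str) -> String {
    path.replace("//", "/") // stand-in for the freshly ported code
}

#[test]
fn port_matches_oracle() {
    // a poor man's fuzzer: enumerate lots of structured inputs
    let segments = ["", "/", "a", "..", "a/b", "a//b", "./a"];
    for &a in &segments {
        for &b in &segments {
            let input = format!("{a}/{b}");
            assert_eq!(
                reference_normalize(&input),
                ported_normalize(&input),
                "divergence on input {input:?}"
            );
        }
    }
}
```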
Have the best of both worlds.
But I'm excited for the (I think inevitable) stage where the shoggoth starts to reach outside those constraints -- rewriting, patching, renaming, rebuilding libraries, DLLs, binaries -- and we move into a regime where the libraries dissolve, the application floats on top of the shifting sands of an ever more efficient, secure, unified and totally inhuman technology stack.
Obviously this is a horrifying idea in some ways (interpretability, security, etc.), but it's also not obvious to me that it can't work, especially if there are dedicated, centralized efforts to do this. It's also not clear that interpretability is necessarily mutually exclusive with full slopification/machine rewrite of decades of foundational, incremental development.
[0]: https://github.com/oven-sh/bun/compare/claude/phase-a-port
which is needed, as making things safe often requires refactoring that isn't localized to a single function/code block, and doing that while transpiling isn't the best idea. In general I would recommend a non-LLM-based transpilation (if possible), then using an LLM to do bit-by-bit, as-localized-as-possible, bottom-up refactoring to get rid of unsafe code, potentially at some runtime performance cost, followed by a top-down refactoring pass to make things nice and fast. Plus human supervision to spot parts where the paradigms clash so hard that you have to make larger changes already during the bottom-up step.
Anyway, that means segfaults would likely stay segfaults in the initially transpiled version.
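A tiny example of what that localized, bottom-up "get rid of unsafe" pass can look like (hypothetical code, not from the actual branch):

```rust
// Before: a mechanical transpilation of pointer-arithmetic code.
unsafe fn sum_raw(ptr: *const u32, len: usize) -> u64 {
    let mut total: u64 = 0;
    for i in 0..len {
        // SAFETY: caller guarantees `ptr` points to at least `len` u32s
        total += u64::from(*ptr.add(i));
    }
    total
}

// After: the same logic on a safe slice; the bounds travel with the type.
fn sum_safe(data: &[u32]) -> u64 {
    data.iter().map(|&x| u64::from(x)).sum()
}

fn main() {
    let data = [1u32, 2, 3, 4];
    let raw = unsafe { sum_raw(data.as_ptr(), data.len()) };
    assert_eq!(raw, sum_safe(&data));
    println!("both sums agree: {}", sum_safe(&data));
}
```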
I had similar code written in Zig and C++; cold compilation was many times faster in C++, and incremental compilation was instant in C++.
I think the reason most rust projects compile slow is because of excessive usage of dependencies and also the excessive use of metaprogramming in code.
Zig doesn’t have multiple compilation units so it doesn’t parallelize compilation
So, Anthropic acquires the Bun team because claude-code uses Bun. They port Bun from Zig to Rust presumably because Rust "is better" (imagine big air quotes here). Again presumably, they want to make claude-code "better". Why make it so complicated? With all the power of the LLMs they have, surely they could make claude-code the best possible by writing it in Rust directly.
It's easy to just see Bun as a marketing stunt, as well.
Claude Code itself is already heavily written by LLMs[0], so I'm not sure what's "this" here. You mean LLMs are okay for writing code but not porting?
[0]: No, it's not just marketing. The codebase was leaked and anyone who glanced at it would realize the claim is likely true.
What I said is that "they know that LLMs are not the right tool for this" is not the answer, as CC is already vibecoded so it'd be very weird to believe you can't vibecode a port of CC.
The actual answer is, of course, that the whole discussion is making a mountain out of a molehill. Bun is not committed to a Rust rewrite, vibed or not.
I've had more success vibe coding Rust than I have in more dynamic languages. I suspect the strictness of the Rust compiler forces the AI agent to produce better code. Not sure. It could be just that I am less familiar with Rust so it feels like it's doing a better job.
I am in the middle of porting TypeScript to Rust and learned a ton doing this. You can check out the work in progress here https://github.com/mohsen1/tsz/
Happy to share my learnings on this
My way of compensating for my own inability to do detailed code reviews is making sure the tests, integration tests, and end-to-end tests cover everything I care about. Without that, you can't be sure it is not skipping detail work. I've also made it do some benchmarking and stress testing and then analyze the code base for potential bottlenecks. After it found and fixed a few issues, it got better. Finally, prompting it to do critical reviews, look for refactoring opportunities, etc. can give you a nice list of stuff to fix next. Having it run memory leak checkers and static analysis tools is also a good strategy. Once you start running low on issues you find this way, the code is probably not horrible. Or at least you've hit some sort of local optimum.
The lack of code reviews sounds pretty horrible. But it is now quickly becoming the biggest bottleneck in AI assisted coding. Eliminating that bottleneck is scary but it enables a few step changes in volume of code that becomes possible. Using strict compilers and strict memory management helps eliminate a few categories of bugs and issues.
I was previously doing this with languages I do understand. Once you start routinely dealing with larger and larger commits, reviews become a problem.
I expect working with larger code bases like this will get a lot easier and better over time. I noticed that the main headaches I face with this type of engineering are the tendency of models to keep deliberately cutting corners, only doing happy path testing, or deferring essential work for later. I suspect a lot of the models are simply biased to conserving token usage. Pretty annoying but also easy to compensate for with follow up prompts and testing. And probably something that becomes less of an issue as the models get tuned to behave better without additional prompting.
Dunning Kruger effect. At least you admit it.
> Not sure. It could be just that I am less familiar with Rust so it feels like it's doing a better job.
Ya think?
https://tsz.dev
Such as React Native? :D
Rust, on the other hand, is pretty established by now and has fewer breaking changes. It also has more compile-time safety guarantees, which makes vibe-coding a bit more confidence-inspiring.
On top of that, Zig has rejected their upstream contributions. So they'd have to maintain their own compiler fork in the long run, which is probably just technical debt.
[0] https://ziggit.dev/t/bun-s-zig-fork-got-4x-faster-compilatio...
[1] https://x.com/bunjavascript/status/2048428104893542781
Normal, emotionally stable people do sometimes make decisions about what businesses to patronize based on the political leanings of the business owners. Same thing happens with art appreciation, movie/TV watching, and plenty of other things. Zig might not be a business, but the same rules apply.
You may think that's foolish, and not make your decisions that way, but it's a perfectly valid way to make decisions.
Maybe with issues like abortion or racial discrimination, but not tariffs.
Whether or not they can clean it up is an interesting question.
But also, telling an LLM to do a line-by-line translation and giving it a file _is guaranteed to never truly be a line-by-line translation_, due to how LLMs work. That's fine: you don't say "line-by-line" to actually get a line-by-line result, but to "convince" the model not to do the opposite (moving things around wholesale, completely rewriting components based on its "guessing" what they are supposed to do, etc.). In other words, it makes the result more likely to be behavior-compatible (logic bugs included), even though it isn't literally line by line. And that then allows you to fuzz the behavior for discrepancies in the initial step, before doing any larger refactoring that may include bug fixes.
Though tbh, I would prefer that any Zig -> terrible-Rust pass be done with a deterministic, reproducible, debuggable program instead of an LLM. The LLM can then be used to support incremental refactoring. But the initial "bad" transpilation is so much code that using an LLM there seems like a horror story wrt. subtle hallucinations and similar.
(would teach me a little about Zig, about which I know 0)
#1 boils down to “can the LLM solve the pointer aliasing here?” and #2 is translating between metaprogramming paradigms. Could work but a line-by-line translation is a pipe dream.
Line-by-line ports to idiomatic Rust are usually not possible because of the borrow checker and Rust's ownership rules. That's the reason the Typescript compiler was ported to Go instead of Rust.
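A toy illustration of the mismatch: a doubly-linked node is one raw-pointer assignment in Zig or Go, but a direct translation won't satisfy the borrow checker, so safe Rust has to restructure it, e.g. with `Rc`/`Weak` plus `RefCell` (or an index-based arena, as sketched earlier in the thread). Hypothetical code:

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

struct Node {
    value: i32,
    next: Option<Rc<RefCell<Node>>>,
    prev: Option<Weak<RefCell<Node>>>, // Weak breaks the reference cycle
}

fn main() {
    let first = Rc::new(RefCell::new(Node { value: 1, next: None, prev: None }));
    let second = Rc::new(RefCell::new(Node { value: 2, next: None, prev: None }));

    first.borrow_mut().next = Some(Rc::clone(&second));
    second.borrow_mut().prev = Some(Rc::downgrade(&first));

    // runtime-checked borrows replace what the source language did freely
    println!("{} -> {}", first.borrow().value, second.borrow().value);

    // the back-pointer resolves via Weak::upgrade at runtime
    let back = second.borrow().prev.as_ref().and_then(|w| w.upgrade());
    assert_eq!(back.map(|n| n.borrow().value), Some(1));
}
```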
I think people here are reading too much into it.
I would guess dealing with breaking changes is a big motivation for this.
https://github.com/oven-sh/bun/compare/claude/phase-a-port#d...
That isn't particularly surprising, but the point is that I would expect getting things more stable than the Zig version to take a while.
I'm not sure I would take this kind of path; I would focus much more on refactoring the project into small, easily translatable components with narrow boundaries. But it's cheap to try things.
I get a "nodejs not found" error when running the opencode command in the terminal. I installed it via Bun too.
What is most interesting here for me is:
- a vibe-coding project with a big, clear outcome and acceptance criteria, on
- a public, working, high-performance, full-featured, production codebase, by
- the leading LLM maker, known for the strongest coding ability.
A good example whether it succeeds or not.
As a fan of the language, I hope it leads to some reflection on things that might need to change moving forward.
Both their AI policy and their rejection of Bun's performance PR were level-headed and well-reasoned. And the link seems more like a proof-of-concept than anything else.
It's true corporate sponsors are a big help with language development, but not at the expense of conceptual integrity.
[1] https://ziggit.dev/t/bun-s-zig-fork-got-4x-faster-compilatio...
In addition, the link in the comment you replied to explains why the PRs Bun opened to Zig would have lowered the quality of the compiler and how Zig has achieved even greater speedups, with more widely applicable features like incremental compilation and the self-hosted backend.
Will everything eventually be rewritten in Rust and we finally achieve utopia?
OK I'm sorry, I'll see myself out.
It seems there was an issue where the image API ignored the ICC profile (now fixed). Any developer with experience implementing image formats would almost certainly avoid this mistake. This is a problem that cannot be solved with vibe coding. In this situation, the user is merely a guinea pig for bug fixes.
Sounds like responsible open source software development to me. That's what pre-releases are for.
Haha, is it really okay not to retract the caricature criticizing Rust that the official account previously posted?
On Node.js, `tokei src`: 98,333 LOC of C++
On Bun, `tokei src`: 573,572 LOC of Zig
On Deno, `tokei libs cli runtime`: 289,573 LOC of Rust
This seems wrong, though, so it would be appreciated if someone who knows the structure of these projects could correct me on the folder names.
Doing `tokei lib src test deps` gives more than 5M LOC, but I'm not sure if that is fair.
Trying to run it as a replacement for Node in persistent backend/API scenarios is just plain broken.
RSS grows unbounded under Bun: https://discord.com/channels/876711213126520882/148058965798...
Not sure about vibe-coding it. While they aren't using v8, LLMs made it easier to understand v8 quirks and update v8 as they make weird changes every now and then. It couldn't write the runtime without help though.
For those curious: https://github.com/alshdavid/ion
I've really enjoyed Bun the past year or so, but the acquisition by Anthropic, Bun's codebase and documentation increasingly becoming AI slop, and this impulsive complete rewrite - all of it has ruined it for me and I'm actively moving off of Bun. I don't feel comfortable relying on it any longer.
This makes me respect Zig team's stance more, that it's a technical decision more than an ideological one.
I was hopeful for this project, and I've reported crashes & bugs in the bundler with the hope that it would stabilize over time, but this is just silly - I'm not going to risk them pulling the rug out from under me and replacing the runtime with 1 million lines of vibe-coded Rust.
If they did, I guess they would rewrite deno in C++
As an aside, I've been bitten by Zig's breaking changes on my own projects as well. It's taken the shine off of Zig and I'm looking at alternatives.
https://bun.com/blog/bun-joins-anthropic
"I got obsessed with Claude Code"
So the bad, bad Zig that opposes the clanker mania has to be punished, even if top comments deny it.
Anthropic is one of the most evil companies in existence today. Whenever someone produces something, they steal it.
Everyone wants to be a Rustee these days.
I'm not a Rust dev, but even I have noticed that tokio seems kind of shunned in most projects. Why is that? Is it just bad, or what?
Source: I worked on Deno, competed directly with Bun on HTTP performance (and won on some metrics).
Edit: and of course I typed "future" instead of "task" (aka "spawned future"). Thanks, child commenters below. Much of Deno was built on spawning futures that mapped to promises and doing it as fast as possible. I spent ages writing a future arena to optimize this stuff.
Edit: and tasks.
You'd much rather have the runtime you're building manage task scheduling and allocation and all that. It's the most natural design choice to make.
However, there are reasons why you might not want to use it:
- You don't need async at all
- You want to own the async execution polling completely (see the sketch after this list)
- You want some alternative futures executor, e.g. one built on io_uring (even though tokio-uring is a thing)
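To make "own the polling" concrete: the mechanism underneath every executor is just `Future::poll` in a loop. A toy std-only sketch (nothing like what a production runtime, or Bun/Deno, actually does):

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A waker that does nothing: fine for a busy-poll loop that re-polls anyway.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    // SAFETY: every vtable entry is a safe no-op
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// Drive one future to completion by polling it ourselves:
// the executor loop belongs to this code, not to tokio.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    // SAFETY: `fut` is a stack local that is never moved after being pinned.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(out) => return out,
            // a real runtime would park the thread and rely on the waker here
            Poll::Pending => std::thread::yield_now(),
        }
    }
}

fn main() {
    let answer = block_on(async { 21 * 2 });
    println!("{answer}");
}
```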
I think avoiding async entirely might be a mistake, and I'm not convinced anything better than a general-purpose async runtime exists for a JS runtime (a JS runtime is itself general-purpose, after all).
Avoiding std::fs is fucking bizarre to me: it's completely sync and is a really lightweight abstraction over syscalls.
But as soon as you need something that doesn’t fit neatly into the abstractions they provide, even something as seemingly simple as proactively reusing or cancelling sessions, things quickly become extremely complicated, inefficient, and unreliable.
For high-performance servers, where you really care about raw performance, DoS resistance, and taking advantage of modern kernel features, these abstractions can become a major limitation.
It’s a bit like using an ORM that gives you no easy way to send raw SQL queries. It works fine for common cases, even if it’s not always optimal. But when you really want to take advantage of what the database can do, you usually avoid the ORM.
Company A buys company B. A's management decrees that, henceforth, B's acquihired team must comply with company A's standards.
Second-system effect kicks in. Bugs multiply.
Half of the original company B devs leave.
I'm investigating whether future projects should revert to using Deno.
April 27th - Zig contributor mlugg clarifies why the specific optimizations Bun did were ill advised and wouldn't have been accepted in Zig, regardless of AI use [1]
May 4 - Bun is looking into Rust as an alternative.
This, to me, seems like total whiplash. Has anyone at Bun made a statement on why they're making such dramatic changes? It seems like the lesson to internalize from mlugg is not "switch to Rust"
[1] https://lobste.rs/s/ifcyr1/contributor_poker_zig_s_ai_ban#c_...
Hm does that actually work?
Edit: in a way that can be verified, and not the AI tool saying it did
Problem is fanboys like YOU.