> I work on Bun and this is my branch
>
> This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely.
>
I’m curious to see what a working version of this looks like, what it feels like, how it performs, and if/how hard it’d be to get it to pass Bun’s test suite and be maintainable. I’d like to be able to compare a viable Rust version and a Zig version side by side.
Jarred 3 hours ago [-]
cargo check reported over 16,000 compiler errors when I wrote that message. It could not print a version number or run JavaScript. I didn’t expect it to work this quickly and I also didn’t expect the performance to be as competitive. There’ll be a blog post with more details.
gobdovan 1 hour ago [-]
If this experiment ends up resulting in a real migration path, I think that would be completely awesome. Maybe it means we have a chance to revive older projects such as ngspice [0], but with modern affordances and better safety properties.
From your post, though, it sounds like Bun may have been a pretty direct rewrite, without too many hard choices along the way. Is that fair?
Erm, what's the problem with ngspice? There appear to be people working on it, and it even recently got integrated into KiCad.
That sounds like a perfectly functional project, to me.
eqvinox 17 minutes ago [-]
+1, a project presenting at FOSDEM certainly does not need a "revive".
inglor 3 hours ago [-]
Rust is really fun to work with and the compiler is great, just make sure the rewrite takes compile times into account since larger projects often have to be organized in a way that makes compilation reasonably fast.
ignoramous 3 hours ago [-]
how long does it take to compile?
@jarredsumner: It's basically the same as in Zig using our faster Zig compiler. If we were using the upstream Zig compiler, the Rust port would compile faster.
Basically we are now seeing an "inverse Hofstadter's Law", where doing something with an LLM takes less time than expected even when you take this law into account.
I am a Rust developer myself but I really love Zig and Bun. I am just overly curious about all this.
sysguest 3 hours ago [-]
> I am so tired of worrying about & spending lots of time fixing memory leaks and crashes and stability issues. it would be so nice if the language provided more powerful tools for preventing these things.
haven't used zig...(only used rust)
but zig doesn't solve those problems?
nyrikki 2 hours ago [-]
Zig is a middle ground. It solves some of the common foot-guns in C, without the costs of the affine substructural typing that gives Rust its superpowers.
I am of the opinion that it is horses for courses and not a universal better proposition.
Because my needs don’t fit in with Rust’s decisions very well, I will use Zig for personal projects when needed. I just need linked lists, graphs etc…
While hopefully someone can provide a more comprehensive explanation, here are the two huge wins for my use case.
1) In Zig, accessing an array or slice out of bounds is considered detectable illegal behavior.
2) defer[0] allows you to colocate the freeing of resources with the code that uses them.
That at least ‘feels’ safer to me than a bunch of ‘unsafe’ rust that is required for my very specific use case.
I was working on some eBPF code in C and did really miss zig.
For me it fits the Pareto principle but zig is also just a sometimes food for me, so take that for what it is worth.
Fwiw you don't need unsafe for graphs or linked lists in Rust. At least not directly - these things can be abstracted. The petgraph crate is the most popular for graphs. I'm not sure about linked lists because linked lists are the wrong choice 99.9% of the time.
I've written hundreds of thousands of lines of Rust and outside of FFI, I've written I think one line of unsafe Rust.
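The usual trick is an index arena: nodes live in a `Vec` and edges are plain indices into it, so everything stays in safe Rust (petgraph is built on a similar idea). A minimal sketch, with illustrative names:

```rust
// A graph in 100% safe Rust: nodes are stored in a Vec and edges
// are index pairs into it, so no raw pointers or `unsafe` needed.
struct Graph {
    nodes: Vec<&'static str>,
    edges: Vec<(usize, usize)>, // (from, to) as indices into `nodes`
}

impl Graph {
    fn new() -> Self {
        Graph { nodes: Vec::new(), edges: Vec::new() }
    }
    fn add_node(&mut self, label: &'static str) -> usize {
        self.nodes.push(label);
        self.nodes.len() - 1
    }
    fn add_edge(&mut self, from: usize, to: usize) {
        self.edges.push((from, to));
    }
    fn neighbors(&self, node: usize) -> Vec<usize> {
        self.edges
            .iter()
            .filter(|&&(f, _)| f == node)
            .map(|&(_, t)| t)
            .collect()
    }
}

fn main() {
    let mut g = Graph::new();
    let a = g.add_node("a");
    let b = g.add_node("b");
    let c = g.add_node("c");
    g.add_edge(a, b);
    g.add_edge(a, c);
    println!("{:?}", g.neighbors(a)); // prints [1, 2]
}
```

The tradeoff is that a stale index is a logic bug rather than a compile error, but it can never be a use-after-free.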
fao_ 1 hour ago [-]
unsafe abstracted is still unsafe!
IshKebab 1 hour ago [-]
It's not as simple as that. All software is abstraction and with any software if you go deep enough you'll find unsafe code.
E.g. look at a Python list. Is it safe? In Python sure, but that's abstracting a C implementation which definitely isn't safe.
If you look at Rust's std::Vec you'll find a very similar story - safe interface over an unsafe implementation.
It isn't as binary as you think.
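A toy version of that layering, with a hypothetical `Buf` type standing in for `Vec`: the `unsafe` read is only sound because the safe wrapper checks the bounds invariant first.

```rust
// A safe API over an unsafe core, the same layering Vec uses.
// The unsafe block relies on an invariant (index in bounds) that
// the safe wrapper enforces, so callers can never hit UB.
struct Buf {
    data: Vec<u8>,
}

impl Buf {
    fn new(data: Vec<u8>) -> Self {
        Buf { data }
    }

    // Safe interface: the bounds check happens before the raw read.
    fn get(&self, i: usize) -> Option<u8> {
        if i < self.data.len() {
            // SAFETY: `i < len` was just verified above.
            Some(unsafe { *self.data.as_ptr().add(i) })
        } else {
            None
        }
    }
}

fn main() {
    let b = Buf::new(vec![10, 20, 30]);
    println!("{:?} {:?}", b.get(1), b.get(9)); // Some(20) None
}
```

The point being that "unsafe abstracted" shifts the proof burden onto one small audited module instead of every call site.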
paulddraper 51 minutes ago [-]
If you don’t see any difference between those two, I’m really not sure what to say.
Zig has unmanaged memory. But Rust also allows memory leaks, and they're not uncommon in large, complex programs. So this rewrite will not necessarily control for that.
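A classic example, for anyone who hasn't seen it: two `Rc`s pointing at each other form a cycle that is never freed, with zero `unsafe`.

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Two nodes referencing each other through Rc form a cycle: both
// strong counts stay >= 1 forever, so neither is ever dropped.
struct Node {
    other: RefCell<Option<Rc<Node>>>,
}

fn main() {
    let a = Rc::new(Node { other: RefCell::new(None) });
    let b = Rc::new(Node { other: RefCell::new(None) });
    *a.other.borrow_mut() = Some(Rc::clone(&b));
    *b.other.borrow_mut() = Some(Rc::clone(&a));
    // Dropping `a` and `b` at the end of main only decrements each
    // count from 2 to 1, so both allocations leak. This compiles
    // without any `unsafe`: leaking is "safe" in Rust's model.
    println!("{} {}", Rc::strong_count(&a), Rc::strong_count(&b)); // 2 2
}
```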
josephg 3 hours ago [-]
Nope! Zig is like C in this regard. There’s no borrow checker. Managing memory is your responsibility.
It gives you a few more tools than C - like a debug allocator, bounds checked array slices and so on. But it’s not a memory safe language like rust.
dnautics 2 hours ago [-]
It's not... but I'm pretty sure it could be. You could probably even take this (WIP) idea and bolt on a formal verifier pretty easily.
I think given the current mood of things, it would be prudent to not make such strong assertions on anything. Trust is in increasingly short supply these days.
minimaxir 28 minutes ago [-]
Nothing Jarred said is an assertion other than "There’ll be a blog post with more details."
logicprog 4 hours ago [-]
Looks like he did the maintainability, performance, and test suite checks and made his decision :)
jazzypants 4 hours ago [-]
Honestly, I fully support the rewrite to Rust, but he should have just owned this from the start. I'm sure he knew in the back of his mind how dedicated he was to that branch as he had already spent the equivalent of thousands of dollars in tokens by that point.
nvme0n1p1 3 hours ago [-]
Bun was VC funded and acquired by Anthropic. He's spending company money, not his own money.
jazzypants 3 hours ago [-]
That's why I said "the equivalent of". Additionally, time and cognitive effort are not free. The work spent on this branch was work that was not spent on other branches. Does that make sense?
nvme0n1p1 3 hours ago [-]
6 days is also nothing when you're doing R&D on your company's dime. He could have spent a month trying a dozen different things and thrown away all the code at the end. As long as he ends that month with a clear picture of where to steer the company over the next 5 years, it's time well spent.
nozzlegear 2 hours ago [-]
Had my former employers been so lenient with how I spend company time, I might still be an office worker instead of self-employed!
throw1234567891 3 hours ago [-]
Not even the company is spending money. It’s their employee working on a rework of the code owned by the company that owns the infrastructure on which the rework is done. And that company has yet to turn a profit. This work is subsidised by everyone who pays for Claude.
skybrian 3 hours ago [-]
Announcing the decision a week earlier wouldn't help anyone. Maybe he expected it to work (though he didn't say that), but there's no reason to make a final call before seeing that it did work.
jazzypants 3 hours ago [-]
Fair enough. I didn't say anything about a "final call". It just feels like there is a middle ground between that and telling people they are overreacting.
fragmede 3 hours ago [-]
Yeah but with no guarantee that it was going to work, why should he have?
jazzypants 3 hours ago [-]
Yeah, but he obviously had enough confidence in this project to keep the agents working at it, didn't he? Given infinite time and money, if you prompt an LLM about something enough times, it will eventually work.
Insert something about monkeys, typewriters, and Shakespeare here.
furyofantares 2 hours ago [-]
He was 2 days into a project that ended up taking 6. You're being extremely unreasonable.
throw1234567891 3 hours ago [-]
But you didn’t have to sit and type. Assuming that you look at what it did, why not?
raincole 2 hours ago [-]
Yeah, that means it's an extremely successful experiment so far.
lioeters 1 hour ago [-]
Also a few days before that:
> I expect OSS to go the opposite direction: no human contribution allowed. Slop will be a nostalgic relic of 2025 & 2026.
We should have seen this coming after they got acquired by Anthropic, but it's still disappointing. I'm not against large language models as a technology, just thoroughly disgusted how these "AI" companies rose to power, eating the software industry and the rest of society. It's creating a very unhealthy dependency.
Think a few steps ahead and start preparing a slop-free software stack and community. That includes Zig and its ecosystem. Even if we (and future generations) don't manage to live entirely without slop, it's more important than ever to ensure a sustainable computing culture, free as in freedom.
tempaccount5050 1 hour ago [-]
Software companies have been about automating human labor since the invention of computers. It's the whole damn point. Why do you think finance used to be (sometimes still is) the head of the IT dept? Because we automated accounting away. Then typists. Then secretaries. Then drafting. Etc etc.
sdevonoes 37 minutes ago [-]
There are software components out there that are the backbone of our industry, and they are not governed by multibillion dollar companies. Linux, postgres, HTTP, TCP/IP, qemu,…
It’s not that anthropic/google/openai/etc are unavoidable
tomnipotent 30 minutes ago [-]
> they are not governed by multibillion dollar companies
Every tech you mentioned is absolutely governed by multibillion dollar companies. Something like 75-85% of OSS code is contributed by employees doing their day job. Most Linux and Postgres contributions come from those same employees. HTTP and TCP/IP are managed by standard bodies and industry working groups that, you guessed it, are governed by multibillion dollar companies. Red Hat and IBM are responsible for 40-60% of contributions to Qemu.
raj1298 16 minutes ago [-]
The usual model for OSS projects is that initially they are written for free. Then an inner circle forms and exploits the second generation of idealists who write entire large features without ever getting the same rights.
Some of the inner circle move to corporations to increase their power and are joined by corporate developers (sometimes their bosses) to take over the project.
A lot of corporate OSS development is entirely unnecessary rewrites or simple things like release management. So I'd put the amount of useful code by employees much lower.
But governed, hell yeah, I agree. The corporations crack the whip and oppress real contributors.
wiseowise 42 minutes ago [-]
> It's the whole damn point.
Believe it or not, for some of us it’s not “the whole damn point”.
tempaccount5050 23 minutes ago [-]
Whether or not you want to admit that is up to you. If you're selling automation or efficiency gains, you're removing human labor.
grim_io 27 minutes ago [-]
No one is taking away programming as a hobby from you :)
claude_delusion 1 hour ago [-]
We know, we know. Has this talking point been added to the astroturfing guide?
"ok guys, that's enough progress since now it's my job at stake, we can stop."
des429 3 hours ago [-]
What's your point
righthand 3 hours ago [-]
[flagged]
nerdsniper 2 hours ago [-]
The quote doesn’t provide warrant for this claim. The developer did a great job investigating the applicability of a new tool and it appears the investigation yielded fruit.
Your kind of negativity is pathological.
righthand 2 hours ago [-]
[flagged]
fastball 2 hours ago [-]
What are you even talking about?
esquivalience 3 hours ago [-]
I totally disagree with this! I think it's very important for experts to be able to adapt their opinions based on evidence.
righthand 2 hours ago [-]
Sure, but if you’re an expert you’re probably finishing your project and collecting results, not sprinting to an online thread to evangelize for LLMs with partial results. That sounds amateur to me.
staticassertion 2 hours ago [-]
He's tweeting his experiences. Calling this "sprinting" and "evangelizing" is just rhetoric. Posting about a project you're working on is hardly amateurish.
righthand 2 hours ago [-]
[flagged]
staticassertion 2 hours ago [-]
[flagged]
supern0va 1 hour ago [-]
Ugh, I really find this sort of thing frustrating. I like people developing, and testing, and ideating, and exploring in public!
This is one of my problems with academia: people only sharing results when they're positive and complete. I want to hear about what people tried that didn't work, and see the string of failures. People are already inclined to avoid sharing their work out of concern that they'll be judged--let's not encourage that behavior, please.
ianbutler 2 hours ago [-]
What have you built?
righthand 2 hours ago [-]
You first, since you’re so well versed on this topic and slid in with a clever question. This thread isn’t about having built stuff. It’s about pointing out that some people working on projects may not actually be experts.
ianbutler 2 hours ago [-]
I'm not the one leveraging a super uninformed critique, but I will take that as my answer.
And to have an opinion on that you yourself need to be an expert or at least experienced. Otherwise you’re kind of just not capable of judging
4aksh19 3 hours ago [-]
"No one has the intention of building a wall" - Walter Ulbricht, chairman of the central committee, a couple of months before the Berlin Wall was built.
The AI companies and their associates are beginning to surpass that level of denials and lies.
christopherwxyz 2 hours ago [-]
It’s disrespectful to immediately jump to adversarial conclusions from a simple desire to refactor, and it’s poor netiquette.
yrjrjjrjjtjjr 2 hours ago [-]
The right to be suspicious of the motives of powerful people is infinitely more important than protecting their feelings from being hurt by suspicion.
christopherwxyz 57 minutes ago [-]
Protecting software creators, engineers, builders, and their work, regardless of their tools, is infinitely more important. Full stop.
erkat 1 hour ago [-]
If experienced (in open source and corporate politics) developers were to bet on Polymarket on whether the rewrite is ultimately going to be merged, which side would you bet on?
What would the emerging odds be? My guess is 19/20 in favor of ditching Zig.
I have followed many initial denials on a wide range of topics, not only rewrites, over the years. Like clockwork, most of them were lies.
christopherwxyz 56 minutes ago [-]
I don't think most serious developers have time to watch prediction markets.
Cthulhu_ 2 hours ago [-]
Not to mention invoking a major historical event: a pure appeal-to-emotion move.
dandellion 2 hours ago [-]
Four days ago there was no intention to rewrite, now it's a simple desire to refactor. It's not adversarial conclusion, it's pointing out the clear hypocrisy.
johncolanduoni 2 hours ago [-]
Running an experiment, the experiment being more successful than you thought, and then deciding to put more effort into a bigger experiment is not hypocrisy. It’s engineering. If you think some of the objective facts they’re putting out (like test coverage and performance) are lies, go and prove it instead of appealing to emotion.
christopherwxyz 54 minutes ago [-]
Being able to change your mind is an excellent exercise in free will.
dzonga 2 hours ago [-]
You know this whole exercise is both marketing and a way to make noise.
Would the world come to a standstill tomorrow if every Bun instance out there ran on Node.js?
They know their A.I. can't sell without the noise that it's now at the edge of the frontier. This is hype.
Zig adopting a strict 'no LLM' policy affects the LLM vendors.
aleksiy123 52 minutes ago [-]
It’s also just a useful exercise in general, especially for getting feedback for models and harnesses.
I’ve been thinking about setting up a non-trivial project to use as a benchmark for any plugins and/or harness changes I make.
Having a prebuilt verification suite is great. You can use it to assess things like token usage and time across different harnesses, models, and plugins.
johncolanduoni 2 hours ago [-]
I don’t think the Zig project adopting a strict ‘no LLM’ policy affects the LLM vendors at all. How many developers are working on the Zig project itself that will (maybe) now not buy a Claude subscription? I can buy that this is a marketing stunt, but nobody at the top cares if a relatively small open source project doesn’t allow AI contributions.
sdevonoes 34 minutes ago [-]
Exactly. Always ask “who benefits from this?”. The answer in this case is: AI vendors, not us.
tracerbulletx 1 hour ago [-]
If you think Claude needs manufactured hype at this point to sell it you're delusional.
If you think they can survive without hype, you are the naive one
jwpapi 7 minutes ago [-]
Completely unbased, but I don’t want to have anything to do with Bun anymore. It’s just a gut feeling, but I don’t trust them or support them.
They forked Zig to utilize LLM rewrites and build something the Zig team clearly disregarded (non-deterministic compiling).
And now, like a whiny baby, they LLM-rewrite to Rust. There is a very real chance that Zig’s design philosophy got them to the point where they are now, by enforcing the tough but precise decisions, and the Rust rewrite is the start of the downfall.
It’s purely politics-based, not technical, but it seems like Bun is fully pampered by Claude. So much that I wouldn’t wonder if the next marketing piece from Anthropic is: “Claude Mythos rewrote a leading 950k LOC JS runtime to Rust.”
mohsen1 3 hours ago [-]
Very impressive that they could do this so quickly, because I have been on a similar project (porting TypeScript to Rust) for 5 months. But I guess I don't have access to Mythos and unlimited tokens. I'm also close to a 100% pass rate: 99.6% at the time of writing.
Rust is perfect for writing all of the code using an LLM. Its strict type system makes it less likely to make very dumb mistakes that other languages might allow.
Also want to note that writing the code using an LLM doesn't remove the need to have a vision for the design and the tradeoffs you make as you build a project. So Jarred and his team are the right kind of people to be able to leverage LLMs to write huge amounts of code.
cornholio 2 hours ago [-]
> Rust is perfect for writing all of the code using an LLM. Its strict type system makes it less likely to make very dumb mistakes that other languages might allow.
I question this. Yes, strong enforcement of invariants at compile time helps the LLM generate functional code since it gets rapid feedback and retraces as opposed to generating buggy code that fails at runtime in edge cases.
On the other hand, Rust is a complex language prone to refactoring avalanches, where a small change in a component forces refactoring distant code. If the initial architecture is bad or lacking, growing the code base incrementally as LLMs typically do will tend towards spaghettification. So I fear a program that compiles and even runs ok, but no longer human readable or maintainable.
theptip 1 hour ago [-]
> Rust is a complex language prone to refactoring avalanches
This may be so, but LLMs are great at slogging through such tedious repercussions.
I would say if the language prevents sloppy intermediate states, that actually makes it more amenable to AI; if you just half-ass a refactor into a conceptually inconsistent state, it’s possible for bad tests to fail to catch it in Python, say. But if many such incomplete states are just forbidden, then the compiler errors provide a clean objective function that the LLM can keep iterating on.
carllerche 2 hours ago [-]
> On the other hand, Rust is a complex language prone to refactoring avalanches, where a small change in a component forces refactoring distant code.
Are you saying this out of personal experience or just hypothesizing? I am working on a large, complex rust project with Claude Code and do not experience this at all.
gobdovan 2 hours ago [-]
It can happen like this:
- write sleek operator-overloading-based code for simple mathematical operations on your custom pet algebra
- decide that you want to turn it into an autograd library [0]
- realise that you now need either `RefCell` for interior mutability, or arenas to save the computation graph and local gradients
- realise that `RefCell` puts borrow checks on the runtime path and can panic if you get aliasing wrong
- realise that plain arenas cannot use your sleek operator-overloaded expressions, since `a + b` has no access to the arena, so you need to rewrite them as `tape.sum(node_a, node_b)`
- cry
This was my introduction to why you kinda need to know what you will end up building with Rust, or suffer the cascade refactors. In Python, for example, this issue mostly wouldn't happen, since objects are already reference-like, so the tape/graph can stay implicit and you just chug along.
I still prefer Rust, just that these refactor cascades will happen. But they are mechanically doable, because you just need to 'break' one type, and let an LLM correct the fallout errors surfaced by the compiler till you reach a consistent new ownership model, and I suppose this is common enough that LLM saw it being done hundreds of times, haha.
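For the curious, here's roughly what that tape shape looks like (`Tape`, `NodeId`, and `sum` are illustrative names, not any real autograd API):

```rust
// Minimal tape/arena sketch: nodes live in Vecs owned by the tape,
// and operations take node indices instead of references, which
// sidesteps the ownership problems of operator overloading.
#[derive(Clone, Copy)]
struct NodeId(usize);

struct Tape {
    values: Vec<f64>,
    // (lhs, rhs) parents for each node produced by an op; a real
    // autograd library would walk these during the backward pass.
    parents: Vec<Option<(NodeId, NodeId)>>,
}

impl Tape {
    fn new() -> Self {
        Tape { values: Vec::new(), parents: Vec::new() }
    }
    fn leaf(&mut self, v: f64) -> NodeId {
        self.values.push(v);
        self.parents.push(None);
        NodeId(self.values.len() - 1)
    }
    // The operator-overloaded `a + b` becomes this explicit call,
    // because `+` has no way to reach the arena.
    fn sum(&mut self, a: NodeId, b: NodeId) -> NodeId {
        let v = self.values[a.0] + self.values[b.0];
        self.values.push(v);
        self.parents.push(Some((a, b)));
        NodeId(self.values.len() - 1)
    }
    fn value(&self, n: NodeId) -> f64 {
        self.values[n.0]
    }
}

fn main() {
    let mut tape = Tape::new();
    let a = tape.leaf(2.0);
    let b = tape.leaf(3.0);
    let c = tape.sum(a, b);
    println!("{}", tape.value(c)); // 5
}
```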
It's very easy to just instruct the LLM to build using isolated crates, to maintain boundaries, focus on "ports and adapters", etc, and not run into this - in my experience.
I haven't had any issues with this getting out of hand on >10KLOC vibed rust codebases.
mohsen1 2 hours ago [-]
From the languages that I know, Rust is the only one where I can look at multi-threaded code and understand it. This stuff being checked by the compiler is a huge advantage.
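For example, a sketch using `std::thread::scope` (stable since Rust 1.63): the compiler checks that the borrows can't outlive the threads, and the `Send`/`Sync` bounds rule out data races.

```rust
use std::thread;

// Scoped threads: the compiler proves the borrows of `left` and
// `right` outlive the spawned threads, and Send/Sync bounds mean
// that, say, pushing to a shared Vec from both threads without a
// lock simply would not compile.
fn parallel_sum(data: &[i64]) -> i64 {
    let mid = data.len() / 2;
    let (left, right) = data.split_at(mid);
    thread::scope(|s| {
        let l = s.spawn(|| left.iter().sum::<i64>());
        let r = s.spawn(|| right.iter().sum::<i64>());
        l.join().unwrap() + r.join().unwrap()
    })
}

fn main() {
    let data: Vec<i64> = (1..=100).collect();
    println!("{}", parallel_sum(&data)); // 5050
}
```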
TheMrZZ 1 hour ago [-]
I only used Rust for fun maths projects crunching billions of numbers (otherwise Python is easier for me), but I have to say rayon is the most amazing multithreading experience I've ever had!
nm980 2 hours ago [-]
> I haven't had any issues with this getting out of hand on >10KLOC vibed rust codebases.
This rewrite is >750k lines of Rust
staticassertion 2 hours ago [-]
I don't see any reason why the approach wouldn't hold just fine, if not better, as the codebase scaled. Indeed this appears to be exactly what the author has done, they mention that they made heavy use of crates.
mohamedkoubaa 2 hours ago [-]
[flagged]
malisper 1 hours ago [-]
Same but for multi-threaded Postgres[0]. 96% of pg regression tests pass after 1 month and 823K LOC. 8 Codex accounts at $200/mo is what I could use up with no Mythos.
I've also seen the benefits of Rust for this too. And making the bet that my pg experience will help me make good design choices around many of the things people have been having trouble with in pg for a long time[1]. Excited to see AI make it more possible to improve complex pieces of software than has historically been practical.
Very cool! If you have extra tokens lying around, ask the agent to try to break things and open GitHub issues. This is what I do for tsz, and beyond the conformance tests I can see it finding very good bugs.
mixtureoftakes 38 minutes ago [-]
wow!
curious about your workflow for running all these accounts. different harnesses in parallel? manually switching in codex? 5.5pro only?
what works for you?
kayson 3 hours ago [-]
When Microsoft rewrote it in Go, there was a comment from one of the leads that they chose it over Rust because of the similarity in paradigms (garbage collection, etc.), and that using Rust would've been more difficult, requiring a lot of "hoop jumping". Now that you've done it... thoughts?
mohsen1 3 hours ago [-]
Yes indeed. More than 1 million lines of code (including tests) means jumping through lots of hoops, but with LLMs it's not as painful, so you can just ask it to do the hard things.
Example of a Claude Code session that came out without results after 2 hours of "Crunching":
https://github.com/mohsen1/tsz/pull/4868 (Edit: I force-pushed to the PR to solve the problem; you can see the initial refusal message in the initial version of the PR description.)
Funny thing is, the last percent of the tests has been so hard to work on that Opus 4.7 routinely bails and says "it's too involved or complicated", so I had to add prompts specifically asking it not to bail.
baq 3 hours ago [-]
You should try GPT, I’d be really interested to hear if it works better. (Exclusively using GPT for systems work at $DAYJOB, but compare with opus every couple weeks and GPT consistently gives me better results)
X-Istence 2 hours ago [-]
I've been comparing Claude vs Codex using GPT and Claude consistently is better than GPT about reasoning, about writing code, and using the tools as appropriate.
GPT for instance had a lot of issues using git worktrees, and didn't understand how to correctly use it to then merge stuff back into a main branch, vs Claude which seems to do this much more naturally.
GPT also left me with broken tests/code that I had to iterate on manually, Claude is much better about reasoning through code. Primarily Python.
mohsen1 3 hours ago [-]
OpenAI gave me that 10x boost and I used it all already for this week. I'm guessing the last 50 tests are only doable by GPT 5.5 xhigh.
odie5533 2 hours ago [-]
Do you have any write ups on your workflow with Claude and github dev?
mebcitto 3 hours ago [-]
That might be Opus 4.7 behaviour, because I also get that all the time in the past few weeks. Also a complex code base, but likely an order of magnitude simpler than yours.
calmoo 3 hours ago [-]
Is GC useful for a static type checker? Or did they make a new runtime?
aardvark179 4 minutes ago [-]
The point is that having a GC will affect your data structure and algorithm design, so it’s easier to automatically transform JS or TS to Go than to rust because you’re mostly reducing things down to one problem (translation) rather than multiple intertwined problems.
adambrod 3 hours ago [-]
They mentioned that they wanted to port their compiler over to retain existing behavior (vs a re-write) and Rust has a hard time with their cyclic data structures.
bicepjai 2 hours ago [-]
Rust is amazing, but the way I want to build Rust software breaks down on large projects with LLMs. Maintaining clean boundaries or even just establishing them stops being a flow state and turns into painful reviews that push me into procrastination mode.
girvo 1 hour ago [-]
I’ve struggled to get Opus to not write the weirdest possible Rust, ignoring all idioms and so on. Any tips?
Ciantic 3 hours ago [-]
Wow, amazing work.
Pretty impressive that it is faster than the Go version already.
mohsen1 3 hours ago [-]
Thank you!
It's much faster in single file benchmarks (3 to 5x)
I have optimizations planned for large projects that I'm still fleshing out.
lanthissa 2 hours ago [-]
Shouldn't typed code that uses a functional style be kinda the perfect endgame for LLMs? You can parallelize generation at any granularity, easily ring-fence changes, and reproduce everything, and types give clues to the LLM.
aabhay 3 hours ago [-]
Zig is much more type-aligned to Bun than TypeScript. And there’s a common interface of C FFI, so you could imagine porting it modularly and keeping the test suite in Zig.
45h2avf 3 hours ago [-]
How do we know it is true? The person in question works for Anthropic. And Zig was on the blacklist for some time due to its slop skepticism.
It could be another marketing stunt like Mythos, which is so dangerous to release that Anthropic must be bailed out by the government.
We don't know if the timeline is true, whether Rust experts had a hand in it or even if the reported test suite compliance is true. We are dealing with a company of habitual liars and promoters.
Aurornis 3 hours ago [-]
> How do we know it is true?
The branch is open.
You can check it out and run the tests if you don’t believe it.
christopherwxyz 2 hours ago [-]
Zig isn’t so much on the blacklist because of the culture it carries from its maintainers, but because the ecosystem is no longer easily composed with other GitHub projects/GitHub Actions.
madspindel 2 hours ago [-]
> We are dealing with a company of habitual liars and promoters.
Any sources to back this up?
Tiberium 4 hours ago [-]
I just want to comment that I think it's a good change if we look past the AI involvement.
Bun has had an extremely high amount of crashes/memory bugs due to them using Zig, unlike Deno, which is Rust.
Of course, if Bun's Rust port has tons of `unsafe`, it won't magically solve them all, but it'll still be better.
mi_lk 4 hours ago [-]
> Bun has had an extremely high amount of crashes/memory bugs
Any stats/source? Not that I think it's false
> and the ugly parts look uglier (unsafe) which encourages refactoring.
Looks like Bun owes that to itself to some extent; it's not solely because of the language.
dmd 3 hours ago [-]
You want a better source than the actual author of Bun?
nesarkvechnep 3 hours ago [-]
Authors can't exaggerate? Maybe some actual numbers can convince people.
> why: I am so tired of worrying about & spending lots of time fixing memory leaks and crashes and stability issues. it would be so nice if the language provided more powerful tools for preventing these things.
Not a hard number obviously but a clear indication those issues exist.
qudat 2 hours ago [-]
I don’t understand: just use an agent to find all memory leaks and segfaults. I don’t get the argument if you are gonna vibe code anyway.
With unlimited tokens make it a lint rule or auto formatter.
Ygg2 4 hours ago [-]
If you look at the percent of segfault errors in each repo, Bun had a much larger share. Although don't quote me on that.
leecommamichael 4 hours ago [-]
Is your claim that using Zig ends in an "extremely high amount of crashes/memory bugs?" Wouldn't that mean that it isn't even feasible to make high-quality software with such a tool? There is a lot of quality stuff made with C/C++, so what is Zig doing wrong?
aystatic 3 hours ago [-]
> Is your claim that using Zig ends in an "extremely high amount of crashes/memory bugs?" Wouldn't that mean that it isn't even feasible to make high-quality software with such a tool?
What caused you to hallucinate such a broad blanket statement? The point is the memory unsafety issues they ran into would be categorically impossible in safe Rust, which is why they're doing this in the first place.
mort96 3 hours ago [-]
It's not hallucination, it's a basic extrapolation. "Bun has had an extremely high amount of crashes/memory bugs due to them using Zig" is the same statement as "using Zig resulted in Bun having an extremely high amount of crashes/memory bugs". It is then natural to ask whether their position is "using Zig results in an extremely high amount of crashes/bugs" in general.
aystatic 3 hours ago [-]
That's a hell of a lot more than "basic extrapolation." You're misrepresenting the original claim to fight against one that's trivially easy to dispute. "Bun has had an extremely high amount of crashes/memory bugs due to them using Zig" (which unlike Rust, doesn't prevent you from writing them) is a completely different statement than your "using Zig results in an extremely high amount of crashes/bugs." Implying that such a generalization was even on the table is insulting.
Yes, obviously you can write high-quality software in Zig. But does Zig categorically reject the kind of bugs Bun was suffering from? Rust does.
mort96 2 hours ago [-]
The point is that the "extremely high amount of crashes/bugs" is maybe not the fault of Zig after all, as was implied.
fastball 2 hours ago [-]
How software behaves is very obviously downstream of the tools (in this case programming language) used to build it.
mort96 2 hours ago [-]
"Downstream of" is doing a lot of work in that sentence. Language has an effect on, but in no way determines, the reliability of software written in it.
fastball 1 hour ago [-]
Downstream doesn't imply determinism.
mort96 1 hour ago [-]
The original claim is one of determinism. Your use of the term "downstream" is hiding the distinction; it can be read in either way, so it bridges the gap between the position you want to defend ("using Zig causes a higher probability of memory bugs") and the position you're forced to defend ("using Zig results in extremely many memory bugs").
In short, I'm accusing you of doing a motte-and-bailey.
afdbcreid 1 hours ago [-]
Even assuming that's a correct interpretation, is "using C/C++ results in having an extremely high amount of crashes/memory bugs" not true?
mort96 1 hours ago [-]
No, that's probably false by a fairly simple existence proof. If it was true that using C results in an "extremely high amount of crashes/memory bugs", we would expect to not find any substantial pieces of software written in C without an "extremely high amount of crashes/memory bugs". Now where exactly you draw that line is necessarily going to be somewhat arbitrary, but by any definition, I think we can all agree that SQLite does not fit that description. Yet SQLite is written in C. Therefore, we conclude that the statement must be false. QED.
Now C does have some aspects which make it more prone to crashes and memory bugs. The less strong statement of "using C results in a higher propensity for crashes/memory bugs than Rust" is absolutely true, I would argue. And both C++ and Rust inherit some (but not all, and not the same) of the aspects which make C prone to memory bugs. (So does Go, I would argue, but less than C++ and Zig.)
skybrian 3 hours ago [-]
It's generalizing from Bun (which might be especially tricky code) to other software that might not have the same issues. There are lots of different kinds of software.
leecommamichael 23 minutes ago [-]
You know, I try to ask questions rather than making assertions in order to better my chances at provoking useful thought and conversation.
thayne 2 hours ago [-]
It is much harder to write quality stuff in C/C++ that doesn't have memory bugs (use after free, out of bounds access, use of uninitialized memory, double free, memory races, etc.). I wouldn't say it isn't feasible to build high quality software in those languages, but even the highest quality software written in them has these types of bugs. Zig is better than C, and maybe a little better than C++, especially with respect to spatial memory bugs, but it doesn't provide the same guarantees as Rust.
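A minimal sketch of the distinction being drawn, in toy code (nothing from Bun or the rewrite): safe Rust turns the bug classes listed above into compile errors or defined-behavior checks rather than silent corruption.

```rust
// Checked indexing: an out-of-bounds access yields None (or a panic with
// `[]`), never an undefined-behavior read past the buffer.
fn safe_get(v: &[i32], i: usize) -> Option<i32> {
    v.get(i).copied()
}

fn main() {
    let v = vec![1, 2, 3];
    assert_eq!(safe_get(&v, 1), Some(2));
    assert_eq!(safe_get(&v, 10), None); // no out-of-bounds read

    // Use-after-free is rejected at compile time by the borrow checker:
    // let r;
    // {
    //     let s = String::from("hi");
    //     r = &s; // error: `s` does not live long enough
    // }
    // println!("{r}");

    // Uninitialized reads and double frees are likewise impossible in safe
    // Rust: values must be initialized before use, and the compiler inserts
    // `drop` exactly once.
}
```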
jph00 3 hours ago [-]
The statement “there exists a project where zig led to an extremely high amount of crashes/memory bugs” does not imply “all zig projects have an extremely high amount of crashes/memory bugs”.
This is a classic logic problem - eg “there is an orange cat” doesn’t imply “all cats are orange”.
3 hours ago [-]
dminik 3 hours ago [-]
The answer is that C (and by extension Zig, C++) code goes through a hardening process. New code in these languages tends to be unsafe. But bugs and vulnerabilities get squashed over time. Bun gets updated fast and so has a lot of new unsafe code.
afavour 3 hours ago [-]
> There is a lot of quality stuff made with C/C++
There’s a lot of leaky crap written in those languages too. One of the core promises of Rust is that the compiler will catch memory issues other languages won’t experience until runtime. If Zig doesn’t offer something similar it’ll make Rust very compelling.
dnautics 3 hours ago [-]
rust does not promise leak safety.
josephg 3 hours ago [-]
True. But rust does make it a lot harder to leak memory by accident. Rust variables are automatically freed when they go out of scope. Ownership semantics mean the compiler knows when to free almost everything.
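A small illustration of both halves of that point, using a toy type (not anything from Bun): ownership frees values deterministically at end of scope, yet an `Rc` reference cycle still leaks in entirely safe Rust.

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A node that can point at another node, allowing a reference cycle.
struct Node {
    next: RefCell<Option<Rc<Node>>>,
}

// Build two Rc nodes that point at each other. This compiles fine in safe
// Rust, and the cycle is never freed: a leak without any `unsafe`.
fn make_cycle() -> (Rc<Node>, Rc<Node>) {
    let a = Rc::new(Node { next: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(Some(a.clone())) });
    *a.next.borrow_mut() = Some(b.clone());
    (a, b)
}

fn main() {
    // Ordinary values are dropped automatically when they leave scope...
    {
        let v = vec![1, 2, 3];
        assert_eq!(v.len(), 3);
    } // ...`v` is freed here, no manual free needed.

    // ...but each node below is kept alive by the other (strong count 2),
    // so dropping (a, b) leaks the pair. Rust prevents use-after-free,
    // not leaks.
    let (a, b) = make_cycle();
    assert_eq!(Rc::strong_count(&a), 2);
    assert_eq!(Rc::strong_count(&b), 2);
}
```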
dnautics 3 hours ago [-]
> Wouldn't that mean that it isn't even feasible to make high-quality software with such a tool?
Plenty of other companies/entities are making high quality software in Zig: TigerBeetle, and Zig itself, for example.
Bun's entire history has been a kind of haphazard move as fast as you can story, so...
Barrin92 3 hours ago [-]
It's feasible to write good software, but anything on the scale of millions of lines of code will have memory and pointer issues. I've worked in large C++ code bases with people much more experienced and skilled than I was, and every single one of them would tell you that at that scale, no matter how economical and simple your program, you will produce memory bugs; the smartest person in the world makes errors holding that much stuff in their head.
They're difficult to find, difficult to reason about in big software and you'll always create some. Languages that rule that out are a huge improvement in terms of correctness.
margorczynski 3 hours ago [-]
This is correct, but people with too big an ego or affected too much by Dunning-Kruger will try to say otherwise even when presented with ample evidence. Instead of a valid response you'll get "skill issue" from people who produce segfaulting code on a regular basis.
pjmlp 3 hours ago [-]
It is basically Modula-2 / Object Pascal with C-like syntax.
While bounds checking, improved argument passing, typed pointers, proper strings and arrays are an improvement over C, it still suffers from use after free cases.
C++ already prevents many of those scenarios, at least for those folks who don't use it as a plain Better C and actually make use of the standard library in hardened mode. When not, it is naturally as bad as C.
Also note that the tools Zig offers to prevent that are also available in C and C++, but people have to actually use them; e.g. I was using Purify back in the 2000s.
Then there is the whole point that Zig is not yet 1.0, and who knows what will still change until then.
anthk 21 minutes ago [-]
You would like the T3X language as an exercise in porting stuff from Free Pascal to it. In the near future I plan to port two libre text adventures to it, Beyond the Titanic and Supernova. If it fits under T3X, it might run on 'high end' CP/M systems out there.
Beyond these simple curses games, there's a 6502 assembler and disassembler, along with a KIM-1 simulator, micro Common Lisps, and whatnot.
chris_st 4 hours ago [-]
And they're clearly marked as `unsafe`, so easy to find, which gives them a nice list of issues to address.
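A rough sketch of why the `unsafe` keyword makes the audit surface enumerable. Real audits would use a dedicated tool such as cargo-geiger; the naive substring matching here would also count comments, and is only meant to show that the keyword is greppable at all:

```rust
// Naive count of `unsafe` markers in source text. A real audit tool parses
// the code; this only illustrates that the unsafe surface is searchable.
fn count_unsafe(src: &str) -> usize {
    src.matches("unsafe").count()
}

fn main() {
    let src = r#"
        fn safe_path() {}
        unsafe fn raw_ptr_path() {}
        fn wrapper() { unsafe { raw_ptr_path() } }
    "#;
    // One `unsafe fn` plus one `unsafe` block.
    assert_eq!(count_unsafe(src), 2);
}
```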
aurareturn 12 hours ago [-]
6 days of work to do this. Even if it doesn't end up becoming meaningful, it shows just how tokens and work done will be linked now and in the future.
It's going to be hard to compete with someone or a company that has more compute. They will just be able to do things you can't.
Aurornis 2 hours ago [-]
Translating a project that includes a good test suite from one language to another is known to be a great case where LLMs work well.
When you’re starting with a complete codebase to use as an example and a test suite to check everything it’s much easier to iterate toward the desired goal. The LLM can already see what the goals are and how they’ve been implemented once already, which is a much easier problem than starting from a spec.
mezyt 1 hours ago [-]
Great case where rust works well too. I won't cite every famous libs that got rewritten in rust but it wasn't all with LLM.
apitman 32 minutes ago [-]
It's not hard to imagine a future where the only things committed to git repos are tests and specs.
twoodfin 8 hours ago [-]
You could have said the same thing about steam power or electricity. And it’s not just an analogy: The magic of these things is in being universal information engines. You spend capital to build them, using well-understood, scalable techniques, plug them into electricity, and out comes value.
My point is, there’s no chance of a “haves and have nots” emerging, any more than electricity turned out that way in the modern world.
carefulfungi 3 hours ago [-]
Electricity might be a good analogy - but for the other side of this argument.
In the US, (nearly) full electrification wasn't achieved until the late 1940's/early 1950's - a process of nearly a century. (A moment of personal trivia, my great grandfather worked on crews electrifying rural areas of the midwest.)
suddenlybananas 4 hours ago [-]
>My point is, there’s no chance of a “haves and have nots” emerging, any more than electricity turned out that way in the modern world.
Energy costs vary widely across the world, and that has enormous consequences for the economies of different countries and their industrial capacity.
Electricity looks pretty even. Higher in Europe but they can afford that.
alphabeta3r56 2 hours ago [-]
Due to purchasing power parity, it is actually much higher in poorer countries; they absolutely are still the have-nots.
throwaway82012 3 hours ago [-]
[dead]
sdevonoes 31 minutes ago [-]
Unclear. Very good products tend to be about doing one or a few things very well, not about doing tons of stuff. So far, all I see is “Man, I’m a 10x engineer now!”: shipping more code but without clear direction and taste. At this point, most LLM-based work is just noise.
qudat 1 hours ago [-]
Nah. These agents are getting easier and easier to run local. Have you tried Qwen 3.6 27b? It’s insane what it can do compared to its size. Like 100% vibe small projects if you manage context properly.
These models are a race to the bottom just like compute.
nbf_1995 5 hours ago [-]
I can't help but wonder what this cost in USD assuming you paid standard rates from Anthropic. Can someone even ballpark the price?
baq 3 hours ago [-]
Much less than what it’d cost for a team of Rust engineers.
This is both amazing and scary; has been for a while now.
BearOso 2 hours ago [-]
It costs several times what it would cost a small team of engineers, even assuming you gave the engineers more time to do it. I'm guessing (wildly) this was around 0.5M USD in compute time. You do get the result quicker, though.
alice-i-cecile 36 minutes ago [-]
Half a million is pretty damn cheap for a full rewrite into Rust of a million line of code codebase.
Supermancho 2 hours ago [-]
10k lines ~$250 in OpenAI API calls (no plan)
45 million lines would get to ~$1.125 mil for the linux kernel.
950k lines for Bun would get to $23,750
use whatever math you like ofc.
Does Anthropic (or an employee) pay that? No. Even if it's at a loss in terms of company revenue, it's worth burning the private capital for all kinds of other reasons.
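The per-line extrapolation above checks out. As a sketch, treating the commenter's ~$250 per 10k lines as a hypothetical flat rate (it is a ballpark, not a published price):

```rust
// Hypothetical per-line API cost, from the ~$250-per-10k-lines figure above.
// Integer math keeps the extrapolation exact.
fn cost_usd(lines: u64) -> u64 {
    lines * 250 / 10_000
}

fn main() {
    assert_eq!(cost_usd(10_000), 250);           // the base observation
    assert_eq!(cost_usd(45_000_000), 1_125_000); // Linux kernel estimate
    assert_eq!(cost_usd(950_000), 23_750);       // Bun estimate
}
```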
pjmlp 9 hours ago [-]
With fewer employees....
aurareturn 9 hours ago [-]
Isn’t it just one guy?
Defletter 9 hours ago [-]
Exactly
rvz 5 hours ago [-]
This is exactly how Anthropic will market this rewrite towards companies thinking about doing more layoffs.
1 person did a rust rewrite that took 6 days that would have taken hundreds of engineers more than a year to do.
Aurornis 2 hours ago [-]
> 1 person did a rust rewrite that took 6 days that would have taken hundreds of engineers more than a year to do.
The entire bun team was only about a dozen people and they wrote it from scratch.
It would not take hundreds of engineers to port the existing codebase to another language.
I think this is a cool experiment, but some of these claims are getting absurd.
baq 3 hours ago [-]
The saving grace here is a rewrite of a project with a good test suite is the sweet spot: LLMs are great at translation and do great with verifiable goals.
I agree it’s still mind blowing compared to before times, though.
Dylan16807 3 hours ago [-]
> would have taken hundreds of engineers more than a year
This is estimating what, 10 lines per day each? No way translating code is anywhere near that slow.
59nadir 2 hours ago [-]
It probably wouldn't take a single person who knew what they were doing more than a year to re-implement Bun in basically anything, by hand and from scratch, i.e. not even looking at source. Writing the code for something you already understand and have built before is incredibly fast.
I'm sure they'll market what you said, but it's so ridiculous that I would hope people would see through this stuff.
xienze 3 hours ago [-]
> 1 person did a rust rewrite that took 6 days that would have taken hundreds of engineers more than a year to do.
Even cheaper would just be to not do it in the first place. Was there a pressing need to rewrite it?
ksec 8 hours ago [-]
I think a lot of people are taking this at face value; a lot of it was possible only because of the exceptionally extensive and comprehensive test suite previously built.
Jcampuzano2 3 hours ago [-]
It's still an impressive achievement that would have taken even the most competent engineers an exponentially longer time to accomplish.
I just hope it's noted when this is eventually marketed how much human effort went into designing and curating the test suite that even enabled this speed in the first place.
A test suite sort of functions exactly like the ideal scenario for current gen llms. A comprehensive enough test suite essentially forms the spec for agents to implement however they see fit - in this case rust.
You could probably throw away the entire actual source code in certain cases and reimplement the whole thing from scratch just giving an agent access to the tests when it's as well crafted as a project like bun.
scuff3d 3 hours ago [-]
Look what it can do in 6 days!
Ignore the hundreds of thousands of hours put into the original architecture and test suite that made it possible in the first place.
zaptheimpaler 1 hours ago [-]
This is such a bad faith argument. How long would it take a dev or a team of devs to do this with the same architecture and test suite? A hell of a lot longer than 6 days..
oytis 59 minutes ago [-]
But what is the purpose? When you rewrite a project in another language, it's so that engineers can maintain and further develop the project better on some metrics, thanks to the advantages of the language. That doesn't hold when an LLM does the rewrite, since there is no one who understands the code afterwards.
It's a good demonstration of capabilities, sure, but the result itself makes no sense. We'll have to figure out where these capabilities can bring real advantage
gamegod 2 minutes ago [-]
This is such an insightful comment. It also underscores why these AI companies' marketing efforts are promoting rewrites.
scuff3d 35 minutes ago [-]
You missed the point.
People want to use stuff like this as evidence for AI being able to write entire software systems in a few days. We saw the same shit with the "compiler" they made with a bunch of agents. Literally the only reason it's possible is the hundreds of thousands of man-hours and God knows how much money that were poured into the reference projects before the AI got anywhere near them.
To replicate this kind of thing with a green field project would take an absolute ton of spec work and requirements derivation, which will substantially eat into any savings from having AI generate it.
The accomplishment itself is interesting, and unlocks opportunities to do work no one would have bothered with before, but it doesn't represent what a lot of people desperately want it to.
cmrdporcupine 3 hours ago [-]
Exactly this.
I am not sure why people sound so astounded, to be honest. This has been my frank experience of the agentic tools both Codex and Claude since about December.
When given the right constraints this kind of thing is entirely conceivable.
However the important question not being answered here is: does anybody working on it have a full understanding of what has been built?
My experience having constructed similar types of projects using these tools is yes, you could do this in a week or two but now you'll have a month or two of digging through what it made, understanding what was built, and undoing critical yolo leaps of faith it made that you didn't want.
scuff3d 2 hours ago [-]
Not to mention that even attempting something like this from scratch would take hundreds of hours of spec work. I see it all day, every day, in the aerospace sector. Software engineers have absolutely no idea what deriving a design document and all its associated artifacts actually looks like, and they're in for a rude surprise if the industry really does shift hard in that direction.
tmaly 4 hours ago [-]
Just a cautionary case of porting to Rust using AI
A Claude Code-built C compiler passed 100% of the GCC tests and couldn't even run a hello world...
8note 4 hours ago [-]
I think there's a different lesson to be taken from those cases: the LLM will build to whatever you give it a feedback loop for.
If you give it just the logical tests, it won't consider speed at all. If you include tests that measure speed and ask the LLM to match the performance, it'll do that too.
It's the same class of error as everything else with LLMs: they have no common-sense context for what people consider important. If you don't enforce the boundaries, they will ignore them.
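A tiny sketch of that feedback-loop point: a logical test alone says nothing about speed, so a performance budget has to be encoded as its own test. The Fibonacci function and the one-second budget are arbitrary stand-ins, not anything from Bun's suite:

```rust
use std::time::Instant;

// Iterative Fibonacci as a stand-in "implementation under test".
fn fib(n: u64) -> u64 {
    let (mut a, mut b) = (0u64, 1u64);
    for _ in 0..n {
        let t = a.wrapping_add(b);
        a = b;
        b = t;
    }
    a
}

fn main() {
    // Logical test: correctness only. An LLM optimizing against just this
    // is free to return an arbitrarily slow implementation.
    assert_eq!(fib(10), 55);

    // Performance test: the extra feedback loop the comment describes.
    let start = Instant::now();
    let _ = fib(10_000_000);
    assert!(start.elapsed().as_secs_f64() < 1.0, "perf budget exceeded");
}
```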
alphabeta3r56 2 hours ago [-]
The question is, are our optimization functions well specified enough? (No.)
How important is a well-specified optimization function? No one knows. We will find out.
So much of the fundamental dynamics of the industry and the job have changed in so little time. Basically over night.
Some days I am so excited at how much I can do now. You can build anything you want, in basically no time! 100% of my software dreams can be a reality.
Some days I am terrified at what's going to happen to the job market.
Suddenly you can get so much with so little. The world only needs so much software.
Is every company that sells software as their core business model going to go out of business?
What will happen if only certain companies or governments get access to the best models?
keeda 49 minutes ago [-]
> The world only needs so much software.
Around the time of the dot com crash, there was a decent amount of rhetoric advising students and job seekers against getting into the software industry, because it was getting "too saturated." The thinking was there's just not that much work to go around, especially for the number of people flocking to the field. And the crash just reinforced that narrative.
But even as a student back then, I could tell that there was unlimited scope for software. Pretty much any cognitive thing we do manually could be done in software. I once idly tried to enumerate those and quickly realized there was soooo much to do. Plus, I also understood that the more you do things a new way, a lot more things pop up that we haven't even imagined yet. The possibilities were countless. It was clear that the "saturation" narrative stemmed from a lack of people's imagination and understanding of what software really was.
I just knew that this field would never get saturated because it was impossible to run out of things to write software for.
But these days...
I mean, I know we will always have new software to build as things evolve, which they will do faster than ever with AI. But these days, I wonder if it's now possible to write software faster than we can imagine new things to do.
EMIRELADERO 15 minutes ago [-]
> Pretty much any cognitive thing we do manually could be done in software.
Yes, although I suggest being careful with that kind of thinking.
Ooh, I hadn't read that one; I've put it on my list. I couldn't read the page properly because ads kept popping up and making the page jump around... but it seems the linked section was about displacement of workers? If so, that's always been true of all technology, but that's less a problem with the technology and more with the social system it is applied in. I just posted this comment elsewhere that may be relevant: https://news.ycombinator.com/item?id=48078930
wolttam 3 hours ago [-]
Certainly companies and governments will have access to better models than the public (in fact, that's already the case with Mythos). The public will still be able to help themselves with models that are behind the frontier.
pulsartwin 9 hours ago [-]
At the very least, it's interesting to be a bystander observing as efforts like this progress. The first thing it makes me wonder is how comprehensive/high-quality the test suite is to begin with. Not to cast aspersions, but even at 100% on all platforms I wonder how confident the Bun team would be in migrating.
nine_k 4 hours ago [-]
> 99.8% of bun’s pre-existing test suite passes on Linux x64 glibc in the rust rewrite
OK, they've got a working prototype, congrats! Now it needs to be put into shape so that all the unsafe blocks are eliminated (maybe with a few tiny exceptions), and the code is turned into maintainable, readable, reasonably idiomatic Rust.
I wonder how long that is going to take.
amarant 3 hours ago [-]
About 2 months, or 60 days, if we go by the old 90/10 rule.
Not sure that rule is even applicable anymore, but I don't have a better heuristic to make guesses by either.
mustache_kimono 1 hours ago [-]
> Now it needs to be put into shape so that all the unsafe blocks are eliminated
> and the code is turned into maintainable, readable, reasonably idiomatic Rust. I wonder how long is it going to take.
This isn't a c2rust rewrite?
ameliaquining 53 minutes ago [-]
That GitHub search only covers the main branch, not the not-yet-merged Rust rewrite; the only Rust code in there is tests for Rust FFI (so that people can write native extension modules for Bun in Rust if they want to).
I have not had time to look at the code myself, but from when this was initially posted to Reddit, IIRC it had around a thousand global mutable variables, which are unsafe to access.
I am very curious what the numbers are once the test suite passes and after a few passes of reducing the amount of unsafe.
ameliaquining 1 hours ago [-]
This is the kind of program that would need to have a lot of unsafe even if it had been written in Rust from the very beginning. For comparison, there are about 2600 unsafe blocks in Deno, not counting dependencies.
4 hours ago [-]
jedberg 2 hours ago [-]
Obviously there is a huge trend of "rewrite X in Rust". I understand why, Rust is a huge improvement in safety and speed.
My question is, to people even older than me (and I'm certainly not young), does anyone remember this much enthusiasm about people rewriting C code into (C++/Java/Whatever was new and hot)? Because I don't, but maybe I missed it.
libria 23 minutes ago [-]
I recall C++ OOP being the new hotness when I started out and C was always contrasted as the old & busted example. Kind of the "Everything-as-an-object will simplify everything" phase. Windows MFC was the new way, then STL.
Java WORA write once, run anywhere was definitely a thing when it came out. Java Applets came out of the woodwork and were the WASM of their day. Even Cisco ran Java for their router UI for a while, which was painful.
More recently, HN went through a period about 10 years ago where every other article ended in " ... written in Go".
The mantra may not have rhymed with "rewrite X in Y" but the spirit was there.
Onavo 1 hours ago [-]
There were no good options previously. It was either C or C++. Most of the other languages were either fringe or had a GC, or had a pseudo runtime GC (Swift). The culture of Java and C# and Go didn't really support the type of low level optimizations needed, even though you could technically do system programming if you restrict yourself to a specific subset of language and cut yourself off from most of the standard library and ecosystem. Nim was unstable. OCaml had the same issues as Go and Java and C#. You simply did not have any options until Rust came along. Oberon was an academic trinket. The less said about the various lisps and forths the better.
OS and embedded programming require bare metal support and data structures that can run standalone in the absence of an OS and standard library, and the ecosystem must exist to support such a style of programming.
Currently Rust has over 10,000 crates that would theoretically work just fine in a kernel environment.
I think the industry is moving to English as the programming language, and specifications-context-tdd as the framework for building software.
Many find it distasteful, and many finding liberating. I think it's broadly correlates with how they feel about expressing themselves in english vs say C++.
As a side question, is there anyone who's using LLMs primarily in non-English mode to program? I suspect there are quite a few people using Mandarin; can someone share a first-hand account?
SwiftyBug 10 hours ago [-]
I wonder how well Mandarin works for LLM-based programming. On one hand, it's very token efficient as Mandarin script is very dense in meaning. On the other, I suppose this can increase ambiguity.
jamesdutc 2 hours ago [-]
I can speak, read, and write Taiwanese Mandarin (which is likely relatively underrepresented in the training sets and which is, in my practical experience, materially different in its usage).
The authoritative answer for this question would best come from the millions (or tens of millions) of Chinese-speakers who are currently using LLMs to write software.
However, it is my suspicion that you would see no advantages using any language other than English. While there is a certain token-level density to written texts, it seems the benefits of this (and the more recent discussion around “caveman talk”) are quite limited.
Furthermore, consider that the vast majority of textbooks, technical documentation, blog posts, StackOverflow answers, &c. are originally in English. Historically, where these have been translated to Chinese, the translations have often been of very poor quality (and the terminology and phraseology is often incomprehensible unless you also understand some English.) I would suspect that this makes up the overwhelming majority of the training sets for these models.
That said, my experience using the most recent models, is that they are surprisingly language-agnostic in a way that surpasses readily-available human capability. For example, I can prompt the LLM to translate English into something that uses German grammar, Chinese vocabulary, and Japanese characters, and I'll get an output that is worse than what a human expert could do… but where am I going to find a multilingual expert?
(Of course, I have so far only ever been impressed that a model could generate an output but never impressed with the output it did generate. Everything—translations, prose, code—seems universally sloppy and bland and muddy.)
So the biggest benefit I would anticipate for a Chinese speaker today is that, if they are disinterested in working internationally, they have significantly less dependency on learning English.
arjie 2 hours ago [-]
Character-density and token-efficiency are different things. Latter is data and, therefore, tokenizer specific e.g. take GPT-5's tokenizer o200k_base and run mandarin text and its translation through. Some amount of the time en will beat zh. I just tested with news articles and wikipedia.
After all `def func():` is only 3 tokens on o200k_base.
pyonpyon 9 hours ago [-]
I'm using it 50% English (personal projects)/50% Polish (workplace; reasons being agents.md / team is not that english proficient) and honestly I haven't seen much difference in the output/ambiguity.
Polish prompts tend to be shorter due to the language having a lot of verb forms/conjugations, the only "bad" thing for me is that when it's saying "it broke" it tends to use uncanny / blunt words that make me sometimes laugh.
thedevilslawyer 9 hours ago [-]
Interesting. Some questions: would you say Polish is more dense or less dense than English? It's interesting to hear that code quality is not suffering but the response text is sillier or blunter. Any other discrepancies compared to English?
pyonpyon 9 hours ago [-]
I would say it certainly can be more dense, but even then the tokenizers count it as more. Last time I checked with the OpenAI tokenizer, my agents.md ate ~30-40% more tokens than the English version at roughly 1:1 meaning.
3 hours ago [-]
nesk_ 2 hours ago [-]
I use French nearly all the time, it works well. Not that I can't write English prompts, but I find it easier to use my native language.
eikenberry 3 hours ago [-]
I think it will eventually be its own dialect of English. Telling LLMs what to do is better using not quite normal English and I think this will continue until it isn't recognizable as natural English anymore, but a new fuzzy programming language (probably >1).
tayo42 3 hours ago [-]
>Telling LLMs what to do is better using not quite normal English
What are your prompts like?
mohamedkoubaa 2 hours ago [-]
I'm teaching my kids to be fluent in tokenese
nothinkjustai 3 hours ago [-]
Natural language doesn’t have the precision required for building systems. We already have languages for specifying systems precisely. It’s called “code”…
pjmlp 9 hours ago [-]
I agree, and those that are still too focused on code generation for specific languages are fighting the last war.
It is the revenge of UML modeling.
Eventually it will get good enough that what comes out of agent work is a matter of formal specification.
Assuming that code is actually needed and cannot be achieved as pure agent orchestration workflows.
_woland 8 hours ago [-]
I'm using it in english / albanian. Not much difference really. Impressive.
15 minutes ago [-]
afavour 4 hours ago [-]
Presumably the biggest loser in all this is Zig, I only know of the language because of Bun.
But the timescale still gives me pause… just because AI lets us convert a codebase in 6 days doesn’t mean it’s wise. There are surely a lot of downstream implications! It’s always felt a little like Bun is making up a plan as it goes along (and maybe that’s unfair), this seems to underline the point.
nine_k 3 hours ago [-]
Zig is a great low-level language. It's much better than C, while not being so much larger as e.g. Rust or C++. AFAICT Zig does well in embedded development, and should continue to do so. Note that Zig is not even 1.0 yet.
internet2000 3 hours ago [-]
Yeah, but now they got the fame of being the language that fumbled the ball because of an overly onerous anti-AI stance.
Chris2048 3 hours ago [-]
It's been repeated many times that the rejection of the Bun PR was unrelated to their AI-policy. It's also not clear they've "fumbled the ball" given how many projects are complaining about slop PRs.
scuff3d 3 hours ago [-]
Lol. What a goofy take.
wolttam 2 hours ago [-]
These tools let you get a massive codebase functional in 6 days. But, presumably, there's no better language to target than Rust (in terms of safety/performance), and therefore the rest of time can be spent making the birthed-in-6-days codebase better.
iwontberude 2 hours ago [-]
But the author said "the code truly works, passing the test suite on Linux and soon other platforms" which just sounds really wise.
taosx 37 minutes ago [-]
That's amazing, over time I got a few memory related crashes w/ bun but have deep respect for the performance work put in. Hopefully Rust's compiler will help even more.
Off topic: I'm wondering, now that more JS finds its way onto our machines and bundle size is a second-order concern for most, whether a revival of Prepack or projects in the same vein would be worth it, especially with agents.
mikebelanger 60 minutes ago [-]
Interesting that ports can be written so quickly with AI. But that aside, I have to ask...why? You want a super performant bundler/runtime/package manager written in rust with TS support, Deno has this already.
boring-human 2 hours ago [-]
I harbor some hope that the (sad) fall of human SWEs will at least be accompanied by language defragmentation. We don't need 38 systems languages once human taste is mostly out of the picture.
akagusu 7 hours ago [-]
What does this mean for Zig?
Few big popular projects use Zig, if they start to move away from it, what Zig's future will look like?
NewsaHackO 2 hours ago [-]
I think the issue is that Zig lost its biggest project, which was a poster-child project for real uses of Zig. Worse, the project felt Zig wasn't meeting its needs, to the point that they abandoned it and rewrote everything in a different language. That's a really bad signal for anyone thinking of using Zig for a big project. Zig is still in beta, but has there ever been a situation like this, where an up-and-coming programming language was abandoned by its biggest external project and still went on to be considered successful?
kennykartman 45 minutes ago [-]
Nobody knows. Here's my two cents.
Zig is a very interesting LOW level language, but honestly I think it should be considered for what it is: a better C. I don't think it fits for anything that someone would have written in C++, Java, Haskell or C#. Instead, Rust is competitive with all of these languages when it comes to safety, abstractions and speed. And also C and Zig itself.
Zig has a couple very interesting ideas that make it stand out: comptime and the zig build system.
Alas, Zig is still far from stable. Rust came out to the public in 2012 and became stable (1.0) in 2015. Zig came out to the public in 2016; it's been 10 years now, and some say it's still years away from 1.0.
So, while Rust took 3 years of public development to become stable, Zig is taking 10 to 15. I love the language, but TBH I don't see a great future ahead, especially with LLM advancements that can use safer languages to do the same work. There's no point in risking more memory bugs when the effort of writing code is the same.
SwellJoe 3 hours ago [-]
It means nothing for Zig. Zig isn't even out of beta yet.
jadbox 3 hours ago [-]
Jarred has already said on Twitter that this was only an experiment for comparisons and very, very unlikely that they'd switch to Rust.
I'm a full time Zig developer, and I see this as an absolute win. I know Jarred has said in the past he feels Zig makes him more productive, but I also think it's fair to say Bun was programmed in a way that's quite cavalier towards buffer overruns. I think Jarred and the Oven team will have significantly better luck with Rust.
Some commenters have remarked they only heard of Zig because of Bun, therefore this is bad for Zig. Not so. In my opinion, there has always been a mismatch. I say with no ill will that a divorce is likely better for both parties. I genuinely believe Bun will be better software once fully converted to Rust.
onlyrealcuzzo 3 hours ago [-]
And here I am trying to get an LLM to add types to a 100k line Ruby repository for 2 days, and it's not going so hot...
adsharma 2 hours ago [-]
An SMT solver may work better.
onlyrealcuzzo 1 hours ago [-]
Will that work if my codebase is filled with nils it shouldn't be filled with, and HashMaps instead of structs with a loosely defined schema, and tuples masquerading as arrays?
lujeni_ 2 hours ago [-]
No doubt on my side porting was "easy". What I’d find interesting is the ability to maintain and properly care for the code over time for the next iterations.
Do we eventually end up with a codebase that nobody truly understands in depth anymore, where everything is generated and modified through GenAI?
Thanks for sharing.
oytis 47 minutes ago [-]
Yeah, that's my issue with llm code. If we imagine a future without human programmers - sure, go ahead, we are not there yet, but maybe it's possible.
But if you want it to coexist with humans, then it doesn't seem to work well. It gets in the way of human learning and human communication. Making professionals and teams weaker essentially
ec109685 1 hours ago [-]
There is no way a port this massive will have human code reviews.
If this succeeds, there is no stopping AI, given it will have crossed the Rubicon of human bottlenecks.
Why not? I think we are perfectly capable of generating a test and validation environment that we can use for correctness. Most likely LLMs could do this better than engineers with zero-to-no domain and language knowledge can these days. From that point on, rewrites become feasible (not easy, feasible).
dangoodmanUT 2 hours ago [-]
If this goes through, it feels like it will stoke rust on zig violence
kennykartman 36 minutes ago [-]
Sadly, yes. I feel too much "violence" on both parts.
Honestly, the Zig community seems the most bitter, for whatever reason, while on the Rust side it seems to me they are simply overstating how great the language is and are pushy in trying to convince others of their ideas.
If this goes through, we can all take SWE lessons from it, but I think the communities will suffer.
arjie 3 hours ago [-]
This is remarkable. Man, there are all those ancient things that "we've lost the source code for". One time, in a past job 10 years ago we were reimplementing something that was lost to the sands of time, using an out of date spec it had used. It was such a tedious job with verification but we got there. Amazing how easy that would be today.
thfuran 1 hours ago [-]
I don't think this kind of thing works nearly so well without a comprehensive test suite or the ability to easily use the reference version as a test harness. The typical enterprise relic for which no specification or source remains almost surely lacks the former and probably isn't very amenable to the latter.
languid-photic 2 hours ago [-]
would be fun to do zig -> rust -> zig and to measure the delta
(in a VAE-ish way, kl div on the embeddings?)
languid-photic 2 hours ago [-]
also feels like a good posttraining task
pbohun 3 hours ago [-]
How many tokens did this port consume?
allthetime 5 minutes ago [-]
Bun is owned by Anthropic and so has access to Mythos & unlimited tokens.
The answer is... more than any of us could likely afford.
hacker_88 29 minutes ago [-]
Merge with Deno
0-bad-sectors 8 hours ago [-]
Interesting! I wonder how the performance is compared to the Zig version
m4rtink 9 hours ago [-]
What license is this ? Let me guess, it is no GPL...
scared_together 8 hours ago [-]
Unlike the GNU coreutils rewrite in Rust, the Bun rewrite in Rust is being undertaken by the owners of the project.
Hmm, that's unfortunate - why does so much Rust stuff seem to default to MIT/BSD ? Just because Mozilla used that for most of the Rust stuff ?
Do developers using Rust even know the difference ? Like how anyone can basically take all your work & base a proprietary fork on it with maybe saying "thanks" (attribution) if they feel like it ? :P
bob001 1 hours ago [-]
> Like how anyone can basically take all your work & base a proprietary fork on it with maybe saying "thanks" (attribution) if they feel like it ? :P
I'd assume the Bun people got a bit more than a thanks when Anthropic acquired them. :)
You also can't take your GPL code (unless you do CLAs with all contributors), convert it to closed source yourself and make a massive VC funded startup around it. Which is about the only other way anyone makes better money from open source than by just working for a big tech company.
conradludgate 2 hours ago [-]
I'm very aware when I pick Apache-2. I want attribution for my work, but I don't care about open source purity. I respect closed source software and I put my open source code up for free because I don't care to profit off of my hobbies.
johnny22 2 hours ago [-]
for the same reason most ruby and javascript/typescript stuff is. Heck, even most python.
Most of them never got into the GPL in the first place.
raincole 2 hours ago [-]
Your guess is correct! Congrats. Bun itself is not GPL either by the way. Oh, rust compiler itself isn't GPL either.
Still, ~1M LOC ported in a work week (~400 LOC/min, wtf?) with almost all of it working is pretty wild. I hope the guy managed to maintain normal function, because I've found that getting into flow with AI is even more self-consuming and intoxicating than without it, and it was already potentially rather rough.
arto 3 hours ago [-]
The fastest large-scale rewrite in the history of software engineering, likely
pdhborges 4 hours ago [-]
Curious how the test suite was applied. Was it ported from Zig to Rust beforehand?
190n 3 hours ago [-]
Almost all of Bun's tests are written in JavaScript run in Bun itself.
suck-my-spez 2 hours ago [-]
Serious question… Who’s going to want to run a vibe coded runtime in production?
I don’t see how this is a good look for Bun?
kennykartman 30 minutes ago [-]
One should care about tests more than how code was coded.
If I had a codebase with lots of tests and asked someone else to rewrite it to another language passing the same test suite, I honestly wouldn't expect a great quality job.
I say this because it happened 3 times in the company I work for: we conducted experiments by tasking different companies to rewrite the same code in another language. All of them passed (most) of the tests, but code quality was low. If the job is a black box, rely on the I/O to determine quality, not the inner workings.
zaptheimpaler 1 hours ago [-]
I just see a ton of reflexive AI hate here. I don't care if it was vibe coded, if it passes the entire test suite and was vibe coded by the original authors, I trust it as much as the original Bun. These are Jarred's words about it:
> it’s basically the same codebase except now we can have the compiler enforce the lifetimes of types and we get destructors when we want them. and the ugly parts look uglier (unsafe) which encourages refactoring.
> why: I am so tired of worrying about & spending lots of time fixing memory leaks and crashes and stability issues. it would be so nice if the language provided more powerful tools for preventing these things.
This makes me trust it more, not less.
AtNightWeCode 2 hours ago [-]
Kinda crazy to use AI to switch from zig to rust in a tool that runs js. Bin bun and use a real lang to begin with. No reason to have that extra layer anymore.
dare944 43 minutes ago [-]
Lol, I had a similar thought as well, but more along the lines of "We're coming for you next, JavaScript!"
But the effort is certainly an exquisite rearrangement of the deck chairs, no?
matrix12 1 hours ago [-]
will this mean opencode is finally portable?
dlenski 3 hours ago [-]
Deleted
logicprog 3 hours ago [-]
@simonw explains how hilariously misguided that paper is in one of the top comments, and how it doesn't apply remotely to a real agent harness. Plus it's not even clearly relevant here, because the model isn't trying to regurgitate the original document but to generate a new one, and there are guardrails to put it back on track in the form of a compiler and tests. Also, the test suite is very thorough and pre-existing, and the vast majority already passes. This is skepticism for the sake of it.
raincole 2 hours ago [-]
Perhaps you can elaborate on how your comment is relevant to the Bun's experiment here.
timetraveller26 4 hours ago [-]
3 years from now: Linux ported to Rust in 6 days.
And on the seventh day Claude ended His work which He had done, and He rested on the seventh day from all His work which He had done
kennykartman 27 minutes ago [-]
That's a fun point. I honestly don't think it will happen in 3 years, but I think it will surely be doable in 10.
More interestingly: will we need to care about the code at all, at that point?
amai 2 hours ago [-]
Bunner
the__alchemist 46 minutes ago [-]
Bun alert!
born-jre 4 hours ago [-]
Being an Anthropic-acquired project, does he have access to Mythos, or is it the normal Claude we plebs have access to?
tempest_ 3 hours ago [-]
This is entirely possible with Claude as it existed even last year.
The LLMs are quite good at re-writes and even better when provided an 'oracle' like a well rounded test suite or existing implementation to work against.
It's part of the reason we keep seeing "I rewrote <library> in <language>" posts on Hacker News, and when you look at the repo it's more like "I prompted Claude to rewrite this repo in Rust" or whatever.
bel8 3 hours ago [-]
As an Anthropic acquihire, not only does he have access to every model and service but he probably has infinite tokens available.
Bun powers Claude.
rishabhaiover 3 hours ago [-]
Also, isn't it a great ad for Anthropic itself? One wonders
nine_k 3 hours ago [-]
Indeed, knowing the amount of tokens spent would be very interesting.
ekjhgkejhgk 3 hours ago [-]
Explain it for dummies. Isn't Zig a programming language? Why are they rewriting a programming language in another programming language?
conradludgate 3 hours ago [-]
They're not rewriting zig. They're rewriting bun, which is currently written in zig
sergiotapia 2 hours ago [-]
Jarred's post is singlehandedly shitting on Zig's reputation. not good juju for him to post like that.
"I am so tired of worrying about & spending lots of time fixing memory leaks and crashes and stability issues"
bun was zig's poster child. if it moves away, it becomes yet another random language like nim or crystal.
lerp-io 3 hours ago [-]
[flagged]
jdw64 2 hours ago [-]
[dead]
redsocksfan45 3 hours ago [-]
[dead]
black_13 11 hours ago [-]
[dead]
rvz 12 hours ago [-]
[flagged]
vintagedave 12 hours ago [-]
> absolute position of hating something such as AI and progress
Most takes I've seen are far more nuanced.
Key is that 'progress' has a positive connotation. It is different from change. Mere change - such as new inventions - may not necessarily be aligned with progress in a field, society, etc.
Change may be inevitable, but it's up to us humans to sculpt it into progress.
rvz 11 hours ago [-]
But I am talking about Zig and others who have the same stance. Zig has a very strict No LLM / AI contribution policy and it likely got in the way of the Bun maintainers at Anthropic. From [0]
>> No LLMs for issues.
>> No LLMs for patches / pull requests.
>> No LLMs for comments on the bug tracker, including translation.
They don't hate it. There's no antagonism that I know of there. I believe they want it to be fully human-authored and want low-hanging fruit items to be good onboarding for developers, not targeted by AI contributions. Simon Willison wrote a good blog post on it: https://simonwillison.net/2026/Apr/30/zig-anti-ai/
None of this is, in the original comment's text, "hating... AI".
heldrida 11 hours ago [-]
That's true, but the author might have decided on their own. Not everything is a marketing plan.
roschdal 2 hours ago [-]
Meh. I prefer Java, all hours of the day, every day of the week.
parliament32 3 hours ago [-]
Ew
pjmlp 3 hours ago [-]
> why: I am so tired of worrying about & spending lots of time fixing memory leaks and crashes and stability issues. it would be so nice if the language provided more powerful tools for preventing these things.
As expected, Modula-2 / Object Pascal-like safety was great during the last century, before automatic resource management and improved type systems became common in this century.
Naturally also have to note, wasn't this supposed to be only an experiment, nothing serious?
heldrida 13 hours ago [-]
An update on Bun’s experimental migration from Zig to Rust:
The Rust rewrite now passes 99.8% of Bun’s pre-existing Linux x64 glibc test suite.
[0] https://ngspice.sourceforge.io/
I am a Rust developer myself, but I really love Zig and Bun. I am just overly curious about all this.
haven't used zig...(only used rust)
but zig doesn't solve those problems?
I am of the opinion that it is horses for courses and not a universal better proposition.
Because my needs don’t fit in with Rust’s decisions very well I will use zig for personal projects when needed. I just need linked lists, graphs etc…
While hopefully someone can provide a more comprehensive explanation here are the two huge wins for my use case.
1) In Zig, accessing an array or slice out of bounds is considered detectable illegal behavior.
2) defer[0] allows you to colocate the freeing of resources with the code that acquires them.
That at least ‘feels’ safer to me than a bunch of ‘unsafe’ rust that is required for my very specific use case.
I was working on some eBPF code in C and did really miss zig.
For me it fits the Pareto principle but zig is also just a sometimes food for me, so take that for what it is worth.
[0] https://zig.guide/language-basics/defer/
I've written hundreds of thousands of lines of Rust and outside of FFI, I've written I think one line of unsafe Rust.
E.g. look at a Python list. Is it safe? In Python sure, but that's abstracting a C implementation which definitely isn't safe.
If you look at Rust's std::Vec you'll find a very similar story - safe interface over an unsafe implementation.
It isn't as binary as you think.
What are you asking for exactly?
It gives you a few more tools than C - like a debug allocator, bounds checked array slices and so on. But it’s not a memory safe language like rust.
https://github.com/ityonemo/clr
Bun: Hold my beer
Insert something about monkeys, typewriters, and Shakespeare here.
> I expect OSS to go the opposite direction: no human contribution allowed. Slop will be a nostalgic relic of 2025 & 2026.
We should have seen this coming after they got acquired by Anthropic, but it's still disappointing. I'm not against large language models as a technology, just thoroughly disgusted by how these "AI" companies rose to power, eating the software industry and the rest of society. It's creating a very unhealthy dependency.
Think a few steps ahead and start preparing a slop-free software stack and community. That includes Zig and its ecosystem. Even if we (and future generations) don't manage to live entirely without slop, it's more important than ever to ensure a sustainable computing culture, free as in freedom.
It’s not that anthropic/google/openai/etc are unavoidable
Every tech you mentioned is absolutely governed by multibillion dollar companies. Something like 75-85% of OSS code is contributed by employees doing their day job. Most Linux and Postgres contributions come from those same employees. HTTP and TCP/IP are managed by standard bodies and industry working groups that, you guessed it, are governed by multibillion dollar companies. Red Hat and IBM are responsible for 40-60% of contributions to Qemu.
Some of the inner circle move to corporations to increase their power and are joined by corporate developers (sometimes their bosses) to take over the project.
A lot of corporate OSS development are entirely unnecessary rewrites or simple things like release management. So I'd put the number of useful code by employees much lower.
But governed, hell yeah, I agree. The corporations crack the whip and oppress real contributors.
Believe it or not, for some of us it’s not “the whole damn point”.
https://news.ycombinator.com/item?id=47945021
Your kind of negativity is pathological.
This is one of my problems with academia: people only sharing results when they're positive and complete. I want to hear about what people tried that didn't work, and see the string of failures. People are already inclined to avoid sharing their work out of concern that they'll be judged--let's not encourage that behavior, please.
And to have an opinion on that you yourself need to be an expert or at least experienced. Otherwise you’re kind of just not capable of judging
The AI companies and their associates are beginning to surpass that level of denials and lies.
What would the emerging odds be? My guess is 19/20 in favor of ditching Zig.
I have followed many initial denials on a wide range of topics, not only rewrites, over the years. Like clockwork, most of them were lies.
would the world come to a standstill tomorrow if every Bun instance out there ran on Node.js ?
they know their A.I can't sell without the noise that it's now on the edge of the frontier. this is hype.
zig adopting a strict 'no LLM' policy affects the LLM vendors.
I’ve been thinking about setting up a non trivial project to use as a benchmark for any plugins and/or harness changes I make.
Having a prebuilt verification suite is great. You can use it to asses things like token usage, time, across different harnesses, models, plugins.
https://news.ycombinator.com/item?id=47945021
They fork Zig to utilize LLM rewrites and build something the Zig team clearly disregarded (non-deterministic compiling)
And now, like a whiny baby, they LLM-rewrite to Rust. There is a very real chance that Zig's design philosophy got them to where they are now by forcing the tough but precise decisions, and the Rust rewrite is the start of the downfall.
It's purely politics-based, not technical, but it seems like Bun is fully pampered by Claude. So much so that I wouldn't wonder if Anthropic's next marketing piece is: "Claude Mythos rewrote a leading 950k-LOC JS runtime in Rust."
https://tsz.dev
Rust is perfect for writing all of your code with an LLM. Its strict type system makes it less likely to make the very dumb mistakes that other languages might allow.
Also want to note that writing the code using LLM doesn't remove the need to have a vision for the design and tradeoffs you make as you build a project. So Jarred and his team are the right kind of people to be able to leverage LLMs to write huge amounts of code.
I question this. Yes, strong enforcement of invariants at compile time helps the LLM generate functional code since it gets rapid feedback and retraces as opposed to generating buggy code that fails at runtime in edge cases.
On the other hand, Rust is a complex language prone to refactoring avalanches, where a small change in a component forces refactoring distant code. If the initial architecture is bad or lacking, growing the code base incrementally as LLMs typically do will tend towards spaghettification. So I fear a program that compiles and even runs ok, but no longer human readable or maintainable.
This may be so, but LLMs are great at slogging through such tedious repercussions.
I would say if the language prevents sloppy intermediate states, that actually makes it more amenable to AI; if you just half-ass a refactor into a conceptually inconsistent state, it’s possible for bad tests to fail to catch it in Python, say. But if many such incomplete states are just forbidden, then the compiler errors provide a clean objective function that the LLM can keep iterating on.
Are you saying this out of personal experience or just hypothesizing? I am working on a large, complex rust project with Claude Code and do not experience this at all.
- write sleek operator-overloading-based code for simple mathematical operations on your custom pet algebra
- decide that you want to turn it into an autograd library [0]
- realise that you now need either `RefCell` for interior mutability, or arenas to save the computation graph and local gradients
- realise that `RefCell` puts borrow checks on the runtime path and can panic if you get aliasing wrong
- realise that plain arenas cannot use your sleek operator-overloaded expressions, since `a + b` has no access to the arena, so you need to rewrite them as `tape.sum(node_a, node_b)`
- cry
This was my introduction to why you kinda need to know what you will end up building with Rust, or suffer the cascade refactors. In Python, for example, this issue mostly wouldn't happen, since objects are already reference-like, so the tape/graph can stay implicit and you just chug along.
I still prefer Rust, just that these refactor cascades will happen. But they are mechanically doable, because you just need to 'break' one type, and let an LLM correct the fallout errors surfaced by the compiler till you reach a consistent new ownership model, and I suppose this is common enough that LLM saw it being done hundreds of times, haha.
[0] https://github.com/karpathy/micrograd
I haven't had any issues with this getting out of hand on >10KLOC vibed rust codebases.
This rewrite is >750k lines of Rust
I've also seen the benefits of Rust for this too. And making the bet that my pg experience will help me make good design choices around many of the things people have been having trouble with in pg for a long time[1]. Excited to see AI make it more possible to improve complex pieces of software than has historically been practical.
[0] https://github.com/malisper/pgrust [1] https://malisper.me/the-four-horsemen-behind-thousands-of-po...
curious about your workflow for running all these accounts. different harnesses in parallel? manually switching in codex? 5.5pro only?
what works for you?
Example of a Claude Code session after 2 hours of "Crunching" that came out without results https://github.com/mohsen1/tsz/pull/4868 (Edit I force pushed to PR to solve the problem, you can see the initial refuse message in the initial version of PR description)
Funny thing is, the last percent of the test have been so hard to work on that Opus 4.7 routinely bails and says "it's too involved or complicated" so I had to add prompts specifically asking it not to bail.
GPT for instance had a lot of issues using git worktrees, and didn't understand how to correctly use it to then merge stuff back into a main branch, vs Claude which seems to do this much more naturally.
GPT also left me with broken tests/code that I had to iterate on manually, Claude is much better about reasoning through code. Primarily Python.
Pretty impressive that it is faster than the Go version already.
It's much faster in single file benchmarks (3 to 5x)
https://tsz.dev/benchmarks/micro
I have optimizations planned for large projects that I'm still fleshing out.
It could be another marketing stunt like Mythos, which is so dangerous to release that Anthropic must be bailed out by the government.
We don't know if the timeline is true, whether Rust experts had a hand in it or even if the reported test suite compliance is true. We are dealing with a company of habitual liars and promoters.
The branch is open.
You can check it out and run the tests if you don’t believe it.
Any sources to back this up?
Bun has had an extremely high amount of crashes/memory bugs due to them using Zig, unlike Deno which is Rust.
Of course, if Bun's Rust port has tons of `unsafe`, it won't magically solve them all, but it'll still get better
Any stats/source? Not that I think it's false
> and the ugly parts look uglier (unsafe) which encourages refactoring.
Looks like Bun owes that to itself to some extent, not solely because of the language
Around 2500 issues with segmentation fault.
https://github.com/oven-sh/bun/issues?q=is%3Aissue%20state%3...
119 open, 885 closed
https://github.com/denoland/deno/issues?q=is%3Aissue%20state...
10 open, 46 closed
> why: I am so tired of worrying about & spending lots of time fixing memory leaks and crashes and stability issues. it would be so nice if the language provided more powerful tools for preventing these things.
Not a hard number obviously but a clear indication those issues exist.
With unlimited tokens make it a lint rule or auto formatter.
What caused you to hallucinate such a broad blanket statement? The point is the memory unsafety issues they ran into would be categorically impossible in safe Rust, which is why they're doing this in the first place.
Yes, obviously you can write high-quality software in Zig. But does Zig categorically reject the kind of bugs Bun was suffering from? Rust does.
In short, I'm accusing you of doing a motte-and-bailey.
Now C does have some aspects which make it more prone to crashes and memory bugs. The less strong statement of "using C results in a higher propensity for crashes/memory bugs than Rust" is absolutely true, I would argue. And both C++ and Rust inherit some (but not all, and not the same) of the aspects which make C prone to memory bugs. (So does Go, I would argue, but less than C++ and Zig.)
This is a classic logic problem - eg “there is an orange cat” doesn’t imply “all cats are orange”.
There’s a lot of leaky crap written in those languages too. One of the core promises of Rust is that the compiler will catch memory issues other languages won’t experience until runtime. If Zig doesn’t offer something similar it’ll make Rust very compelling.
plenty of other companies/entities making high quality software in zig? tigerbeetle, zig itself for example.
Bun's entire history has been a kind of haphazard move as fast as you can story, so...
They're difficult to find, difficult to reason about in big software and you'll always create some. Languages that rule that out are a huge improvement in terms of correctness.
While bounds checking, improved argument passing, typed pointers, proper strings and arrays are an improvement over C, it still suffers from use after free cases.
C++ already prevents many of those scenarios, at least for those folks who don't use it as a plain Better C and actually make use of the standard library in hardened mode. When not, it is naturally as bad as C.
Also to note that the tools that Zig offers to prevent that, are also available in C and C++, but people have to actually use them, e.g. I was using Purify back in 2000's.
Then there is the whole point that Zig is not yet 1.0, and who knows what will still change until then.
https://t3x.org/t3x/0/index.html
https://t3x.org/t3x/0/t3xref.html
Beyond these simple Curses games, there's a 6502 assembler and disassembler, along with a KIM-1 simulator, Micro Common Lisps, and whatnot.
It's going to be hard to compete with someone or a company that has more compute. They will just be able to do things you can't.
When you’re starting with a complete codebase to use as an example and a test suite to check everything it’s much easier to iterate toward the desired goal. The LLM can already see what the goals are and how they’ve been implemented once already, which is a much easier problem than starting from a spec.
My point is, there’s no chance of a “haves and have nots” emerging, any more than electricity turned out that way in the modern world.
In the US, (nearly) full electrification wasn't achieved until the late 1940's/early 1950's - a process of nearly a century. (A moment of personal trivia, my great grandfather worked on crews electrifying rural areas of the midwest.)
Energy costs vary widely across the world and that has enormous capacity for the economies of different countries and their industrial capacity.
Electricity looks pretty even. Higher in Europe but they can afford that.
These models are a race to the bottom just like compute.
This is both amazing and scary; has been for a while now.
45 million lines would get to ~$1.125 mil for the linux kernel.
950k lines for Bun would get to $23,750
use whatever math you like ofc.
Does an Anthropic/employee pay that, no. Even if it's at a loss in terms of company revenue, it's worth burning the private capital for all kinds of other reasons.
1 person did a rust rewrite that took 6 days that would have taken hundreds of engineers more than a year to do.
The entire bun team was only about a dozen people and they wrote it from scratch.
It would not take hundreds of engineers to port the existing codebase to another language.
I think this is a cool experiment, but some of these claims are getting absurd.
I agree it’s still mind blowing compared to before times, though.
This is estimating what, 10 lines per day each? No way translating code is anywhere near that slow.
I'm sure they'll market what you said, but it's so ridiculous that I would hope people would see through this stuff.
Even cheaper would just be to not do it in the first place. Was there a pressing need to rewrite it?
I just hope it's noted when this is eventually marketed how much human effort went into designing and curating the test suite that even enabled this speed in the first place.
A test suite sort of functions exactly like the ideal scenario for current gen llms. A comprehensive enough test suite essentially forms the spec for agents to implement however they see fit - in this case rust.
You could probably throw away the entire actual source code in certain cases and reimplement the whole thing from scratch just giving an agent access to the tests when it's as well crafted as a project like bun.
Ignore the hundreds of thousands of hours put into the original architecture and test suite that made it possible in the first place.
It's a good demonstration of capabilities, sure, but the result itself makes no sense. We'll have to figure out where these capabilities can bring real advantage
People want to use stuff like this as evidence that AI can write entire software systems in a few days. We saw the same shit with the "compiler" they made with a bunch of agents. Literally the only reason it's possible is the hundreds of thousands of man-hours and God knows how much money that were poured into the reference projects before the AI got anywhere near them.
To replicate this kind of thing with a green field project would take an absolute ton of spec work and requirements derivation, which will substantially eat into any savings from having AI generate it.
The accomplishment itself is interesting, and unlocks opportunities to do work no one would have bothered with before, but it doesn't represent what a lot of people desperately want it to.
I am not sure why people sound so astounded, to be honest. This has been my frank experience with the agentic tools, both Codex and Claude, since about December.
When given the right constraints this kind of thing is entirely conceivable.
However the important question not being answered here is: does anybody working on it have a full understanding of what has been built?
My experience having built similar kinds of projects with these tools is: yes, you could do this in a week or two, but then you'll have a month or two of digging through what it made, understanding what was built, and undoing the critical yolo leaps of faith it took that you didn't want.
https://blog.katanaquant.com/p/your-llm-doesnt-write-correct...
The Claude Code C compiler passed 100% of GCC's tests and couldn't even run a hello world...
If you give it just the logical tests, it won't consider speed at all. If you include tests that measure speed and ask the LLM to match the performance, it'll do that too.
It's the same class of error as everything else with LLMs: they have no common-sense context for the things people consider important. If you don't enforce the boundaries, they will ignore them.
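Encoding a performance boundary as a test is straightforward; a minimal sketch, where the `parse` function and the 0.5s budget are made up for illustration:

```python
import time

def parse(blob: bytes) -> int:
    # Stand-in for a real hot path; here it just counts newlines.
    return blob.count(b"\n")

def test_parse_speed_budget():
    blob = b"x\n" * 100_000
    start = time.perf_counter()
    result = parse(blob)
    elapsed = time.perf_counter() - start
    assert result == 100_000
    # Enforce the boundary explicitly, or an LLM (or a human refactor)
    # is free to regress it while still "passing the tests".
    assert elapsed < 0.5, f"parse too slow: {elapsed:.3f}s"

test_parse_speed_budget()
```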
How important is a well-specified objective function? No one knows. We will find out.
LLMs work best when the user defines their acceptance criteria first - https://news.ycombinator.com/item?id=47283337 - March 2026 (422 comments)
So much of the fundamental dynamics of the industry and the job have changed in so little time. Basically overnight.
Some days I am so excited at how much I can do now. You can build anything you want, in basically no time! 100% of my software dreams can be a reality.
Some days I am terrified at what's going to happen to the job market.
Suddenly you can get so much with so little. The world only needs so much software.
Is every company that sells software as their core business model going to go out of business?
What will happen if only certain companies or governments get access to the best models?
Around the time of the dot com crash, there was a decent amount of rhetoric advising students and job seekers against getting into the software industry, because it was getting "too saturated." The thinking was there's just not that much work to go around, especially for the number of people flocking to the field. And the crash just reinforced that narrative.
But even as a student back then, I could tell that there was unlimited scope for software. Pretty much any cognitive thing we do manually could be done in software. I once idly tried to enumerate those and quickly realized there was soooo much to do. Plus, I also understood that the more you do things a new way, a lot more things pop up that we haven't even imagined yet. The possibilities were countless. It was clear that the "saturation" narrative stemmed from a lack of people's imagination and understanding of what software really was.
I just knew that this field would never get saturated because it was impossible to run out of things to write software for.
But these days...
I mean, I know we will always have new software to build as things evolve, which they will do faster than ever with AI. But these days, I wonder if it's now possible to write software faster than we can imagine new things to do.
Yes, although I suggest being careful with that kind of thinking.
https://www.orwell.ru/library/novels/The_Road_to_Wigan_Pier/...
OK, they've got a working prototype, congrats! Now it needs to be put into shape so that all the unsafe blocks are eliminated (maybe with a few tiny exceptions), and the code is turned into maintainable, readable, reasonably idiomatic Rust.
I wonder how long that is going to take.
Not sure that rule is even applicable anymore, but I don't have a better heuristic to make guesses by either.
All the unsafe seems to be FFI?
https://github.com/search?q=repo%3Aoven-sh%2Fbun+unsafe+lang...
> and the code is turned into maintainable, readable, reasonably idiomatic Rust. I wonder how long is it going to take.
This isn't a c2rust rewrite?
The rewrite's in https://github.com/oven-sh/bun/tree/claude/phase-a-port. By running the following command on it, I count about 14,000 unsafe blocks:
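The command itself didn't survive in the comment; a hypothetical reconstruction that produces a count of that kind (note it counts occurrences of the `unsafe` keyword, including `unsafe fn` and comments, not strictly unsafe blocks):

```shell
# Hypothetical reconstruction -- the comment's actual command was not
# included. Run from the root of the claude/phase-a-port checkout:
grep -ro --include='*.rs' 'unsafe' . | wc -l
```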
I am very curious what the numbers are once the test suite passes and after a few passes of reducing the amount of unsafe.
My question is, to people even older than me (and I'm certainly not young), does anyone remember this much enthusiasm about people rewriting C code into (C++/Java/Whatever was new and hot)? Because I don't, but maybe I missed it.
Java's WORA (write once, run anywhere) promise was definitely a thing when it came out. Java applets came out of the woodwork and were the WASM of their day. Even Cisco ran Java for their router UI for a while, which was painful.
More recently, HN went through a period about 10 years ago where every other article ended in " ... written in Go".
The mantra may not have rhymed with "rewrite X in Y" but the spirit was there.
OS and embedded programming require bare metal support and data structures that can run standalone in the absence of an OS and standard library, and the ecosystem must exist to support such a style of programming.
Currently Rust has over 10,000 crates that would theoretically work just fine in a kernel environment.
https://crates.io/categories/no-std
Many find it distasteful, and many find it liberating. I think it broadly correlates with how they feel about expressing themselves in English vs., say, C++.
As a side question, is there anyone who's using LLMs primarily in a non-English mode to program? I suspect there are quite a few people using Mandarin; can someone share a first-hand account?
The authoritative answer for this question would best come from the millions (or tens of millions) of Chinese-speakers who are currently using LLMs to write software.
However, it is my suspicion that you would see no advantages using any language other than English. While there is a certain token-level density to written Chinese, it seems the benefits of this (and the more recent discussion around "caveman talk") are quite limited.
Furthermore, consider that the vast majority of textbooks, technical documentation, blog posts, StackOverflow answers, &c. are originally in English. Historically, where these have been translated to Chinese, the translations have often been of very poor quality (and the terminology and phraseology is often incomprehensible unless you also understand some English.) I would suspect that this makes up the overwhelming majority of the training sets for these models.
That said, my experience using the most recent models, is that they are surprisingly language-agnostic in a way that surpasses readily-available human capability. For example, I can prompt the LLM to translate English into something that uses German grammar, Chinese vocabulary, and Japanese characters, and I'll get an output that is worse than what a human expert could do… but where am I going to find a multilingual expert?
(Of course, I have so far only ever been impressed that a model could generate an output but never impressed with the output it did generate. Everything—translations, prose, code—seems universally sloppy and bland and muddy.)
So the biggest benefit I would anticipate for a Chinese speaker today is that, if they are uninterested in working internationally, they have a significantly smaller dependency on learning English.
After all, `def func():` is only 3 tokens on o200k_base.
Polish prompts tend to be shorter because the language has a lot of verb forms/conjugations; the only "bad" thing for me is that when it's saying "it broke" it tends to use uncanny, blunt words that sometimes make me laugh.
What are your prompts like?
It is the revenge of UML modeling.
Eventually it will get good enough that what comes out of agent work, is a matter of formal specification.
Assuming that code is actually needed and cannot be achieved as pure agent orchestration workflows.
But the timescale still gives me pause… just because AI lets us convert a codebase in 6 days doesn’t mean it’s wise. There are surely a lot of downstream implications! It’s always felt a little like Bun is making up a plan as it goes along (and maybe that’s unfair), this seems to underline the point.
Off-topic: I'm wondering whether, now that more JS finds its way onto our machines and bundle size is a secondary concern for most, a revival of Prepack or projects in the same vein would be worth it, especially with agents.
Few big popular projects use Zig; if they start to move away from it, what will Zig's future look like?
Zig is a very interesting LOW level language, but honestly I think it should be considered for what it is: a better C. I don't think it fits for anything that someone would have written in C++, Java, Haskell or C#. Instead, Rust is competitive with all of these languages when it comes to safety, abstractions and speed. And also C and Zig itself.
Zig has a couple very interesting ideas that make it stand out: comptime and the zig build system.
Alas, Zig is still far from stable. Rust came out to the public in 2012 and became stable (1.0) in 2015. Zig came out to the public in 2016; it's been 10 years now, and some say it's still years away from 1.0.
So, while Rust took 3 years of public development to become stable, Zig is taking 10 to 15. I love the language, but TBH I don't see a great future ahead, especially with LLM advancements that can use safer languages to do the same work. There's no point in risking more memory bugs when the effort of writing code is the same.
Some commenters have remarked they only heard of Zig because of Bun, therefore this is bad for Zig. Not so. In my opinion, there has always been a mismatch. I say with no ill will that a divorce is likely better for both parties. I genuinely believe Bun will be better software once fully converted to Rust.
Thanks for sharing.
But if you want it to coexist with humans, it doesn't seem to work well. It gets in the way of human learning and human communication, essentially making professionals and teams weaker.
If this succeeds, there is no stopping AI, given it will have crossed the Rubicon of human bottlenecks.
I cannot imagine this agent rewrite had anyone review any of the code (you can't at that speed).
I’m positive this will go extremely well :p
---
I wonder how much of this is original size vs rust requiring verbosity vs the LLM being verbose in general.
Not a criticism; I do believe language translation is the one field where AI is mature enough to near-one-shot projects.
https://research.ibm.com/publications/enterprise-scale-cobol...
Honestly, the Zig community seems the most bitter, for whatever reason, while the Rust side seems to me to be simply overstating how great the language is and being pushy in trying to convince others of their ideas.
If this goes through, we can all take SWE lessons from it, but I think the communities will suffer.
(in a VAE-ish way, kl div on the embeddings?)
The answer is... more than any of us could likely afford.
That said, yes, you’re correct that Bun isn’t GPL: https://github.com/oven-sh/bun?tab=License-1-ov-file
Do developers using Rust even know the difference? Like how anyone can basically take all your work and base a proprietary fork on it, with maybe a "thanks" (attribution) if they feel like it? :P
I'd assume the Bun people got a bit more than a thanks when Anthropic acquired them. :)
You also can't take your GPL code (unless you do CLAs with all contributors), convert it to closed source yourself and make a massive VC funded startup around it. Which is about the only other way anyone makes better money from open source than by just working for a big tech company.
Most of them never got into the GPL in the first place.
inb4 .unwrap() / slice / etc hell + livelocks & deadlocks + resource leaks & toctou bugs + larger exposure to supply chain attacks
Still, ~1M LOC ported in a work week (~400 LOC/min, wtf?) with almost all of it working is pretty wild. I hope the guy managed to maintain normal function, because I've found that getting into flow with AI is even more self-consuming and intoxicating than without it, and that was already potentially rather rough.
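The ~400 LOC/min figure follows from reading "a work week" as 5 days of 8 hours (the agent presumably ran longer than that, but this is the arithmetic behind the quoted rate):

```python
# Checking the "~400 LOC/min" figure: ~950k lines over a
# 5-day, 8-hour-a-day work week.
loc = 950_000
minutes = 5 * 8 * 60   # 2400 working minutes

print(loc / minutes)   # ~396 LOC/min
```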
I don’t see how this is a good look for Bun?
If I had a codebase with lots of tests and asked someone else to rewrite it to another language passing the same test suite, I honestly wouldn't expect a great quality job.
I say this because it happened three times at the company I work for: we ran experiments tasking different companies with rewriting the same code in another language. All of them passed (most of) the tests, but code quality was low. If the job is a black box, rely on the I/O to determine quality, not the inner workings.
> it’s basically the same codebase except now we can have the compiler enforce the lifetimes of types and we get destructors when we want them. and the ugly parts look uglier (unsafe) which encourages refactoring.
> why: I am so tired of worrying about & spending lots of time fixing memory leaks and crashes and stability issues. it would be so nice if the language provided more powerful tools for preventing these things.
This makes me trust it more, not less.
But the effort is certainly an exquisite rearrangement of the deck chairs, no?
And on the seventh day Claude ended His work which He had done, and He rested on the seventh day from all His work which He had done
More interestingly: will we need to care about the code at all, at that point?
The LLMs are quite good at rewrites, and even better when provided an 'oracle' like a well-rounded test suite or an existing implementation to work against.
It's part of the reason we keep seeing "I rewrote <library> in <language>" posts on Hacker News, and when you look at the repo it's more like "I prompted Claude to rewrite this repo in Rust" or whatever.
Bun powers Claude.
"I am so tired of worrying about & spending lots of time fixing memory leaks and crashes and stability issues"
Bun was Zig's poster child. If it moves away, Zig becomes yet another random language like Nim or Crystal.
Most takes I've seen are far more nuanced.
Key is that 'progress' has a positive connotation. It is different from change. Mere change - such as new inventions - may not necessarily be aligned with progress in a field, society, etc.
Change may be inevitable, but it's up to us humans to sculpt it into progress.
>> No LLMs for issues.
>> No LLMs for patches / pull requests.
>> No LLMs for comments on the bug tracker, including translation.
[0] https://codeberg.org/ziglang/zig#strict-no-llm-no-ai-policy
The Bun pull request was refused for additional reasons: 'AI is entirely beside the point here...': https://ziggit.dev/t/bun-s-zig-fork-got-4x-faster-compilatio...
None of this is, in the original comment's text, "hating... AI".
As expected. Modula-2 / Object Pascal-style safety was great during the last century, before automatic resource management and improved type systems became common in this century.
Naturally also have to note, wasn't this supposed to be only an experiment, nothing serious?
The Rust rewrite now passes 99.8% of Bun’s pre-existing Linux x64 glibc test suite.