Still writing the blog post about this. Will share more details.
For where this is coming from, skim the bugfixes in the Bun v1.3.14 and earlier release notes. Rust won’t catch all of these - leaks from holding references too long and anything that re-enters across the JS boundary are still on us. But a large % of that list is use-after-free, double-free, and forgot-to-free-on-error-path, which become compile errors or automatic cleanup.
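A minimal Rust sketch (hypothetical code, not from the Bun tree) of how those bug classes become compile errors or automatic cleanup: `Drop` runs on every exit path, so nothing is forgotten on an early error return, and a use-after-move is rejected by the borrow checker.

```rust
// Minimal illustration (hypothetical names, not Bun code) of how Rust
// turns manual-memory bug classes into automatic cleanup or compile errors.

struct Buffer {
    data: Vec<u8>,
}

impl Drop for Buffer {
    fn drop(&mut self) {
        // Runs on *every* exit path, including early error returns, so
        // "forgot to free on the error path" can't happen. In C or Zig
        // this would be a manual free()/deinit() you could miss.
    }
}

fn parse(input: &[u8]) -> Result<usize, &'static str> {
    let buf = Buffer { data: input.to_vec() };
    if buf.data.is_empty() {
        return Err("empty input"); // buf is still dropped here
    }
    Ok(buf.data.len()) // ...and here
}

fn main() {
    assert_eq!(parse(&[1, 2, 3]), Ok(3));
    assert!(parse(&[]).is_err());

    // Use-after-free becomes a compile error instead of a runtime bug:
    //   let s = String::from("hi");
    //   drop(s);
    //   println!("{}", s); // error[E0382]: borrow of moved value: `s`
}
```

The same mechanism (RAII via `Drop`) is what removes the forgot-to-free-on-error-path class without any change to control flow.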
janice1999 33 minutes ago [-]
I'm curious how much this would cost a paying customer. Can you please give us an estimate?
pulsartwin 44 minutes ago [-]
Looking forward to the blog post. Do you plan to run both the Zig and Rust binaries side-by-side across a wide range of real applications (potentially shadowing in production) to weed out bugs?
eddiewithzato 46 minutes ago [-]
I hope this will lead to few or no memory issues when using Bun as a web server
embedding-shape 28 minutes ago [-]
I'd be surprised if they could eliminate memory issues completely, especially considering the amount of `unsafe` the codebase seems to contain.
git rev-parse HEAD && ag "unsafe" src | wc -l
19d8ade2c6c1f0eeae50bd9d7f2a4bf4a2551557
14865
xiphias2 2 hours ago [-]
I'm actually excited that somebody is experimenting with automated translation, but I'm afraid there will be lots of backwards-compatibility issues.
I started looking at the commits, and it's basically solving the "tests not pass" problem by changing the tests themselves. The real work of making it work on programs that are already deployed is only just starting now.
The only silver lining I see is that the server side JS community for some reason is already used to breakages all the time.
tarruda 2 hours ago [-]
> I started looking at the commits, and it's basically solving the "tests not pass" problem by changing the tests themselves
Not sure if these decisions were made by the LLM, but I've always felt that Claude is more prone to doing "shady stuff" like modifying tests than finding correct solutions to problems.
GPT/Codex is more honest in this regard.
InsideOutSanta 1 hours ago [-]
Yeah, Claude is very creative in finding ways of "solving" problems that go against what the user probably intended.
Having said that, after looking at some of the test changes, they seem to be minor things, like changing timeouts, not changing the actual intended semantics of the tests. But it's too much code to review everything, so I might be completely wrong about that, and in real-world usage, even minor changes like these will cause issues.
rzmmm 1 hours ago [-]
I doubt it will end up as a stable release very soon, but I'm happy to be proven wrong. I have some skepticism about this whole rewrite; Jarred Sumner has an enormous internet following and it feels like an ad.
fragmede 53 minutes ago [-]
How do you want to define "ad", and why does it matter? If I tell you I had lunch, I mean, okay, great. If I tell you I had a delicious Coca-Cola with my lunch, sure. If I happen to work at Coca-Cola, does that now become an ad? And at what level does it become an issue? And what is the issue?
q3k 1 hours ago [-]
> solving the "tests not pass" problem by changing the tests themselves
This is great! Just add a random sleep(1) to a test, don't worry about it, it's going to be fine!
onli 40 minutes ago [-]
On the other hand, the sleep fits the test description better, "should allow reading stdout after a few milliseconds". Even if 1 != 'a few'. It's possible the part of the commit reverted here, https://github.com/oven-sh/bun/commit/a42bf70139980c4d13cc55..., defeated the purpose of the test by removing the sleep. I don't think adding the sleep back is an example of AI cheating.
Strange test though either way.
robryan 38 minutes ago [-]
To be fair the commit message `revert proc.exited change in spawn.test.ts` suggests the sleep was there originally.
Imustaskforhelp 1 hours ago [-]
> I started looking at the commits, and it's basically solving the "tests not pass" problem by changing the tests themselves. The real work of making it work on programs that are already deployed is only just starting now.
Wow, this is definitely quite something.
Can Jarred comment on whether he has read the commits, or respond to your comment? If this turns out to be correct, it has basically made me lose the small faith I had in what Bun is doing.
xiphias2 1 hours ago [-]
It's OK, we'll see how it goes. He and Anthropic are giving it to us for free, and nowadays just forking the old version is easy if a project needs that. Even maintenance is much easier using LLMs.
I'm happy it's not a project I'm depending on, but a large enough project had to try this at some point so that we all can learn from how it goes.
I think this is why Anthropic bought Bun: so that they can sell big code translation as a feature to all the banks with COBOL code that they have wanted to get rid of for a long time.
Still, those banks / enterprises won't appreciate the number of unit test changes.
And I agree with another comment that Codex xhigh is much better for these kinds of tasks, but it's still hard at this kind of scale.
Jarred 15 minutes ago [-]
[dead]
2 hours ago [-]
tkel 3 hours ago [-]
Turns out "it's just an experiment, you all are overreacting" was just a lie to dampen criticism.
Merging a complete rewrite in another language in 9 days seems insane to me. Maybe I'm just too cautious, but with something like this I'd split it off as a separate binary and get some heavy-use customers involved as testers first, to see if it causes any unforeseen problems, before slowly expanding it out.
I'd want to be pretty damn confident it won't cause any regressions before sunsetting the original codebase in favor of this one.
goyozi 1 hours ago [-]
I don’t think you’re too cautious. Big upgrades and rewrites are somewhat of a „work hobby” of mine, and this seems waaay too fast. I don’t know how the Bun canary process works, and I guess their test suite is better than typical projects', but still… I can’t imagine this working out well without testing it on a variety of big projects for a significant amount of time.
There’s probably loads(?) of observable behaviors that people rely on, consciously or not. Even _if_ the new thing is 100% spec compliant, it might still be breaking or otherwise problematic for heavy users.
That said, I’d love to be proven wrong. I use Bun from time to time on small stuff and I enjoy it, so I wish them well (:
progbits 1 hours ago [-]
> too cautious
No, you are perfectly normal.
The people who in one week decided to replace the whole codebase for a widely used tool with code no human has seen are the crazy ones.
franciscop 2 hours ago [-]
It seems it was an experiment at that moment, and that it went well? I do hope they release it under 2.x though; I cannot imagine how a 1M LoC change can break in so many ways, especially if what xiphias says is true:
> It seems it was an experiment at that moment, and that it went well?
There’s no way they can know that for sure. A change of this magnitude cannot go from experiment to success in such a short time frame. Even if all the code were 100% correct, you can’t call it a success until it’s battle tested in real world scenarios for a while, and that is impossible without time. Same way you can’t cook properly by throwing food into a volcano. It’s not just about the temperature.
Either the “experiment” claim was a lie or they are being irresponsible.
randypewick 2 hours ago [-]
The experiment might have turned out well, or the author might have spent enough time to bring it to a place they were comfortable with.
Frustration moves mountains, I don't think this rewrite was done lightly.
keyle 1 hours ago [-]
I'm no believer... 9 days later... Lessssssgoooooooo wooooooooo <sunglasses and rave>
mapcars 1 hours ago [-]
Well, it was 9 days ago; at the time they were not confident, but maybe the results were insanely good.
impulser_ 2 hours ago [-]
"We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely."
jen20 2 hours ago [-]
People conflate “high chance of X” with “X will happen” all the time. See elections, for example.
tclancy 58 minutes ago [-]
> was just a lie to damp criticism.
Citation needed. Couldn't it just as easily have been one person being as suspicious of the task as everyone else seemed to be?
frangonf 1 hours ago [-]
So the geniuses in the datacenter prefer to rewrite the full codebase in another language instead of maintaining and improving their own fork or contributing to making the current language better.
Impressive to rewrite 1MLOC in a week, yes, but this is more the job of a million monkey programmers crammed in a datacenter than a bunch of geniuses. And I would know, since I'm a monkey programmer who is in danger now... Or maybe the Zig team is in greater danger, since their brains hold the genius juice the clankers are missing and they should have it by 2027...
q3k 52 minutes ago [-]
No matter how I look at this, it's churn for the sake of churn.
Even if the translation was free and into ideal idiomatic Rust (and it's obviously not - it's now Zig with Rust syntax) then this would be churn for the sake of churn.
At some project scale the language really stops being any limiting factor, and you're instead mostly dealing with working past architectural decisions, integration of large changes, deep optimization, steering the codebase into alignment with project roadmaps and long-term goals, regression testing as features get introduced, maintenance of multiple release trains... Experienced software engineers mostly stop caring about simple things like the programming language choice at that point, because whatever issues come from that choice have already been resolved. What matters is stability, careful orchestration of large changes and a stable and comprehensive test suite.
simonklee 45 minutes ago [-]
Say what you want, but for people building products on Bun, this is bad news for the foreseeable future.
arealaccount 17 minutes ago [-]
I guess it’s time to have Claude rewrite my Bun app in Deno
sensanaty 56 minutes ago [-]
Love seeing the tests themselves getting modified, with random `sleep(1)` thrown around in a few of them. This bodes well, I pray some idiot at some large AI co actually ends up using this garbage in prod
eqvinox 2 hours ago [-]
If this goes wrong even in the slightest, the ridicule about a drug dealer getting high on their own supply will be neverending and grim.
teterphiel 51 minutes ago [-]
not enough people are emotionally prepared for the possibility that it doesn't go wrong even in the slightest
janice1999 13 minutes ago [-]
It's going to work for the most part. Most people know that. It's a file-by-file, mostly function-by-function, conversion from one low-level language to another with a very large test suite (and lots of Rust `unsafe` to work around differences). I've done that for C tools and it's fine, with some obscure edge cases here and there. The challenges are going to be making the new, very ugly, alien codebase into idiomatic Rust in the future, and adding features or debugging the complex issues. I wish the developers luck. They're in for a slog.
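The "unsafe to work around differences" point can be sketched with a toy example (hypothetical, not from the Bun diff): a mechanical port of a Zig-style pointer-plus-length function keeps the raw pointer and therefore needs `unsafe`, while the idiomatic Rust version uses a slice and needs none. Closing that gap is exactly the future slog described here.

```rust
// Toy example (not from the Bun diff): why a mechanical Zig-to-Rust port
// tends to be unsafe-heavy, and what "make it idiomatic later" means.

// Literal port: the Zig code passed a raw pointer and a length, so the
// translation keeps that shape and must use `unsafe` to dereference.
fn sum_ported(ptr: *const u8, len: usize) -> u32 {
    let mut total = 0u32;
    for i in 0..len {
        // The caller must guarantee ptr..ptr+len is valid - the same
        // implicit contract the original Zig code relied on.
        total += u32::from(unsafe { *ptr.add(i) });
    }
    total
}

// Idiomatic Rust: a slice carries its length and the compiler enforces
// the bounds contract, so no `unsafe` is needed at all.
fn sum_idiomatic(bytes: &[u8]) -> u32 {
    bytes.iter().map(|&b| u32::from(b)).sum()
}

fn main() {
    let bytes = [10u8, 20, 30];
    assert_eq!(sum_ported(bytes.as_ptr(), bytes.len()), 60);
    assert_eq!(sum_idiomatic(&bytes), 60);
}
```

Both functions compute the same thing; the difference is only where the validity contract lives (in a comment vs. in the type system), which is why a like-for-like translation leaves so much `unsafe` behind.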
debugnik 43 minutes ago [-]
Having seen some of the diffs, it's already going wrong in my view.
adityashankar 25 minutes ago [-]
Curious, can you elaborate on this?
perching_aix 2 hours ago [-]
PR so thick, the page failed to load the first time I opened it, and the comments still continue to fail to load. Absolutely hilarious. Though that may be just GitHub having a normal one, hard to tell these days.
1,009,257 lines added
4,024 lines removed
6,755 commits
2,188 files touched
I haven't the slightest clue how anyone would even remotely hope to review this. I guess by just using even more AI? Or maybe by throwing some über hardcore lint pass onto it? It really seems like more an exercise in risk assessment than code review.
chrysoprace 23 minutes ago [-]
Not sure there is much of a point in reviewing a port of this size. It has >1000 instances of `unsafe` and uses the same patterns as the Zig code, according to Jarred. It feels like a vibe-ported version of what the TypeScript team is doing in porting TypeScript to Go with codemods.
9999gold 39 minutes ago [-]
Wondering what they will do when rust rejects a pr from them.
TeriyakiBomb 1 hours ago [-]
I hope the Deno lot take the opportunity to capitalise on this
janice1999 40 minutes ago [-]
Has he estimated the token cost for this (if he had to pay that is)? I'm curious how much this would cost a paying customer.
nDRDY 2 hours ago [-]
Why didn't they ask Claude to remove all of the `unsafe` at the same time??
feverzsj 1 hours ago [-]
How are they gonna do refactoring, bugfixes or other maintenance on generated code? Ask the LLM?
pixel_popping 1 hours ago [-]
Yes, only LLMs from now on.
HEX4AGON 44 minutes ago [-]
I'm curious how many dollars in LLM usage this rewrite cost
bharxhav 1 hours ago [-]
This canary will never leave the mine. (unless Anthropic opens their wallet again)
ninjahawk1 2 hours ago [-]
“+1,000,000” changes in a single commit is insane.
ahepp 1 hours ago [-]
The really interesting thing to do would be to ask the agents to submit the diff as a coherent patchset...
dluan 1 hours ago [-]
> "The codebase is otherwise largely the same. The same architecture, the same data structures."
ihateolives 1 hours ago [-]
Ship of Theseus.
40 minutes ago [-]
paulddraper 2 hours ago [-]
And 6700 commits.
eqvinox 2 hours ago [-]
No wonder GitHub is down
/s
mapcars 1 hours ago [-]
>No async rust.
I wonder why that deserves an explicit statement? Is there anything wrong with async Rust?
andai 1 hours ago [-]
I don't know their reasoning (not much Rust), but this was on the front page last week: "Async Rust never left the MVP state" (https://news.ycombinator.com/item?id=48019163)
It's not that weird to end up with this when translating C/Zig/C++ to Rust. A first pass can use unsafe and then when the code is in Rust you can work on reducing the unsafe.
Trying to eliminate all unsafe as part of the rewrite, whether done by human or LLM, would be making too big of a change in the process of rewriting.
q3k 39 minutes ago [-]
> would be making too big of a change in the process of rewriting
God forbid the already unreviewable -710kloc/+1mloc change get any bigger!
K0nserv 36 minutes ago [-]
Sure, but that's kind of orthogonal. Imagine doing this by hand: I still think going like-for-like with the Zig, even if that means a lot of unsafe, is a good approach.
But I suppose if you are already using LLMs it's more reasonable to try and go from Zig straight to Rust with no/minimal unsafe.
janice1999 44 minutes ago [-]
The benefit of using Rust is that you know exactly where the unsafe code is so you can handle it explicitly and deliberately to avoid issues by imposing carefully crafted constraints... oh.
PudgePacket 3 hours ago [-]
+1,009,257 -4,024
wild
andrepd 2 hours ago [-]
Least unstable js project
lucasloisp 2 hours ago [-]
The follow-up PR removing the Zig source files being auto-tagged by Bun's own CI as "ai slop" is so funny: https://github.com/oven-sh/bun/pull/30680
That was me, not CI, marking it as slop. It kept around 60 .zig files that should’ve been moved to .rs files.
wateralien 1 hours ago [-]
Deno's approach from the beginning seems to have proven out.
classicposter 58 minutes ago [-]
It's interesting that the developer who spearheaded the hype around Zig abandoned the engineering without addressing the segfaults.
They could have also taken the approach of gradually porting from Zig to Rust via FFI.
Yes, this is a slop show by the AI lab.
sutib 1 hours ago [-]
"And Icarus laughed as he fell, for he knew to fall means to once have soared"
3 hours ago [-]
2 hours ago [-]
minikomi 1 hours ago [-]
Now translate it into zig!
aiscoming 1 hours ago [-]
Vibe coders keep saying that you can now have 100x productivity, that you can write a million lines of code in a week and do what would take a team of 10 experienced developers a year.
Where are all these million-line vibe-coded projects? I don't see them. It's all hype.
pixel_popping 1 hours ago [-]
Bun is now literally vibe-coded; that's your proof. And Bun developers will solely use LLMs at some point (pretty close to "vibe coding").
q3k 1 hours ago [-]
Show me some gold instead of a continuous stream of pickaxes.
andai 1 hours ago [-]
This PR appears to be over a million lines (though GitHub won't load for me).
Of course the quality is the real question. I haven't had amazing results with LLMs with Rust, but they're less bad at it than they are at Zig, which is probably the reason for the rewrite.
At least in this case the original code was written carefully by hand, so the design is sane, and now just the auto-translation is in question. Now it just needs to be battle tested.
ares623 1 hours ago [-]
Anyone using Bun in production excited for this release? (other than Anthropic of course)
jwpapi 2 hours ago [-]
This will go down in history as the biggest mistake of software engineering of all time.
Bun is the runtime of Claude Code, which is the core product of a trillion-dollar company, which now sits on a vibe-coded app of which not a single person in the world has a proper mental model.
ageitgey 2 hours ago [-]
I don't know, there have been some pretty bad software mistakes, possibly bigger than a PR to convert an app to Rust: https://en.wikipedia.org/wiki/Therac-25
I hope no one ever builds (or even worse, vibe codes) a radiation treatment machine.
nicce 2 hours ago [-]
Maybe this is the best marketing trick for Claude Code ever. Maybe there was pressure from Anthropic to do this and prove the value. Even partial success is enough to prove the value, justify the value and usage, and AI dependency even further.
tarruda 1 hours ago [-]
And as long as Bun doesn't break Claude Code, which only uses a subset of its APIs, this might just pay off.
1 hours ago [-]
ares623 1 hours ago [-]
It only needs to survive long enough for the IPO
applfanboysbgon 2 hours ago [-]
Claude Code itself is purely vibecoded, both CC and Bun leads are saying that humans are not writing code at Anthropic anymore. It is amazing how much money they intend to squander, because it's all funny money to them, investors just give it to them hand over fist for them to burn. Developing wrappers around the model isn't even the hard part and yet they're going to burn themselves to the ground getting high on their own supply.
NitpickLawyer 1 hours ago [-]
> Claude Code itself is purely vibecoded [...] money they intend to squander [...] going to burn themselves to the ground getting high on their own supply.
This really really really isn't the burn you think it is. Going from 0 to 2B+ in revenue from a "purely vibecoded" thing is what they've said they're doing, and what they've actually done. Like in already done. It's not going back, no matter how many nuh nuh people write. They've already shown this can be done.
People will continue to think that this is some sort of a gotcha. But it's actually precisely what they've done: they showed that dogfooding works. If this works, why not x y z?
applfanboysbgon 1 hours ago [-]
2B+ in revenue on hundreds of billions in investments and future commitments is completely worthless. Anybody can turn $100b into $2b, that's not a fucking accomplishment. And to the extent that something is driving any revenue, it is the model, not the TUI. Any success Claude is having is despite the godawful TUI, not because of it.
robryan 32 minutes ago [-]
They appear to be lining up a funding round at a $900 billion valuation. Or, to be more conservative, they already raised at $380 billion. A long way from worthless.
NitpickLawyer 46 minutes ago [-]
claude.ai (their ChatGPT equivalent) was nowhere before CC came about. CC was coded in a few weeks by people, then for a few months by people + CC, then mostly with CC taking the wheel. It is without a doubt the main reason why they're successful. It is also the main reason why their coding models are as good as they are. They've incorporated the early data into their training recipes, and evolved the model + harness together.
mapcars 1 hours ago [-]
On the other hand, they might be super confident in the results, and if it goes well they might use it as an example of how good Claude is.
pixel_popping 1 hours ago [-]
Well, realistically, humans gave us software that is full of security holes (and bugs); which one have you seen that a human perfected the first time around? Give AI some time as well, to be fair.
keyle 1 hours ago [-]
Won't touch it with a ten foot pole.
IshKebab 1 hours ago [-]
My initial reaction was that this is pure insanity, but in fairness this is a fairly 1:1 port of existing code, so the developer's mental model of it should still match fairly well.
For instance, look at this Zig function: https://github.com/oven-sh/bun/blob/ed1a70f81708d7d137de8de0... versus this Rust version: https://github.com/oven-sh/bun/blob/ed1a70f81708d7d137de8de0...
I did pick that at random, but it does look like the best case. I skimmed through a lot of the Rust code and there's a surprisingly small amount of `unsafe`.
Still pretty insane to merge this in such a short time with so little testing, but I can easily think of bigger software engineering mistakes. Hell it's not like Bun even needs to be commercially successful any more.
jwpapi 1 hours ago [-]
It’s still 400k more lines
dheatov 2 hours ago [-]
I for one am REALLY GLAD to see it consumes itself.
jwpapi 1 hours ago [-]
How life feels not using bun
dmitrijbelikov 1 hours ago [-]
Rust, Zig and TS went into a bun... /s
maipen 57 minutes ago [-]
HN overreacting again.
I trust Jarred to make the right decisions regarding Bun, which seems to be his passion.
Bun has always been amazing since I first tried it; it had some bugs along the way, which didn’t last long.
Anything bad that comes from this, will simply be fixed.
I hope more software does this and gets rid of its segfault-producing code written in C++ and other unsafe languages
I can think of a few.