NHacker Next
The Zig project's rationale for their anti-AI contribution policy (simonwillison.net)
branko_d 2 hours ago [-]
From https://kristoff.it/blog/contributor-poker-and-ai/:

"Unfortunately the reality of LLM-based contributions has been mostly negative for us, from an increase in background noise due to worthless drive-by PRs full of hallucinations (that wouldn’t even compile, let alone pass CI), to insane 10 thousand line long first time PRs. In-between we also received plenty of PRs that looked fine on the surface, some of which explicitly claimed to not have made use of LLMs, but where follow-up discussions immediately made it clear that the author was sneakily consulting an LLM and regurgitating its mistake-filled replies to us."

feverzsj 2 hours ago [-]
Pretty much sums up the LLM fanbase.
discreteevent 44 minutes ago [-]
I don't think it's the complete fanbase. However, there are lots of people in the world who live their whole life by vibing. It's a viable way to live and sometimes it's the only way to live. But they have a very loose relationship with truth and reason. Programming was a domain that filtered out those people because they found it hard to succeed at it. LLMs have changed that, and it's a huge problem. It's hard to know if LLMs will end up being a net win for the industry. They may speed up the good programmers a little, but those people were able to program anyway without LLMs. They will speed up the bad programmers a lot, and that's where the balance sheet goes into the red.
kay_o 22 minutes ago [-]
> However, there are lots of people in the world who live their whole life by vibing

Why are they often so desperate to lie and non-consensually harass others with their vibing rather than be honest about it? Why do they think they are "helping" with hallucinated rubbish that can't even build?

I use LLMs. It is not difficult to: ethically disclose your use, double check all of your work, ensure things compile without errors, not lie to others, not ask it to generate ten paragraphs of rubbish when the answer is one sentence, and respect the project's guidelines. But for so many people this seems like an impossible task.

automatic6131 7 minutes ago [-]
> Why do they think they are "helping" with hallucinated rubbish that can't even build?

Because they can't tell the difference between what the machine is outputting and what people have built. All they see is the superficial resemblance (long lines of incomprehensible code) and the reward that the people writing the code have got, and they want that reward too.

drchickensalad 7 minutes ago [-]
You're asking why oil doesn't act like water. It's not really an impossible task, it's just not one they agree with.
ramon156 9 minutes ago [-]
It's the same as cheating in a game. You are given an """advantage""", so lying about it seems like the best option
hitekker 5 hours ago [-]
Apparently, the noise around the AI policy came from Bun's developers saying the policy blocks upstreaming their performance PR. But the real reason seems to be that the PR's code itself isn't in great shape and introduces unhealthy complexity: https://ziggit.dev/t/bun-s-zig-fork-got-4x-faster-compilatio...

> Parallel semantic analysis has been an explicitly planned feature of the Zig compiler for a long time, and it has heavily influenced the design of the self-hosted Zig compiler. However, implementing this feature correctly has implications not only for the compiler implementation, but for the Zig language itself! Therefore, to implement this feature without an avalanche of bugs and inconsistencies, we need to make language changes.

adrian_b 2 hours ago [-]
Yes, that reply provides convincing arguments for not merging the Bun fork, as it interferes with Zig's own roadmap for achieving even better results, while continuing to improve the whole language.
kunley 2 hours ago [-]
Not only this, but also:

Bun's fork will exhibit nondeterministic behavior.

bonzini 5 hours ago [-]
A single PR for a 3000-line addition would, in all likelihood, be rejected anyway.
jeffmess 4 hours ago [-]
omnimus 3 hours ago [-]
When somebody comments on a PR with “Incredible work, Jacob. It is an honor to call you my colleague.” then it's safe to assume it's an out-of-the-ordinary contribution. Pretty much falling outside of the “in all likelihood”.

A 3000-line LLM commit is not that.

defmacr0 44 minutes ago [-]
Also 95% of those 30k lines changed are fully self-contained inside of the aarch64 directory and of the remaining changes it looks like the majority is just adding "aarch64" as another item into an existing list. There are a few core changes that to me look like they could be done in their own PRs, but also core maintainers get to decide if they want to apply bureaucracy to their own work.
thomascountz 3 hours ago [-]
No description provided. I love this PR. But yeah, try being anyone besides Jacob and submitting that!
KronisLV 3 hours ago [-]
> In successful open source projects you eventually reach a point where you start getting more PRs than what you’re capable of processing. Given what I mentioned so far, it would make sense to stop accepting imperfect PRs in order to maximize ROI from your work, but that’s not what we do in the Zig project. Instead, we try our best to help new contributors to get their work in, even if they need some help getting there. We don’t do this just because it’s the “right” thing to do, but also because it’s the smart thing to do.

I feel like if their goal is to prioritize contributors over contributions, it'd also logically follow that they should try to have descriptions where possible? Just to make exploring any set of changes and learning easier? Looked it over briefly, no Markdown or similar doc changes there either.

I mean the changes can be amazing, it's just that adding some description of what they are in more detail, alongside the considerations during development, for new folks or anyone wanting to learn from good code would also be due diligence.

vga1 3 hours ago [-]
How would you differentiate a 3000 line LLM commit made by the best models and good AI processes from a 3000 line commit made by the best human developer?

edit Okay, I set the bar too high here with "best human developer" and vague "good AI processes". My bad. Yes, LLM is not quite there yet.

vurudlxtyt 3 hours ago [-]
A personal relationship and trust, as seems to be the case here?
saagarjha 3 hours ago [-]
Read it?
wiseowise 3 hours ago [-]
By using my brain.
dnikolovv 2 hours ago [-]
Don't be ridiculous! We don't do that anymore.
IshKebab 3 hours ago [-]
It's still fairly obvious just by skimming the code. The best AI models are still quite far from the best human developers in ability and especially in code quality.
matwood 2 hours ago [-]
When the best AI models are the same or better than the best[1] human developers, what then?

We're already at the point talking about best vs. best.

IshKebab 2 hours ago [-]
If that happens and we have a way of reliably knowing if some code is produced to that high quality, then I think we probably can accept that AI coding is the only sensible option.

We definitely are not close to that point though and it's unclear if/when we will get there.

vga1 5 minutes ago [-]
It seems to me that people might be arguing with conflicting hidden premises here. "AI Coding" is a spectrum that could mean something as simple as letting the LLM proofread your changes and then act on those with your own human brain, or it could mean just telling the agent what you want and let it rip and tear until it is done.

If I do the latter and submit a PR to something like Zig, I'll certainly be caught doing it. If I do the former, my PR will be better without anybody noticing how it got better.

Blanket banning all of these seems like a bad idea to me, but I do not presume to have any say on Zig project's decisions. Their point of preferring human contact is superb, frankly. Probably a different kind of problem in an open-source project staffed with a lot of remote working people, where human contact is scarce.

irishcoffee 2 hours ago [-]
How can AI possibly be better than “the best” when the corpus of training data now includes its own slop in addition to all the code by new devs/lazy devs/bad devs scattered all over the internet? Law of averages applies here.
vga1 8 minutes ago [-]
Because LLM models are obviously much more than the sum of their parts.
flohofwoe 3 hours ago [-]
Jacob is part of the core team, not a random outside contributor.
slekker 3 hours ago [-]
Very different context: that PR is from a maintainer, and trusted member of Zig, which surely discussed the implementation/design internally as well
daishi55 4 hours ago [-]
What’s the point in debating the PR quality? The policy explicitly forbids all LLM code, so that policy is of course the “real reason”.
lelanthran 3 hours ago [-]
> What’s the point in debating the PR quality?

Because the pro-group are whining that the policy is preventing the merge, when in actual fact even if the policy did not exist, the PR is crap anyway.

Aeolun 3 hours ago [-]
Of course the policy is preventing the merge. That’s literally the point of the policy…
lelanthran 3 hours ago [-]
> Of course the policy is preventing the merge. That’s literally the point of the policy…

In this case it isn't the blocker - the fact that the dev took the time to read the PR in detail, comment on it, and provide reasons why it could not be merged makes it very clear to me that the policy wasn't the blocker.

If they were going to enforce the policy for this PR, they wouldn't have bothered to read it. The only reason to read it is to see if the policy is waived for this specific PR.

baq 3 hours ago [-]
OTOH why bother to polish the PR if it won't get accepted anyway?
lelanthran 3 hours ago [-]
> OTOH why bother to polish the PR if it won't get accepted anyway?

As the Zig maintainer so patiently explained, no amount of "polish" can fix the PR because it is misaligned to the correctness that they require.

IOW, that PR is so far off the reservation that, unless it is completely rewritten, it won't be accepted.

baq 50 minutes ago [-]
it could have been rewritten, rewriting PRs is cheap today, but that isn't the question. the question is, would it have been accepted had it met all the quality and engineering standards and full disclosure that it was 90%+ LLM generated?
lelanthran 41 minutes ago [-]
> it could have been rewritten, rewriting PRs is cheap today, but that isn't the question. the question is, would it have been accepted had it met all the quality and engineering standards and full disclosure that it was 90%+ LLM generated?

In this case it looks like the answer is "Yes"; the PR was not dismissed immediately, it was first examined in great detail!

Why would the maintainer expend effort on something that was going to be rejected anyway?

baq 29 minutes ago [-]
because the policy is clearly 'reject' and yet significant time has been spent - either effort was wasted or policy is at best 'not implemented'.
lelanthran 18 minutes ago [-]
> either effort was wasted or policy is at best 'not implemented'.

I don't understand this PoV - have you ever come across a policy in any environment that wasn't subject to case-by-case exceptions?

Even in highly regulated environments (banking/fintech, Insurance, Medical, etc), policies are subject to exceptions and exemptions, done on a case-by-case basis.

The notion, in this specific case, that "well they rejected it because of policy" is clearly nonsense and I don't understand why people are pushing this so hard when the explanation of why an exemption can't be made for this specific PR is public, accessible and, I feel, already public knowledge.

raincole 1 hours ago [-]
Because it's Bun, which is practically the use-case testimonial for Zig.
lccerina 2 hours ago [-]
It seems that Zig people are following the path of ZeroMQ [1]: "To enforce collective ownership of the project, which increases economic incentive to Contributors and reduces the risk of hijack by hostile entities."

A healthy contributor community is more important than mere code performance, quantity of features, lines of code, etc.

[1] https://zguide.zeromq.org/docs/chapter6

jart 6 hours ago [-]
> This makes a lot of sense to me. It relates to an idea I've seen circulating elsewhere: if a PR was mostly written by an LLM, why should a project maintainer spend time reviewing and discussing that PR as opposed to firing up their own LLM to solve the same problem?

The same argument applies to open source itself. Why use someone's project when you can just have the robot write your own? It's especially true if the open source project was vibe coded. AI and technology in general makes personalization cheap and affordable. Whereas earlier you had to use something that was mass produced to be satisfactory for everyone, now you have the hope of getting something that's outstanding for just you. It also stimulates the labor economy, because you have lots of people everywhere reinventing open source projects with their LLMs.

simonw 5 hours ago [-]
> Why use someone's project when you can just have the robot write your own?

I've been thinking about this a bunch recently, and I've realized that the thing I value most in software now isn't robust tests or thorough documentation - an LLM can spit those out in a few minutes. It's usage. I want to use software which other people have used before me. I want them to have encountered the bugs and sharp edges and sanded them down.

earleybird 3 hours ago [-]
Depth of use over the lifetime of an app is a quality all its own that is often not appreciated. A recurring pattern at $dayjob is that a new manager or director will join a business unit and declare an existing app the most terrible, no good, horrible app they've ever seen, and that they're going to fix that. A year and a half later the new app is finally delivered with 80% of the original functionality and a fresh set of bugs. The new dev team sees the surface functionality but misses a lot of the hard-earned nuance the old system accrued over time. This is a pattern that existed long before LLMs.
mormegil 3 hours ago [-]
ZeelRajodiya 2 hours ago [-]
Good read!
einpoklum 2 minutes ago [-]
> an LLM can spit those out in a few minutes.

It may be able to spit out text that purports to be that, in a few minutes. But for most software, an LLM will not be able to spit out robust tests - let alone useful documentation. (And documentation which just replicates the parameter names and types is thorough...ly useless.)

tovej 3 hours ago [-]
An LLM most definitely cannot spit out robust tests or thorough documentation. It can spit out some tests or some documentation, but they will not cover the user perspective or edge cases unless those are already documented somewhere. That's verified by both experience and just thinking about it for two seconds.

The sanding down you refer to is what generates those tests and documentation.

anp 4 hours ago [-]
I feel similarly but IIUC I think that doesn’t strictly require an open source development model. I’ve benefited a huge amount from consuming and contributing to open source projects and I’m a bit worried that the “unit economics” changing might break some of the social dynamics upon which the ecosystem is built.
porridgeraisin 4 hours ago [-]
Yep. I realised the same. No one reads docs, or goes through tests. Either way, it's easy to write useless tests. And easy to write useless docs. I don't think most even read the code. Now the difference is that it has become possible to write useless code.

So it's just the fact that others have already gone through the motions before I did. That's it really. I suppose in commercial settings, this is even more true and perhaps extends to compliance.

matkoniecz 3 hours ago [-]
> No one reads docs, or goes through tests.

I regularly do both when trying to use library, especially unfamiliar to me.

alex1sa 4 hours ago [-]
[dead]
jart 4 hours ago [-]
I value software that reveals knowledge. The frontier LLMs were trained on all the code that institutions had been keeping to themselves. So they're revealing programming know-how on a scale that just wasn't possible with open source. LLMs are the ultimate Prometheus. Information is more accessible and useful now than it's ever been.
wiseowise 3 hours ago [-]
> The frontier LLMs were trained on all the code that institutions had been keeping to themselves.

Lolz! I haven’t encountered “code that institutions had been keeping to themselves” that got even remotely close to OSS in quality.

Antibabelic 4 hours ago [-]
I promise you, "the code that institutions had been keeping to themselves" is not nearly as special or good as you are implying here.
adrian_b 2 hours ago [-]
True.

I have worked during several decades in many companies, located in many countries, in a few continents, from startups to some of the biggest companies in their fields. Therefore I have seen many proprietary programs.

On average, proprietary programs are not better than open-source programs, but usually worse, because they are reviewed by fewer people and because frequently the programmers who write them may be stressed by having to meet unrealistic timelines for the projects.

The proprietary programs have greater quantity, not quality, by being written by a greater number of programmers working full-time on them, while much work on open-source projects is done in spare time by people occupied with something else.

Many proprietary programs can do things which cannot be done by open-source programs, but only because of access to documentation that is kept secret in the hope of preventing competition.

While lawyers, and other people who do not understand how research and development is really done, put a lot of weight in the so-called "intellectual property" of a company, which they believe to be embodied in things like the source code of proprietary programs or the design files for some hardware, the reality is that I have nowhere seen anything of substantial value in this so-called IP. Everywhere, what was really valuable in the know-how of the company was not the final implementation that could be read in some source code, but the knowledge about the many other solutions that had been tried before and they worked worse or not at all. This knowledge was too frequently not written down in any documentation. Knowing which are the dead ends is a great productivity boost for an experienced team, because any recent graduate could list many alternative ways of solving a problem, but most of them would not be the right choice in certain specific circumstances.

applfanboysbgon 3 hours ago [-]
The claim is also just categorically untrue. The largest source of training data by far is publicly available code on e.g. Github, so it mostly just gives you a way to recycle already-available code, without crediting the author, while allowing you to pretend you own it.
jart 3 hours ago [-]
So you're both saying all the alpha in Claude comes from open source devs like me? Even when I'm wrong I'm right.
chromacity 4 hours ago [-]
I remember hearing the same arguments in the early 2010s, when the "3D printing revolution" was just around the corner. Why would anyone buy anything anymore if you can download a model and print it in the privacy of your home? And make it infinitely customizable?

The whole point of having a civilization is that most things in life can be made someone else's problem and you can focus on doing one thing well. If I'm a dentist or if I run a muffler shop, there are only so many hours in a day, so I'd probably rather pay a SaaS vendor than learn vibecoding and then be stuck supervising a weird, high-maintenance underling that may or may not build me the app with the features I need (and that I might not be able to articulate clearly). There are exceptions, but they're just that, exceptions. If a vendor is reasonable and makes a competent product, I'll gladly pay.

The same goes for open source... even if an LLM could reliably create a brand new operating system from scratch, would I really want it to? I don't want to maintain an OS. I don't want to be in charge of someone who maintains an OS. I don't necessarily trust myself to have a coherent vision for an OS in the first place!

gausswho 5 hours ago [-]
That only holds true for the smallest tier of open source projects. Past a certain point of complexity, it's unlikely you can expect the robot to read your mind well enough to provide something of high quality and 'outstanding for just you'.

The Zig project is certainly far beyond such capability.

jart 3 hours ago [-]
You have to push the robot to be as fanatical as you are. It holds so much back, always aiming to do the simple normal thing that most people do, rather than the top-notch stuff it knows.
8n4vidtmkvmk 4 hours ago [-]
I'm finding this out the hard way. I set out to build a one-page app. I thought it would take a day. It's 98% vibe coded at this point. Even with AI implementing everything, it's taken several weekends and many evenings. And not because AI is doing a bad job; it's just that as I see it come together, I have more and more feature requests. I've got a couple dozen left, but I can't just let the AI chew through them all at once. I'm effectively QA now. Have to make sure everything is just right.
skeledrew 4 hours ago [-]
LLM access is not yet universally available. There are those who can't exactly afford it. And there are also those with access who hit occasional or perennial issues, like Claude outages and general degraded performance over time. For example, a couple of months ago when I first started using Claude, I was easily making good progress on multiple projects within a week. Nowadays I'm hardly getting through much of anything, as most of the time Claude is just showing spinners, and it also feels like the code quality has taken a nosedive.
jillesvangurp 4 hours ago [-]
I've been seeing a drop in PRs against my repositories. I have a couple of repositories with around a hundred stars. Nothing spectacular, but they were getting occasional PRs until last year. This year I've had almost none so far. My theory is that LLMs prefer sticking to mainstream projects. And since lots of developers are now leaning heavily on LLMs, they are biased toward ignoring most of what I provide.

And you indeed get a lot of wheel reinvention by LLMs because that is now cheap to do. So rather than using some obscure thing on Github (like my stuff), it's easier to just generate what you need. I've noticed this with my own choices in dependencies as well. I tend to just go with what the LLM suggests unless I have a very good reason not to.

bee_rider 5 hours ago [-]
Most people don’t have the ability to read code well enough to determine if an LLM output is good or not. And most people don’t have subscriptions to models that can develop non-trivial programs…

Maybe this will be a real problem in a couple years though.

dawnerd 4 hours ago [-]
Code aside, most people don't even know how to describe what they actually want it to do, and LLMs are still a loooong way away from mind reading. I've seen developers struggle to even write down what they want. Simple demos like they love to show off with snake-like games are fun and all but they're nothing like the complex opensource apps everyone seems to think we'll just generate with a simple prompt.
vga1 3 hours ago [-]
I think this ignores the amount of work needed to make LLM contributions high quality. It's much less work than a purely human contribution, but it's definitely not zero.

So centralizing that common work is a benefit of open-source just as much with LLMs as it was before.

LeCompteSftware 4 hours ago [-]
>> Whereas earlier you had to use something that was mass produced to be satisfactory for everyone

As someone who recently started using OpenSCAD for a project I find this attitude quite irritating. You certainly did not "have to" use popular tools.

The OpenSCAD example is particularly illuminating because it's fussy and frustrating and clearly tuned towards a few specific maintainers; there's a ton of things I'd like changed. But I would never trust an LLM to do it! "Oh the output looks fine, cool" is not enough for a CAD program. "Oh, there are a lot of tests, cool" great, I have no idea what a thorough CAD test suite looks like. I would be a reckless idiot if I asked Claude to make me a custom SCAD program... unless I put in a counterproductive amount of work. So I'm fine with OpenSCAD.

I am also sincerely baffled as to how this stimulates the "labor economy." The most obvious objection is that Anthropic seems to be the only party here getting any form of economic benefit: the open-source maintainers are just plain screwed unless they compromise quality for productivity, and the LLM users are trading high-quality tooling built by people who understand the problem for shitty tooling built by a robot, in exchange for uncompensated labor. It only stimulates the "labor economy" in a Bizarro Keynesian sense, digging up glass bottles that someone forgot to put the money in.

I have seen at least 4 completely busted vibe-coded Rust SQLite clones in the last three months, happily used by people who think they don't need to worry their pretty little heads with routine matters like database design. It's a solved problem and Claude is on the case! In fact unlike those stooopid human SQLite developers, Claude made it multithreaded! So fucking depressing.

FeepingCreature 3 hours ago [-]
This is funny because I was in the same situation, and actually used Claude to make a custom CAD program inspired by OpenSCAD :) https://fncad.github.io

You definitely need to have a strong sense of code design though. The AIs are not up to writing clean code at project scale on their own, yet.

LeCompteSftware 3 hours ago [-]
This is a good example of what I mean! fnCAD appears to be a significantly buggier and highly incomplete version of OpenSCAD, where AI essentially grabbed the low-hanging fruit - albeit an impressively large amount of fruit - and left you with the hard parts. I fail to see how this solved any problems. Maybe it was an experiment, which is fine. But it's not even close to a viable CAD product, even by OpenSCAD's scruffy FOSS standards, and there's no feasible way to get it there without a ton of human work.

Not trying to denigrate the work here, as such. But this certainly didn't convince me that using AI to replace OpenSCAD (or any other major open-source project) is a good idea. The LLMs still aren't even close to being able to pull it off.

jart 3 hours ago [-]
Anthropic will probably do what Google did in the 2000s, which is give jobs to all the open source developers whose work helped them get there.

Civilization isn't monotonic. People keep solving the same problems over and over again, telling the same stories with a different twist. For example in 1964 having a GUI work environment with a light pen as your mouse was a solved problem on IBM System/360. They had tools similar to CAD. So why don't we all just use that rather than make the same mistakes again. Each time a new way of doing things comes out, people get an opportunity to rewrite everything.

wiseowise 3 hours ago [-]
> Why use someone's project when you can just have the robot write your own?

Because it is incredibly expensive to write a replacement for semi-complex software? Good luck asking frontier models to write a replacement for Zig, Docker, VSCode, etc.

matkoniecz 3 hours ago [-]
> Why use someone's project when you can just have the robot write your own?

Iff it is doable, then it would be worth considering it as alternative.

> It also stimulates the labor economy, because you have lots of people everywhere reinventing open source projects with their LLMs.

not sure what you mean by that

julenx 3 hours ago [-]
The article explains Zig's stance in further detail, but the quoted part on its own caught my attention because my reading of it is rather "pro human communication" instead of "anti-AI".
kennykartman 33 minutes ago [-]
They're banning all AI though, so it looks pretty much anti-AI to me.
KronisLV 3 hours ago [-]
> If a PR was mostly written by an LLM, why should a project maintainer spend time reviewing and discussing that PR as opposed to firing up their own LLM to solve the same problem?

That's a fair thing to ask, though it seems like people will arrive at very different conclusions there.

jillesvangurp 4 hours ago [-]
It's a good rationale. But it points the finger at a real bottleneck in open source development: the burden of manually reviewing contributions, and the need to automate that with AI as well. Reviews were already becoming a problem before AI. Lots of projects have been dealing with a large influx of contributions from inexperienced developers all over the world looking to boost their CVs by padding their GitHub statistics. It's the same dynamic that destroyed Stack Overflow, which, thanks to AI, has been largely sidelined now. And now that AI is here, those same inexperienced developers are using it at scale to generate even more garbage contributions.

Doing manual reviews of everything is very labor intensive and not scalable. However, AIs are pretty good at doing code reviews and verifying adherence to guard rails, contributor guidelines, and other rules. It's not perfect, but it's an underused tool. Both by reviewers and contributors. If your contribution obviously doesn't comply with the guidelines, it should be rejected automatically. The word "obviously" here translates into "easy to detect with some AI system".

Projects should be using a lot of scrutiny for contributions by new contributors. And most of that scrutiny should be automated. They should reserve their attention for things that make it past automated checks for contribution quality, contributor reputability, adherence to whatever rules are in place, etc. Reputability is a good way to ensure that contributions from reputable sources get priority. If your reputation is not great, you should expect more scrutiny and a lower priority.
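The kind of automated gate described above could be sketched roughly like this. This is a hypothetical illustration; the checks, field names, and thresholds (`max_first_pr_lines`, `author_merged_prs`, etc.) are all invented here, not taken from any real project's CI:

```python
# Hypothetical sketch of automated pre-review triage: cheap mechanical
# checks run before any human looks at a PR. Reputation is approximated
# by the author's count of previously merged PRs.
from dataclasses import dataclass, field

@dataclass
class PullRequest:
    author_merged_prs: int   # rough proxy for contributor reputability
    lines_changed: int
    compiles: bool           # result of a CI build step
    has_description: bool

@dataclass
class TriageResult:
    accepted_for_review: bool
    reasons: list = field(default_factory=list)

def triage(pr: PullRequest, max_first_pr_lines: int = 500) -> TriageResult:
    reasons = []
    if not pr.compiles:
        reasons.append("does not compile")
    if not pr.has_description:
        reasons.append("missing description")
    # First-time contributors get a size cap; established ones skip it.
    if pr.author_merged_prs == 0 and pr.lines_changed > max_first_pr_lines:
        reasons.append("oversized first-time PR")
    return TriageResult(accepted_for_review=not reasons, reasons=reasons)

# A 10k-line first-time PR that doesn't compile is rejected automatically,
# before a maintainer spends any attention on it.
result = triage(PullRequest(author_merged_prs=0, lines_changed=10_000,
                            compiles=False, has_description=False))
print(result.accepted_for_review)  # False
print(result.reasons)
```

In practice the "easy to detect" checks could go further (lint, test pass, an LLM-based guideline check), but the point is the same: only PRs that clear the mechanical bar consume human review time.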

lugu 3 hours ago [-]
I don't know Zig, but I think that is not the problem here. Not exactly. The real question is: why spend all that effort to grow and align a pool of contributors if contributions are cheap and correct? Code review is not just about checking that the code does what it says it does, and that it does it according to the guidelines. The review is a touch point to discuss where the project is heading and how to get there. That is the most important part in the long run. As a collective human effort, it needs coordination. Some of it happens via the review process (especially for those not part of the core team that drafts the roadmap). One could document all those micro-decisions with their rationale, but it might end up being a game of whack-a-mole. IMO, projects which allow AI usage need to spend way more effort on coordination (and quality assurance).
lelanthran 47 minutes ago [-]
> The real question is: why spend all that effort to grow and align a pool of contributors if contributions are cheap and correct?

Until the contributions are cheap and correct, you need valuable contributors more than you need the contributions.

Your point would be valid once we reach a point where contributions are both correct and cheap. Right now they are only cheap.

f311a 2 hours ago [-]
You still have to review everything manually again anyway. It's a compiler for a language; bugs and bad architecture decisions cost a lot. They moved to Codeberg, so there are fewer garbage PRs now. They try to grow a culture where you're expected to deliver good code in the PRs so the review takes less time.

It takes like 5 minutes to spot garbage PRs manually. An LLM can flood you with a wall of text where only half of the stuff makes sense. Also, they can't really spot bad architecture. It's a compiler in an unpopular language, don't forget that.

emj 3 hours ago [-]
> [you can] stop accepting imperfect PRs in order to maximize ROI from your work, but that’s not what we do in the Zig project

The real bottleneck when you want to grow is connecting with the right people. An LLM does not help with that if you want to build a community. When you use an LLM to skip the need to understand a problem, how are you ever going to build a reputation that I can trust?

The post is not about reputation; it's about seeing how people respond and work with you in a community.

EDIT: I see that you frame it as a help and a tool, and sure, it might work, but I feel like it is just another obstacle.

baq 3 hours ago [-]
> why should a project maintainer spend time reviewing and discussing that PR as opposed to firing up their own LLM to solve the same problem?

perhaps that's what the maintainers should be doing after all. it still takes time and tokens, though; neither is free.

I'd personally rather have the maintainers spend the time writing as many docs and specs as possible so the future LLMs have strong guardrails. zig's policy will be completely outdated in a couple of years, for better or worse. someone will take bun's fork, add a codegen improvement here, a linker improvement there, and suddenly you'll have a better, faster zig outside of zig.

aflag 2 hours ago [-]
If it gets outdated, they can revise their policy. Right now it is sensible. We're in the early days of this type of AI, and we don't know what the end game will be.

Someone forking it and making it better with AI is a possibility. If that happens, we'll know it would have been better for the project for the maintainers to just review the code. If that happens, they can probably become maintainers of the fork. Or maybe they won't like that work and can just go do something else.

felipeerias 4 hours ago [-]
The other side of this is that open source projects that allow AI tools will be more restrictive towards new contributors.

This already happens to some degree on large software projects with corporate backing (Web engines, compilers, etc.), where it is often not trivial to start contributing as an independent individual.

Reasonable people can disagree on whether one approach is inherently better than the other, as ultimately they seem to be optimising for different goals.

throwjd848rjr 3 hours ago [-]
Imagine getting contributions from someone who has no access to the build system and tests.

If I have a test harness and an LLM workflow set up, it is easier to just write the new code myself. I am not giving away my "secret sauce". And I will not have a debate about "why this simple feature needs 1000 new tests...", plus two days just to make a full release build.

For a merge I have to do 99% of the work anyway (analyze, autotest, build, smoke, regression test). I usually merge smaller commits just to be polite (and not to look like a one-man show), but there is no way I can accept a large refactoring!

nicman23 4 hours ago [-]
yeah, giving an LLM git blame and git grep has saved me a lot of time of doing boring basically re.
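For what it's worth, the setup can be as simple as dumping `git grep`/`git blame` output into a context string for the model. A minimal sketch (the throwaway repo, file name, and the hand-it-to-the-LLM step are all hypothetical; only standard git commands are used):

```python
# Hypothetical sketch: gather `git grep` / `git blame` output into one
# context string that can be pasted into (or piped to) an LLM session.
# Builds a throwaway repo so the commands have something to run against.
import subprocess
import tempfile

def git(repo, *args):
    """Run a git command in `repo` and return its stdout."""
    return subprocess.run(["git", "-C", repo, *args],
                          capture_output=True, text=True, check=True).stdout

repo = tempfile.mkdtemp()
git(repo, "init", "-q")
with open(f"{repo}/main.zig", "w") as f:
    f.write("fn main() {}\n")
git(repo, "add", "main.zig")
git(repo, "-c", "user.name=demo", "-c", "user.email=demo@example.com",
    "commit", "-qm", "init")

# Context an LLM could use to explain where the code lives and who
# last touched it: matching lines plus per-line authorship.
context = git(repo, "grep", "-n", "main", "--", "main.zig") \
        + git(repo, "blame", "main.zig")
print(context)
```

In a real repo you would of course skip the setup and just concatenate grep/blame output for the files you care about.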
dmitry_dv 3 hours ago [-]
[dead]
buggymcbugfix 5 hours ago [-]
One reason I love writing production code in Ur/Web is that LLMs are incapable of synthesising something even remotely resembling it. Keeps me on my toes.

I think this is a great policy by the Zig team.

wk_end 4 hours ago [-]
Ur/Web! That's something I haven't heard about in ages. Is it still in active development? In what circumstances are you using it? Fun, your own startup, is some secret big commercial user of it...?
lukaslalinsky 2 hours ago [-]
On multiple occasions over the last months, I have wished the Zig/ZSF team would use LLMs. I've found many copy&paste errors that simply wouldn't exist if mundane tasks were delegated to a good LLM. Even in the Zig community, I've seen PRs to some projects I'm interested in boasting that they were all human-made, yet containing all kinds of trivial logical errors that even the worst LLM would catch.
grayhatter 23 minutes ago [-]
no cite?
lccerina 2 hours ago [-]
If you see them, why don't you help squash them?
lukaslalinsky 2 hours ago [-]
I did.
6 hours ago [-]
GaryBluto 2 hours ago [-]
I don't think I've ever heard anything positive about Zig. Every time I've seen the project mentioned, it's been them using bizarre black-and-white moral judgements to justify stupid decisions.
lukaslalinsky 2 hours ago [-]
You need to look past this. Zig is an excellent low-level language. Thanks to its comptime features, you can have high-level-looking APIs while staying down to the metal. It's not for everyone, obviously, but as a language it is really good.
Pay08 2 hours ago [-]
You have to be wilfully blind, then. It gets rather frequently praised on HN (as much as any niche language can be), and they certainly don't make black-and-white moral judgements often.
trklausss 2 hours ago [-]
Honestly, that doesn't sound too bad. It does not say you can't use LLMs; it just doesn't let an LLM be the author of a commit. Meaning, if you as a developer make yourself responsible for what the LLM wrote, go ahead. But be ready to answer the technical questions, be ready to get grilled in code review, and be ready to get the call if a CVE lands on that part of the code...
shirro 3 hours ago [-]
People shouldn't have to justify not putting up with bullshit. It is a sensible default.
slopinthebag 4 hours ago [-]
Go zig! I don't use the language but I totally respect where they're coming from and their mission and ethics.

For those who are pissed because a large OSS project isn't accepting LLM generated slop: Fuck off!

slopinthebag 4 hours ago [-]
Very convenient of Mr. Willison to omit the fact that Bun's upstream changes are total garbage and would not be upstreamed regardless of any policy on LLM-generated code, since they are, as a Zig core team member articulated in a classier way, shite.
throwa356262 3 hours ago [-]
Also, the Zig team is already working on other approaches that are better and more stable than what the Bun team did:

https://ziggit.dev/t/bun-s-zig-fork-got-4x-faster-compilatio...

000ooo000 3 hours ago [-]
Notable quotes:

>There’s the 4x speedup claimed by the Bun team, already available on Zig 0.16.0!

>Each [incremental] update is taking less than 0.4s, compared to the 120+ seconds taken to rebuild with LLVM. In other words, incremental updates are over 300 times faster on this codebase than fresh LLVM builds are. In comparison, an enhancement capped at a 4x improvement is pretty abysmal. [..] Again, this feature is available in Zig 0.16.0—you can use it!

feverzsj 5 hours ago [-]
No human should trust any bullshit made by a bullshit machine.
2 hours ago [-]
techpulselab 2 hours ago [-]
[flagged]
njanne 2 hours ago [-]
[dead]
marlburrow 4 hours ago [-]
[flagged]
jwzxgo 7 hours ago [-]
[dead]
mapontosevenths 5 hours ago [-]
> unless it's coming from a known and trusted developer.

That's exactly the sketchy part here. They turned down known, working, and tested code that came from a partner (Bun) due to this policy. Code that 4x'd compile speed.

A general ban makes sense based on their rationalization ("contributor poker"[0]). A total and inflexible ban can lead to a worse outcome for everyone, though.

If a senior, experienced, contributor vouches for the code it shouldn't matter if they hand crafted it on stone tablets, generated it with yarrow sticks, or used gpt-3.

[0] https://kristoff.it/blog/contributor-poker-and-ai/

lelanthran 4 hours ago [-]
> That's exactly the sketchy part here. They turned down known, working, and tested code that came from a partner (Bun) due to this policy. Code that 4x'd compile speed.

No; they turned it down because the vibe-coded PR was crap.

> The rewritten type resolution semantics were designed to avoid these issues, but Bun’s Zig fork does not incorporate the changes (and has not otherwise solved the design problems), which means their parallelized semantic analysis implementation will exhibit non-deterministic behavior. That’s pretty much a non-starter for most serious developers: you don’t want your compilation to randomly fail with a nonsense error 30% of the time.
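
To make the non-determinism point concrete, here's a deliberately silly toy (nothing like Zig's actual resolver): each "analysis step" resolves a type that references another, and if two mutually referential types are resolved without a deterministic ordering, the result depends entirely on which one runs first:

```python
# Toy model: resolving a name either sees its dependency's finished
# result or falls back to a forward reference, depending on ordering.
def resolve_all(schedule):
    resolved = {}
    for name, dep in schedule:
        resolved[name] = resolved.get(dep, f"forward-ref({dep})")
    return resolved

# Same two work items, two different schedules (as a parallel
# analyzer without ordering guarantees might produce).
a_first = resolve_all([("A", "B"), ("B", "A")])
b_first = resolve_all([("B", "A"), ("A", "B")])
print(a_first)  # {'A': 'forward-ref(B)', 'B': 'forward-ref(B)'}
print(b_first)  # {'B': 'forward-ref(A)', 'A': 'forward-ref(A)'}
assert a_first != b_first  # same inputs, schedule-dependent output
```

Scale that up across thousands of declarations on real threads and you get exactly the "randomly fails 30% of the time" behavior the quote describes.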

lmm 5 hours ago [-]
> If a senior, experienced, contributor vouches for the code it shouldn't matter if they hand crafted it on stone tablets, generated it with yarrow sticks, or used gpt-3.

The flip side of that is that if such a contributor vouches for code that turns out to be poor-quality, this should severely damage their reputation. I've found far too many "senior" developers will give AI a pass on poor coding practices.

JoshTriplett 5 hours ago [-]
superb_dev 5 hours ago [-]
A standout paragraph from that thread:

> Put more simply, we are going to make these enhancements, but hacking them in for a flashy headline isn’t a good outcome for our users. Instead we’re approaching the problem with the care it deserves, so that when we ultimately ship it, we don’t cause regressions.

These exact changes are already on the roadmap and Bun’s PR is rushing ahead.

mapontosevenths 5 hours ago [-]
Thanks. That explains away most of my concern.
feverzsj 5 hours ago [-]
Quite the contrary: Bun's developers don't even understand the language spec. Their slop didn't use the same type resolution semantics as Zig, which makes their implementation exhibit non-deterministic behavior.
CaptainFever 2 hours ago [-]
> LLM assistance breaks that completely. It doesn't matter if the LLM helps you submit a perfect PR to Zig - the time the Zig team spends reviewing your work does nothing to help them add new, confident, trustworthy contributors to their overall project.

The conclusion does not follow from the premises. They are assuming fully autonomous agents submitting PRs, not LLMs used as tools the way Bun did. As always, the most ethical thing to do is to just ignore any anti-LLM policies and not disclose anything.

peter_griffin 2 hours ago [-]
>As always, the most ethical thing to do is to just ignore any anti-LLM policies and not disclose anything

How does this have anything to do with ethics? It's their project, not yours; they can reject your PR for whatever reason, including your using LLMs to develop it. Also, they're not assuming autonomous agents submitting PRs. They're saying that they do not accept PRs where any part of the thinking process was outsourced to an LLM.

Even if you disagree with their opinion, the ethical thing to do is to not interact and move on, not to try to sneak in your LLM-assisted PRs without the maintainers' consent.

crabmusket 2 hours ago [-]
Can you elaborate on the ethics of expressly ignoring the wishes of the project ownership?