> If writing the code is the easy part, why would I want someone else to write it?
Exactly my takeaway from current AI developments as well. I am also confused by corporate leadership and management who seem to think they are immune to AI developments. If AI ever does get to the point where it can write flawless code, what exactly makes them think they will be any better at directing these tools than the developers who've been working with this technology for years? Their job security rests precisely ON THE FACT that we are limited by time and need coordinated teams of humans to create larger projects. If that limitation falls, I feel like their jobs would be the first on the chopping block, long before mine as a developer. Competition from tech-savvy individuals would be massive overnight. A very weird horse to bet on, unless you are part of a frontier AI company that actually controls the resources.
wiseowise 1 minute ago
> AI changed all of that. My low-effort issues were becoming low-effort pull requests, with AI doing both sides of the work. My poor Claude had produced a nonsense issue causing the contributor's poor Claude to produce a nonsense solution.
> The thing is, my shitty AI issue was providing value.
Seems like the shitty AI issue did more harm than good?
anileated 19 minutes ago
"Just show me the prompt."
If you don't have time, just write the damn issue as you normally would. I don't quite understand why one would waste so many resources and so much compute to expand some lazily conceived half-sentence into ten paragraphs, as if it scores them points.
If you don't have time to write an issue yourself or to carefully proofread whatever the LLM makes up for you, whom are you trying to fool by making it look pretty? At least if it is visibly lazy, everyone knows to treat it with an appropriate grain of salt.
Even if you are one of those who likes to code by correcting LLMs all the time, surely you understand that if your LLM can make candy out of poo when you post an issue, it can do the exact same thing when it processes that issue and produces a PR. Likely next month it will do a better job of parsing your quick writing, and having it immediately "upscaled" would only hinder future performance.
HPsquared 1 minute ago
The context window before a prompt is often large and contains all sorts of information, though; it wouldn't be just the prompt in isolation.
andai 23 minutes ago
> Once we had the context we needed and the alignment on what we would do, the final implementation would have been almost ceremonial. Who wants to push the button?
> ...
> But if you ask me, the bigger threat to GitHub's model comes from the rapid devaluation of someone else's code. When code was hard to write and low-effort work was easy to identify, it was worth the cost to review the good stuff. If code is easy to write and bad work is virtually indistinguishable from good, then the value of external contribution is probably less than zero.
> If that's the case, which I'm starting to think it is, then it's better to limit community contribution to the places it still matters: reporting, discussion, perspective, and care. Don't worry about the code, I can push the button myself.
andai 29 minutes ago
> Once or twice, I would begin fixing and cleaning up these PRs, often asking my own Claude to make fixes that benefited from my wider knowledge: use this helper, use our existing UI components, etc. All the while thinking that it would have been easier to vibe code this myself.
I had an odd experience a few weeks ago, when I spent a few minutes trying to find a small program I had written. It suddenly struck me that I could have asked for a new one, in less time than it took to find it.
smusamashah 5 minutes ago
We need a Chrome extension like SponsorBlock that publicly tags slop contributors. Maintainers could then just reject PRs from those users. A rough sketch of the idea below.
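To illustrate, here is a minimal sketch of what such an extension's content script might look like, in TypeScript. The list endpoint and the "a.author" DOM selector are assumptions for illustration, not a real API or a guaranteed part of GitHub's markup:

    // Hypothetical content script for a SponsorBlock-style "slop" tagger.
    // Both the list endpoint and the "a.author" selector are assumptions.
    const LIST_URL = "https://example.com/slop-contributors.json";

    async function flagSlopAuthors(): Promise<void> {
      // Assumed payload: a JSON array of flagged usernames.
      const flagged: string[] = await (await fetch(LIST_URL)).json();
      const flaggedSet = new Set(flagged.map((u) => u.toLowerCase()));

      // GitHub currently marks issue/PR authors with "a.author" links;
      // a real extension would have to track changes to this markup.
      document.querySelectorAll<HTMLAnchorElement>("a.author").forEach((el) => {
        const user = el.textContent?.trim().toLowerCase() ?? "";
        if (flaggedSet.has(user)) {
          const badge = document.createElement("span");
          badge.textContent = " [flagged as slop]";
          badge.style.color = "red";
          el.after(badge);
        }
      });
    }

    flagSlopAuthors().catch(console.error);

The hard part, as with SponsorBlock, wouldn't be the script but moderating the shared list.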