My AI Adoption Journey (mitchellh.com)
libraryofbabel 11 hours ago [-]
This is such a lovely balanced thoughtful refreshingly hype-free post to read. 2025 really was the year when things shifted and many first-rate developers (often previously AI skeptics, as Mitchell was) found the tools had actually got good enough that they could incorporate AI agents into their workflows.

It's a shame that AI coding tools have become such a polarizing issue among developers. I understand the reasons, but I wish there had been a smoother path to this future. The early LLMs like GPT-3 could sort of code enough for it to look like there was a lot of potential, and so there was a lot of hype to drum up investment and a lot of promises made that weren't really viable with the tech as it was then. This created a large number of AI skeptics (of whom I was one, for a while) and a whole bunch of cynicism and suspicion and resistance amongst a large swathe of developers. But could it have been different? It seems a lot of transformative new tech is fated to evolve this way. Early aircraft were extremely unreliable and dangerous and not yet worthy of the promises being made about them, but eventually with enough evolution and lessons learned we got the Douglas DC-3, and then in the end the 747.

If you're a developer who still doesn't believe that AI tools are useful, I would recommend you go read Mitchell's post, and give Claude Code a trial run like he did. Try and forget about the annoying hype and the vibe-coding influencers and the noise and just treat it like any new tool you might put through its paces. There are many important conversations about AI to be had, and it has plenty of downsides, but a proper discussion begins with close engagement with the tools.

keyle 8 hours ago [-]
Architects went from drawing everything on paper, to using CAD products over a generation. That's a lot of years! They're still called architects.

Our tooling just had a refresh in less than 3 years and it leaves heads spinning. People are confused, fighting for or against it. Torn even between 2025 and 2026. I know I was.

People need a way to describe it, from 'agentic coding' to 'vibe coding' to 'modern AI-assisted stack'.

We don't call architects 'vibe architects' even though they copy-paste 4/5th of your next house and use a library of things in their work!

We don't call builders 'vibe builders' for using earth-moving machines instead of a shovel...

When was the last time you reviewed the machine code produced by a compiler? ...

The real issue this industry is facing is the phenomenal speed of change. But what are we really doing? That's right, programming.

lelanthran 2 minutes ago [-]
> We don't call architects 'vibe architects' even though they copy-paste 4/5th of your next house and use a library of things in their work!

Maybe not, but we don't allow non-architects to vomit out thousands of diagrams that they cannot review, and that are never reviewed, which are subsequently used in the construction of the house.

Your analogy to software is fatally and irredeemably flawed, because you are comparing the regulated and certification-heavy production of content, which is subsequently double-checked by certified professionals, with an unregulated and non-certified production of content which is never checked by any human.

atomicnumber3 5 hours ago [-]
"When was the last time you reviewed the machine code produced by a compiler?"

Compilers will produce working output given working input literally 100% of the time in my career. I've never personally found a compiler bug.

Meanwhile AI can't be trusted to give me a recipe for potato soup. That is to say, I would under no circumstances blindly follow the output of an LLM I asked to make soup. While I have, every day of my life, gladly sent all of the compiler output to the CPU without ever checking it.

The compiler metaphor is simply incorrect and people trying to say LLMs compile English into code insult compiler devs and English speakers alike.

LiamPowell 4 hours ago [-]
> Compilers will produce working output given working input literally 100% of the time in my career.

In my experience this isn't true. People just assume their code is wrong and mess with it until they inadvertently do something that works around the bug. I've personally reported 17 bugs in GCC over the last 2 years and there are currently 1241 open wrong-code bugs.

Here's an example of a simple to understand bug (not mine) in the C frontend that has existed since GCC 4.7: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105180

grey-area 1 hours ago [-]
These are still deterministic bugs, which is the point the OP was making. They can be found and solved once. Most of those bugs are simply not that important, so they never get attention.

LLMs on the other hand are non-deterministic and unpredictable and fuzzy by design. That makes them not ideal when trying to produce output which is provably correct - sure, you can generate output and then laboriously check it - some people find that useful, some are yet to find it useful.

It's a little like using Bitcoin to replace currencies - sure you can do that, but it includes design flaws which make it fundamentally unsuited to doing so. 10 years ago we had rabid defenders of these currencies telling us they would soon take over the global monetary system and replace it; nowadays, not so much.

rhubarbtree 1 hours ago [-]
This argument is disingenuous and distracts rather than addresses the point.

Yes, it is possible for a compiler to have a bug. No, that is in no way analogous to AI producing buggy code.

I’ve experienced maybe two compiler bugs in my twenty year career. I have experienced countless AI mistakes - hundreds? Thousands? Already.

These are not the same and it has the whiff of sales patter trying to address objections. Please stop.

pcl 21 minutes ago [-]
“I've never personally found a compiler bug.”

I remember the time I spent hours debugging a feature that worked on Solaris and Windows but failed to produce the right results on SGI. Turns out the SGI C++ compiler silently ignored the `throw` keyword! Just didn’t emit an opcode at all! Or maybe it wrote a NOP.

All I’m saying is, compilers aren’t perfect.

I agree about determinism though. And I mitigate that concern by prompting AI assistants to write code that solves a problem, instead of just asking for a new and potentially different answer every time I execute the app.

andai 3 hours ago [-]
This is obviously beside the point but I did blindly follow a wiener schnitzel recipe ChatGPT made me and cooked for a whole crew. It turned out great. I think I got lucky though; the next day I absolutely massacred the pancakes.
D-Machine 2 hours ago [-]
I genuinely admire your courage and willingness (or perhaps just chaos energy) to attempt both wiener schnitzel and pancakes for a crew, based on AI recipes, despite clearly limited knowledge of either.
anematode 3 hours ago [-]
I'm trying to track down a GCC miscompilation right now ;)
keyle 3 minutes ago [-]
I feel for you :D
keyle 4 hours ago [-]
You're correct, and I believe this is only a matter of time. Over time it has been getting better and will keep doing so.
blks 47 minutes ago [-]
It won’t be deterministic.
bigstrat2003 3 hours ago [-]
Maybe. But it's been 3 years and it still isn't good enough to actually trust. That doesn't raise confidence that it will ever get there.
keyle 3 hours ago [-]
You need to put this revolution in scale with other revolutions.

How long did it take for horses to be super-seeded by cars?

How long did power tools take to become the norm for tradesmen?

This has gone unbelievably fast.

grey-area 1 hours ago [-]
I think things can only be called revolutions in hindsight - while they are going on it's hard to tell if they are a true revolution, an evolution or a dead-end. So I think it's a little premature to call Generative AI a revolution.

AI will get there and replace humans at many tasks; machine learning already has. I'm not completely sure that generative AI will be the route we take. It is certainly superficially convincing, but those three years have not in fact seen huge progress IMO - huge amounts of churn and marketing versions, yes, but not huge amounts of concrete progress or upheaval. Lots of money has been spent for sure! It is telling for me that many of the real founders at OpenAI stepped away - and I don't think that's just Altman; they're skeptical of the current approach.

PS Superseded.

jen729w 3 hours ago [-]
> Meanwhile AI can't be trusted to give me a recipe for potato soup.

Because there isn’t a canonical recipe for potato soup.

Jensson 3 hours ago [-]
That is not the issue; any potato soup recipe would be fine. The issue is that it might fetch values from different recipes and give you an abomination.
D-Machine 2 hours ago [-]
This exactly. I cook as a passion, and LLMs just routinely very clearly (weighted) "average" together different recipes to produce, in the worst case, disgusting monstrosities, or, in the best case, just a near-replica of some established site's recipe.
allworms 7 hours ago [-]
> We don't call architects 'vibe architects' even though they copy-paste 4/5th of your next house and use a library of things in their work!

> We don't call builders 'vibe builders' for using earth-moving machines instead of a shovel...

> When was the last time you reviewed the machine code produced by a compiler?

Sure, because those are categorically different. You are describing shortcuts of two classes: boilerplate (library of things) and (deterministic/intentional) automation. Vibe coding doesn't use either of those things. The LLM agents involved might use them, but the vibe coder doesn't.

Vibe coding is delegation, which is a completely different class of shortcut or "tool" use. If an architect delegates all their work to interns, directs outcomes based on whims not principles, and doesn't actually know what the interns are delivering, yeah, I think it would be fair to call them a vibe architect.

We didn't have that term before, so we usually just call those people "arrogant pricks" or "terrible bosses". I'm not super familiar but I feel like Steve Jobs was pretty famously that way - thus if he was an engineer, he was a vibe engineer. But don't let this last point detract from the message, which is that you're describing things which are not really even similar to vibe coding.

djhn 1 hours ago [-]
I think you are right in placing emphasis on delegation.

There’s been a hypothesis floating around that I find appealing. Seemingly you can identify two distinct groups of experienced engineers. Manager, delegator, or team lead style senior engineers are broadly pro-AI. The craftsman, wizard, artist, IC style senior engineers are broadly anti-AI.

But coming back to architects, or most professional services and academia to be honest, I do think the term vibe architect as you define it is exactly how the industry works. An underclass of underpaid interns and juniors do the work, hoping to climb higher and position themselves towards the top of the ponzi-like pyramid scheme.

rhubarbtree 1 hours ago [-]
Reasoning by analogy is usually a bad idea, and nowhere is this worse than talking about software development.

It’s just not analogous to architecture, or cooking, or engineering. Software development is just its own thing. So you can’t use analogy to get yourself anywhere with a hint of rigour.

The problem is, AI is generating code that may be buggy, insecure, and unmaintainable. We have as a community spent decades trying to avoid producing that kind of code. And now we are being told that productivity gains mean we should abandon those goals and accept poor quality, as evidenced by MoltBook’s security problems.

It’s a weird cognitive dissonance and it’s still not clear how this gets resolved.

AlotOfReading 6 hours ago [-]
Don't take this as criticizing LLMs as a whole, but architects also don't call themselves engineers. Engineers are an entirely distinct set of roles that among other things validate the plan in its totality, not only the "new" 1/5th. Our job spans both of these.

"Architect" is actually a whole career progression of people with different responsibilities. The bottom rung used to be the draftsmen, people usually without formal education who did the actual drawing. Then you had the juniors, mid-levels, seniors, principals, and partners who each oversaw different aspects. The architects with their name on the building were already issuing high level guidance before the transition instead of doing their own drawings.

    When was the last time you reviewed the machine code produced by a compiler?
Last week, to sanity check some code written by an LLM.
throwup238 6 hours ago [-]
> Engineers are an entirely distinct set of roles that among other things validate the plan in its totality, not only the "new" 1/5th. Our job spans both of these.

Where this analogy breaks down is that the work you’re describing is done by Professional Engineers that have strict licensing and are (criminally) liable for the end result of the plans they approve.

That is an entirely different role from the army of civil, mechanical, and electrical engineers (some of whom are PEs and some of whom are not) who do most of the work for the principal engineer/designated engineer/engineer of record, and who have to trust building codes and tools like FEA/FEM that then get final approval from the most senior PE. I don’t think the analogy works, as software engineers rarely report to that kind of hierarchy. Architects of Record on construction projects are usually licensed with their own licensing organization too, with layers of licensed and unlicensed people working for them.

AlotOfReading 6 hours ago [-]
That diversity of roles is what "among other things" was meant to convey. My job at least isn't terribly different, except that licensing doesn't exist and I don't get an actual stamp. My company (and possibly me depending on the facts of the situation) is simply liable if I do something egregious that results in someone being hurt.
blibble 6 hours ago [-]
> Where this analogy breaks down is that the work you’re describing is done by Professional Engineers that have strict licensing and are (criminally) liable for the end result of the plans they approve.

there are plenty of software engineers that work in regulated industries, with individual licensing, criminal liability, and the ability to be struck off and banned from the industry by the regulator

... such as myself

bilbo0s 3 hours ago [-]
Sure.

But no one stops you from writing software again.

It's not that PEs can't design or review buildings in whatever city the egregious failure happened.

It's that PEs can't design or review buildings at all in any city after an egregious failure.

It's not that PEs can't design or review hospital building designs because one of their hospital designs went so egregiously sideways.

It's that PEs can't design or review any building for any use because their design went so egregiously sideways.

I work in an FDA regulated software area. I need 510k approval and the whole nine. But if I can't write regulated medical or dental software anymore, I just pay my fine and/or serve my punishment and go sling React/JS/web crap or become a TF/PyTorch monkey. No one stops me. Consequences for me messing up are far less severe than the consequences for a PE messing up. I can still write software because, in the end, I was never an "engineer" in that hard sense of the word.

Same is true of any software developer. Or any unlicensed area of "engineering" for that matter. We're only playing at being "engineers" with the proverbial "monopoly money". We lose? Well, no real biggie.

PEs agree to hang a sword of Damocles over their own heads for the lifetime of the bridge or building they design. That's a whole different ball game.

moregrist 4 hours ago [-]
> When was the last time you reviewed the machine code produced by a compiler? ...

Any time I’m doing serious optimization or knee-deep in debugging something where the bug emerged at -O2 but not at -O0.

Sometimes just for fun to see what the compiler is doing in its optimization passes.

You severely limit what you can do and what you can learn if you never peek underneath.

blks 1 hours ago [-]
Compilers are deterministic.
datsci_est_2015 7 hours ago [-]
I skimmed over it, and didn’t find any discussion of:

  - Pull requests
  - Merge requests
  - Code review
I feel like I’m taking crazy pills. Are SWEs supposed to move away from code review, one of the core activities of the profession? Code review is as fundamental to SWE as double-entry bookkeeping is to accounting.

Yes, we know that functional code can get generated at incredible speeds. Yes, we know that apps and what not can be bootstrapped from nothing by “agentic coding”.

We need to read this code, right? How can I deliver code to my company without security and reliability guarantees that, at their core, come from me knowing what I’m delivering line-by-line?

bthornbury 6 hours ago [-]
Either really comprehensive tests (that you read) or read it. Usually I find you can skim most of it, but in core sections like billing or something you gotta really review it. The models still make mistakes.
AloysB 6 hours ago [-]
Give it a read; he briefly mentions how he uses it for PR triage and resolving GH issues.

He doesn't go into detail, but there is a bit:

> Issue and PR triage/review. Agents are good at using gh (GitHub CLI), so I manually scripted a quick way to spin up a bunch in parallel to triage issues. I would NOT allow agents to respond, I just wanted reports the next day to try to guide me towards high value or low effort tasks.

> More specifically, I would start each day by taking the results of my prior night's triage agents, filter them manually to find the issues that an agent will almost certainly solve well, and then keep them going in the background (one at a time, not in parallel).

This is a short excerpt; the article is worth reading. Very grounded and balanced.
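
For a concrete sense of the shape of that workflow, here is a rough sketch of a nightly triage fan-out. The `gh` flags are standard, but the headless `claude -p` call, the prompt wording, and the report format are my own guesses, not details from the post:

    import json
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    def open_issues(limit: int = 20) -> list[dict]:
        """Fetch recent open issues as JSON via the GitHub CLI."""
        out = subprocess.run(
            ["gh", "issue", "list", "--state", "open", "--limit", str(limit),
             "--json", "number,title,body"],
            capture_output=True, text=True, check=True,
        ).stdout
        return json.loads(out)

    def triage(issue: dict) -> str:
        """Ask a read-only agent for an effort/value report on one issue."""
        prompt = (
            f"Triage GitHub issue #{issue['number']}: {issue['title']}\n\n"
            f"{issue['body']}\n\n"
            "Report the likely root cause, estimated effort, and whether an "
            "agent could plausibly fix it unattended. Do NOT post any comments."
        )
        result = subprocess.run(["claude", "-p", prompt],
                                capture_output=True, text=True)
        return f"## #{issue['number']} {issue['title']}\n{result.stdout}"

    if __name__ == "__main__":
        with ThreadPoolExecutor(max_workers=4) as pool:
            reports = list(pool.map(triage, open_issues()))
        # One report to read (and filter by hand) the next morning.
        with open("triage-report.md", "w") as f:
            f.write("\n\n".join(reports))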

datsci_est_2015 4 hours ago [-]
Okay I think this somewhat answers my question. Is this individual a solo developer? “Triaging GitHub issues” sounds a bit like open source solo developer.

Guess I’m just desperate for an article about how organizations are actually speeding up development using agentic AI. Like very practical articles about how existing development processes have been adjusted to facilitate agentic AI.

I remain unconvinced that agentic AI scales beyond solo development, where the individual is liable for the output of the agents. More precisely, I can use agentic AI to write my code, but at the end of the day when I submit it to my org it’s my responsibility to understand it, and guarantee (according to my personal expertise) its security and reliability.

Conversely, I would fire (read: reprimand) someone so fast if I found out they submitted code that created a vulnerability that they would have reasonably caught if they weren’t being reckless with code submission speed, LLM or not.

AI will not revolutionize SWE until it revolutionizes our processes. It will definitely speed us up (I have definitely become faster), but faster != revolution.

kaibee 3 hours ago [-]
> Guess I’m just desperate for an article about how organizations are actually speeding up development using agentic AI. Like very practical articles about how existing development processes have been adjusted to facilitate agentic AI.

They probably aren't really. At least in orgs I worked at, writing the code wasn't usually the bottleneck. It was in retrospect, 'context' engineering, waiting for the decision to get made, making some change and finding it breaks some assumption that was being made elsewhere but wasn't in the ticket, waiting for other stakeholders to insert their piece of the context, waiting for $VENDOR to reply about why their service is/isn't doing X anymore, discovering that $VENDOR_A's stage environment (that your stage environment is testing against for the integration) does $Z when $VENDOR_B_C_D don't do that, etc.

The ecosystem as a whole has to shift for this to work.

djhn 1 hours ago [-]
The author of the blog made his name and fortune founding Hashicorp, makers of Vagrant and Terraform among other things. Having done all that in his twenties he retired as the CTO and reappeared after a short hiatus with a new open source terminal, Ghostty.
Quarrelsome 6 hours ago [-]
we're talking about _this_ post? He specifically said he only runs one agent, so sure he probably reviews the code or as he stated finds means of auto-verifying what the agent does (giving the agent a way to self-verify as part of its loop).
QuiEgo 4 hours ago [-]
You read it. You now have an infinite army of overconfident slightly drunken new college grads to throw at any problem.

Sometimes you’re gonna want to slowly back away from them and write things yourself. Sometimes you can farm out work to them.

Code review their work as you would anyone else’s, in fact more so.

My rule of thumb has been it takes a senior engineer per every 4 new grads to mentor them and code review their work. Or put another way bringing on a new grad gets you +1 output at the cost of -0.25 a senior.

Also, there are some tasks you just can’t give new college grads.

Same dynamic seems to be shaping up here. Except the AI juniors are cheap and work 24*7 and (currently) have no hope of growing into seniors.

kaibee 3 hours ago [-]
> Same dynamic seems to be shaping up here. Except the AI juniors are cheap and work 24*7 and (currently) have no hope of growing into seniors.

Each individual trained model... sure. But otoh you can look at it as a very wide junior with "infinite (only limited by your budget)" willpower. Sure, three years ago they were GPT-3.5, basically useless. And now they're Opus 4.6. I wonder what the next few years will bring.

tptacek 7 hours ago [-]
So read the code.
datsci_est_2015 7 hours ago [-]
Cool, code review continues to be one of the biggest bottlenecks in our org, with or without agentic AI pumping out 1k LOC per hour.
alexsmirnov 3 hours ago [-]
For me, AI is the best for code research and review

Since some team members started using AI without care, I created a bunch of agents/skills/commands and custom scripts for Claude Code. For each PR, it collects the changes via git log/diff, reads the PR data, and spins up a bunch of specialized agents to check code style, architecture, security, performance, and bugs. Each agent is armed with the necessary requirement documents, including security compliance files. False positives are rare, but it still misses some problems. No PR with AI-generated code passes it. If the AI did not find any problems, I do a manual review.
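
For anyone curious what that looks like mechanically, a stripped-down sketch of the fan-out. The reviewer names, document paths, and headless `claude -p` call are placeholders of mine, not the actual agents/skills in this setup:

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    # Each specialized reviewer is paired with the requirement document it is "armed" with.
    REVIEWERS = {
        "style":        "docs/style-guide.md",
        "architecture": "docs/architecture.md",
        "security":     "docs/security-compliance.md",
        "performance":  "docs/perf-budgets.md",
        "bugs":         "docs/known-pitfalls.md",
    }

    def pr_diff(pr: int) -> str:
        """Collect the PR's changes with the GitHub CLI."""
        return subprocess.run(["gh", "pr", "diff", str(pr)],
                              capture_output=True, text=True, check=True).stdout

    def review(aspect: str, doc: str, diff: str) -> str:
        """Run one reviewer over the diff, primed with its reference document."""
        with open(doc) as f:
            requirements = f.read()
        prompt = (
            f"You are a {aspect} reviewer. Requirements:\n{requirements}\n\n"
            f"Review this diff for {aspect} problems only. Cite file and line.\n\n{diff}"
        )
        out = subprocess.run(["claude", "-p", prompt], capture_output=True, text=True)
        return f"### {aspect}\n{out.stdout}"

    def review_pr(pr: int) -> str:
        diff = pr_diff(pr)
        with ThreadPoolExecutor() as pool:
            parts = pool.map(lambda kv: review(kv[0], kv[1], diff), REVIEWERS.items())
        return "\n\n".join(parts)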

tptacek 6 hours ago [-]
Ok? You still have to read the code.
bigstrat2003 3 hours ago [-]
You're missing the point. The point is that reading the code is more time consuming than writing it, and has always been thus. Having a machine that can generate code 100x faster, but which you have to read carefully to make sure it hasn't gone off the rails, is not an asset. It is a liability.
tptacek 3 hours ago [-]
Tell that to Mitchell Hashimoto.
IhateAI 6 hours ago [-]
I didn't get into creating software so I could read a plagiarism-laundering machine's output. Sorry, miss me with these takes. I love using my keyboard, and my brain.
dotancohen 5 hours ago [-]
So you have a hobby.

I have a profession. Therefore I evaluate new tools. Agentic coding I've introduced into my auxiliary tool forgings (one-off bash scripts) and personal projects, and I'm just now comfortable introducing it into my professional work. But I still evaluate every line.

raw_anon_1111 5 hours ago [-]
I love for companies to pay me money that I can in turn exchange for food, clothes and shelter.
acessoproibido 6 hours ago [-]
So then type the code as well and read it after. Why are you mad?
codyb 6 hours ago [-]
I think this is the crux of it: when used as an enhancement to solo productivity, you'll have a pretty strict upper bound on productivity gains, given that it takes experienced engineers to review code that goes out at scale.

That being said, software quality seems to be decreasing, or maybe it's just because I use a lot of software in a somewhat locked down state with adblockers and the rest.

Although, that wouldn't explain just how badly they've murdered the once lovely iTunes (now Apple Music) user interface. (And why does CMD-C not pick up anything 15% of the time I use it lately...)

Anyways, digressions aside... the complexity in software development is generally in the organizational side. You have actual users, and then you have people who talk to those users and try to see what they like and don't like in order to distill that into product requirements which then have to be architected, and coordinated (both huge time sinks) across several teams.

Even if you cut out 100% of the development time, you'd still be left with 80% of the timeline.

Over time though... you'll probably see people doing what I do all day, which is move around among many repositories (although I've yet to use the AI much; I got my Cursor license recently and am gonna spin up some POCs that I want to see soon), enabled by their use of AI to quickly grasp what's happening in the repo, and the appropriate places to make changes.

Enabling developers to complete features from tip to tail across deep, many-pronged service architectures could bring project time down drastically and bring project management and cross-team coordination costs down tremendously.

Similarly, in big companies, the hand is often barely aware at best of the foot. And exploring that space is a serious challenge. Often folk know exactly one step away, and rely on well established async communication channels which also only know one step further. Principal engineers seem to know large amounts about finite spaces and are often in the dark small hops away to things like the internal tooling for the systems they're maintaining (and often not particularly great at coming in to new spaces and thinking with the same perspective... no, we don't need individual microservices for every 12-request-a-month admin API group we want to set up).

Once systems can take a feature proposal and lay out concrete plans which each little kingdom can give a thumbs up or thumbs down to for further modifications, you can again reduce exploration, coordination, and architecture time down.

Sadly, it seems like User Experience design is an often terribly neglected part of our profession. I love the memes about an engineer building the perfect interface like a water pitcher, only for the person to position it weirdly in order to get a pour out of the fill hole or something. Lemme guess how many users you actually talked to (often zero), and how many layers of distillation occurred before you received a micro-picture feature request that ends up being built, taking input from engineers with no macro understanding of a user's actual needs or day-to-day.

And who often are much more interested in perfecting some little algorithm than thinking about enabling others.

So my money is on money flowing to:

  - People who can actually verify system integrity, and can fight fires and bugs (but a lot of bug fixing will eventually become prompting?)
  - Multi-talented individuals who can, say, interact with users well enough to understand their needs as well as do a decent job verifying system architecture and security

It's outside of coding where I haven't seen much... I guess people use it to more quickly scaffold up expense reports, or generate mocks. So, lots of white collar stuff. But... it's not like the experience of shopping at the supermarket has changed, or going to the movies, or much of anything else.

zamadatix 4 hours ago [-]
Should AI tools use memory safe tabs or spaces for indentation? :)

It is a shame it's become such a polarized topic. Things which actually work fine get immediately bashed by large crowds at the same time things that are really not there get voted to the moon by extremely eager folks. A few years from now I expect I'll be thinking "man, there was some really good stuff I missed out on because the discussions about it were so polarized at the time. I'm glad that has cleared up significantly!"

beoberha 10 hours ago [-]
Your sentiment resonates with me a lot. I wonder what we’ll consider the inflection point 10 years from now. It seemed like the zeitgeist was screaming about scaling limits and running out of training data, then we got Claude Code, Sonnet 4.5, then Opus 4.5, and no one's looked back since.
libraryofbabel 9 hours ago [-]
I wonder too. It might be that progress on the underlying models is going to plateau, or it might be that we haven't yet reached what in retrospect will be the biggest inflection point. Technological developments can seem to make sense in hindsight as a story of continuous progress, once the dust has settled and we can write and tell the history. But when you go back and look at the full range of voices in the historical sources, you realize just how little was clear to anyone at the time, because everyone was hurtling into the unknown future with a fog of war in front of them. In 1910 I'd say it would have been perfectly reasonable to predict airplanes would remain a terrifying curiosity reserved for daredevils only (and people did); or conversely, in the 1960s a lot of commentators thought that the future of passenger air travel in the 70s and 80s would be supersonic jets. I keep this in mind and don't really pay too much attention to over-confident predictions about the technological future.
majormajor 8 hours ago [-]
GPT-4 showed the potential but the automated workflows (context management, loops, test-running) and pure execution speed to handle all that "reasoning"/workflows (remember watching characters pop in slowly in GPT-4 streaming API response calls) are gamechangers.

The workflow automation and better (and model-directed) context management are all obvious in retrospect but a lot of people (like myself) were instead focused on IDE integration and such vs `grep` and the like. Maybe multi-agent with task boards is the next thing, but it feels like that might also start to outrun the ability to sensibly design and test new features for non-greenfield/non-port projects. Who knows yet.

I think it's still very valuable for someone to dig in to the underlying models periodically (insomuch as the APIs even expose the same level of raw stuff anymore) to get a feeling for what's reliable to one-shot vs what's easily correctable by a "ran the tests, saw it was wrong, fixed it" loop. If you don't have a good sense of that, it's easy to get overambitious and end up with something you don't like if you're the sort of person who cares at all about what the code looks like.

ianm218 3 hours ago [-]
Isn’t there something off about calling predictions about the future hype just because they aren’t possible with current tech? People predicted AI agents would be this huge change; they were called hype since earlier models were so unreliable, and now they are mostly right, as AI agents work like a mid-level engineer. And clearly superhuman in some areas.
otabdeveloper4 3 hours ago [-]
> AI agents work like a mid-level engineer

They do not.

> And clearly super human in some areas.

Sure, if you think calculators or bicycles are "superhuman technology".

Lay off the hype pills.

Lich 9 hours ago [-]
Is there any reason to use Claude Code specifically over Codex or Gemini? I’ve found both Codex and Gemini similar in results, but I never tried Claude because I keep hearing usage runs out so fast on pro plans and there’s no free trial for the CLI.
libraryofbabel 9 hours ago [-]
I mostly mentioned Claude Code because it's what Mitchell first tried according to his post, and it's what I personally use. From what I hear Codex is pretty comparable; it has a lot of fans. There are definitely some differences and strengths and weaknesses of both the CLIs and the underlying LLMs that others who use more than one tool might want to weigh in on, but they're all fairly comparable. (Although, we'll see how the new models released from Anthropic and OpenAI today stack up.) Codex and Gemini CLI are basically Claude Code clones with different LLMs behind them, after all.
majormajor 8 hours ago [-]
IME Gemini is pretty slow in comparison to Claude - but hey, it's super cheap at least.

But that speed makes a pretty significant difference in experience.

If you wait a couple minutes and then give the model a bunch of feedback about what you want done differently, and then have to wait again, it gets annoying fast.

If the feedback loop is much tighter things feel much more engaging. Cursor is also good at this (investigate and plan using slower/pricier models, implement using fast+cheap ones).

chrysoprace 7 hours ago [-]
I think for a lot of people the turn off is the constant churn and the hype cycle. For a lot of people, they just want to get things done and not have to constantly keep on top of what's new or SOTA. Are we still using MCPs or are we using Skills now? Not long ago you had to know MCP or you'd be left behind and you definitely need to know MCP UI or you'll be left behind. I think. It just becomes really tiring, especially with all the FUD.

I'm embracing LLMs but I think I've had to just pick a happy medium and stick with Claude Code with MCPs until somebody figures out a legitimate way to use the Claude subscription with open source tools like OpenCode, then I'll move over to that. Or if a company provides a model that's as good value that can be used with OpenCode.

kaibee 3 hours ago [-]
It reminds me a lot of 3D printing, tbh. Watching all these cool DIY 3D printing kits evolve over years, I remember a few times I'd checked on costs to build a DIY one. They kept coming down, and down, and then, around the same time as "Build a 3D printer for $200 (some assembly required)!", the Bambu X1C was announced/released, for a bit over a grand IIRC. And its whole selling point was that it was fast and worked out of the box. And so I bought one and made a bunch of random one-off things that solved _my_ specific problem, the way I wanted it solved. Mostly in the form of very specific adapter plates that I could quickly iterate on and random house 'wouldn't it be nice if' things.

That's kind of where AI-agent-coding is now too, though... software is more flexible.

re-thc 5 hours ago [-]
> For a lot of people, they just want to get things done and not have to constantly keep on top of what's new or SOTA

That hasn’t been tech for a long time.

Frontend has been changing forever. React and friends have new releases all the time. Node has new package managers and even Deno and Bun. AWS keeps changing things.

bigstrat2003 3 hours ago [-]
You really shouldn't use the absolute hellscape of churn that is web dev as an example of broader industry trends. No other sub-field of tech is foolish enough to chase hype and new tools the way web dev is.
bonesss 54 minutes ago [-]
I think the web/system dichotomy is also a major conflating factor for LLM discussions.

A “few hundred lines of code” in Rust or Haskell can be bumping into multiple issues LLM assisted coding struggles with. Moving a few buttons on a website with animations and stuff through multiple front end frameworks may reasonably generate 5-10x that much “code”, but of an entirely different calibre.

3,000 lines a day of well-formatted HTML template edits, paired with a reloadable website for rapid validation, is super digestible, while 300 lines of code per day into curl could be seen as reckless.

chrysoprace 5 hours ago [-]
There's a point at which these things become Good Enough though, and don't bottleneck your capacity to get things done.

To your point, React, while it has new updates, hasn't changed the fundamentals since 16.8.0 (introduction of hooks) and that was 7 years ago. Yes there are new hooks, but they typically build on older concepts. AWS hasn't deprecated any of our existing services at work (besides maybe a MySQL version becoming EOL) in the last 4 years that I've worked at my current company.

While I prefer pnpm (to not take up my MacBook's inadequate SSD space), you can still use npm and get things done.

I don't need to keep obsessing over whether Codex or Claude have a 1 point lead in a gamed benchmark test so long as I'm still able to ship features without a lot of churn.

arcxi 10 hours ago [-]
but annoying hype is exactly the issue with AI in my eyes. I get it's a useful tool in moderation and all, but I also experience that management values speed and quantity of delivery above all else, and hype-driven as they are I fear they will run this industry into the ground, and we as users and customers will have to deal with a world where software is permanently broken as a giant pile of unmaintainable vibe code, with no experienced junior developers to boot.
acessoproibido 6 hours ago [-]
>management values speed and quantity of delivery above all else

I don't know about you but this has been the case for my entire career. Mgmt never gave a shit about beautiful code or tech debt or maintainability or how enlightened I felt writing code.

linuxrocks123 3 hours ago [-]
[dead]
whatifnomoney 10 hours ago [-]
[dead]
mjr00 12 hours ago [-]
> Break down sessions into separate clear, actionable tasks. Don't try to "draw the owl" in one mega session.

This is the key one I think. At one extreme you can tell an agent "write a for loop that iterates over the variable `numbers` and computes the sum" and they'll do this successfully, but the scope is so small there's not much point in using an LLM. On the other extreme you can tell an agent "make me an app that's Facebook for dogs" and it'll make so many assumptions about the architecture, code and product that there's no chance it produces anything useful beyond a cool prototype to show mom and dad.

A lot of successful LLM adoption for code is finding this sweet spot. Overly specific instructions don't make you feel productive, and with overly broad instructions you end up redoing too much of the work.

sho_hn 12 hours ago [-]
This is actually an aspect of using AI tools I really enjoy: Forming an educated intuition about what the tool is good at, and tastefully framing and scoping the tasks I give it to get better results.

It cognitively feels very similar to other classic programming activities, like modularization at any level from architecture to code units/functions, thoughtfully choosing how to lay out and chunk things. It's always been one of the things that make programming pleasurable for me, and some of that feeling returns when slicing up tasks for agents.

bandrami 6 hours ago [-]
"Become better at intuiting the behavior of this non-deterministic black box oracle maintained by a third party" just isn't a strong professional development sell for me, personally. If the future of writing software is chasing what a model trainer has done with no ability to actually change that myself I don't think that's going to be interesting to nearly as many people.
mjr00 2 hours ago [-]
It sounds like you're talking more about "vibe coding" i.e. just using LLMs without inspecting the output. That's neither what the article nor the people to whom you're replying are saying. You can (and should) heavily review and edit LLM generated code. You have the full ability to change it yourself, because the code is just there and can be edited!
bandrami 2 hours ago [-]
And yet the comments are chock full of cargo-culting about different moods of the oracle and ways to get better output.
chii 4 hours ago [-]
Whether it's interesting or not is irrelevant to whether it produces usable output that could be economically valuable.
bandrami 4 hours ago [-]
Yeah, still waiting for something to ship before I form a judgement on that
satvikpendem 2 hours ago [-]
Claude Code is made with Anthropic's models and is very commercially successful.
bandrami 2 hours ago [-]
Something besides AI tooling. This isn't Amway.
allenu 11 hours ago [-]
I agree that framing and scoping tasks is becoming a real joy. The great thing about this strategy is there's a point at which you can scope something small enough that it's hard for the AI to get it wrong and it's easy enough for you as a human to comprehend what it's done and verify that it's correct.

I'm starting to think of projects now as a tree structure where the overall architecture of the system is the main trunk and from there you have the sub-modules, and eventually you get to implementations of functions and classes. The goal of the human in working with the coding agent is to have full editorial control of the main trunk and main sub-modules and delegate as much of the smaller branches as possible.

Sometimes you're still working out the higher-level architecture, too, and you can use the agent to prototype the smaller bits and pieces which will inform the decisions you make about how the higher-level stuff should operate.

audience_mem 10 hours ago [-]
[Edit: I may have been replying to another comment in my head as now I re-read it and I'm not sure I've said the same thing as you have. Oh well.]

I agree. This is how I see it too. It's more like a shortcut to an end result that's very similar (or much better) than I would've reached through typing it myself.

The other day I did realise that I'm using my experience to steer it away from bad decisions a lot more than I noticed. It feels like it does all the real work, but I have to remember it's my/our (decades of) experience writing code playing a part also.

I'm genuinely confused when people come in at this point and say that it's impossible to do this and produce good output and end results.

meowface 10 hours ago [-]
I feel the same, but, also, within like three years this might look very different. Maybe you'll give the full end-to-end goal upfront and it just polls you when it needs clarification or wants to suggest alternatives, and it self-manages, cleanly self-delegating.

Or maybe something quite different but where these early era agentic tooling strategies still become either unneeded or even actively detrimental.

zxor 9 hours ago [-]
> it just polls you when it needs clarification

I think anyone who has worked on a serious software project would say, this means it would be polling you constantly.

Even if we posit that an LLM is equivalent to a human, humans constantly clarify requirements/architecture. IMO on both of those fronts the correct path often reveals itself over time, rather than being knowable from the start.

So in this scenario it seems like you'd be dealing with constant pings and need to really make sure your understanding of the project is growing with the LLM's development efforts as well.

To me this seems like the best-case of the current technology, the models have been getting better and better at doing what you tell it in small chunks but you still need to be deciding what it should be doing. These chunks don't feel as though they're getting bigger unless you're willing to accept slop.

mapontosevenths 7 hours ago [-]
> Break down sessions into separate clear, actionable tasks.

What this misses, of course, is that you can just have the agent do this too. Agents are great at making project plans, especially if you give them a template to follow.

iamacyborg 11 hours ago [-]
> On the other extreme you can tell an agent "make me an app that's Facebook for dogs" and it'll make so many assumptions about the architecture, code and product that there's no chance it produces anything useful beyond a cool prototype to show mom and dad.

Amusingly, this was my experience in giving Lovable a shot. The onboarding process was literally just setting me up for failure by asking me to describe the detailed app I was attempting to build.

Taking it piece by piece in Claude Code has been significantly more successful.

andai 3 hours ago [-]
> the scope is so small there's not much point in using an LLM

Actually that's how I did most of my work last year. I was annoyed by existing tools so I made one that can be used interactively.

It has full context (I usually work on small codebases), and can make an arbitrary number of edits to an arbitrary number of files in a single LLM round trip.

For such "mechanical" changes, you can use the cheapest/fastest model available. This allows you to work interactively and stay in flow.

(In contrast to my previous obsession with the biggest, slowest, most expensive models! You actually want the dumbest one that can do the job.)

I call it "power coding", akin to power armor, or perhaps "coding at the speed of thought". I found that staying actively involved in this way (letting LLM only handle the function level) helped keep my mental model synchronized, whereas if I let it work independently, I'd have to spend more time catching up on what it had done.

I do use both approaches though, just depends on the project, task or mood!
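
Not the actual tool, but a rough sketch of what that single round trip can look like; the whole-file-replacement JSON format, the model name, and the OpenAI client are my own assumptions:

    import json
    import pathlib
    from openai import OpenAI

    client = OpenAI()

    def power_edit(instruction: str, root: str = ".") -> None:
        """Send the whole (small) codebase plus one instruction, apply all edits at once."""
        files = {str(p): p.read_text() for p in pathlib.Path(root).rglob("*.py")}
        context = "\n\n".join(f"--- {path} ---\n{body}" for path, body in files.items())
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption: any cheap, fast model that can follow the format
            messages=[
                {"role": "system",
                 "content": "Return ONLY a JSON object mapping file paths to their new full contents."},
                {"role": "user", "content": f"{context}\n\nEdit request: {instruction}"},
            ],
            response_format={"type": "json_object"},
        )
        # One round trip: every returned file is written back in a single pass.
        for path, body in json.loads(resp.choices[0].message.content).items():
            pathlib.Path(path).write_text(body)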

helenite 24 minutes ago [-]
Do you have the tool open sourced somewhere? I have been thinking of using something similar
jedbrooke 12 hours ago [-]
So many times I catch myself asking a coding agent, e.g. “please print the output”, and it will update the file with “print (output)”.

Maybe there’s something about not having to context switch between natural language and code just makes it _feel_ easier sometimes

apercu 11 hours ago [-]
I actually enjoy writing specifications. So much so that I made it a large part of my consulting work for a huge part of my career. So it makes sense that working with Gen-AI that way is enjoyable for me.

The more detailed I am in breaking down chunks, the easier it is for me to verify and the more likely I am going to get output that isn't 30% wrong.

kcorbitt 10 hours ago [-]
And lately, the sweet spot has been moving upwards every 6-8 weeks with the model release cycle.
oulipo2 11 hours ago [-]
Exactly. The LLMs are quite good at "code inpainting", eg "give me the outline/constraints/rules and I'll fill-in the blanks"

But not so good at making (robust) new features out of the blue

EastLondonCoder 12 hours ago [-]
This matches my experience, especially "don’t draw the owl" and the harness-engineering idea.

The failure mode I kept hitting wasn’t just "it makes mistakes", it was drift: it can stay locally plausible while slowly walking away from the real constraints of the repo. The output still sounds confident, so you don’t notice until you run into reality (tests, runtime behaviour, perf, ops, UX).

What ended up working for me was treating chat as where I shape the plan (tradeoffs, invariants, failure modes) and treating the agent as something that does narrow, reviewable diffs against that plan. The human job stays very boring: run it, verify it, and decide what’s actually acceptable. That separation is what made it click for me.

Once I got that loop stable, it stopped being a toy and started being a lever. I’ve shipped real features this way across a few projects (a git like tool for heavy media projects, a ticketing/payment flow with real users, a local-first genealogy tool, and a small CMS/publishing pipeline). The common thread is the same: small diffs, fast verification, and continuously tightening the harness so the agent can’t drift unnoticed.

ricardobeat 9 hours ago [-]
No harm meant, but your writing is very reminiscent of an LLM. It is great actually, there is just something about it - "it wasn't.. it was", "it stopped being.. and started". Claude and ChatGPT seem to love these juxtapositions. The triplets on every other sentence. I think you are a couple em-dashes away from being accused of being a bot.

These patterns seem to be picking up speed in the general population; makes the human race seem quite easily hackable.

pixl97 7 hours ago [-]
>makes the human race seem quite easily hackable.

If the human race were not hackable then society would not exist, we'd be the unchanging crocodiles of the last few hundred million years.

Have you ever found yourself speaking a meme? Had a catchy tune repeating in your head? Started spouting nation state level propaganda? Found yourself in a crowd trying to burn a witch at the stake?

Hacking the flow of human thought isn't that hard, especially across populations. Hacking any one particular human's thoughts is harder unless you have a lot of information on them.

protocolture 8 hours ago [-]
>The failure mode I kept hitting wasn’t just "it makes mistakes", it was drift: it can stay locally plausible while slowly walking away from the real constraints of the repo. The output still sounds confident, so you don’t notice until you run into reality (tests, runtime behaviour, perf, ops, UX).

Yeah I would get patterns where, initial prototypes were promising, then we developed something that was 90% close to design goals, and then as we try to push in the last 10%, drift would start breaking down, or even just forgetting, the 90%.

So I would start getting to 90% and basically starting a new project with that as the baseline to add to.

apitman 2 hours ago [-]
Would love to hear more about your geneology app.
bdangubic 12 hours ago [-]
This is the most common answer from people that are rocking and rolling with AI tools, but I cannot help but wonder how this is different from how we should have built software all along. I know I have been (after 10+ years…)
EastLondonCoder 11 hours ago [-]
I think you are right, the secret is that there is no secret. The projects I have been involved with that were most successful used these techniques. I also think experience helps because you develop a sense that very quickly knows if the model wants to go in a wonky direction and what a good spec looks like.

With where the models are right now you still need a human in the loop to make sure you end up with code you (and your organisation) actually understand. The bottleneck has gone from writing code to reading code.

sksisksbbs 10 hours ago [-]
> The bottleneck has gone from writing code to reading code.

This has always been the bottleneck. Reviewing code is much harder and gets worse results than writing it, which is why reviewing AI code is not very efficient. The time required to understand code far outstrips the time to type it.

Most devs don’t do thorough reviews. Check the variable names seem ok, make sure there’s no obvious typos, ask for a comment and call it good. For a trusted teammate this is actually ok and why they’re so valuable! For an AI, it’s a slot machine and trusting it is equivalent to letting your coworkers/users do your job so you can personally move faster.

tpoacher 44 minutes ago [-]
> This blog post was fully written by hand, in my own words.

This reminded me of back when WYSIWYG web editors started becoming a thing, and coders started adding those "Created in Notepad" stickers to their webpages, to point out they were 'real' web developers. Fun times.

simgt 56 minutes ago [-]
Very nice. As a consequence of this new way of working I'm using `git worktree` and diffview all the time.

For more on the "harness engineering", see what Armin Ronacher and Mario Zechner are doing with pi: https://lucumr.pocoo.org/2026/1/31/pi/ https://mariozechner.at/posts/2025-11-30-pi-coding-agent/

> I really don't care one way or the other if AI is here to stay, I'm a software craftsman that just wants to build stuff for the love of the game.

I suspect having three commas in one's bank account helps with being very relaxed about the outcome ;)

senko 11 hours ago [-]
For those wondering how that looks in practice, here's one of OP's past blog posts describing a coding session to implement a non-trivial feature: https://mitchellh.com/writing/non-trivial-vibing (covered on HN here: https://news.ycombinator.com/item?id=45549434)
tigerlily 27 minutes ago [-]
OT but, the style. The journey. What is it? What does this remind me of?

Flowers for Algernon.

Or at least the first half. I don't wanna see what it looks like when AI capabilities start going in reverse.

But I want to know.

sho_hn 12 hours ago [-]
Much more pragmatic and less performative than other posts hitting frontpage. Good article.
alterom 12 hours ago [-]
Finally, a step-by-step guide for even the skeptics to try, to see what spot the LLM tools have in their workflows, without hype or magic like "I vibe-coded an entire OS, and you can too!".
scarrilho 8 hours ago [-]
With so much noise in the AI world and constant model updates (just today GPT-5.3-Codex and Claude Opus 4.6 were announced), this was a really refreshing read. It’s easy to relate to his phased approach to finding real value in tooling and not just hype. There are solid insights and practical tips here. I’m increasingly convinced that the best way not to get overwhelmed is to set clear expectations for what you want to achieve with AI and tailor how you use it to work for you, rather than trying to chase every new headline. Very refreshing.
keyle 11 hours ago [-]
It's amusing how everyone seems to be going through the same journey.

I do run multiple models at once now. On different parts of the code base.

I focus solely on the less boring tasks for myself and outsource all of the slam dunks, then review. Often I use another model to validate the previous model's work while doing so myself.

I do git reset still quite often but I find more ways to not get to that point by knowing the tools better and better.

Autocompleting our brains! What a crazy time.

underdeserver 11 hours ago [-]
> At a bare minimum, the agent must have the ability to: read files, execute programs, and make HTTP requests.

That's one very short step removed from Simon Willison's lethal trifecta.

smj-edison 6 hours ago [-]
I will say one thing Claude does is that it doesn't run a command until you approve it, and you can choose between a one-time approval and always allowing a command's pattern. I usually approve the simple commands like `zig build test`, since I'm not particularly worried about the test harness. I believe it also scopes file reading by default to the current directory.
tehlike 4 hours ago [-]
A lot of people run the claude with --dangerously-skip-permissions
recursive 11 hours ago [-]
I'm definitely not running that on my machine.
brandonpaiz 1 hours ago [-]
Same, but I felt okay sticking my code base in a VM and then letting an agent run there. I’d say it worked well
margalabargala 10 hours ago [-]
The way this is generally implemented is that agents have the ability to request a tool use. Then you confirm "yes, you may run this grep".
kaffekaka 3 hours ago [-]
> Context switching is very expensive. In order to remain efficient, I found that it was my job as a human to be in control of when I interrupt the agent, not the other way around. Don't let the agent notify you.

This I have found to be important too.

tppts 4 hours ago [-]
So does everyone just run with giving full permissions on Claude code these days? It seems like I’m constantly coming back to CC to validate that it’s not running some bash that’s going to nuke my system. I would love to be able to fully step away but it feels like I can’t.
apitman 2 hours ago [-]
I run my agents with full permissions in containers. Feels like a reasonable tradeoff. Bonus is I can set up each container with exactly the stack needed.
apetresc 4 hours ago [-]
Honest question, when was the last time you caught it trying to use a command that was going to "nuke your system"?
tppts 4 hours ago [-]
“Nuke” is maybe too strong of a word, but it has not been uncommon for me to see it trying to install specific versions of languages on my machine, or services I intentionally don’t have configured, or sometimes trying to force npm when I’m using bun, etc.
dymk 3 hours ago [-]
Maybe once a month
energy123 6 hours ago [-]
> Immediately cease trying to perform meaningful work via a chatbot.

That depends on your budget. To work within my pro plan's Codex limits, I attach the codebase as a single file to various chat windows (GPT 5.2 Thinking - Heavy) and ask it to find bugs/plan a feature/etc. Then I copy the dense task list from chat to Codex for implementation. This reduces the tokens that Codex burns.
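
The flattening step is just a few lines if you want to try this; a minimal sketch (the extensions, exclusions, and output name are my own assumptions, tune them to your repo):

    import pathlib

    INCLUDE = {".py", ".ts", ".go", ".md"}               # assumed source extensions
    EXCLUDE = {".git", "node_modules", "dist", "__pycache__"}

    def flatten_repo(root: str = ".", out: str = "codebase.txt") -> None:
        """Concatenate the repo into one file, ready to attach to a chat window."""
        parts = []
        for p in sorted(pathlib.Path(root).rglob("*")):
            if p.is_file() and p.suffix in INCLUDE and not set(p.parts) & EXCLUDE:
                parts.append(f"===== {p} =====\n{p.read_text(errors='ignore')}")
        pathlib.Path(out).write_text("\n\n".join(parts))

    flatten_repo()  # attach codebase.txt, then paste the resulting task list into Codex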

Also don't sleep on GPT 5.2 Pro. That model is a beast for planning.

rhubarbtree 1 hours ago [-]
If the author is here, please could you also confirm you’ve never been paid by any AI company, marketing representative, community programme, in any shape or form?
skrebbel 5 minutes ago [-]
I don't think you appreciate how un-bribeable this particular author is, and I don't just mean in a moral sense.
fergie 17 minutes ago [-]
He explicitly said "I don't work for, invest in, or advise any AI companies." in the article.

But yes, Hashimoto is a high-profile CEO/CTO who may well have an indirect, or near-future, interest in talking up AI. HN articles extolling the productivity gains of Claude do generally tend to be from older, managerial types (make of that what you will).

simianwords 1 hours ago [-]
Bit strange that you are skeptical by default.
emil-lp 1 hours ago [-]
Isn't skeptical by default quite reasonable?
simianwords 1 hours ago [-]
Probably exhausting to be that way. The author is well respected and well known and has a good track record. My immediate reaction wasn’t to question that he spoke in good faith.
rhubarbtree 48 minutes ago [-]
I don’t know the author, and am suspicious of the amount of astroturfing that has gone on with AI. This article seems reasonable, so I looked for a disclaimer and found it oddly worded, hence the request for clarification.
asdfaslkj353 1 hours ago [-]
[dead]
seemaze 6 hours ago [-]
What a lovely read. Thank you for sharing your experience.

The human-agent relationship described in the article made me wonder: are natural, or experienced, managers having more success with AI as subordinates than people without managerial skill? Are AI agents enormously different than arbitrary contractors half a world away where the only communication is daily text exchanges?

noosphr 6 hours ago [-]
I've been building systems like what the OP is using since GPT-3 came out.

This is the honeymoon phase. You're learning the ins and outs of the specific model you're using and becoming more productive. It's magical. Nothing can stop you. Then you might not be improving as fast as you did at the start, but things are getting better every day. Or maybe every week. But it's heaps better than doing it by hand because you have so much mental capacity left.

Then a new release comes out. An arbitrary fraction of your hard-earned intuition is not only useless but actively harmful to getting good results with the new models. Worse, you will never know which part it is without unlearning everything you learned and starting over again.

I've had to learn the quirks of three generations of frontier model families now. It's not worth the hassle. I've gone back to managing the context window in Emacs because I can't be bothered to learn how to deal with another model family that will be thrown out in six months. Copy and paste is the universal interface, and being able to do surgery on the chat history is still better than whatever tooling is out there.

Unironically learning vim or Emacs and the standard Unix code tools is still the best thing you can do to level up your llm usage.

tudelo 5 hours ago [-]
First off, appreciate you sharing your perspective. I just have a few questions.

> I've gone back to managing the context window in Emacs because I can't be bothered to learn how to deal with another model family that will be thrown out in six months.

Can you expand more on what you mean by that? I'm a bit of a noob on llm enabled dev work. Do you mean that you will kick off new sessions and provide a context that you manage yourself instead of relying on a longer running session to keep relevant information?

> Unironically learning vim or Emacs and the standard Unix code tools is still the best thing you can do to level up your llm usage.

I appreciate your insight but I'm failing to understand how exactly knowing these tools increases performance of llms. Is it because you can more precisely direct them via prompts?

fhd2 2 hours ago [-]
I can't speak for parent, but I use gptel, and it sounds like they do as well. It has a number of features, but primarily it just gives you a chat buffer you can freely edit at any time. That gives you 100% control over the context: you just quickly remove the parts of the conversation where the LLM went off the rails and keep it clean. You can replace or compress the context so far in any way you like.

While I also use LLMs in other ways, this is my core workflow. I quickly get frustrated when I can't _quickly_ modify the context.

If you have some mastery over your editor, you can just run commands, post relevant output, and make suggested changes to get an agent-like experience, at a speed not too different from having the agent call tools. But you retain 100% control over the context, and use a tiny fraction of the tokens OpenCode and other agent systems would use.

It's not the only or best way to use LLMs, but I find it incredibly powerful, and it certainly has its place.

A very nice positive effect I noticed personally is that, as opposed to using agents, I actually retain an understanding of the code automatically: I don't have to go in and review the work afterwards, I review and adjust on the fly.

noosphr 5 hours ago [-]
LLMs work on text and nothing else. There isn't any magic there. Just a limited context window on which the model will keep predicting the next token until it decides that it's predicted enough and stops.

All the tooling is there to manage that context for you. It works, to a degree, then stops working. Your intuition is there to decide when it stops working. This intuition gets outdated with each new release of the frontier model and changes in the tooling.

The stateless API with a human deciding what to feed it is much more efficient in both cost and time as long as you're only running a single agent. I've yet to see anyone use multiple agents to generate code successfully (but I have used agent swarms for unstructured knowledge retrieval).
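Concretely, that stateless loop is nothing more than posting a hand-assembled messages array each turn, e.g. against Anthropic's Messages API (the model name below is just a placeholder):

    # You decide what goes into the context: file excerpts, grep output, earlier
    # answers. Nothing carries over between calls unless you paste it back in.
    curl -s https://api.anthropic.com/v1/messages \
      -H "x-api-key: $ANTHROPIC_API_KEY" \
      -H "anthropic-version: 2023-06-01" \
      -H "content-type: application/json" \
      -d '{
        "model": "claude-sonnet-4-5",
        "max_tokens": 2048,
        "messages": [
          {"role": "user", "content": "<hand-picked context and the actual question>"}
        ]
      }'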

The Unix tools are there for you to programmatically (and manually) search and edit the code base, and to copy/paste into the context that you will send. Outside of Emacs (and possibly vim), with the ability to have dozens of ephemeral buffers open to modify their output, I don't imagine they will be very useful.

Or to quote the SICP lectures: The magic is that there is no magic.

sunshinekitty 5 hours ago [-]
> I've been building systems like what the OP is using since GPT-3 came out.

OP is also a founder of HashiCorp, so... lol.

> This is the honeymoon phase.

No offense but you come across as if you didn’t read the article.

noosphr 5 hours ago [-]
You come across as if you didn't read my post.

I'll wait for OP to move their workflow to Claude 7.0 and see if they still feel as bullish on AI tools.

People who are learning a new AI tool for the first time don't realize that they are just learning the quirks of the tool and the underlying model, not skills that generalize. It's not until you've done it a few times that you realize you've wasted more than 80% of your time on a model that is completely useless and will be sunset in six months.

cal_dent 11 hours ago [-]
Just wanted to say that was a nice and very grounded write-up, and as a result very informative. Thank you. More stuff like this is a breath of fresh air in a landscape that has veered into hyperbole territory on both the for and against AI sides.
raphinou 12 hours ago [-]
I recently also reflected on the evolution of my use of ai in programming. Same evolution, other path. If anyone is interested: https://www.asfaload.com/blog/ai_use/
zubspace 10 hours ago [-]
It's so sad that we're the ones who have to tell the agent how to improve by extending agent.md or whatever. I constantly have to tell it what I don't like or what can be improved or need to request clarifications or alternative solutions.

This is what's so annoying about it. It's like a child that makes the same errors again and again.

But couldn't it adjust itself with the goal of reducing the error bit by bit? Wouldn't this lead to the ultimate agent who can read your mind? That would be awesome.

audience_mem 10 hours ago [-]
> It's so sad that we're the ones who have to tell the agent how to improve by extending agent.md or whatever.

Your improvement is someone else's code smell. There's no absolute right or wrong way to write code, and that's coming from someone who definitely thinks there's a right way. But it's my right way.

Anyway, I don't know why you'd expect it to write code the way you like after it's been trained on the whole of the Internet & the RLHF labelers' preferences and the reward model.

Putting some words in AGENTS.md hardly seems like the most annoying thing.

tip: Add a /fix command that tells it to fix $1 and then update AGENTS.md with the text that'd stop it from making that mistake in the future. Use your nearest LLM to tweak that prompt. It's a good timesaver.
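For Claude Code specifically, that can be a project file at `.claude/commands/fix.md` (if I remember the custom-command mechanics right; `$ARGUMENTS`, or positional `$1`, gets replaced with whatever follows `/fix`):

    Fix the following issue: $ARGUMENTS

    After fixing it, add a short rule to AGENTS.md that would have prevented
    this class of mistake. Keep AGENTS.md terse and deduplicated.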

pixl97 7 hours ago [-]
While this may be the end goal, I do think humanity needs to take the trip along with AI to this point.

A mind-reading ultimate agent sounds more like a deity, and there are more than enough fables warning one not to create gods because things tend to go bad. Pumping out ASI too quickly will cause massive destabilization and horrific war. I'm not really sure against whom, either. Could be us humans against the ASI, could be the rich humans with ASI against us. Either way, it would represent a massive change in the world order.

cactusplant7374 10 hours ago [-]
It is not a mind reader. I enjoy giving it feedback because it shows I am in charge of the engineering.

I also love using it for research for upcoming features. Research + pick a solution + implement. It happens so fast.

josh-sematic 9 hours ago [-]
This is yet one more indication to me that the winds have shifted with regards to the utility of the “agent” paradigm of coding with an LLM. With all the talk around Opus 4.5 I decided to finally make the jump there myself and haven’t yet been disappointed (though admittedly I’m starting it on some pretty straightforward stuff).
butler14 12 hours ago [-]
I'd be interested to know what agents you're using. You mentioned Claude and GPT in passing, but don't actually talk about which you're using or for which tasks.
mwigdahl 12 hours ago [-]
Good article! I especially liked the approach to replicate manual commits with the agent. I did not do that when learning but I suspect I'd have been much better off if I had.
davidw 11 hours ago [-]
This seems like a pretty reasonable approach that charts a course between skepticism and "it's a miracle".

I wonder how much all this costs on a monthly basis?

tptacek 11 hours ago [-]
As long as we're on the same page that what he's describing is itself a miracle.
12_throw_away 2 hours ago [-]
Take your religion somewhere else please.
henry_bone 9 hours ago [-]
LLMs are not for me. My position is that the advantage we humans have over the rest of the natural world is our minds. Our ability to think, create and express ideas is what separates us from the rest of the animal kingdom. Once we give that over to "thinking" machines, we weaken ourselves, both individually and as a species.

That said, I've given it a go. I used Zed, which I think is a pretty great tool. I bought a pro subscription and used the built-in agent with Claude Sonnet 4.x and Opus. I'm a Rails developer in my day job and, like MitchellH and many others, found out fairly quickly that tasks for the LLM need to be quite specific and discrete. The agent is great at renames and minor refactors, but my preferred use of the agent was to get it to write RSpec tests once I'd written something like a controller or service object.

And generally, the LLM agent does a pretty great job of this.

But here's the rub: I found that I was losing the ability to write RSpec.

I went to do it manually and found myself trying to remember API calls and approaches required to write some specs. The feeling of skill leaving me was quite sobering and marked my abandonment of LLMs and Zed, and my return to neovim, agent-free.

The thing is, this is a common experience generally. If you don't use it, you lose it. It applies to all things: fitness, language (natural or otherwise), skills of all kinds. Why should it not apply to thinking itself?

Now you may write me and my experience off as that of a lesser mind, and say that you won't have such a problem. You've been doing it so long that it's "hard-wired in" by now. Perhaps.

It's in our nature to take the path of least resistance, to seek ease and convenience at every turn. We've certainly given away our privacy and anonymity so that we can pay for things with our phones and send email for "free".

LLMs are the ultimate convenience. A peer or slave mind that we can use to do our thinking and our work for us. Some believe that the LLM represents a local maximum, that the approach can't get much better. I dunno, but as AI improves, we will hand over more and more thinking and work to it. To do otherwise would be to go against our very nature and every other choice we've made so far.

But it's not for me. I'm no MitchellH, and I'm probably better off performing the mundane activities of my work, as well as the creative ones, so as to preserve my hard-won knowledge and skills.

YMMV

I'll leave off with the quote that resonates the most with me as I contemplate AI:-

"I say your civilization, because as soon as we started thinking for you, it really became our civilization, which is, of course, what this is all about." -- Agent Smith "The Matrix"

luisgvv 4 hours ago [-]
I was using it the same way you just described, but for C# and Angular, and you're spot on. It feels amazing not having to memorize APIs and just letting the AI push code coverage to nearly 100%. However, at some point I began noticing two things:

- When tests didn't work I had to check what was going on, and the LLMs do cheat a lot with Volkswagen tests, so that began to make me skeptical even of what is being written by the agents

- When things were broken, the spaghetti and awful code tended to be written in such an obnoxious way that it was beyond repair, and it made me wish I had done it from scratch.

Thankfully I only tried using agents for tests and not for the actual code, but it makes me wonder whether "vibe coding" really produces quality work.

FeteCommuniste 7 hours ago [-]
AI adoption is being heavily pushed at my work and personally I do use it, but only for the really "boilerplate-y" kinds of code I've already written hundreds of times before. I see it as a way to offload the more "typing-intensive" parts of coding (where the bottleneck is literally just my WPM on the keyboard) so I have more time to spend on the trickier "thinking-intensive" parts.
bthornbury 7 hours ago [-]
AI is getting to the game-changing point. We need more hand-written reflections on how individuals are managing to get productivity gains on real software engineering (not a vibe-coded app).
e40 8 hours ago [-]
For those of us working on large proprietary codebases, in fringe languages as well, what can we do? Upload all the source code to the cloud model? I am really wary of giving it a million lines of code it’s never seen.
fix4fun 12 hours ago [-]
Thanks for sharing your experiences :)

You mentioned "harness engineering". How do you approach building "actual programmed tools" (like screenshot scripts) specifically for an LLM's consumption rather than a human's? Are there specific output formats or constraints you’ve found most effective?

taikahessu 10 hours ago [-]
Do you have any ideas on how to harness AI to only change specific parts of a system or workpiece? Like "I consider this part 80/100 done and only make 'meaningful' or 'new contributions' here" ...?
rthak 7 hours ago [-]
Now that the Nasdaq is crashing, people are switching from the stick to the carrot:

"Please let us sit down and have a reasonable conversation! I was a skeptic, too, but if all skeptics did what I did, they would come to Jesus as well! Oh, and pay the monthly Anthropic tithe!"

dudewhocodes 10 hours ago [-]
Refreshing to read a balanced opinion, from a person who has significant experience and grounding in the real world.
0xbadcafebee 11 hours ago [-]
> I'm not [yet?] running multiple agents, and currently don't really want to

This is the main reason to use AI agents, though: multitasking. If I'm working on some Terraform changes and I fire off an agent loop, I know it's going to take a while for it to produce something working. In the meantime I'm waiting for it to come back and pretend it's finished (really I'll have to fix it), so I start another agent on something else. I flip back and forth between the finished runs as they notify me. At the end of the day I have 5 things finished rather than two.

The "agent" doesn't have to be anything special either. Anything you can run in a VM or container (vscode w/copilot chat, any cli tool, etc) so you can enable YOLO mode.

apercu 11 hours ago [-]
I find it interesting that this thread is full of pragmatic posts that seem to honestly reflect the real limits of current Gen-Ai.

Versus other threads (here on HN, and especially on places like LinkedIn) where it's "I set up a pipeline and some agents and now I type two sentences and amazing technology comes out in 5 minutes that would have taken 3 devs 6 months to do".

jonathanstrange 11 hours ago [-]
There are so many stories about how people use agentic AI, but they rarely post how much they spend. Before I can even consider it, I need to know how much it will cost me per month. I'm currently using one pro subscription and it's already quite expensive for me. What are people doing, burning hundreds of dollars per month? Do they also evaluate how much value they get out of it?
JoshuaDavid 11 hours ago [-]
Low hundreds ($190 for me) but yes.
latchkey 11 hours ago [-]
I quickly run out of the 35 monthly JetBrains AI credits ($300/yr) and end up spending an additional $5-10/day on top of that, mostly for Claude.

I just recently added in Codex, since it comes with my $20/mo subscription to GPT and that's lowering my Claude credit usage significantly... until I hit those limits at some point.

$20 × 12 + $300 + roughly $5/day × ~200 days... so about $1,500-$1,600/year.

It is 100% worth it for what I'm building right now, but my fear is that I'll take a break from coding and then I'm paying for something I'm not using with the subscriptions.

I'd prefer to move to a model where I'm paying for compute time as I use it, instead of worrying about tokens/credits.

throwdbaaway 3 hours ago [-]
Not using Hot Aisle for inference?
latchkey 3 hours ago [-]
We're literally full. Just a few 1x GPUs available right now.

So far, I haven't been happy with any of the smaller coding models, they just don't compare to claude/codex.

whatifnomoney 10 hours ago [-]
[dead]
jeffrallen 11 hours ago [-]
> babysitting my kind of stupid and yet mysteriously productive robot friend

LOL, been there, done that. It is much less frustrating and demoralizing than babysitting your kind of stupid colleague though. (Thankfully, I don't have any of those anymore. But at previous big companies? Oh man, if only their commits were ONLY as bad as a bad AI commit.)

vonneumannstan 12 hours ago [-]
For the AI skeptics reading this, there is an overwhelming probability that Mitchell is a better developer than you. If he gets value out of these tools you should think about why you can't.
jorvi 11 hours ago [-]
The AI skeptics instead stick to hard data, which so far shows a 19% reduction in productivity when using AI.
simonw 11 hours ago [-]
https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...

> 1) We do NOT provide evidence that AI systems do not currently speed up many or most software developers. Clarification: We do not claim that our developers or repositories represent a majority or plurality of software development work.

> 2) We do NOT provide evidence that AI systems do not speed up individuals or groups in domains other than software development. Clarification: We only study software development.

> 3) We do NOT provide evidence that AI systems in the near future will not speed up developers in our exact setting. Clarification: Progress is difficult to predict, and there has been substantial AI progress over the past five years [3].

> 4) We do NOT provide evidence that there are not ways of using existing AI systems more effectively to achieve positive speedup in our exact setting. Clarification: Cursor does not sample many tokens from LLMs, it may not use optimal prompting/scaffolding, and domain/repository-specific training/finetuning/few-shot learning could yield positive speedup.

rhubarbtree 57 minutes ago [-]
Points 2 and 3 are irrelevant.

Point 1 is saying results may not generalise, which is not a counter claim. It’s just saying “we cannot speak for everyone”.

Point 4 is saying there may be other techniques that work better, which again is not a counter claim. It’s just saying “you may find better methods.”

Those are standard scientific statements giving scope to the research. They are in no way contradicting their findings. To contradict their findings, you would need similarly rigorous work that perhaps fell into those scenarios.

Not pushing an opinion here, but if we’re talking about research then we should be rigorous and rational by posting counter-evidence. Anyone who has done serious research in software engineering knows the difficulties involved and that this study represents one set of data. But it is at least a rigorous set and not anecdata or marketing.

I for one would love a rigorous study that showed a reliable methodology for gaining generalised productivity gains with the same or better code quality.

raincole 10 hours ago [-]
There is no such hard data. It's just research done on 16 developers using Cursor and Sonnet 3.5.
recursive 11 hours ago [-]
Perhaps that's the reason. Maybe I'm just not a good enough developer. But that's still not actionable. It's not like I never considered being a better developer.
z0r 12 hours ago [-]
I'm not as good as Fabrice Bellard either but I don't let that bother me as I go about my day.
silisili 1 hours ago [-]
I mean, not to say he's not, but by what metric?

If by company success, then Zuckerberg and Musk are better than all of us.

If by millions made, as he likes to joke/brag about... Fabrice Bellard is an utter failure.

If by install base, the geniuses that made MS Teams are among the best.

None of this is to take away from the successes of the man, but this kind of statement is rather silly.

dakiol 12 hours ago [-]
I don't get it. What's the relation between Mitchell being a "better" developer than most of us (and better is always relative, but that's another story) and getting value out of AI? That's like saying Bezos is a way better businessman than you, so you should really hear his tips about becoming a billionaire. It makes no sense, because what works for him probably doesn't work for you.

Tons of respect for Mitchell. I think you are doing him a disservice with these kinds of comments.

tux1968 12 hours ago [-]
Maybe you disagree with it, but it seems like a pretty straightforward argument: A lot of us dismiss AI because "it can't be trusted to do as good a job as me". The OP is arguing that someone, who can do better than most of us, disagrees with this line of thinking. And if we have respect for his abilities, and recognize them as better than our own, we should perhaps re-assess our own rationale in dismissing the utility of AI assistance. If he can get value out of it, surely we can too if we don't argue ourselves out of giving it a fair shake. The flip side of that argument might be that you have to be a much better programmer than most of us are, to properly extract value out of the AI... maybe it's only useful in the hands of a real expert.
bigstrat2003 3 hours ago [-]
No, it doesn't work that way. I don't know if Mitchell is a better programmer than me, but let's say he is for the sake of argument. That doesn't make him a god to whom I must listen. He's just a guy, and he can be wrong about things. I'm glad he's apparently finding value here, but the cold hard reality is that I have tried the tools and they don't provide value to me. And between another practitioner's opinion and my own, I value my own more.
jplusequalt 12 hours ago [-]
>A lot of us dismiss AI because "it can't be trusted to do as good a job as me"

Some of us enjoy learning how systems work, and derive satisfaction from the feeling of doing something hard, and feel that AI removes that satisfaction. If I wanted to have something else write the code, I would focus on becoming a product manager, or a technical lead. But as is, this is a craft, and I very much enjoy the autonomy that comes with being able to use this skill and grow it.

mitchellh 12 hours ago [-]
There is no dichotomy of craft and AI.

I consider myself a craftsman as well. AI gives me the ability to focus on the parts I both enjoy working on and that demand the most craftsmanship. A lot of what I use AI for and show in the blog isn’t coding at all, but a way to allow me to spend more time coding.

This reads like you maybe didn’t read the blog post, so I’ll mention there are many examples there.

jplusequalt 12 hours ago [-]
[flagged]
fizx 12 hours ago [-]
I enjoy Japanese joinery, but for some reason the housing market doesn't.
tux1968 12 hours ago [-]
Nobody is trying to talk anyone out of their hobby or artisanal creativeness. A lot of people enjoy walking, even after the invention of the automobile. There's nothing wrong with that, there are even times when it's the much more efficient choice. But in the context of say transporting packages across the country... it's not really relevant how much you enjoy one or the other; only one of them can get the job done in a reasonable amount of time. And we can assume that's the context and spirit of the OP's argument.
mold_aid 12 hours ago [-]
>Nobody is trying to talk anyone out of their hobby or artisanal creativeness.

Well, yes, they are, some folks don't think "here's how I use AI" and "I'm a craftsman!" are consistent. Seems like maybe OP should consider whether "AI is a tool, why can't you use it right" isn't begging the question.

Is this going to be the new rhetorical trick, to say "oh hey surely we can all agree I have reasonable goals! And to the extent they're reasonable you are unreasonable for not adopting them"?

jplusequalt 12 hours ago [-]
>But in the context of say transporting packages across the country... it's not really relevant how much you enjoy one or the other; only one of them can get the job done in a reasonable amount of time.

I think one of the more frustrating aspects of this whole debate is the idea that software development pre-AI was too "slow", despite the fact that no other kind of engineering has nearly the same turnaround time as software engineering does (nor do they have the same return on investment!).

I just end up rolling my eyes when people use this argument. To me it feels like favoring productivity over everything else.

tux1968 11 hours ago [-]
[flagged]
SpicyLemonZest 3 hours ago [-]
The value Mitchell describes aligns well with the lack of value I'm getting. He feels that guiding an agent through a task is neither faster nor slower than doing it himself, and there's some tasks he doesn't even try to do with an agent because he knows it won't work, but it's easier to parallelize reviewing agentic work than it is to parallelize direct coding work. That's just not a usage pattern that's valuable to me personally - I rarely find myself in a situation where I have large number of well-scoped programming tasks I need to complete, and it's a fun treat to do myself when I do.
mold_aid 12 hours ago [-]
"Why can't you be more like your brother Mitchell?"
xyst 12 hours ago [-]
[flagged]
dang 11 hours ago [-]
"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."

"Don't be snarky."

"Don't be curmudgeonly. Thoughtful criticism is fine, but please don't be rigidly or generically negative."

https://news.ycombinator.com/newsguidelines.html

therein 13 hours ago [-]
[flagged]
dang 12 hours ago [-]
Ok, but please don't post unsubstantive comments to Hacker News.
alterom 12 hours ago [-]
>Underwhelming

Which is why I like this article. It's realistic in terms of describing the value proposition of LLM-based coding-assist tools (aka AI agents).

The fact that it's underwhelming compared to the hype we see every day is a very, very good sign that it's practical.

stronglikedan 12 hours ago [-]
most AI adoption journeys are
polyrand 11 hours ago [-]
> a period of inefficiency

I think this is something people ignore, and is significant. The only way to get good at coding with LLMs is actually trying to do it. Even if it's inefficient or slower at first. It's just another skill to develop [0].

And it's not really about using all the plugins and features available. In fact, many plugins and features are counter-productive. Just learn how to prompt and steer the LLM better.

[0]: https://ricardoanderegg.com/posts/getting-better-coding-llms...
