That AI would be writing 90% of the code at Anthropic was not a "failed prediction". If we take Anthropic's word for it, their agents are now writing 100% of the code:
https://fortune.com/2026/01/29/100-percent-of-code-at-anthro...
Of course you can choose to believe that this is a lie and that Anthropic is hyping their own models, but it's impossible to deny the enormous revenue the company is generating via the products whose development they now hand almost entirely to coding agents.
daxfohl 1 hour ago [-]
I think it all boils down to, which is higher risk, using AI too much, or using AI too little?
Right now I see the former as being hugely risky. Hallucinated bugs, coaxed into dead-end architectures, security concerns, not being familiar with the code when a bug shows up in production, less sense of ownership, less hands-on learning, etc. This is true both at the personal level and at the business level.
The latter, you may be less productive than optimal, but might the hands-on training and fundamental understanding of the codebase make up for it in the long run?
Additionally, I personally find my best ideas often happen when knee deep in some codebase, hitting some weird edge case that doesn't fit, that would probably never come up if I was just reviewing an already-completed PR.
mgraczyk 1 minute ago [-]
Even if you believe that many are too far on one side now, you have to account for the fact that AI will get better rapidly. If you're not using it now you may end up lacking preparation when it becomes more valuable
softwaredoug 16 minutes ago [-]
Even within AI coding, how people use this varies wildly, from people trying to one-shot whole apps to people barely going beyond tab completion.
When people talk about this stuff they usually mean very different techniques. And last month's way of doing it goes away in favor of a new technique.
I think the best you can do now is try lots of different new ways of working and keep an open mind.
_se 48 minutes ago [-]
Very reasonable take. The fact that this is being downvoted really shows how poor HN's collective critical thinking has become. Silicon Valley is cannibalizing itself and it's pretty funny to watch from the outside with a clear head.
daxfohl 43 minutes ago [-]
I think it's like the California gold rush. Anybody and their brother can go out and dig, but the real money is in selling the shovels.
runarberg 15 minutes ago [-]
This is basically Pascal’s wager. However, unlike the original Pascal’s wager, yours actually sounds sound.
Another similar wager I remember is: “What if climate change is a hoax, and we invested in all this clean energy infrastructure for nothing”.
jackfranklyn 43 minutes ago [-]
The bit about "we have automated coding, but not software engineering" matches my experience. LLMs are good at writing individual functions but terrible at deciding which functions should exist.
My project has a C++ matching engine, Node.js orchestration, Python for ML inference, and a JS frontend. No LLM suggested that architecture - it came from hitting real bottlenecks. The LLMs helped write a lot of the implementation once I knew what shape it needed to be.
Where I've found AI most dangerous is the "dark flow" the article describes. I caught myself approving a generated function that looked correct but had a subtle fallback to rate-matching instead of explicit code mapping. Two different tax codes both had an effective rate of 0, so the rate-match picked the wrong one every time. That kind of domain bug won't get caught by an LLM because it doesn't understand your data model.
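To make the failure mode concrete, here's a rough sketch (hypothetical names, nothing from the actual codebase) of what that kind of silent rate-matching fallback can look like:

    # Sketch of the reviewed bug: an explicit mapping with a silent rate-match fallback.
    TAX_CODE_RATES = {"EXEMPT_EXPORT": 0.0, "ZERO_RATED_FOOD": 0.0, "STANDARD": 0.2}
    EXPLICIT_MAPPING = {"invoice_std": "STANDARD"}  # deliberately incomplete

    def resolve_tax_code(line_item_code, effective_rate):
        # Correct path: an explicit, reviewed mapping.
        if line_item_code in EXPLICIT_MAPPING:
            return EXPLICIT_MAPPING[line_item_code]
        # Subtle fallback: match on the effective rate instead.
        # Two codes share a rate of 0.0, so iteration order decides which one
        # "wins", and it's the wrong one for zero-rated items every time.
        for code, rate in TAX_CODE_RATES.items():
            if rate == effective_rate:
                return code
        raise LookupError(f"no tax code found for {line_item_code}")

    print(resolve_tax_code("zero_rated_food_item", 0.0))  # "EXEMPT_EXPORT", not "ZERO_RATED_FOOD"

It looks plausible in a diff, and nothing fails loudly; only knowledge of the data model tells you the fallback is wrong.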
Architecture decisions and domain knowledge are still entirely on you. The typing is faster though.
Kerrick 1 hour ago [-]
> However, it is important to ask if you want to stop investing in your own skills because of a speculative prediction made by an AI researcher or tech CEO.
I don't think these are exclusive. Almost a year ago, I wrote a blog post about this [0]. I spent the time since then both learning better software design and learning to vibe code. I've worked through Domain-Driven Design Distilled, Domain-Driven Design, Implementing Domain-Driven Design, Design Patterns, The Art of Agile Software Development, 2nd Edition, Clean Architecture, Smalltalk Best Practice Patterns, and Tidy First?. I'm a far better software engineer than I was in 2024. I've also vibe coded [1] a whole lot of software [2], some good and some bad [3].
You can choose to grow in both areas.
[0]: https://kerrick.blog/articles/2025/kerricks-wager/
[1]: As defined in Vibe Coding: Building Production-Grade Software With GenAI, Chat, Agents, and Beyond by Gene Kim and Steve Yegge, wherein you still take responsibility for the code you deliver.
[2]: https://news.ycombinator.com/item?id=46702093
[3]: https://news.ycombinator.com/item?id=46719500
I'm doing a similar thing. Recently, I got $100 to spend on books. The first two books I got were A Philosophy of Software Design, and Designing Data-Intensive Applications, because I asked myself, out of all the technical and software engineering related books that I might get, given agentic coding works quite well now, what are the most high impact ones?
And it seemed pretty clear to me that the highest-impact books would be the ones about the evergreen software engineering and architecture concepts that you still need a human to design and think through carefully today, because LLMs don't have the judgment or the high-level view for that, rather than the specific API surface area or syntax of particular frameworks, libraries, or languages, which LLMs, IDE completion, and online documentation mostly handle.
Especially since well-designed software systems, with deep and narrow module interfaces, maintainable and scalable architectures, well-chosen underlying technologies, clear data flow, and so on, are all things that can vastly increase the effectiveness of an AI coding agent, because they mean it needs less context to understand things, can reason more locally, etc.
To be clear, this is not about not understanding the paradigms, capabilities, or affordances of the tech stack you choose, either! The next books I plan to get are things like Modern Operating Systems, Data-Oriented Design, Communicating Sequential Processes, and The Go Programming Language, because low-level concepts, too, are things you can direct an LLM to optimize, if you give it the algorithm, but which they won't do very well themselves, and they are generally also evergreen and not subsumed in the "platform minutiae" described above.
Likewise, stretching your brain with new paradigms — actor-oriented, Smalltalk OOP, Haskell FP, Clojure FP, Lisp, etc — gives you new ways to conceptualize and express your algorithms and architectures, and to judge and refine the code your LLM produces, and ideas like BDD, PBT, lightweight formal methods (like model checking), etc, all provide direct tools for modeling your domain, specifying behavior, and testing it far better, which then allow you to use agentic coding tools with more safety or confidence (and a better feedback loop for them) — at the limit almost creating a way to program declaratively in executable specifications, and then convert those to code via LLM, and then test the latter against the former!
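As one small illustration of that last idea, a property-based test can act as an executable spec that agent-written code has to satisfy. Sketched here with Python's hypothesis library; the function under test is just an illustrative stand-in:

    from hypothesis import given, strategies as st

    def normalize_whitespace(s: str) -> str:
        # Stand-in for some agent-written function under test.
        return " ".join(s.split())

    @given(st.text())
    def test_idempotent(s):
        # Property: normalizing twice changes nothing beyond the first pass.
        once = normalize_whitespace(s)
        assert normalize_whitespace(once) == once

    @given(st.text())
    def test_no_runs_of_spaces(s):
        # Property: the result never contains consecutive spaces.
        assert "  " not in normalize_whitespace(s)

The properties describe behavior over all inputs rather than a handful of examples, which is exactly the kind of feedback loop an agent can be pointed at.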
ithkuil 1 hour ago [-]
I personally found that knowing how to use AI coding assistants productively is a skill like any other: a) it requires a significant investment of time, b) it can be quite rewarding to learn, just like any other skill, c) it might be useful now or in the future, and d) it doesn't negate the usefulness of any other skills acquired in the past, nor does it diminish the usefulness of learning new skills in the future.
secbear 8 minutes ago [-]
Agreed, my experience and code quality with Claude Code and agentic workflows have dramatically improved since investing in learning how to properly use these tools. Ralph Wiggum based approaches and HumanLayer's agents/commands (in their .claude/) have boosted my productivity the most. https://github.com/snwfdhmp/awesome-ralph and https://github.com/humanlayer
pipes 1 hours ago [-]
On using AI assistants: I find that everything is moving so fast that I constantly feel like "I'm doing this wrong". Is the answer simply "dedicate time to experimenting"? I keep hearing "spec driven design" or "Ralph"; maybe I should learn those? Genuine thoughts and questions btw.
gnatolf 38 minutes ago [-]
More specifically regarding spec-driven development:
There's a good reason that most successful examples of those tools like openspec are to-do apps etc. As soon as the project grows to a 'relevant' size of complexity, maintaining specs is just as hard as whatever any other methodology asks of you. Also, from my brief attempts: similar to human-based coding, we actually do quite well with incomplete specs. So do agents, but they'll shrug at all the implicit things much more than humans do. So you'll see more flip-flopping on things you did not specify, and if you nail everything down hard, the specs get unwieldy: large and overly detailed.
gnatolf 46 minutes ago [-]
Everybody feels like this, and I think nobody stays ahead of the curve for a prolonged time. There's just too many wrinkles.
But also, you don't have to upgrade every iteration. I think it's absolutely worthwhile to step off the hamster wheel every now and then, just work with your head down for a while and come back after a few weeks. One notices that even though the world didn't stop spinning, you didn't get the whiplash of every rotation.
bobthepanda 50 minutes ago [-]
I think you should find what works for you, and everything else is kind of noise.
At the end of the day, it doesn’t matter if a cat is black or white so long as it catches mice.
——
I've also found that picking something and learning about it helps me with mental models for picking up other paradigms later, similar to how learning Java doesn't actually prevent you from, say, picking up Python or JavaScript.
mattmanser 44 minutes ago [-]
As someone with 20 years experience, DDD is a stupid idea, skip it and do yourself a favour.
You'll probably be forming some counter-arguments in your head.
Skip them, throw the DDD books in the bin, and do your co-workers a favour.
skydhash 2 minutes ago [-]
DDD is quite nice as a philosophy. Like group state based on behavioral similarity and keep mutation and query functions close, model data structures from domain concepts and not the inverse, pay attention to domain boundaries (an entity may be read-only in some domain and have fewer state transitions than in another).
But it should be a philosophy, not a directive. There are always tradeoffs to be made, and DDD may be the one to be sacrificed in order to get things done.
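For a rough illustration of that read-only/fewer-transitions point (names invented, not from any particular codebase), the same order concept can be modeled differently in two contexts:

    from dataclasses import dataclass
    from enum import Enum

    class FulfillmentState(Enum):
        RECEIVED = "received"
        PICKED = "picked"
        SHIPPED = "shipped"

    @dataclass
    class FulfillmentOrder:
        # Fulfillment context: the order is mutable and has a full state machine.
        order_id: str
        state: FulfillmentState = FulfillmentState.RECEIVED

        def ship(self) -> None:
            if self.state is not FulfillmentState.PICKED:
                raise ValueError("only a picked order can be shipped")
            self.state = FulfillmentState.SHIPPED

    @dataclass(frozen=True)
    class BillingOrder:
        # Billing context: the same order is a read-only view with fewer states;
        # billing only cares whether it can invoice, not how fulfillment proceeds.
        order_id: str
        invoiceable: bool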
Trasmatta 34 minutes ago [-]
Agreed. I find most design patterns end up as a mess eventually, at least when followed religiously. DDD being one of the big offenders. They all seem to converge on the same type of "over engineered spaghetti" that LOOKS well factored at a glance, but is incredibly hard to understand or debug in practice.
bikelang 1 hour ago [-]
Of those 3 DDD books - which did you find the most valuable?
pipes 1 hour ago [-]
I was going to ask the same thing. I'm self-taught but I've mainly gone the other way, more interested in learning about lower-level things. Bang for buck, I think I might have been better off reading DDD-type books.
abcde666777 7 minutes ago [-]
It's astonishing to me that real software developers have considered it a good idea to generate code... and not even look at the code.
I would have thought sanity checking the output to be the most elementary next step.
strawhatguy 11 minutes ago [-]
Speaking just for myself, AI has allowed me to start doing projects that seemed daunting at first, as it automates much of the tedious act of actually typing code from the keyboard, and keeps me at a higher level.
But yes, I usually constrain my plans to one function, or one feature. Too much and it goes haywire.
I think a side benefit is that I think more about the problem itself, rather than the mechanisms of coding.
strawhatguy 9 minutes ago [-]
Actually, I wonder how they measured the 'speed' of coding, maybe I missed it. But if developers can spend more time thinking about the larger problems, that may be a cause of the slowdown. I guess it remains to be seen if the code quality or feature set improves.
theYipster 1 hour ago [-]
Just because you’re a good programmer / software engineer doesn’t mean you’re a good architect, or a good UI designer, or a good product manager. Yet in my experience, using LLMs to successfully produce software really works those architect, designer, and manager muscles, and thus requires them to be strong.
LPisGood 52 minutes ago [-]
I really disagree with this. I don’t think you can be a good software engineer without being a good product manager and a good architect.
nkmnz 15 minutes ago [-]
> A study from METR found that when developers used AI tools, they estimated that they were working 20% faster, yet in reality they worked 19% slower. That is nearly a 40% difference between perceived and actual times!
It’s not. It’s either 33% slower than perceived or perception overestimates speed by 50%. I don’t know how to trust the author if stuff like this is wrong.
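Rough sketch of the arithmetic, assuming both percentages measure task time against a no-AI baseline of 1.0, and reading "20% faster" as taking 0.8x the baseline time (an assumption):

    actual_time = 1.19     # 19% slower: tasks took 1.19x the no-AI baseline time
    perceived_time = 0.80  # "20% faster" read as 0.8x the baseline time

    actual_speed = 1 / actual_time
    perceived_speed = 1 / perceived_time

    print(1 - actual_speed / perceived_speed)  # ~0.33: actual pace is ~33% slower than perceived
    print(perceived_speed / actual_speed - 1)  # ~0.49: perceived speed is ~50% above actual

Neither ratio comes out anywhere near 40%, which is the point of the objection.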
piker 12 minutes ago [-]
I get caught up personally in this math as well. Is a charitable interpretation of the throwaway line that they were off by that many “percentage points”?
regular_trash 12 minutes ago [-]
Can you elaborate? This seems like a simple mistake if they are incorrect, I'm not sure where 33% or 50% come from here.
softwaredoug 11 minutes ago [-]
Isn't the study a year old by now? Things have evolved very quickly in the last few months.
tjr 1 hour ago [-]
I see AI coding as something like project management. You could delegate all of the tasks to an LLM, or you could assign some to yourself.
If you keep some for yourself, there’s a possibility that you might not churn out as much code as quickly as someone delegating all programming to AI. But maybe shipping 45,000 lines a day instead of 50,000 isn’t that bad.
written-beyond 48 minutes ago [-]
You need to understand the frustration behind these kinds of posts.
The people on the start of the curve are the ones who swear against LLMs for engineering, and are the loudest in the comments.
The people on the end of the curve are the ones who spam about only vibing, never looking at code, and who are attempting to build a new expectation that the interaction layer for software should be LLM-exclusive. These ones are the loudest on posts/blogs.
The ones in the middle are people who accept using LLMs as a tool, and like with all tools they exercise restraint and caution. Because waiting 5 to 10 seconds each time for an LLM to change the color of your font, and getting it wrong is slower than just changing it yourself. You might as well just go in and do these tiny adjustments yourself.
It's the engineers at both ends that have made me lose my will to live.
samename 19 minutes ago [-]
The addiction aspect of this is real. I was skeptical at first, but this past week I built three apps and experienced issues with stepping away or getting enough sleep. Eventually my discipline kicked in to make this a more healthy habit, but I was surprised by how compelling it is to turn ideas into working prototypes instantly. Ironically, the rate limits on my Claude and Codex subscriptions helped me to pace myself.
logicprog 3 minutes ago [-]
Isn't struggling to get enough sleep or shower enough and so on because you're so involved with the process of, you know, programming, especially interactive, exploratory programming with an immediate feedback loop, kind of a known phenomenon for programmers since essentially the dawn of interactive computing?
altcunn 1 hour ago [-]
The point about vibe coding eroding fundamentals resonates. I've noticed that when I lean too heavily on LLM-generated code, I stop thinking about edge cases and error handling — the model optimizes for the happy path and so do I. The real skill shift isn't coding vs not coding, it's learning to be a better reviewer and architect of code you didn't write yourself.
fnordpiglet 1 hour ago [-]
Fascinating - I find the opposite is true. I think of edge cases more and direct the exploration of them. I’ve found my 35 years experience tells me where the gaps will be and I’m usually right. I’ve been able to build much more complex software than before not because I didn’t know how but because as one person I couldn’t possibly do it. The process isn’t any easier just faster.
I’ve found also AI assisted stuff is remarkable for algorithmically complex things to implement.
However one thing I definitely identify with is the trouble sleeping. I am finally able to do a plethora of things I couldn’t do before due to the limits of one man typing. But I don’t build tools I don’t need, I have too little time and too many needs.
thehamkercat 30 minutes ago [-]
> when I lean too heavily on LLM-generated code, I stop thinking about edge cases and error handling
I have the exact same experience... if you don't use it, you'll lose it
lazystar 29 minutes ago [-]
i used to lose hours each day to typos, linting issues, bracket-instead-of-curly-bracket, 'was it the first parameter or the second parameter', looking up accumulator/anonymous function callback syntax AGAIN...
idk what ya'll are doing with AI, and i dont really care. i can finally - fiiinally - stay focused on the problem im trying to solve for more than 5 minutes.
ozim 18 minutes ago [-]
idk what you’re doing, but a proper IDE was doing that for me for the past 15 years or more.
Like I don’t remember syntax or linting or typos being a problem since I was in high school doing Turbo Pascal or Visual Basic.
mathgladiator 2 hours ago [-]
I've come to the realization, after maxing the x20 plan, that I have to set clear priorities.
Fortunately, I've retired, so I'm going to focus on flooding the zone with my crazy ideas made manifest in books.
nkmnz 9 minutes ago [-]
tl;dr - the author cites a study from early 2025 which measured the speed of “experienced open source developers” to be ~20% slower when supported by AI, while they estimated themselves to be ~20% faster.
Note: the study used sonnet-3.5 and sonnet-3.7; there weren’t any agents, deep research or similar tools available. I’d like to see this study done again with:
1. juniors and mid-level engineers
2. opus-4.6 high and codex-5.2 xhigh
3. Tasks that require upfront research
4. Tasks that require stakeholder communication, which can be facilitated by AI
cmrdporcupine 48 minutes ago [-]
"they don’t produce useful layers of abstraction nor meaningful modularization. They don’t value conciseness or improving organization in a large code base. We have automated coding, but not software engineering"
Which frankly describes pretty much all real world commercial software projects I've been on, too.
Software engineering hasn't happened yet. Agents produce big balls of mud because we do, too.
Barrin92 37 minutes ago [-]
which is why the most famous book in the world of software development pointed out that the long-term success of a software project is not defined by man-hours or lines of code written but by documentation, clear interfaces and the capacity to manage the complexity of a project.
Maybe they need to start handing out copies of The Mythical Man-Month again, because people seem to be oblivious to insights we already had a few decades ago.