Nobody knows how the whole system works (surfingcomplexity.blog)
rorylaitila 7 hours ago [-]
There are many layers to this. But there is one style of programming that concerns me: one where you neither understand the layer above you (why the product exists and what the goal of the system is) nor the layer below (how to actually implement the behavior). In the past, many developers barely understood the business case, but at least they understood how to translate it into code, and could put backpressure on the business. Now, however, it's apparently not even necessary to know how the code works!

The argument seems to be, we should float on a thin lubricant of "that's someone else's concern" (either the AI or the PMs) gliding blissfully from one ticket to another. Neither grasping our goal nor our outcome. If the tests are green and the buttons submit, mission accomplished!

Using Claude I can feel my situational awareness slipping from my grasp. It's increasingly clear that this style of development pushes you to stop looking at any of the code at all. My English instructions do not leave any residual growth. I learn nothing to send back up the chain, and I know nothing of what's below. Why should I exist?

scottLobster 6 hours ago [-]
The irony is "ownership" is a common management talking point, but when you actually try to take ownership you inevitably run into walls of access, a lack of information, and generally a "why are you here?" mentality.

Granted one person can't know/do everything, but large companies in particular seem allergic to granting you any visibility whatsoever. It's particularly annoying when you're given a deadline, bust your ass working overtime to make it, only to discover that said deadline got extended at a meeting you weren't invited to and nobody thought to tell you about it. Or worse, they were doing some dark management technique of "well he's really hauling ass right now, if he makes the original deadline we'll be ahead of schedule, and if he doesn't we have the spare capacity".

If the expectation is I'm a tool for management to use, then I'll perform my duties to the letter and no further. If the expectation is ownership, then I need to at least sit at the cool kids' table and maybe even occasionally speak when I have something relevant to contribute.

lazystar 5 hours ago [-]
> Granted one person can't know/do everything,

watch me try, at least.

> but large companies in particular seem allergic to granting you any visibility whatsoever. It's particularly annoying

If the blind spot is directly causing customer pain, find metrics that demonstrate the impact. If it ends up driving away your customers, then your company is securing itself to death.

robocat 3 hours ago [-]
> customer pain > driving away your customers > company death

You are implying efficient market theory, which is bunk.

Example: Our banks have endless painful papercuts yet most of us don't change banks just because of one pain.

We each respond to our own complex of costs and benefits (or risks versus rewards).

Second example: I use an iPhone because I judge it to be more secure yet I'm constantly fighting the same bugs and misfeatures that seem to never get fixed/improved.

Is your chain of reasoning broken? Or is it your model of the world?

order-matters 2 hours ago [-]
>Or worse, they were doing some dark management technique of "well he's really hauling ass right now, if he makes the original deadline we'll be ahead of schedule, and if he doesn't we have the spare capacity"

As a business analyst who has worked a lot with executive teams at multiple companies, I can say this is almost always the case (ime). Deadlines are only shortened down the chain, never extended. The assumption is that if it cannot be done, then they will simply not administer any consequences and classify it as "not realizing the upside".

The only reason it is almost always, and not always, is that sometimes a different thing pops up that needs to get prioritized first, so it is communicated that the first thing isn't actually as important as it was yesterday and this other thing is now the most important.

Now obviously I can't speak for everyone on all teams, but as far as boring corporate default behavior goes, this is the safe path for executives. If your boss is doing otherwise, they are going out of their way to do it.

The takeaway as a worker is that you should not treat any business goal or deadline asked of you with the same level of care as you would a personal favor. When something really needs that level of care, your boss should pull you aside, break character, and make it a personal favor to them, not the business.

As far as "Ownership" goes, it is just a pissing contest as far as I can tell. if you own a task but cant do the task, you just send an email to someone who can do the task so that the task gets done and you can report the task is done and get your ownership credit. the person who did the task was used as a tool in this regard. So high performing managers just try to get ownership of as much as they possibly can, as there's no meaningful difference between who sends that email.

toyg 5 hours ago [-]
Ownership doesn't imply FULL ownership. You get handed ownership of a slice and are expected to be responsible for that bit of land; but you'll never own the farm, and will likely never be consulted on whether that land should become a car park from Tuesday. That's just how capitalism works.
Jtsummers 4 hours ago [-]
> and will likely never be consulted on whether that land should become a car park from Tuesday.

Then you don't have ownership. If this rug pull can be performed, what you have is responsibility without ownership or authority.

scottLobster 3 hours ago [-]
That's called renting. And even renters have rights per whatever lease they signed and local laws.

It's a simple formula. If you want me to be personally invested in my work and go above and beyond, then I need the motivation to do that. So either grant me a reasonable level of professional input, such that my opinion is valued and I'm helping the mission succeed, or pay me for said extra effort (this can be opportunities for promotion, direct overtime pay, career advancement, etc). If you want me super-motivated you can even do both!

If we're playing hardball "you're some lowly IC nerd without an MBA or connections and we're here to make money so fuck you" capitalism, well, the only serious leverage I have is to take my talents where they're most appreciated. So you'll get exactly what you pay for until I find something better, and aside from some professional courtesy I'll be looking. Maybe you're fine with that, but if you start preaching "ownership" of the product, just be aware that the entire dev team is going to pay you lip service and then laugh as soon as you're out of the room, and we clock out at 5:00, even if we don't on paper. Except for poor Bob who, due to life/family commitments, has no option to leave and needs to rationalize his situation even though he agrees with us. Sometimes we'll tone it down just so he doesn't feel too bad about being trapped. Regardless, in that environment we take ownership of our careers, not our work.

I've worked both types of jobs. I'd say the former worked the best for all involved, but the latter has its place and is fine so long as everyone acknowledges what game we're playing and expectations are set appropriately.

cortesoft 5 hours ago [-]
Strangely, I feel that using Claude helps me stay MORE focused on what I am actually trying to accomplish.

In the prior 30 years of my programming life, so much time was spent "yak shaving"... setting up all the boilerplate, adding the basic functionality you always have to do, setting up support systems, etc. With Claude, all of those things are so quick to complete that I can stay focused on what I am actually trying to do, and can therefore keep more of the core functionality I care about in my head. I don't have to push the core, novel parts of my work aside to do the parts that are the same across other projects.

Insanity 3 hours ago [-]
But apart from side projects, these truly new setups happen rarely. When working at a company, you are probably working on an already established codebase with known patterns.

So what you say is true about boilerplate reduction, but that’s not a huge ROI for enterprise software.

(Some exceptions apply; there's always some setup work for a new microservice, etc. But even those don't happen weekly or even monthly.)

konschubert 58 minutes ago [-]
I don't know. Today I had something break because of a uv update on a very legacy piece of code.

(Not complaining - it was a good update that revealed a bug in our code.)

I really don't care much anymore to learn about the history of Python packaging. Claude fixed it for me and that was it.

zen4ttitude 2 hours ago [-]
You sit on the arbitrating layer. It sounds like you have some programming experience, as you miss looking at the code. I did check a lot of code and got tired of how perfect it was, generated by AI. The latest Claude is insane; it hardly makes errors. You still have to guide it, and it will occasionally go astray and make rookie mistakes. If you extrapolate to 6 months down the road, one Claude API = a team of 10 programmers, none of them sloppy or undocumented. So if one wants to remain pertinent in this new economy of infinite coding, one should go back to project management and learn best practices and software fundamentals. I think there will be a lot of demand for ex-programmers able to steer AI toward the most efficient stack, the best architecture and the most optimised deployment. Notions of recurring cost, maintenance and security will help for sure.
internet2000 1 hours ago [-]
You should absolutely try your hardest to learn the layer above you. If your organization won't volunteer the info easily that's unfortunate, but you definitely have to try.
ChrisMarshallNY 4 hours ago [-]
Well, to be fair, we've already been there, for many years (dependency hell). In those cases, LLMs are likely to actually improve things.

For myself, I like to know what's going on, to a certain extent, but appreciate abstraction.

I am also aware that people like me probably don't make commercial sense, but that's already been the case for quite some time.

ffsm8 6 hours ago [-]
I am still missing something like Claude Code that's less "hands-off" and optimizes for small edits instead of full feature development.

Like you're sitting in your IDE, select a few lines, press (for example) Caps Lock to activate speech, and then just say a short line about what it should adjust, or similar - which is then staged, with the next adjustments done via the same UX.

Like saying "okay, I need a new usecase here, let's start by making a function to do y. [Function appears] great, we need to wire with object into it [point at class] [LLM backtracking code path via language server until it finds it and passes things through]

The main blocking issue to that UX would likely be the speed of the response, as the transcription would be pretty much instant, but the coding prompt after would still take a few moments to be good... And such an interactive approach would feel a lot better with speed.

Too bad nobody seems to target combined mouse+voice control for LLMs yet. It would even double as a fantastic accessibility tool for people suffering from various typing-related issues.

devin 4 hours ago [-]
The level of exposition required for a lot of edits you might want to make is what stops this from being a primary method of interaction. If I have to express >= AND <= AND NOT == OR ... then I may as well write the thing myself.
nsm 6 hours ago [-]
Aider has an IDE mode close to this. Check out https://nikhilism.com/post/2026/nudge-skill/ to add similar behavior to certain agents. I, too, am waiting for IDEs to do this in a polished way; next-tab edit is not quite it.
jmalicki 2 hours ago [-]
In Cursor you highlight and hit Ctrl-L, and use the voice prompting - I can do this today!
raw_anon_1111 6 hours ago [-]
The average tenure of a developer was, for the longest time, 2.5 years - not to mention developers changing teams. Even before AI, many developers didn't know how the code they were brought in to maintain worked.

> My English instructions do not leave any residual growth. I learn nothing to send back up the chain, and I know nothing of what's below. Why should I exist?

When you use Claude Code, tell it to keep a markdown file updated with the what and the why. Instead of just "Do $y", say "Because of $x I need to do $y". If that is kept updated in the markdown file, it will be recorded, and sometimes the agent will come up with code and make changes that are correct but cover use cases you didn't think about. You can then even ask it why it did $x that you weren't expecting - and, oh yeah, it was right.
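For example, an entry in such a decision log might look like this (a hypothetical sketch; the feature and reasons are invented):

    ## 2025-06-02: queue bulk exports
    Because of $x (bulk exports were timing out at the gateway),
    I need to do $y (move exports to a background queue and poll).
    Agent addition: retry polling after a worker restart - a case
    I hadn't asked for, but correct.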

> Why should I exist?

That’s the wrong question, the correct question is “why is my employer paying me?”. Your employer is paying you to turn well defined requirements into working code to either make them money or to save them money if (the royal) you are a mid level ticket taker. If someone is working at that level, that’s what they are regardless of title.

No one cares if either you or the LLM decided to use a for loop or a while loop.

At higher levels you are responsible for taking your $n number of years of experience and turning more ambiguous, more impactful, larger-scoped projects into working implementations that are done on time, on budget, and meet requirements. Before LLMs, that meant a combination of my own coding, putting a team together and delegating, and telling my director/CTO that this isn't something we should be doing in house (i.e. a Salesforce or Workday integration) at all.

Now add to the mix, among all those resources, a coding agent. In either case, as anything above a ticket taker, I probably haven't looked at a line of code first. I test whether it meets the functional and non-functional requirements, and then mostly look at the hot spots - concurrency issues, security issues, and any scalability issues that are obvious - before I hammer it with real-world-like traffic: web requests, or transactions for an ETL job.

And before the pearl clutching starts, I started programming as a hobby in the 80s in assembly and spent the first decade and a half of my career doing C bit twiddling on multiple mainframes, PCs, and later Windows CE devices.

beej71 5 hours ago [-]
>At higher levels you are responsible for taking your $n number of years of experience and turning more ambiguous, more impactful, larger-scoped projects into working implementations that are done on time, on budget, and meet requirements.

Is this not a job for LLMs, though?

raw_anon_1111 5 hours ago [-]
LLMs are good at turning well defined requirements to code.

But even now it’s struggling on a project to understand the correlation between “It is creating Lambda code to do $x meaning it needs to change the corresponding IAM role in CloudFormation to give it permission it needs”

Spivak 5 hours ago [-]
The LLMs are fantastic at writing terraform when you tell it what to do which is a huge timesaver, but good heavens is it terrible at actually knowing what pieces need to be wired up for anything but the simplest cases. Job security for now I guess?
raw_anon_1111 4 hours ago [-]
I was able to one-shot CDK, Terraform and CloudFormation on my last three projects respectively (different clients, different IaC). But I was really detailed about everything I needed, and I fed ChatGPT the diagram.

I guess I could be more detailed in the prompt/md files: every time it changes Lambda code, check the permissions in the corresponding IaC, and check whether a new VPC endpoint is needed.
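To make the correlation concrete, here is a minimal CDK sketch in Python (assuming a recent CDK v2; the stack, function and bucket names are made up):

    from aws_cdk import Stack, aws_iam as iam, aws_lambda as lambda_
    from constructs import Construct

    class EtlStack(Stack):
        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)

            # Generating the function itself is the part the LLM handles well.
            fn = lambda_.Function(
                self, "EtlWorker",
                runtime=lambda_.Runtime.PYTHON_3_12,
                handler="index.handler",
                code=lambda_.Code.from_asset("lambda"),
            )

            # This is the step it tends to miss: granting the role
            # whatever the newly generated code now calls.
            fn.add_to_role_policy(iam.PolicyStatement(
                actions=["s3:GetObject"],
                resources=["arn:aws:s3:::example-bucket/*"],
            ))

The point is that the second block has to change in lockstep with whatever the Lambda body now does, and that cross-file dependency is exactly what the agent loses track of.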

anal_reactor 6 hours ago [-]
In an ideal scenario, you want your employees to be fungible. You don't want any irreplaceable individuals who hold the entire organization hostage. One way of defending yourself against such individuals is ensuring that all members have enough knowledge to take over other people's roles, at least after some brief training. The problem is, maintaining a high level of competency and transparency is very expensive. The other solution is when your organization is a complete mess and nobody knows what they're doing anyway. Yes, this results in your organization being inefficient, but this inefficiency might actually be cheap in the grand scheme of things.
planb 6 hours ago [-]
This article is about people using abstractions without knowing how they work. This is fine. This is how progress is made.

But someone designed the abstraction (e.g. the Wifi driver, the processor, the transistor), and they made sure it works and provides an interface to the layers above.

Now you could say a piece of software completely written by a coding agent is just another abstraction, but the article does not really make that point, so I don't see what message it tries to convey. "I don't understand my wifi driver, so I don't need to understand my code" does not sound like a valid argument.

ragall 1 hours ago [-]
> Now you could say a piece of software completely written by a coding agent is just another abstraction

You're almost there. The current code-generating LLMs will be a dead end because it takes more time to thoroughly review a piece of code than to generate it, especially because LLM code is needlessly verbose.

The solution is to abandon general-purpose languages and start encapsulating the abstraction behind a DSL, which is orders of magnitude more restricted, and thus simpler, than a general-purpose language, making it much more amenable to being controlled through an LLM. SaaS companies should go from API-first to DSL-first, in many cases with more than one DSL: e.g. a blog-hosting company would have one DSL for page layouts, one for controlling edits and publishing, one for asset manipulation pipelines, one for controlling the CDN, etc. Sort of like IaC: you define a desired outcome, and the engine behind it takes care of actuating it.
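As a toy illustration of how constrained such a surface can be, here is a hypothetical publishing DSL sketched in Python (the verbs and post names are invented):

    # Toy interpreter for a hypothetical publishing DSL.
    # With a grammar of three verbs, an LLM has far less room
    # to go wrong than in a general-purpose language.
    VERBS = {"publish", "unpublish", "schedule"}

    def run(script: str) -> list[tuple[str, ...]]:
        plan = []
        for lineno, line in enumerate(script.strip().splitlines(), 1):
            tokens = line.split()
            if not tokens or tokens[0] not in VERBS:
                raise ValueError(f"line {lineno}: unknown verb")
            plan.append(tuple(tokens))
        # The engine behind the DSL would actuate this desired state.
        return plan

    print(run("publish post-42\nschedule post-43 2026-03-01"))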

mamcx 6 hours ago [-]
> This article is about people using abstractions without knowing how they work. This is fine. This is how progress is made.

The big problem is that there is now a real risk that most people will never be able to MAKE abstractions. Sure, let's stand on the shoulders of giants, but before AI most of us did some extra work and flexed our brains.

Everyone makes abstractions, and hiding the "accidental complexity" of my current task is good, but I should deal with the "necessary complexity" to say I have actually done a job.

If not, one is only being a dumb pipe...

Quothling 25 minutes ago [-]
To be fair to AI, it's not like Clean Code and its OOP cult weren't already causing L1-L3 cache misses with every abstraction and the way they spread their functions out over multiple files. I'm not sure AI can really make it worse than that, and it's been a gold standard in a lot of places for 25 years. For the most part it doesn't matter; in most software it'll cost you a little extra compute, but rarely noticeably. If you're writing software for something important, though, like one of those abstractions you talk about, then it's going to travel through everything - making it even more important to actually know what you're building upon.

Still, I'm not convinced AI is necessarily worse at reading the documentation and using the abstractions correctly than the programmers using the AI. If you don't know what you're doing, then does it matter if you utilise an AI instead of google programming?

matheus-rr 6 hours ago [-]
The dependency tree is where this bites hardest in practice. A typical Node.js project pulls in 800+ transitive dependencies, each with their own release cadence and breaking change policies. Nobody on your team understands how most of them work internally, and that's fine - until one of them ships a breaking change, deprecates an API, or hits end-of-life.

The anon291 comment about interface stability is exactly right. The reason you don't need to understand CPU microarchitecture is that x86 instructions from 1990 still work. Your React component library from 2023 might not survive the next major version. The "nobody knows how the whole system works" problem is manageable when the interfaces are stable and well-documented. It becomes genuinely dangerous when the interfaces themselves are churning.

What I've noticed is that teams don't even track which of their dependencies are approaching EOL or have known vulnerabilities at the version they're pinned to. The knowledge gap isn't just "how does this work" - it's "is this thing I depend on still actively maintained, and what changed in the last 3 releases that I skipped?" That's the operational version of this problem that bites people every week.
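Even getting a rough picture of the tree is cheap. A minimal sketch in Python, assuming an npm lockfile of version 2 or later (where "packages" maps install paths to metadata, and the empty key is the root project):

    import json

    # Count installed packages from package-lock.json (lockfile v2/v3).
    with open("package-lock.json") as f:
        lock = json.load(f)

    # Drop the "" entry, which is the root project itself.
    packages = {p: m for p, m in lock.get("packages", {}).items() if p}
    print(f"{len(packages)} installed packages")

    # A crude staleness signal: anything still pinned below 1.0.
    for path, meta in sorted(packages.items()):
        version = meta.get("version", "")
        if version.startswith("0."):
            print(f"  pre-1.0 pin: {path} @ {version}")

It's no substitute for a real SBOM/SCA tool, but it makes the size of the blind spot visible.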

pixl97 4 hours ago [-]
>What I've noticed is that teams don't even track which of their dependencies are approaching EOL or have known vulnerabilities at the version they're pinned to

I mean hopefully they are outsourcing it to some kind of SBOM/SCA type tool that monitors this.

With this said, I've seen a lot of projects, before AI started touching anything, stuck in this old dependency hell where they couldn't really get new versions integrated without causing hundreds of other problems, leading to a cascade of failures.

virgilp 14 hours ago [-]
That's not how things work in practice.

I think the concern is not that "people don't know how everything works" - people never needed to know how to "make their own food" by understanding all the cellular mechanisms and all the intricacies of the chemistry & physics involved in cooking. BUT, when you stop understanding the basics - when you no longer know how to fry an egg because you just get it already prepared from the shop or from delivery - that's a whole different level of ignorance, and it's much more dangerous.

Yes, it may be fine & completely non-concerning if agricultural corporations produce your wheat and your meat; but if the corporation starts producing standardized cooked food for everyone, is it really the same - is it a good evolution, or not? That's the debate here.

ahnick 14 hours ago [-]
Most people have no idea how to hunt, make a fire, or grow food. If all grocery stores and restaurants run out of food for a long enough time people will starve. This isn't a problem in practice though, because there are so many grocery stores and restaurants and supply chains source from multiple areas that the redundant and decentralized nature makes it not a problem. Thus it is the same with making your own food. Eventually if you have enough robots or food replicators around knowing how to make food becomes irrelevant, because you always will be able to find one even if yours is broken. (Note: we are not there yet)
sciencejerk 13 hours ago [-]
>If all grocery stores and restaurants run out of food for a long enough time people will starve. This isn't a problem in practice though...

I fail to see how this isn't a problem? Grid failures happen? So do wars and natural disasters which can cause grids and supply chains to fail.

ahnick 13 hours ago [-]
That is shorthand. The problem exists of course, but it is improbable that it will actually occur in our lifetimes. An asteroid could slam into the earth, or a gamma-ray burst could sterilize the planet of all life. We could also experience nuclear war. These are problems that exist, yet we all just blissfully go on about our lives, because there is basically nothing that can be done to stop these things if they do happen, and they likely won't. Basically, we should only worry about these problems insofar as we as a species are able to actually do something about them.
pixl97 7 hours ago [-]
If they are at small scale then it's fine.

If it's at large scale then millions die of starvation.

xorcist 9 hours ago [-]
> Most people have no idea how to hunt, make a fire, or grow food

That's a bizarre claim, confidently stated.

Of course I can make a fire and cook my own food. You can, too. When it comes to hunting, skinning and cutting up animals, that takes a bit more practice, but anyone can manage something even if the result isn't pretty.

If stores ran out of food we would have devastating problems, but that's because of specialization: now that we live in cities, you simply can't go out hunting even if you wanted to. Plus there are probably much more pressing problems to take care of, such as the lack of water and fuel.

If most people actually couldn't cook their own food, should they need to, that would be a huge problem. Which makes the comparison with IT apt.

sceptic123 8 hours ago [-]
I don't think they're saying _you_ can't do those things, just that most people can't, which I have to agree with.

They're not saying people can't learn those things either, but that's the practice you're talking about here. The real question is, can you learn to do it before you starve or freeze to death? Or perhaps poison yourself because you ate something you shouldn't or cooked it badly.

pixl97 7 hours ago [-]
Can you list a situation where it matters that you know this personally?

Maybe if you end up alone and lost in a huge forest or the Outback, but this is a highly unlikely scenario.

If society falls apart cooking isn’t something you need to be that worried about unless you survive the first few weeks. Getting people to work together with different skills is going to be far more beneficial.

sceptic123 4 hours ago [-]
The existential crisis part for me is that no one (or not enough people) has the skills or knowledge required to do these things. Getting people to work together only works if some people have those skills to begin with.

I also wasn't putting the focus on cooking; the ability to hunt/gather/grow enough food and keep yourself warm is far more important.

And you are far more optimistic about people than me if you think people working together is the likely scenario here.

pixl97 3 hours ago [-]
>the ability to hunt/gather/grow enough food and keep yourself warm are far more important

These are very important when you're alone. Like deep in the woods with a tiny group maybe.

The kinds of problems you'll actually see are something going bad and there being a lot of people around trying to survive on ever decreasing resources. A single person out of 100 can teach people how to cook, or hunt, or grow crops.

If things are that bad then there is a nearly zero percent chance that any of those, other than maybe clean water, are going to be your biggest issue. People who do form groups and don't care about committing acts of violence are going to take everything you have and leave you for dead, if not outright kill you. You will have to have a big enough group to defend your holdings 24/7, with the ability to take some losses.

Simply put, there is not enough room on the planet for hunter-gatherers and 8 billion people. That number would have to fall to the 1 billion or so range pretty quickly, like we saw around the 1900s.

robocat 2 hours ago [-]
The well-known SHTF story that summarises your point, written by a guy who lived in Sarajevo:

https://www.scribd.com/document/110974061/Selco-s-Survival

It's from a real situation, and only alludes to the true horrors of it.

cucumber3732842 7 hours ago [-]
> The real question is, can you learn to do it before you starve or freeze to death? Or perhaps poison yourself because you ate something you shouldn't or cooked it badly.

You can eat some real terrible stuff and like 99.999% of the time only get the shits, which isn't really a concern if you have good access to clean drinking water and can stay hydrated.

The overwhelming majority of people would probably figure it out, even if they wind up eating a lot of questionable stuff in the first month, and productivity from other areas would be redirected to it.

sceptic123 4 hours ago [-]
You're not going to be any good for hunting, farming or keeping warm if you have the shits though.
stetrain 4 hours ago [-]
You think that the majority of people actually know how to do those things successfully in the absence of modern logistics or looking up how to do it online?

I have a general idea of how those things work, but successfully hunting an animal isn't something I have ever done or have the tools (and training on those tools) to accomplish.

Which crops can I grow in my climate zone to actually feed my family, and where would I get seeds and supplies to do so? Again I might have some general ideas here but not specifics about how to be successful given short notice.

I might successfully get a squirrel or two, or get a few plants to grow, but the result is still likely starvation for myself and my family if we were to attempt full self-reliance in those areas without preparation.

In the same way that I have a general idea of how CPU registers, cache, and instructions work but couldn't actually produce a working assembly program without reference materials.

bethekidyouwant 3 hours ago [-]
I mean, before you starve to death because you don't have food in your granary from last year, you don't even have the land to hunt on or plant food, so it's not even relevant.
stronglikedan 5 hours ago [-]
> Most people have no idea how to hunt, make a fire, or grow food. If all grocery stores and restaurants run out of food for a long enough time people will starve.

I doubt people would starve. It's trivial to figure out the hunting and fire part in enough time that that won't happen. That said, I think a lot of people will die, but it will be as a result of competition for resources.

Legend2440 5 hours ago [-]
People would absolutely starve, especially in the cities.

It’s just not possible to feed 8 billion people without the industrial system of agriculture and food distribution. There aren’t enough wild animals to hunt.

shevy-java 13 hours ago [-]
In Star Trek they just 3D printed everything via light.
idiotsecant 7 hours ago [-]
Ok, poof. Now everyone knows how to hunt, farm, and cook.

What problem does this solve? In the event of breakdown of society there is nowhere near enough game or arable land near, for example, New York City to prevent mass starvation if the supply chain breaks down totally.

This is a common prepper trope, but it doesn't make any sense.

The actual valuable skill is trade connections and community. A group of people you know and trust, and the ability to reach out and form mini supply chains.

stetrain 4 hours ago [-]
I don't think that comment is advocating for most people to be able to do these things or stating that this is a problem.

In fact it says "This isn't a problem in practice though"

skeptic_ai 13 hours ago [-]
At what point is the threshold between fine and concerning? The one you put forward seems to be from your point of view. I'm sure not everyone would agree; it's subjective.
lijok 12 hours ago [-]
> that's a whole different level of ignorance, that's much more dangerous.

Why? Is it more dangerous to not know how to fry an egg in a Teflon pan, or on a stone over a wood fire? Is it acceptable to know the former but not the latter? Do I need to understand materials science so I can understand how to make something nonstick, so I'm not dependent on Teflon vendors?

virgilp 10 hours ago [-]
It's relative, not absolute. It's definitely more dangerous to not know how to make your own food than to know something about it - you _need_ food, so lacking that skill is more dangerous than having it.

That was my point, really - that you probably don't need to know "materials science" to declare yourself competent enough in cooking to make your own food. Even if you only ever cooked eggs in Teflon pans, you will likely be able to improvise if the need arises. But once you become so ignorant that you don't even know what food is unless you see it on a plate in a restaurant, already prepared - then you're in a much poorer position to survive, should your access to restaurants be suddenly restricted. But perhaps more importantly, you lose the ability to evaluate food by anything other than appearance & taste, and have to completely rely on others to understand what food might be good or bad for you(*).

(*) Even now, you can't really "do your own research"; that's not how the world works. We stand on the shoulders of giants - the reason we have so much is that we trust/take for granted a lot of knowledge that our ancestors built up for us. But it's one thing not to know/prove everything in detail down to the basic axioms/atoms/etc; nobody does that. And it's a completely different thing to have your "thoughts" and "conclusions" already delivered to you in final form by something (be it Fox News, ChatGPT, the New York Times or anything really) and just take them for granted, without having a framework that allows you to do some minimal "understanding" and "critical thinking" of your own.

stoneforger 12 hours ago [-]
You do need to be able to understand that nonstick coating is unhealthy and not magic. You do need to understand that your options for pan-frying without sticking are a film of water or an ice cube, if you don't want to add an oil into the mix. Then it really depends on what you are cooking, how sticky it will be, and what the end product will look like. That's why there are people that can't fry an egg, people that cook, chefs, and Michelin chefs. Nuance matters; it's just that the domain where each person wants to apply it is different. I don't care about nuance in hockey picks, but probably some people do. But some domains should concern everyone.
pixl97 3 hours ago [-]
>You do need to be able to understand nonstick coating is unhealthy and not magic

Will it kill you faster than you can birth and raise the next generation?

If it's something that kills you at 50 or 60, then really it doesn't matter that much as evolution expects you to be a grandparent by then.

sgarland 5 hours ago [-]
> “What happens when you type a URL into your browser’s address bar and hit enter?” You can talk about what happens at all sorts of different levels (e.g., HTTP, DNS, TCP, IP, …). But does anybody really understand all of the levels? [Paraphrasing]: interrupts, 802.11ax modulation scheme, QAM, memory models, garbage collection, field effect transistors...

To a reasonable degree, yes, I can. I am also probably an outlier, and the product of various careers, with a small dose of autism sprinkled in. My first career was as a Submarine Nuclear Electronics Technician / Reactor Operator in the U.S. Navy. As part of that training curriculum, I was taught electronics theory, troubleshooting, and repair, which begins with "these are electrons" and ends with "you can now troubleshoot a VMEbus [0] Motorola 68000-based system down to the component level." I also later went back to teach at that school, and rewrote the 68000 training curriculum to use the Intel 386 (progress, eh?).

Additionally, all submariners are required to undergo an oral board before being qualified, and analogous questions like that are extremely common, e.g. "I am a drop of seawater. How do I turn the light on in your rack?" To answer that question, you end up drawing (from memory) an enormous amount of systems and connecting them together, replete with the correct valve numbers and electrical buses, as well as explaining how all of them work, and going down various rabbit holes as the board members see fit, like the throttling characteristics of a gate valve (sub-optimal). If it's written down somewhere, or can be derived, it's fair game. And like TFA's discussion about Brendan Gregg's practice of finding someone's knowledge limit, the board members will not stop until they find something you don't know - at which point you are required to find it out, and get back to them.

When I got into tech, I applied this same mindset. If I don't know something, I find out. I read docs, I read man pages, I test assumptions, I tinker, I experiment. This has served me well over the years, with seemingly random knowledge surfacing during an incident, or when troubleshooting. I usually don't remember all of it, but I remember enough to find the source docs again and refresh my memory.

0: https://en.wikipedia.org/wiki/VMEbus

wrs 1 hours ago [-]
I had the same reaction — yes, in fact I can at least explain all of those examples at an impromptu whiteboard talk level, and for many of them I can go a lot deeper.

But I hate not knowing how things work, and I have a pretty good memory, so I’m probably an outlier.

agumonkey 5 hours ago [-]
There are various degrees of understanding. For instance, as a web dev, you know the browser and the OSI network stack (in theory; there are a lot of tweaks), then maybe the electronics. But the radio/wireless part is another world in itself, with a totally different mindset (analog waves), which makes the rabbit hole way too long (and wide - radio is a big world on its own).
wrs 1 hours ago [-]
Well, pixels on a screen are a totally different mindset from network protocols or program control flow, but nobody’s surprised when one person can work within all of those. Brains are big. So yeah, it’s just a matter of degree. (It’s the T-shaped vs I-shaped career thing.)
cadr 4 hours ago [-]
Amateur radio is pretty approachable and has lots of opportunity to go down those rabbit holes.
wduquette 3 hours ago [-]
It's certainly the case that I don't always know how the layer below works, i.e., how the compiled code executes in detail. But I have a mental model that's good enough that I can use the compiler, and I trust that the compiler authors know what they are doing and that the result is well-tested. Over forty years and a slew of different languages I've found that to be an excellent bet.

But I understand how my code works. There's a huge difference between not understanding the layer below and not understanding the layer that I am responsible for.

B56b 2 hours ago [-]
That's the critical difference. You could always find some person who understood a particular piece of a complex puzzle. It's a very new, worrying thing to have pieces that no one understands.
bjt 13 hours ago [-]
The claimed connections here fall apart for me pretty quickly.

CPU instructions, caches, memory access, etc. are debated, tested, hardened, and documented to a degree that's orders of magnitude greater than the LLM-generated code we're deploying these days. Those fundamental computing abstractions aren't nearly as leaky or nearly as in need of refactoring tomorrow.

mojuba 12 hours ago [-]
> AI will make this situation worse.

Being an AI skeptic more than not, I don't think the article's conclusion is true.

What LLMs can potentially do for us is exactly the opposite: because they are trained on pretty much everything there is, if you ask the AI how the telephone works, or what happens when you enter a URL in the browser, they can actually answer and break it down for you nicely (and that would be a dissertation-sized text). Accuracy and hallucinations aside, it's already better than a human who has no clue about how the telephone works, or where to even begin if said human wanted to understand it.

Human brains have a pretty serious gap in the "I don't know what I don't know" area, whereas language models have such a vast scope of knowledge that it makes them somewhat superior, albeit at the price of, well, being literally quite expensive and power-hungry. But those are technical details.

LLMs are knowledge machines that are good at precisely that: knowing everything about everything on all levels as long as it is described in human language somewhere on the Internet.

LLMs consolidate our knowledge in ways that were impossible before. They are pretty bad at reasoning or e.g. generating code, but where they excel so far is answering arbitrary questions about pretty much anything.

wrs 1 hours ago [-]
Very important distinction here you’re missing: they don’t know things, they generate plausible things. The better the training, the more those are similar, but they never converge to identity. It’s like if you asked me to explain the S3 API, and I’m not allowed to say “I don’t know”, I’m going to get pretty close, but you won’t know what I got wrong until you read the docs.

The ability for LLMs to search out the real docs on something and digest them is the fix for this, but don’t start thinking you (and the LLM) don’t need the real docs anymore.

That said, it’s always been a human engineer superpower to know just enough about everything to know what you need to look up, and LLMs are already pretty darn good at that, which I think is your real point.

PandaStyle 13 hours ago [-]
Perhaps a dose of pragmatism is needed here?

I am no CS major, nor do I fully understand the inner workings of a computer beyond "we tricked a rock into thinking by shocking it."

I'd love to better understand it, and I hope that through my journey of working with computers, I'll better learn about these underlying concepts: registers, buses, memory, assembly, etc.

Practically however, I write scripts that solve real world problems, be that from automating the coffee machine, to managing infrastructure at scale.

I'm not waiting to pick up a book on x86 assembly before I write some Python, however. (I wish it were that easy.)

To the greybeards that do have a grasp of these concepts though? It's your responsibility to share that wealth of knowledge. It's a bitter ask, I know.

I'll hold up my end of the bargain by doing the same when I get to your position and everywhere in between.

finnthehuman 7 hours ago [-]
Graybeards love to yap. Just talk to them or consume the wide amount of material already out there.

It takes curiosity on your part though. Handwaving about practical concerns taking priority is a path to never getting around to it. "Pragmatism" towards skills is how managers wind up with an overspecialized team and then tell themselves it was inevitable. The same can happen to you.

sgarland 6 hours ago [-]
Nearly every time I attempt to tell people how much understanding fundamentals matters, it’s dismissed as being unnecessary knowledge.

I can’t make anyone want to know how things work, and it’s getting tiring being continuously told “no” when I ask.

supriyo-biswas 11 hours ago [-]
There are so many resources, for example, https://cpu.land.
analog31 6 hours ago [-]
Granted I'm not a software developer, so the things I work on tend to be simpler. But the people I know who are recognized for "knowing how the whole thing works" are likely to have earned that distinction, not necessarily by actually knowing how it works but:

1. The ability and interest to investigate things and find out how they work, when needed or desired. They are interested in how things work. They are probably competent in things that are "glue" in their disciplines, such as math and physics in my case.

2. The ability to improvise an answer when needed, by interpolating across gaps in knowledge, well enough to get past whatever problem is being solved. And to decide when something doesn't need to be understood.

9rx 2 hours ago [-]
> Granted I'm not a software developer, so the things I work on tend to be simpler.

Intriguing statement. I've worked in a number of disciplines over the years and software, by far, presents the simplest things of all.

latexr 6 hours ago [-]
> This is the fundamental nature of complex technologies: our knowledge of these systems will always be partial, at best. Yes, AI will make this situation worse. But it’s a situation that we’ve been in for a long time.

That doesn’t make it OK. This is like being stuck in a room whose pillars are starting to deteriorate, then someone comes along with a sledgehammer and starts hitting them and your reaction is to shrug and say “ah, well, the situation is bad and will only get worse, but the roof hasn’t fallen on our heads yet so let’s do nothing”.

If the situation is untenable, the right course of action is to try to correct it, not shrug it off.

overgard 20 minutes ago [-]
Leaky abstractions have always been a problem. Sometimes people like to use them as an example of "see, you didn't understand the assembly, so why do you care about... X". The logic seems to be "see, almost all your abstractions are leaky, why do you care that you don't understand what's happening?"

A few comments on that. First off, the best programmers I've worked with recognized when their abstractions were leaky, and made efforts to understand the thing that was being abstracted. That's a huge part of what made them good! I have worked with programmers that looked at the disassembly, and cared about it. Not everyone needs to do that, but acting like it's a completely pointless exercise does not track with reality.

The other thing I've noticed personally for myself is that my biggest growth as a programmer has almost always come from moving down the stack and understanding things at a lower level, not moving up the stack. Even though I rarely use it, learning assembler was VERY important for my development as a programmer; it helped me understand decisions made in the design of C, for instance. I also learned VHDL to program FPGAs and took an embedded systems course that talked about building logic out of NAND gates. I had to write a game for an FPGA in C that had to use a wonky VGA driver that treated an 800x600 screen as a series of tiles, because there wasn't nearly enough RAM to store that framebuffer. None of this is something I use daily, and some of it I may never use again, but it shaped how I think and work with computers. In my experience, the guys that only focus on the highest levels of abstraction because the rest of the stuff "doesn't matter" easily get themselves stuck in corners they can't get out of.

mamp 14 hours ago [-]
Strange article. The problem isn’t that everyone doesn’t know how everything works, it’s that AI coding could mean there is no one who knows how a system works.
lynguist 13 hours ago [-]
No, I think the problem is that AI coding removes intentionality. And that introduces artifacts and connections and dependencies that shouldn't be there if one had designed the system with intent. And that makes it eventually harder to reason about.

There is a difference in qualia between "it happens to work" and "it was made for a purpose".

Business logic will strive more toward "it happens to work" as good enough.

satisfice 8 hours ago [-]
The core problem is irresponsibility. Things that happen to work may stop working, or be revealed to have terrible flaws. Who is responsible? What is their duty of care?
stoneforger 12 hours ago [-]
Excellent point. The intention of business is profit; how it arrives there is considered incidental. Any product, no matter what, as long as it sells. Compounding effects in computing, the internet and miniaturisation have enabled large profit margins that further compound these effects. They think of this as a machine that can keep on printing more money, subsuming more and more as software and computers become pervasive.
Animats 14 hours ago [-]
Including the AI, which generated it once and forgot.

This is going to be a big problem. How do people using Claude-like code generation systems do this? What artifacts other than the generated code are left behind for reuse when modifications are needed? Comments in the code? The entire history of the inputs and outputs to the LLM? Is there any record of the design?

maxbond 13 hours ago [-]
I have experimented with telling Claude Code to keep a historical record of the work it is performing. It did work (though I didn't assess the accuracy of the record) but I decided it was a waste of tokens and now direct it to analyze the history in ~/.claude when necessary. The real problem I was solving was making sure it didn't leave work unfinished between autocompacts (eg crucial parts of the work weren't performed and instead there are only TODO comments). But I ended up solving that with better instructions about how to break down the plan into bite-sized units that are more friendly to the todo list tool.

I have prompting in AGENTS.md that instructs the agent to update the relevant parts of the project documentation for a given change. The project has a spec, and as features get added or reworked the spec gets updated. If you commit after each session then the git history of the spec captures how the design evolves. I do read the spec, and the errors I've seen so far are pretty minor.

skeptic_ai 13 hours ago [-]
I, for one, save all conversations in the codebase - both the human prompts and the outputs. But I'm using a modified Codex to do so. Not sure why it's not the default, as it's useful to have this info.
luckydata 13 hours ago [-]
Is this an actual problem? It takes minutes for an AI to explore and document a codebase. Sounds like a non-problem.
shevy-java 13 hours ago [-]
Is that documentation useful? I haven't seen a well-documented codebase by AI so far.

To be fair - humans also fail at that. Just look at the GTK documentation as an example. When you point that out, ebassi may ignore you because criticism is unwanted; and the documentation will never improve, meaning they don't want new developers.

ahnick 13 hours ago [-]
Yes, exactly my point as well. It cuts both ways.
dcre 7 hours ago [-]
Just because there is someone who could understand a given system, that doesn’t mean there is anyone who actually does. I take the point to be that existing software systems are not understood by anyone most of the time.
sceptic123 8 hours ago [-]
I read it more as:

We already don't know how everything works; AI is steering us towards a destination where there is more of everything.

I would also add that it's possible it will reduce the number of people that are _capable_ of understanding the parts it is responsible for.

g947o 7 hours ago [-]
Who's "we"?

I am sure engineers collectively understand how the entire stack works.

With LLM generated output, nobody understands how anything works, including the very model you just interacted with -- evident in "you are absolutely correct"

sceptic123 4 hours ago [-]
Even as a collective whole, engineers will likely only understand the parts of the system that are engineering problems and solutions. And even if they could understand it all, there is still no _one_ person who understands how everything works.
raw_anon_1111 6 hours ago [-]
If the average tenure of a developer is 2.5 years, how likely is it in 5 years that any of the team that started the project is still working on it?
ahnick 13 hours ago [-]
This happens even today. If a knowledgeable person leaves a company and no knowledge transfer (or, more likely, poor knowledge transfer) takes place, then there will be no one left who understands how certain systems work. This means the company will have to have a new developer go in, study the code, and deduce how it works. In our new LLM world, the developer could even have an LLM construct an overview for him/her to come up to speed more quickly.
stoneforger 12 hours ago [-]
Yes, but every time, the "why" is obscured - perhaps not completely - because there's no finished overview, or because the original reason can no longer be derived from the current state of affairs. It's like the movie Memento: you're trying to piece together a story from fragments that seem incoherent.
noosphr 7 hours ago [-]
It's that no one knows if a system works.
cbdevidal 7 hours ago [-]
This also applies to other things. No one person knows how to make a pencil.

Three minute video by Milton Friedman: https://youtu.be/67tHtpac5ws?si=nFOLok7o87b8UXxY

alphazard 7 hours ago [-]
The series this is from (Free to Choose) is a great introduction to economics for people of any age. I highly recommend it.

This particular example can be misinterpreted, though. It's true that no single person knows how to make the exact pencil that he is holding. But it's not true that no single individual exists who can make a pencil by themselves. If the criterion is just that it works as a pencil, then many people could make or find something that meets it.

This is an important distinction because there are things like microprocessors, which no single person knows how to make. But also: no single person could alone build something that has anywhere near the same capability. It's conceivable that a civilization could forget how to do something like that because it requires so many people with non-overlapping knowledge to create anything close. We aren't going to forget how to make pencils because it is such a simple problem, that many individuals are capable of figuring out workable solutions alone.

sgarland 6 hours ago [-]
> This is an important distinction because there are things like microprocessors, which no single person knows how to make.

That depends on your definition of “knows how to make.” I worked at Samsung Austin Semiconductor for a while, and there are some insanely smart and knowledgeable people there (and, I’m sure, at every other semiconductor company). It was actually a really good life experience for me, because it grounded and humbled me in the way that only working around borderline genius can.

I can describe to you all the steps that go into manufacturing a silicon wafer, with more detail in my particular area (wet cleans) than others, but I certainly can’t answer any and all questions about the process. However, I am nearly certain that there existed at least one person at SAS who could describe every step of every process in such excruciating detail that, given enough time and skilled workers (you said “know,” not “do” - I am under no delusion that a single person could ever hope to build a fab), they could bootstrap a fab.

bethekidyouwant 3 hours ago [-]
There are still innumerable supply chains they know nothing about. Can they run a strip mine? Fix the diesel trucks that run on the mine?
lloeki 7 hours ago [-]
> This is an important distinction because there are things like microprocessors, which no single person knows how to make. But also: no single person could alone build something that has anywhere near the same capability

I recently watched this: https://www.youtube.com/watch?v=MiUHjLxm3V0

The levels of advanced _whatever_ that we've reached is absurdly bonkers.

It seems to me that at some point in the last 50 or so years the world went from "given a lot of time I can make a _crude_ but reasonably functional version of whatever XYZ in my garage" to "it requires the structural backbone of a whole civilization to achieve XYZ".

Of course it's sort of a delusion. Maybe it's more about the ramp appearing more exponential than ever.

GeoAtreides 6 hours ago [-]
That's false; I know exactly how to make a pencil. I know because I looked it up, in case I'm ever mysteriously transported back to Roman Empire times.

The hard part is finding graphite (somewhere in Wales? It looks like lead, but softer, and leaves traces on sheep's wool). Then suitable clay to make the lead. Then some kind of glue to join the two halves of the pencil (boil some bones and cartilage?).

B56b 2 hours ago [-]
The task isn't "make something that you could plausibly call a pencil". It's "understand every step of how a modern pencil is produced".
j_m_b 7 hours ago [-]
Also, no one knows how to make a Pizza! (great book for kids)
germinalphrase 7 hours ago [-]
Somewhat off topic for the thread, but I would love more kids book recommendations for expanding their mental model of the world if we can keep them coming…
pixl97 7 hours ago [-]
"To make an apple pie you must first create the universe "
AnimalMuppet 7 hours ago [-]
"How to Make an Apple Pie and See The World" (https://www.amazon.com/Make-Apple-World-Dragonfly-Books/dp/0...). Great kids book.
wtetzner 6 hours ago [-]
I think a lot of people have a fear of AI coding because they're worried that we will move from a world where nobody understands how the whole system works, to a world where nobody knows how any of it works.
cgh 1 hours ago [-]
This comment on the article sums it up for me, at least in part:

“Nobody knows how the whole system works, but at least everybody should know the part of the system they are contributing to.

Being an engineer, I am used to being the expert on the very layer of the stack I work on, knowing something of the adjacent layers, and mostly ignoring how the rest works.

Now that LLMs write my very code, what is the part that I’m supposed to master? I think the table is still shifting and everybody is failing to grasp where it will stabilize. Analogies with past shifts aren’t helping either.”

rglover 6 hours ago [-]
A valid concern.
chasd00 56 minutes ago [-]
You're perfectly free to read, understand, even edit code created by these coding agents. I must have made that point in a dozen threads just like this one. Do people think that because an agent was used, the code is inaccessible to them? When I use these tools I'm constantly reviewing and updating what they output, and I feel like I completely understand every line they create. Just like I understand any other code I read.
youarentrightjr 14 hours ago [-]
> Nobody knows how the whole system works

True.

But in all systems up to now, for each part of the system, somebody knew how it worked.

That paradigm is slowly eroding. Maybe that's ok, maybe not, hard to say.

redrove 14 hours ago [-]
> But in all systems up to now, for each part of the system, somebody knew how it worked.

If the project is legacy or the people just left the company that’s just not true.

youarentrightjr 13 hours ago [-]
> If the project is legacy or the people just left the company that’s just not true.

Yeah, that's why I said "knew" instead of "knows".

tjchear 13 hours ago [-]
I take a fairly optimistic view to the adoption of AI assistants in our line of work. We begin to work and reason at a higher level and let the agents worry about the lower level details. Know where else this happens? Any human organization that existed, exists, and will exist. Hierarchies form because no one person can do everything and hold all the details in their mind, especially as the complexity of what they intend to accomplish goes up.

One can continue to perfect and exercise their craft the old school way, and that’s totally fine, but don’t count on that to put food on the table. Some genius probably can, but I certainly am not one.

mhog_hn 13 hours ago [-]
But what if the AI agent has a 5% chance of adding a bug to that feature? Not that every feature was completely bug-free before, of course.
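
Taking that 5% at face value, the compounding is what bites; a quick back-of-the-envelope in Python (the per-feature figure is the hypothetical above, and independence of bugs across features is my assumption):

    p = 0.05  # assumed per-feature chance of introducing a bug
    for n in (1, 10, 20):
        # probability that at least one of n features ships buggy,
        # assuming bugs land independently
        print(n, round(1 - (1 - p) ** n, 2))  # 0.05, 0.4, 0.64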
tjchear 12 hours ago [-]
Yeah it’s all trade offs. If it means I get to where I want to be faster, even if it’s imperfect, so be it.

Humans aren’t without flaws; prior to coding assistants, I’d lost count of the times my PM told me to rush things at the expense of engineering rigor. We validate or falsify the need for a feature sooner and move on to other things. Sometimes it works, sometimes a bug blows up in our faces, but things still chug along.

This point will become increasingly moot as AI gets better at generating good code, and faster, too.

bethekidyouwant 3 hours ago [-]
What is the chance that you add a bug?
camgunz 10 hours ago [-]
Get enough people in the room and they can describe "the system". Everything OP lists (QAM, QPSK, WPA whatever) can be read about and learned. Literally no one understands generative models, and there isn't a way for us to learn about their workings. These things are entirely new beasts.
whytaka 14 hours ago [-]
But people are expected to understand the part of the system they are responsible for at the level of abstraction they are being paid to operate.

This new arrangement would be perfectly fine if they weren't responsible when it breaks.

jstummbillig 13 hours ago [-]
I don't think there is anything new here; the metaphor holds up perfectly. There have always been bugs we don't understand in compilers or libraries or implementations beyond that, bugs that make the path we chose unavailable to us at a certain level. The responsibility is to create a working solution, sure, but nothing prevents us from getting there by typing "Hey LLM, this is not working, let's try a different approach", even though it might not feel great.
gmuslera 10 hours ago [-]
It is not about having infinite width and depth of knowledge. It's about abstracting at the right level, so that the relevant components are visible and you can assume correctness outside the focus of what you are solving.

Systems include people, who make their own decisions that affect how things work, and we don't go down to biology and chemistry to understand how they make choices. But that doesn't mean people's decisions should be fully ignored in our analysis, just that there is a right abstraction level for them.

And sometimes a side or abstracted component deserves to be seen or understood with more detail because some of the sub components or its fine behavior makes a difference for what we are solving. Can we do that?

themafia 16 minutes ago [-]
> But does anybody really understand all of the levels?

Off the top of my head? Most of them. Did you need me to understand some level in particular? I can dedicate time to that if you like; my experience and education will make that a very simple task.

The better question is: is there any _advantage_ to understanding "all the levels"? If not, then what outcome did you actually expect? A lot of this work is done in exchange for money, not out of personal pride or craftsmanship.

You can try to be the "Wizard of Oz" if you want. The problem is anyone can do that job. It's not particularly interesting, is it?

MobiusHorizons 4 hours ago [-]
There will always be many gaps in people's knowledge. You start with what you need to understand and typically dive deeper only when necessary. Where it starts to be a problem, in my mind, is when people have no curiosity about what's going on underneath, or even worse, get superstitious about holes in the abstraction rather than digging a little to find out why.
mrkeen 13 hours ago [-]

  Adam Jacob
  It’s not slop. It’s not forgetting first principles. It’s a shift in how the craft work, and it’s already happened. 
This post just doubled down without presenting any kind of argument.

  Bruce Perens
  Do not underestimate the degree to which mostly-competent programmers are unaware of what goes on inside the compiler and the hardware.
Now take the median dev, compress his lack of knowledge into a lossy model, and rent that out as everyone's new source of truth.
conorcleary 11 hours ago [-]
"I don't need to know about hardware, I'm writing software."

"I don't need to know about software engineering, I'm writing code."

"I don't need to know how to design tests, ____ vibe-coded it for me."

CrzyLngPwd 6 hours ago [-]
So many times over the decades I've had to explain to a dev why iterating over many items and performing a heavy task like a DB query on each one will result in bad things happening... all because they don't really comprehend how things work.
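
A minimal sketch of the classic N+1 shape, in Python with a hypothetical db handle (names are illustrative, not a real API):

    # N+1: one query per item; 10,000 users means 10,001 round trips
    users = db.query("SELECT id FROM users")
    for u in users:
        orders = db.query("SELECT * FROM orders WHERE user_id = ?", (u.id,))

    # One round trip: let the database do the join
    rows = db.query(
        "SELECT u.id, o.* FROM users u "
        "JOIN orders o ON o.user_id = u.id")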
_kuno 5 hours ago [-]
"Civilization advances by extending the number of important operations which we can perform without thinking about them." - Alfred North Whitehead
vineethy 6 hours ago [-]
There's plenty of people who know the fundamentals of the system. It's a mistake to think that understanding the specific technical details of an implementation is necessary to understand the system. It would make more sense to ask whether someone could conceivably build the system from scratch if they had to. Plenty of people have worked in academic fabs and have also written Verilog and operating systems and messed with radios.
tosti 13 hours ago [-]
Not just tech.

Does anyone on the planet actually know all of the subtleties and idiosyncrasies of the entire tax code? Perhaps the one inhabitant of Sealand, or the Sentinelese, but no one in any Western society.

jpadkins 2 hours ago [-]
Even if you know how the compiler and OS constructs work, you might not know how the hardware circuits work. Even if you know how the circuits work, you might not know how the power generation or cooling works. Even if you know how the power generation works, you don't know how extracting natural gas works or how solar panels are made. Etc., etc.

My takeaway is that modern system complexity can only be achieved via advanced specialization and trade. No one human brain can master all of the complexity needed for the wonders of modern tech. So we need to figure out how to cooperate if we want to continue to advance technology.

My views on the topic were influenced by Kling's book (it's a light read): https://www.libertarianism.org/books/specialization-trade

markbao 4 hours ago [-]
There’s a difference between abstracting away the network layer and not understanding the business logic. What we are talking about with AI slop is not understanding the business logic. That gets really close to just throwing stuff at the wall and seeing what sticks, instead of a systematic, reliable way to develop things with predictable results.

It’s like building a production line. You need to use a certain type of steel because it has certain heat properties. You don’t need to know exactly how that steel is made, but you do need to know to use it. AI slop is basically using whatever steel.

At every layer of abstraction, the experts at that layer need a deep understanding of their layer of complexity. The whole point is that you can rely on certain contracts made by lower layers to build yours.

So no, slopping your way through the application layer isn’t just on theme with “we have never known how the whole system works”. It ignores that you still have a responsibility to understand the layer you’re at, which is the business logic layer. If you don’t understand that, you can’t build reliable software, because you aren’t using the system we have in place to predictably and deterministically specify outputs. Which is code.

dizhn 13 hours ago [-]
Let me make it worse. Much worse. :)

https://youtu.be/36myc8wQhLo (USENIX ATC '21/OSDI '21 Joint Keynote Address-It's Time for Operating Systems to Rediscover Hardware)

shevy-java 13 hours ago [-]
Adam Jacob's quote is this:

"It's not slop. It's not forgetting first principles. It's a shift in how the craft work, and it's already happened."

It actually really is slop. He may wish to ignore that, but it does not change anything. AI comes with slop; that is undeniable. You only need to look at the content generated via AI.

He may wish to focus merely on "AI for use in software engineering", but even there he is wrong, since AI makes mistakes too and not everything it creates is great. People often have no clue how the AI reaches any decision, so they also lose the ability to reason about the code or code changes. I think people have a hard time trying to sell AI as "only good things, the craft will become better". It seems everyone is on the AI hype train; eventually it'll either crash or slow down massively.

youknownothing 2 hours ago [-]
I see many people comparing the production of code through AI with compilers: just another layer of abstraction. They argue that, in the same way that creating high-level languages that were compiled to assembler meant that most people didn't need to know assembler any more, then specifying specs and letting AI produce the high-level language will mean that most people won't need to know the high-level language any more.

However, there is a fundamental flaw in this analogy: compilers are deterministic, AI is not. Compile the same high-level code twice and you get exactly the same output. Feed the same spec to an AI twice and you get two different outputs (hopefully with equivalent behaviour).
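
A toy illustration in Python; byte-identical compiler output assumes the same toolchain, flags, and environment (and a local app.c), and the llm_generate call is a made-up stand-in for any sampling-based code generator:

    import hashlib, subprocess

    def build_hash():
        # Same source, same compiler, same flags: byte-identical binary
        subprocess.run(["gcc", "-O2", "-o", "app", "app.c"], check=True)
        return hashlib.sha256(open("app", "rb").read()).hexdigest()

    assert build_hash() == build_hash()  # deterministic

    # code_a = llm_generate(spec, temperature=0.7)  # hypothetical call
    # code_b = llm_generate(spec, temperature=0.7)
    # code_a == code_b  ->  usually False: sampling is non-deterministic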

If you don't understand that deterministic vs. non-deterministic is a fundamental and potentially dangerous change in the way we produce work, then you definitely fail at first principles.

esafak 7 hours ago [-]
It's called specialization. Not knowing everything is how we got this far.
bluGill 7 hours ago [-]
I can't know everything. I have to trust someone else knows some parts and got it right so I can rely on them.

Sometimes that trust is proven wrong. I have had to understand my compiler's output to prove there was a bug in the optimizer (once I understood the bug, I found it was already fixed in a release I hadn't updated to yet). Despite that, compilers have earned my trust: it takes months of debugging before I think maybe the compiler is wrong.

I am not convinced that AI writes code I can trust; too often I have caught it doing things that are wrong (recently I told it to write some code using TDD, and it put the business logic it was testing in the mock; the tests passed, but manual testing showed the production code didn't have that logic and so didn't work). Until AI code proves it is worth trusting, I'm not going to trust it, and so I will spend the time needed to understand the code it writes, at great cost to my ability to write code quickly.
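
A sketch of that failure mode in Python (hypothetical names; the discount rule is invented): the business rule lives only in the mock, so the test passes while the production code does nothing:

    from unittest import mock

    def apply_discount(order):
        # Production code: the 10%-off-over-100 rule was never written here
        return order["total"]

    def test_discount():
        # The rule is implemented inside the mock, so the assertion
        # says nothing about the real apply_discount
        rule = lambda o: o["total"] * 0.9 if o["total"] > 100 else o["total"]
        with mock.patch(__name__ + ".apply_discount", side_effect=rule):
            assert apply_discount({"total": 200}) == 180.0  # passes

    # Calling the real function (manual testing) exposes the gap:
    # apply_discount({"total": 200}) -> 200, not 180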

theywillnvrknw 4 hours ago [-]
Let me figure out how exactly the human body works before using it.
psychoslave 13 hours ago [-]
To be fair, I don't know how a living human individual works, let alone how they actually work in society. I suspect I'm not alone in this.

So, nothing new under the sun: often the practice comes first, and only then can some theory emerge, which can be leveraged to go further than present practice, and so on. Sometimes practice and theory are more entangled in how they are created on the go, obviously.

mhog_hn 13 hours ago [-]
It is the same with the global financial system
conorcleary 11 hours ago [-]
Merger Monday in five, four, three...
css_apologist 4 hours ago [-]
Yes, but the person who understands a lot of the system is invaluable
erelong 4 hours ago [-]
Reminds me of the short essay "I, Pencil".

The problem is education, and maybe ironically AI can assist in improving that

I've read a lot about programming and it all feels pretty disorganized; the post about programmers being ignorant of how compilers work doesn't sound surprising (go to a bunch of educational programming resources and see if they cover any of that).

It sounds like we need more comprehensive and detailed lists.

For example, with objections to "vibe coding", couldn't we just make a list of people's concerns and then work at improving AI's outputs to address them? (Things like security, designs that minimize tech debt, output readability in case someone does need to manually review the code in the future, etc.)

Incidentally, this also reminds me of political or religious stances against technology, like the Amish take, for example, since the kind of ignorance of and dependence on processes out of our control discussed here seems to be an inherent quality of technological systems as they grow and become more complex.

cadamsdotcom 4 hours ago [-]
Huh?

The whole point of society is that you don’t need to know how the whole thing works. You just use it.

How does the water system maintain pressure so water actually comes out when you turn on the tap? That’s entirely the wrong question. You should be asking why you never needed to think about that until now, because that answer is way more mind-expanding and fascinating. Humans invented entire economic systems just so you don’t need to know everything, so you can wash your hands and go back to doing your thing in the giant machine. Maybe your job is to make software that tap-water engineers use every day. Is it a crisis if they don’t understand everything about what you do? Not bloody likely: their heads are full of water engineering knowledge already.

It is not the end of the world to not know everything - it’s actually a miracle of modern society!

snyp 6 hours ago [-]
Script kiddies have always existed and always will.
zhisme 11 hours ago [-]
What a well-written article. This is actually a problem. Time will come and hit us the same way it did the aqueducts: lost technology that no one knows how it worked in detail. Maybe that's just how engineering evolution works?
amelius 12 hours ago [-]
Wikipedia knows how it all works, and that's good enough in case we need to reboot civilization.
kgwxd 3 hours ago [-]
Who cares? Nobody is concerned about that. They're concerned no one will be able to fix stuff when it goes wrong, or there will be no one to blame for really bad problems. Especially when the problem is repeating at 50 petaflops per second.
spenrose 6 hours ago [-]
“Finally, Bucciarelli is right that systems like telephony are so inherently complex, have been built on top of so many different layers in so many different places, that no one person can ever actually understand how the whole thing works. This is the fundamental nature of complex technologies: our knowledge of these systems will always be partial, at best. Yes, AI will make this situation worse. But it’s a situation that we’ve been in for a long time.”
testing22321 5 hours ago [-]
As an anecdote, I worked at a telco that is the only connectivity for around 50k people spread over a vast area. We had a massive power outage at the CO, and the backup generator failed. Everything, even 911, was down (a VERY big deal) for basically everyone for most of a day. They stationed police and ambulances in the bigger cities and towns so people could directly ask for help.

With all hands on deck scrambling HARD, a week later we still didn’t have everything back up, because we didn’t know how. A ton of it had never been down since the 60s.

A mess indeed.

fedeb95 12 hours ago [-]
Why does the author imply not knowing everything is a bad thing? If you have clear protocols and interfaces, not knowing everything enables you to make bigger innovations. If everything is a complex mess, then no.
bsza 12 hours ago [-]
Not knowing everything never "enables" you to do anything. Knowing how something works is always better than not knowing, assuming you want to use it or make changes to it.
kartoshechka 12 hours ago [-]
Engineers pay for abstractions with more powerful hardware, but can optimize at will (hopefully). Will AI be able to afford more human hours to churn through piles of unfamiliar code?
landpt 7 hours ago [-]
The pre-2023 abstractions that power the Internet and have made many people rich are the sweet spot.

You have to understand some of the system, and saying that if no one understands the whole system anyway we can give up all understanding is a fallacy.

Even for a programming language criticized for a permissive spec, like C, you can write a formally verified compiler: CompCert. Good luck doing that for your agentic workflow with natural-language input.

Citing a few manic posts from influencers does not change that.

sciencejerk 12 hours ago [-]
We keep trading knowledge of the natural, physical world for temporary, rapidly-changing knowledge of abstractions and software tools, which we do not control (now LLM cloud tools).

The lack of comprehensive, practical, multi-disciplinary knowledge creates a DEEP DEPENDENCY on the few multinational companies and countries that UNDERSTAND things and can BUILD things. If you don't understand it, if you can't build it, they OWN you.

anon291 13 hours ago [-]
I don't like this thing where we dislike 'magic'

The issue with frameworks is not the magic. We feel like it's magic because the interfaces are not stable. If the interfaces were stable, we'd consider them just another real component of building whatever.

You don't need to know anything about hardware to properly use a CPU ISA.

The difference is that the CPU ISA is documented, well tested, and stable. We can build systems that offer stability and are formally verified, as an industry. We just choose not to.

mychael 6 hours ago [-]
It's strange to believe that Twitter/X has fallen. Virtually every major character in software, AI and tech is active on X. The people who are actually building the tools that we discuss everyday post on X.

LinkedIn is weeks/months behind topics that originate from X. It suggests you might be living in a bubble if you believe X has fallen.

foxes 7 hours ago [-]
Isn't ceding all power to AIs run by tech companies kind of the opposite, if we have to have AI everywhere? Now no one knows how anything works (instead of everyone knowing a tiny bit and all working together), and everyone is just dependent on the people with all the compute.
dandanua 4 hours ago [-]
With traditional complex systems, somebody knows how each part works. We can't say this for complex systems created with AI. This is a road into the abyss, and the article makes it worse by downplaying the issue.
nish__ 4 hours ago [-]
I do.
knorker 5 hours ago [-]
I would say that I understand all the levels down to (but not including) what it means for an electron to repel another particle of negative charge.

But what is not possible is to understand all these levels at the same time. And that has many implications.

We humans have limits on working memory, and if I need to swap in L1-cache logic, then I can't think about TCP congestion windows, CWDM, multiple inheritance, and QoS at the same time. But I wonder what superpowers AI can bring, not because it's necessarily smarter, but because it can increase the working memory across abstraction layers.

paulddraper 6 hours ago [-]
Understand one layer above (“why”) and one layer below (“how”).

Then you know “what” to build.

cess11 11 hours ago [-]
Yeah, it's not a problem that a particular person does not know it all, but if no one knows any of it except as a black box, that is a rather large risk, unless the system is a toy.

Edit: In a sense, "AI" software development is postmodern: it is a move away from reasoned software development, in which known axioms and rules are applied, toward software being arbitrary and 'given'.

The future 'code ninja' might be a deconstructionist, a spectre of Derrida.

ForHackernews 6 hours ago [-]
I think there's a difference between "No one understands all levels of the system all the way down, at some point we all draw a line and treat it as a black-box abstraction" vs. "At the level of abstraction I'm working with, I choose not to engage with this AI-generated complexity."

Consider the distinction between "I don't know how the automatic transmission in my car works" and "I never bothered to learn the meanings of the street signs in my jurisdiction".

Atlas667 6 hours ago [-]
This is a non-discussion.

You have to know enough about underlying and higher level systems to do YOUR job well. And AI cannot fully replace human review.

anthk 7 hours ago [-]
9front's manuals will teach you the basics, the actual basics of CS (the plan9 intro too, if you know how to adapt). These are at /sys/doc. Begin with rc(1) and keep upping the levels. You can try 9front in a virtual machine safely. There are instructions to get, download and set it up at https://9front.org .

Write servers/clients with rc(1) and the tools at /bin/aux, such as aux/listen. There are already IRC clients and some other tools. Then do 9front's C book from Nemo.

On floats, try them at 'low level' with Forth. Get Muxleq (https://github.com/howerj/mux) and compile it:

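          # note: -ffast-math relaxes strict IEEE-754 float semantics for speed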
          cc -O2 -ffast-math -o muxleq muxleq.c
          
Edit muxleq.fth, set the constants in the file like this:

      1 constant opt.multi      ( Add in large "pause" primitive )
      1 constant opt.editor     ( Add in Text Editor )
      1 constant opt.info       ( Add info printing function )
      0 constant opt.generate-c ( Generate C code )
      1 constant opt.better-see ( Replace 'see' with better version )
      1 constant opt.control    ( Add in more control structures )
      0 constant opt.allocate   ( Add in "allocate"/"free" )
      1 constant opt.float      ( Add in floating point code )
      0 constant opt.glossary   ( Add in "glossary" word )
      1 constant opt.optimize   ( Enable extra optimization )
      1 constant opt.divmod     ( Use "opDivMod" primitive )
      0 constant opt.self       ( self-interpreter [NOT WORKING] )
Recompile your image:

       ./muxleq muxleq.dec < muxleq.fth > new.dec
new.dec will be your main Forth image. Run it:

       ./muxleq new.dec
Get the book from the author and look at how the floating-point code is implemented in software. Learn Forth with the Starting Forth book (but for ANS Forth), and Thinking Forth after Starting Forth. Finally, back to 9front: there's the 'cpsbook.pdf' too, from Hoare, on concurrent programming and threads. That will be incredibly useful in the near future. If you are a Go programmer, well, you are at home with CSP.

Also, compare CSP to the concurrent Forth switching tasks. It's great to compare/debug code in a tiny Forth on Subleq/Muxleq, because if your code gets relatively fast there, it will fly under GForth, and the constraints will force you to be a much better programmer.

CPUs? Caches? RAM latency? Muxleq/Subleq behaves nearly the same everywhere, depending on your simulation speed. For learning, it's there. On real-world systems, glibc, the Go runtime, etc., will take care of that, making for a similar outcome everywhere. If not, most people out there will be aware of the stuff from SSE2 up to NEON under ARM.

Hint: there are already transpilers from dedicated Intel instructions to ARM ones and vice versa.

> How garbage collection works inside of the JVM?

No, but I can figure it out a little, given the Zenlisp one as a rough approximation. Or, you know, Forth, by hand. And Go, which seems easier and doesn't need a dog-slow VM trying to replicate what Inferno did in the 90's with far fewer resources.

bsder 13 hours ago [-]
Sure, we have complex systems where we don't know how everything works (car, computer, cellphone, etc.). However, we do expect those systems to behave deterministically at their interface to us. And when they don't, we consider them broken.

For example, why is the HP-12C still the dominant business calculator? Because other calculators were non-deterministically wrong for certain financial calculations. The HP-12C may not have even been strictly "correct", but it was deterministic in the ways it wasn't.

Financial people didn't know or care about guard digits or numerical instability. They very much did care that their financial calculations were consistent and predictable.
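
For anyone who hasn't been bitten by this: floating-point addition isn't even associative, so evaluation order alone changes results. A tiny Python illustration (binary doubles, not the HP-12C's internal decimal arithmetic, which is part of the point):

    print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))  # False

    print(sum([1e16, 1.0, -1e16]))  # 0.0; the 1.0 was absorbed
    print(sum([1e16, -1e16, 1.0]))  # 1.0; same numbers, new order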

The question is: Who will build the HP-12C of AI?

ltbarcly3 4 hours ago [-]
I mean the quotes in this article aren't even disagreeing except on vague value judgements with no practical consequences.

Yes, you can make better, more perfect solutions with a deep understanding of every consequence of every design decision. You can also make some real-world situation thousands of times better without a deep understanding of things. These two statements don't disagree at all.

The MIPS image rendering example is perfect here. Notice he didn't say "there was some obscure attempt to load images on MIPS and nobody used it because it was so slow, so they used the readily available fast one instead". There was some apparently widely used routine to load images, popular enough that it got the attention of one of the few people who deeply understand how the system works, and they fixed it up.

PHP is an awful trash language, and like half the internet was built on it, and lots of people had a lot of fun and got a lot more work done because people wrote a lot of websites in PHP. Sure, PHP is still trash, but it's better to have trash than to wait around for someone to 'do it right' who maybe never gets around to it.

Worse is better. https://en.wikipedia.org/wiki/Worse_is_better
