> someone raised the question of “what would be the role of humans in an AI-first society”.
Norbert Wiener, considered the father of Cybernetics, wrote a book back in the 1950s entitled "The Human Use of Human Beings" that raised these questions in the early days of digital electronics and control systems. In it, he brings up things like:
- 'Robots enslaving humans to do jobs better suited to robots, because a lack of humans in the feedback loop leads to fascist machines.'
- 'An economy without human interaction could lead to entropic decay, as machines lack the biological drive for anti-entropic organization.'
- 'Automation will lead to the immediate devaluation of routine human labor. Society needs to decouple a person's "worth" from their "utility as a tool".'
The human purpose is not to compete but to safeguard the teleology (purpose) of the system.
WarmWash 13 minutes ago [-]
> - 'Automation will lead to the immediate devaluation of routine human labor. Society needs to decouple a person's "worth" from their "utility as a tool".'
I have this vision that, in the absence of the ability for people to form social hierarchies on the back of their economic value to society, there will be an AI-fueled class hierarchy based on people's general social ability. So rather than money determining your neighborhood, your ability to not be violent or crazy does.
erikerikson 3 minutes ago [-]
This seems to suggest a single-dimensional evaluation. The complexity of social compatibility is high, and the potential capacity to evaluate it could be correspondingly greater.
Whenever I worry that AI will eventually do all the work, I remind myself that the world is full of almost infinite problems, and we'll continue to have the choice to be problem solvers rather than just consumers.
7777777phil 4 days ago [-]
API prices dropped 97% in two years, so the model layer is already a commodity. The question is which context layer actually sticks. The OpenClaw example in the article (400K lines to 4K) is a nice proof point for what happens when context replaces code.
I've been arguing for some time now that it's the "organizational world model," the accumulated process knowledge unique to each company, that's genuinely hard to replicate. I did a full "report" about the six-layer decomposition here: https://philippdubach.com/posts/dont-go-monolithic-the-agent...
steveBK123 23 minutes ago [-]
The way many corporates use the models nearly interchangeably as relative quality/value shifts from release to release, AND the API price drops, does make me question what the model moat even is.
If LLMs are going to make intelligence a commodity in some sense, the question will be where the value ends up accruing. The picks-and-shovels companies, and all the end-user products being delivered? Mainframe value didn't primarily accrue to DEC. PC value didn't really accrue to IBM. The internet's value didn't accrue to Netscape. Mobile's value didn't only accrue to Apple.
One reminder that new efficiency / greatly lowered costs sometimes doesn't replace work (or at least not 1:1) but simply makes possible things that were never economical before. An example you hear about: AI agents that will basically behave like a personal assistant. 99% of the rich world cannot afford a human personal assistant today, but I'd guess that if it were a service included in their Apple Intelligence / Google something / Office365 subscription, they'd use it.
We seem to be continually creating new types of jobs. Only a few generations ago, 75% of people worked on farms. Farm jobs still exist; you just don't need so many people.
The types of work my father and grandfather did still exist. My father's job didn't really exist in his father's time. The work I do did not exist as an option during their careers. The next generation will be doing some other type of work for some other type of company that hasn't been imagined yet.
martin_drapeau 8 minutes ago [-]
100%
Currently integrating an AI Assistant with read tools (Retrieval-Augmented Generation, or RAG, as they say). Many of the policies we are writing provide context (what the entities are and how they relate). Projecting forward to when we add write tools: context is everything.
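To make that concrete, here's a minimal sketch of what a read-only RAG tool could look like. Everything here is invented for illustration (the store, the names, the toy keyword scoring; a real setup would use embeddings), not the commenter's actual system:

    # Hypothetical sketch of a read-only RAG tool over policy docs.
    from dataclasses import dataclass, field

    @dataclass
    class PolicyDoc:
        title: str
        body: str                      # policy text the assistant can cite
        entities: list[str] = field(default_factory=list)  # entities covered

    STORE: list[PolicyDoc] = []        # toy in-memory corpus

    def retrieve_policies(query: str, k: int = 3) -> list[PolicyDoc]:
        """Read tool: rank policies by naive keyword overlap with the query."""
        terms = set(query.lower().split())
        scored = [(len(terms & set(d.body.lower().split())), d) for d in STORE]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [doc for score, doc in scored[:k] if score > 0]

The point of keeping it read-only is that the assistant can pull context about entities and their relationships without being able to change anything; write tools raise the stakes considerably.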
apsurd 1 hours ago [-]
From your link:
> Closing that gap, building systems that capture and encode process knowledge rather than just decision records, is the highest-value problem in enterprise AI right now.
I buy this. What exactly is the export artifact that encodes this built-up context? Is it the entire LLM conversation log? My casual understanding of MCP is service/agent-to-agent "just in time" context, which is different from "world model" context. Is that right?
I'm curious whether there's an entirely new format for this data that's evolving, or if it's as blunt as exporting the entire conversation log, or embeddings of the log, across AIs.
7777777phil 35 minutes ago [-]
The MCP point is right, though tbh MCP is more like plumbing than memory: execution-time context for tools and resources. The world model is a different thing entirely; it needs to persist across sessions, accumulate, and actually be queryable.
In practice it's mostly RAG over structured artifacts: process docs, decision logs, annotated code, and so on. Conversation history works better than you'd expect as a starting point, but it gets noisy fast, and I haven't seen a clean pruning strategy anywhere...
On the format question, imo nobody really knows yet. It probably ends up as some kind of knowledge graph with typed nodes that MCP servers expose, but I haven't seen anyone build that cleanly. Most places are still doing RAG over PDFs, and that tells you where the friction is.
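For what it's worth, the "knowledge graph with typed nodes" idea doesn't have to be exotic. A minimal sketch (all names and schema invented, just to show the shape an MCP server could expose read-only):

    # Hypothetical world-model store: typed nodes plus edges in SQLite,
    # so it persists across sessions, accumulates, and stays queryable.
    import json, sqlite3

    db = sqlite3.connect("world_model.db")  # survives across sessions
    db.executescript("""
    CREATE TABLE IF NOT EXISTS nodes (id TEXT PRIMARY KEY, type TEXT, props TEXT);
    CREATE TABLE IF NOT EXISTS edges (src TEXT, rel TEXT, dst TEXT);
    """)

    def add_node(node_id: str, node_type: str, **props) -> None:
        db.execute("INSERT OR REPLACE INTO nodes VALUES (?, ?, ?)",
                   (node_id, node_type, json.dumps(props)))

    def relate(src: str, rel: str, dst: str) -> None:
        db.execute("INSERT INTO edges VALUES (?, ?, ?)", (src, rel, dst))

    def neighbors(node_id: str, rel: str) -> list[str]:
        """The kind of query an MCP server might expose as a resource."""
        rows = db.execute("SELECT dst FROM edges WHERE src = ? AND rel = ?",
                          (node_id, rel))
        return [dst for (dst,) in rows]

    # Each session appends process knowledge instead of re-deriving it:
    add_node("deploy-policy", "process_doc", owner="platform-team")
    relate("deploy-policy", "applies_to", "billing-service")
    db.commit()

Nothing here solves the pruning problem, but typed nodes at least give you something better to retrieve over than raw conversation logs.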
amirhirsch 1 hours ago [-]
Not sure about the conclusion regarding NVidia value capture. I imagine the context for many applications will come from a physical simulation environment running on dramatically more GPUs than the AI part.
farcitizen 4 days ago [-]
Great article. And this idea is largely behind all the new Microsoft IQ products (Work IQ, Foundry IQ, Fabric IQ): giving the agents context on all relevant enterprise data to do their job.
qsera 51 minutes ago [-]
Ah, another article that implies the inevitable AI apocalypse, disguised as a thought piece!
simianwords 32 minutes ago [-]
I have my own challenge: I think LLMs can do everything a human can do, and typically way better, if the context required for the problem fits in 10,000 tokens.
For now this challenge is text only.
Can we think of anything that LLMs can’t do?
seanhunter 7 minutes ago [-]
This is a “no true Scotsman” challenge. People are going to say LLMs can’t do certain things, and you are going to say they can.
Not very interesting.
simianwords 54 seconds ago [-]
Let’s ask in good faith. Can you suggest something that it can’t do? Functional things. I’ll reply in good faith and consider it.
badgersnake 30 minutes ago [-]
* code
* write interesting prose
* generate realistic images
infecto 7 minutes ago [-]
> Only really dumb people think that. Or maybe you are an LLM.
You deleted it, but still, come on. Why would you even think to write that?
simianwords 29 minutes ago [-]
It can do all of them. I also said text only.
philipwhiuk 43 minutes ago [-]
> But the topic of conversation that I enjoyed the most was when someone raised the question of “what would be the role of humans in an AI-first society”. Some were skeptical about whether we are ever going to reach an AI-first society. If we understand as an AI-first society, one where the fabric of the economy and society is automated through agents interacting with each other without human interaction, I think that unless there is a catastrophic event that slows the current pace of progress, we may reach a flavor of this reality in the next decade or two.
I don't really know how you can make this prediction and be taken seriously, to be honest.
Either you think it's the natural result of the current LLM products, in which case a decade looks way too long.
Or you think it requires a leap of design, in which case it's kind of unknown when we get to that point, and '10 to 20 years' is probably just drawn from the same timeframe as the 'fusion as a viable source of electricity' predictions, i.e. vague guesswork.
keiferski 6 minutes ago [-]
Right now, 30 seconds ago, I asked ChatGPT to tell me about a book I found that was written in the 60s.
It made up the entire description. When I pointed this out, it apologized and then made up another description.
The idea that this is going to lead to superintelligence in a few years is absolute nonsense.
steveBK123 32 minutes ago [-]
Right. If thought of as a tool for automation, then AI is going to add productivity/efficiency gains, disrupt industries, cause some labor upheaval, etc.
If someone is proposing that an "AI first" society is inevitable, I'd ask if they think we live in a "computer first" or "machine first" society today?
If it's as existential and society-altering as "AI-first society" implies, then we'd more likely get the Dune timeline here, since humans have agency and stuff happens. At some point those in control take so disproportionately much that societal upheaval pushes back.
Why the fuck would we ever want an AI-first society?
AIorNot 46 minutes ago [-]
"what is the role of humans in a scenario where work is no longer necessary? This is significant because, since the industrial revolution, work has played an important role in shaping an individual’s identity. How will we occupy our time when we don’t have to spend more than half of our waking hours on a job"
Umm, I have been working in AI in multiple verticals for the past 3 years, and I have been far busier and more stressed, with far less job security, than in the 15 years before that in tech.
I used Anthropic to analyze the situation; it did a halfway decent job:
https://unratified.org/why/
https://news.ycombinator.com/item?id=47263664
For now this is far more accurate: https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies...
Wake me up when the computers run the world and I can relax... but I don't think it's happening in my lifetime.