> It remains unclear whether continuing to throw vast quantities of silicon and ever-bigger corpuses at the current generation of models will lead to human-equivalent capabilities. Massive increases in training costs and parameter count seem to be yielding diminishing returns. Or maybe this effect is illusory. Mysteries!
I’m not even sure whether this is possible. The current corpus used for training includes virtually all known material. If we make it illegal for these companies to use copyrighted content without remuneration, either the task gets very expensive, indeed, or the corpus shrinks. We can certainly make the models larger, with more and more parameters, subject only to silicon’s ability to give us more transistors for RAM density and GPU parallelism. But it honestly feels like, without another “Attention is All You Need” level breakthrough, we’re starting to see the end of the runway.
embedding-shape 39 seconds ago [-]
[delayed]
stickfigure 36 minutes ago [-]
I think it's too early to declare the Turing test passed. You just need to have a conversation long enough to exhaust the context window. Less than that, since response quality degrades long before you hit hard window limits. Even with compaction.
Neuroplasticity is hard to simulate in a few hundred thousand tokens.
criley2 23 minutes ago [-]
For as rigorous of a Turing test as you present, I believe many (or even most) humans would also fail it.
How many humans seriously have the attention span to have a million "token" conversation with someone else and get every detail perfect without misremembering a single thing?
nine_k 15 minutes ago [-]
But context window exhaustion does not look like mere forgetfulness, but more like loss of general coherence, like getting drunk.
beders 17 minutes ago [-]
Thank you for putting it so succinctly.
I keep explaining to my peers, friends and family that what actually is happening inside an LLM has nothing to do with conscience or agency
and that the term AI is just completely overloaded right now.
dwallin 49 minutes ago [-]
Some people point at LLMs confabulating, as if this wasn’t something humans are already widely known for doing.
I consider it highly plausible that confabulation is inherent to scaling intelligence. In order to run computation on data that due to dimensionality is computationally infeasible, you will most likely need to create a lower dimensional representation and do the computation on that. Collapsing the dimensionality is going to be lossy, which means it will have gaps between what it thinks is the reality and what is.
bee_rider 1 minutes ago [-]
We shouldn’t try to build a worse version of a human. We should try to build a better compiler and encyclopedia.
n4r9 39 minutes ago [-]
The concern for me about LLMs confabulating is not that humans don't do it. It's that the massive scale at which LLMs will inevitably be deployed makes even the smallest confabulation extremely risky.
NiloCK 10 minutes ago [-]
I don't understand this. Many small errors distributed across a large deployment sounds a lot like normal mode of error prone humans / cogs / whatevers distributed over a wide deployment.
Frieren 34 minutes ago [-]
> Some people point at LLMs confabulating
No. LLMs do not confabulate they bullshit. There is a big difference. AIs do not care, cannot care, have not capacity to care about the output. String tokens in, string tokes out. Even if they have all the data perfectly recorded they will still fail to use it for a coherent output.
> Collapsing the dimensionality is going to be lossy, which means it will have gaps between what it thinks is the reality and what is.
Confabulation has to do with degradation of biological processes and information storage.
There is no equivalent in a LLM. Once the data is recorded it will be recalled exactly the same up to the bit. A LLM representation is immutable. You can download a model a 1000 times, run it for 10 years, etc. and the data is the same. The closes that you get is if you store the data in a faulty disk, but that is not why LLMs output is so awful, that would be a trivial problem to solve with current technology. (Like having a RAID and a few checksums).
stronglikedan 25 minutes ago [-]
I don't even think they bullshit, since that requires conscious effort that they do not an cannot possess. They just simply interpret things incorrectly sometimes, like any of us meatbags.
thayne 8 minutes ago [-]
They make incorrect predictions of text to respond to prompts.
The neat thing about LLMs is they are very general models that can be used for lots of different things. The downside is they often make incorrect predictions, and what's worse, it isn't even very predictable to know when they make incorrect predictions.
knowaveragejoe 33 minutes ago [-]
> No. LLMs do not confabulate they bullshit. There is a big difference. AIs do not care, cannot care, have not capacity to care about the output. String tokens in, string tokes out. Even if they have all the data perfectly recorded they will still fail to use it for a coherent output.
Isn't "caring" a necessary pre-requisite for bullshitting? One either bullshits because they care, or don't care, about the context.
marssaxman 23 minutes ago [-]
They're presumably referring to the Harry Frankfurt definition of bullshit: "speech intended to persuade without regard for truth. The liar cares about the truth and attempts to hide it; the bullshitter doesn't care whether what they say is true or false."
simianwords 33 minutes ago [-]
You seem confident. Can you get it to bullshit on GPT-5.4 thinking? Use a text prompt spanning 3-4 pages and lets see if it gets it wrong.
I haven't seen any counter examples, so you may give some examples to start with.
zeroonetwothree 42 minutes ago [-]
And is that considered a feature of humans or a bug?
Is it something we want to emulate?
margalabargala 32 minutes ago [-]
The suggestion is that it is an intrinsic quality and therefore neither a feature nor a bug.
It's like saying, computation requires nonzero energy. Is that a feature or a bug? Neither, it's irrelevant, because it's a physical constant of the universe that computation will always require nonzero energy.
If confabulation is a physical constant of intelligence, then like energy per computation, all we can do is try to minimize it, while knowing it can never go to zero.
FloorEgg 43 minutes ago [-]
Yes, and to me the evolution of life sure looks like an evolution of more truthful models of the universe in service of energy profit. Better model -> better predictions -> better profit.
I'm extremely skeptical that all of life evolved intelligence to be closer to truth only for us to digitize intelligence and then have the opposite happen. Makes no sense.
telephone3 35 minutes ago [-]
My understanding is that this is the opposite of what is typically understood to be true - organisms with less truthful (more reductive/compressed) perception survive better than those with more complete perception. "Fitness beats truth."
throwaway27448 17 minutes ago [-]
Humans can be reasoned with, though, and are capable of learning.
sillyfluke 31 minutes ago [-]
If you want to call it that, I find the confabulation in LLMs extreme. That level of confabulation would most likely be diagnosed as dementia in humans.[0] Hence, it is considered a bug not a feature in humans as well.
Now imagine a high-skilled software engineer with dementia coding safety-critical software...
> Some people point at LLMs confabulating, as if this wasn’t something humans are already widely known for doing.
Are you seriously making the argument that AI "hallucinations" are comparable and interchangeable to mistakes, omissions and lies made by humans?
You understand that calling AI errors "hallucinations" and "confabulations" is a metaphor to relate them to human language? The technical term would be "mis-prediction", which suddenly isn't something humans ever do when talking, because we don't predict words, we communicate with intent.
nothinkjustai 31 minutes ago [-]
It’s a failure mode of humans, it’s the entire mode of LLMs.
I think the discussion has to be more nuanced than this. "LLMs still can't do X so it's an idiot" is a bad line of thought. LLMs with harnesses are clearly capable of engaging with logical problems that only need text. LLMs are not there yet with images, but we are improving with UI and access to tools like figma. LLMs are clearly unable to propose new, creative solutions for problems it has never seen before.
throwaway27448 18 minutes ago [-]
> LLMs with harnesses are clearly capable of engaging with logical problems that only need text.
To some extent. It's not clear where specifically the boundaries are, but it seems to fail to approach problems in ways that aren't embedded in the training set. I certainly would not put money on it solving an arbitrary logical problem.
__alexs 13 minutes ago [-]
Solving arbitrary logical problems seems to be equivalent to solving the halting problem so you are probably wise not to make that bet.
There's been a lot of progress there, it's just that an LLM that's best for, say coding, isn't going to be also the best for image edit.
nomdep 18 minutes ago [-]
"As LLMs etc. are deployed in new situations, and at new scale, there will be all kinds of changes in work, politics, art, sex, communication, and economics."
For an article five years in the making, this is what I expected it to be about. Instead, we got a ramble about how imperfect LLMs are right now.
52-6F-62 4 minutes ago [-]
> Instead, we got a ramble about how imperfect LLMs are right now.
I wager this is a point that needs beaten into the common psyche. After all, it's been sold that it is not an imperfect tool, but the solution to all of our problems in every field forever. That's why these companies need billions upon billions of dollars of public subsidies and investments that would otherwise find their way to more pragmatic ends.
bstsb 1 hours ago [-]
if you can’t access the page through region blocks:
I have a question for all the "humans make those mistakes too" people in this thread, and elsewhere: have you ever read, or at least skimmed a summary of, "The Origin of Consciousness in the Breakdown of the Bicameral Mind"? Did you say "yeah, that sounds right"? Do you feel that your consciousness is primarily a linguistic phenomenon?
I am not trying to be snarky; I used to think that intelligence was intrinsically tied to or perhaps identical with language, and found deep and esoteric meaning in religious texts related to this (i.e. "in the beginning was the Word"; logos as soul as language-virus riding on meat substrate).
The last ~three years of LLM deployment have disabused me of this notion almost entirely, and I don't mean in a "God of the gaps" last-resort sort of way. I mean: I see the output of a purely-language-based "intelligence", and while I agree humans can make similar mistakes/confabulations, I overwhelmingly feel that there is no "there" there. Even the dumbest human has a continuity, a theory of the world, an "object permanence"... I'm struggling to find the right description, but I believe there is more than language manipulation to intelligence.
(I know this is tangential to the article, which is excellent as the author's usually are; I admire his restraint. However, I see exemplars of this take all over the thread so: why not here?)
xandrius 18 minutes ago [-]
It feels like you probably went too deep in the LLM bandwagon.
An LLM is a statistical next token machine trained on all stuff people wrote/said. It blends texts together in a way that still makes sense (or no sense at all).
Imagine you made a super simple program which would answer yes/no to any questions by generating a random number. It would get things right 50% of the times. You can them fine-tune it to say yes more often to certain keywords and no to others.
Just with a bunch of hardcoded paths you'd probably fool someone thinking that this AI has superhuman predictive capabilities.
This is what it feels it's happening, sure it's not that simple but you can code a base GPT in an afternoon.
simianwords 16 minutes ago [-]
If it were not "just a statistical next token machine", how different would it behave?
Can you find an example and test it out?
nine_k 19 minutes ago [-]
If you look at different ancient traditions, you will notice how they struggle with the limitations of language, with its inability to represent certain things that are not just crucial for understanding the world, but also are even somehow communicable. Buddhists dug into that in a very analytical, articulate way, for instance.
Another perspective: cetaceans are considered to be as conscious as humans, but any attempts to interpret their communication as a language failed so far. They can be taught simple languages to communicate with humans, as can be chimps. But apparently it's not how they process the world inside.
stavros 17 minutes ago [-]
I think there are two types of discussions, when it comes to LLMs: Some people talk about whether LLMs are "human" and some people talk about whether LLMs are "useful" (ie they perform specific cognitive tasks at least as well as humans).
Both of those aspects are called "intelligence", and thus these two groups cannot understand each other.
delusional 23 minutes ago [-]
> I'm struggling to find the right description
I think you're circling the concept of a "soul". It is the reason that, in non-communicative disabled people, we still see a life.
I've wanted to make an art piece. It would be a chatbox claiming to connect you to the first real intelligence, but that intelligence would be non-communicative. I'd assure you that it is the most intelligent being, that it had a soul, but that it just couldn't write back.
Intelligence and Soul is not purely measurable phenomenon. A man can do nothing but stupid things, say nothing but outright lies, and still be the most intelligent person. Intelligence is within.
Kuyawa 49 minutes ago [-]
And the past too, if we've been paying attention
embedding-shape 44 minutes ago [-]
> In general, ML promises to be profoundly weird. Buckle up.
I love that it ends with such a positive note, even though it's generally a critical article, at least it's well reasoned and not utterly hyping/dooming something.
Thanks yet again Kyle!
PaulDavisThe1st 36 minutes ago [-]
While the economic, energy, political and social issues associated with LLMs ought to be enough to nix the adoption that their boosters are seeking ...
... I still think there is an interesting question to be investigated about whether, by building immensely complex models of language, one of our primary ways that we interact with, reason about and discuss the world, we may not have accidentally built something with properties quite different than might be guessed from the (otherwise excellent) description of how they work in TFA.
I agree with pretty much everything in TFA, so this is supplemental to the points made there, not contesting them or trying to replace them.
slopinthebag 15 minutes ago [-]
Great series of articles, thank you. It's exhausting reading a deluge of (often AI generated) comments from people claiming wild things about LLM's, and it's nice to hear some sanity enter the conversation.
32 minutes ago [-]
perching_aix 38 minutes ago [-]
This is like all the usual anti-LLM talking points and sentiments fused together.
Doesn't it get boring?
I like using these models a lot more than I stand hearing people talk about them, pro or contra. Just slop about slop. And the discussions being artisanal slop really doesn't make them any better.
Every time I hear some variation of bullshitting or plagiarizing machines, my eyes roll over. Do these people think they're actually onto something? I've been seeing these talking points for literal years. For people who complain about no original thoughts, these sure are some tired ones.
masfuerte 26 minutes ago [-]
Why do you insist on reading and commenting on these articles that bore you so much?
stavros 13 minutes ago [-]
Because saying "this is boring, let's stop talking about it" is an opinion worthwhile of expression.
simianwords 24 minutes ago [-]
Its usual gibberish that tries to throw many darts and see what sticks. Oh LLM's steal other people's work? Check. Oh LLM's cause ecological damage? Check. Oh LLM's hallucinate? Check.
When you see a pattern like this, you know that its not coming from any place of truth but rather ideology
stavros 10 minutes ago [-]
Yeah, it gets really boring. Whenever I see "slot machines" or "bullshit machines" or whatever, I just ignore the comment and move on, because it signals that it's someone in such deep denial that they've turned their brain off.
I'd much rather read articles about what LLMs can/can't do, or stuff people have built with LLMs, than read how everything LLMs touch turns to shit.
giraffe_lady 25 minutes ago [-]
"These arguments may be correct but they aren't novel" ??
simianwords 23 minutes ago [-]
I don't think calling AI a bullshit machine is correct. In spirit.
LogicFailsMe 17 minutes ago [-]
Old and stupid hot take IMO. I want the time back I put into perusing this. Even the scale of LLMs is puny next to the scale of lying humans and the sheer impact one compulsively lying human can have given we love to be led by confidently wrong narcissists. I mean if that isn't obvious by now, I guess it never will be. The Vogon constructor fleet is way overdue in my book.
bensyverson 1 hours ago [-]
I get the frustration, but it's reductive to just call LLMs "bullshit machines" as if the models are not improving. The current flagship models are not perfect, but if you use GPT-2 for a few minutes, it's incredible how much the industry has progressed in seven years.
It's true that people don't have a good intuitive sense of what the models are good or bad at (see: counting the Rs in "strawberry"), but this is more a human limitation than a fundamental problem with the technology.
the_snooze 1 hours ago [-]
Two things can be true at the same time: The technology has improved, and the technology in its current state still isn't fit for purpose.
I stress test commercially deployed LLMs like Gemini and Claude with trivial tasks: sports trivia, fixing recipes, explaining board game rules, etc. It works well like 95% of the time. That's fine for inconsequential things. But you'd have to be deeply irresponsible to accept that kind of error rate on things that actually matter.
The most intellectually honest way to evaluate these things is how they behave now on real tasks. Not with some unfalsifiable appeal to the future of "oh, they'll fix it."
hedgehog 1 hours ago [-]
The errors are also not distributed in the same way as you'd expect from a human. The tools can synthesize a whole feature in a moderately complicated web app including UI code, schema changes, etc, and it comes out perfectly. Then I ask for something simple like a shopping list of windshield wipers etc for the cars and that comes out wildly wrong (like wrong number of wipers for the cars, not just the wrong parts), stuff that a ten year old child would have no trouble with. I work in the field so I have a qualitative understanding of this behavior but I think it can be extremely confusing to many people.
bensyverson 49 minutes ago [-]
> the technology in its current state still isn't fit for purpose.
This is a broad statement that assumes we agree on the purpose.
For my purpose, which is software development, the technology has reached a level that is entirely adequate.
Meanwhile, sports trivia represents a stress test of the model's memorized world knowledge. It could work really well if you give the model a tool to look up factual information in a structured database. But this is exactly what I meant above; using the technology in a suboptimal way is a human problem, not a model problem.
the_snooze 39 minutes ago [-]
There's nothing in these models that say its purpose is software development. Their design and affordances scream out "use me for anything." The marketing certainly matches that, so do the UIs, so do the behaviors. So I take them at their word, and I see that failure modes are shockingly common even under regular use. I'm not out to break these things at all. I'm being as charitable and empirical as I can reasonably be.
If the purpose is indeed software development with review, then there's nothing stopping multi-billion dollar companies from putting friction into these sytems to direct users towards where the system is at its strongest.
nradov 25 minutes ago [-]
The LLM vendors are selling tokens. Why would they put friction into selling more tokens? Caveat emptor.
jerf 55 minutes ago [-]
One of the reasons I'm comfortable using them as coding agents is that I can and do review every line of code they generate, and those lines of code form a gate. No LLM-bullshit can get through that gate, except in the form of lines of code, that I can examine, and even if I do let some bullshit through accidentally, the bullshit is stateless and can be extracted later if necessary just like any other line of code. Or, to put it another way, the context window doesn't come with the code, forming this huge blob of context to be carried along... the code is just the code.
That exposes me to when the models are objectively wrong and helps keep me grounded with their utility in spaces I can check them less well. One of the most important things you can put in your prompt is a request for sources, followed by you actually checking them out.
And one of the things the coding agents teach me is that you need to keep the AIs on a tight leash. What is their equivalent in other domains of them "fixing" the test to pass instead of fixing the code to pass the test? In the programming space I can run "git diff *_test.go" to ensure they didn't hack the tests when I didn't expect it. It keeps me wondering what the equivalent of that is in my non-programming questions. I have unit testing suites to verify my LLM output against. What's the equivalent in other domains? Probably some other isolated domains here and there do have some equivalents. But in general there isn't one. Things like "completely forged graphs" are completely expected but it's hard to catch this when you lack the tools or the understanding to chase down "where did this graph actually come from?".
The success with programming can't be translated naively into domains that lack the tooling programmers built up over the years, and based on how many times the AIs bang into the guardrails the tools provide I would definitely suggest large amounts of skepticism in those domains that lack those guardrails.
nradov 28 minutes ago [-]
Which things actually matter? I think we can all agree that an LLM isn't fit for purpose to control a nuclear power plant or fly a commercial airliner. But there's a huge spectrum of things below that. If an LLM trading error causes some hedge fund to fail then so what? It's only money.
floren 37 minutes ago [-]
Six months bro, we're still so early
simianwords 29 minutes ago [-]
> I stress test commercially deployed LLMs like Gemini and Claude with trivial tasks: sports trivia, fixing recipes, explaining board game rules, etc. It works well like 95% of the time. That's fine for inconsequential things. But you'd have to be deeply irresponsible to accept that kind of error rate on things that actually matter.
95% is not my experience and frankly dishonest.
I have ChatGPT open right now, can you give me examples where it doesn't work but some other source may have got it correct?
I have tested it against a lot of examples - it barely gets anything wrong with a text prompt that fits a few pages.
> The most intellectually honest way to evaluate these things is how they behave now on real tasks
A falsifiable way is to see how it is used in real life. There are loads of serious enterprise projects that are mostly done by LLMs. Almost all companies use AI. Either they are irresponsible or you are exaggerating.
Lets be actually intellectually honest here.
qsera 4 minutes ago [-]
>95% is not my experience and frankly dishonest.
Quite frankly, this is exactly like how two people can use the same compression program on two different files and get vastly different compression ratios (because one has a lot of redundancy and the other one has not).
simianwords 3 minutes ago [-]
I'm asking for a single example.
qsera 44 seconds ago [-]
But why do you need an example? Isn't it pretty well understood that LLMS will have trouble responding to stuff that is under represented in the training data?
Arainach 1 hours ago [-]
Whether LLMs can create correct content doesn't matter. We've already seen how they are being used and will be used.
Fake content and lies. To drive outrage. To influence elections. To distract from real crimes. To overload everyone so they're too tired to fight or to understand. To weaken the concept that anything's true so that you can say anything. Because who cares if the world dies as long as you made lots of money on the way.
danny_codes 53 minutes ago [-]
> Because who cares if the world dies as long as you made lots of money on the way.
Guiding principle of the AI industry
gdulli 44 minutes ago [-]
It's really the whole tech industry as it exists right now and AI is a victim of bad timing. If this AI had been invented 40 years ago there'd have been a lower ceiling on the damage it could do.
Another way of saying that is that capitalism is the real problem, but I was never anti-capitalist in principle, it's just gotten out of hand in the last 5-10 years. (Not that it hadn't been building to that.)
palmotea 6 minutes ago [-]
> Another way of saying that is that capitalism is the real problem, but I was never anti-capitalist in principle, it's just gotten out of hand in the last 5-10 years. (Not that it hadn't been building to that.)
Capitalism is a tool and it's fine as a tool, to accomplish certain goals while subordinated to other things. Unfortunately it's turned into an ideology (to the point it's worshiped idolatrously by some), and that's where things went off the rails.
cindyllm 1 minutes ago [-]
[dead]
gdulli 1 hours ago [-]
Computer graphics have been improving for decades but the uncanny valley remains undefeated. I don't know why anyone expects a breakthrough in other areas. There's a wall we hit and we don't understand our own consciousness and effectiveness well enough to replicate it.
PaulKeeble 38 minutes ago [-]
In computer graphics we understand how it works, we just lack the computational power to do it real time, but we can with sufficient processing produce realistic looking images with physically accurate lighting. But when it comes to cognition its a lot of guesswork, we haven't yet mapped out the neuron connections in a brain, we haven't validated it works as popular science writing suggests. We don't understand intelligence, so all we can do is accidentally bumble into it and it seems unlikely that will just happen especially when its so hard to compute what we are already doing.
kritiko 45 minutes ago [-]
We have credible deepfakes on demand. (To be fair, there have been deceptive photos as long as photos have existed, but the cost of automating their creation going to basically zero has a social impact)
zdragnar 1 hours ago [-]
That's not why the author calls them bullshit machines.
> One way to understand an LLM is as an improv machine. It takes a stream of tokens, like a conversation, and says “yes, and then…” This yes-and behavior is why some people call LLMs bullshit machines. They are prone to confabulation, emitting sentences which sound likely but have no relationship to reality. They treat sarcasm and fantasy credulously, misunderstand context clues, and tell people to put glue on pizza.
Yes, there have been improvements on them, but none of those improvements mitigate the core flaw of the technology. The author even acknowledges all of the improvements in the last few months.
p_stuart82 49 minutes ago [-]
models are improving. the pricing already assumes they're ready for prod. that's where the fires start
karmakaze 51 minutes ago [-]
Bullshit is the perfect term here, even as AI's get so much better and capable Brandolini's Law aka the "bullshit asymmetry principle" always applies--the energy required to refute misinformation is an order of magnitude larger than that needed to produce it. Even to use AIs effectively today requires a very good BS detector--some day in the future it won't.
ura_yukimitsu 51 minutes ago [-]
Calling LLMs "bullshit machines" is a reference to a 2024 paper [1] which itself uses the concept of "bullshit" as defined in the essay/book "On Bullshit" by Harry G. Frankfurt [2]. The TL;DR is that LLMs are fundamentally bullshit machines because they are only made to generate sentences that sound plausible, but plausible does not always mean true.
it's not a bullshit machine because its output is bad, it's a bullshit machine because its output is literally 'bullshit' as in, output that is statistically likely but with no factual or reasoning basis. as the models have improved, their bullshit is more statistically likely to sound coherent (maybe even more likely to be 'accurate'), but no more factual and with no more reasoning.
1 hours ago [-]
4ndrewl 59 minutes ago [-]
It doesn't matter how good the models become. They can only deal in bullshit, in the academic use of the term.
Scaevolus 1 hours ago [-]
They are bullshit machines because they do not have an internal mental model of truth like a human does. The flagship models bullshit less, but their fundamental architectures prevent having truth interfere with output.
"Bullshit" is a human concept. LLMs do not work like the human brain, so to call their output "bullshit" is ascribing malice and intent that is simply not there. LLMs do not "think." But that does not mean they're not incredibly powerful and helpful in the right context.
ajross 1 hours ago [-]
> it's reductive to just call LLMs "bullshit machines" as if the models are not improving
This is true, but I prefer to think of it as "It's delusional to pretend as if human beings are not bullshit machines too".
Lies are all we have. Our internal monologue is almost 100% fantasy. Even in serious pursuits, that's how it works. We make shit up and lie to ourselves, and then only later apply our hard-earned[1] skill prompts to figure out whether or not we're right about it.
How many times have the nerds here been thinking through a great new idea for a design and how clever it would be before stopping to realize "Oh wait, that won't work because of XXX, which I forgot". That's a hallucination right there!
[1] Decades of education!
kolektiv 1 hours ago [-]
I'm not entirely sure I can agree, although the premise is seductive in certain ways. We do lie to ourselves, but we also have meta-cognition - we can recognise our own processes of thought. Imperfect as it may be, we have feedback loops which we can choose to use, we have heuristics we can apply, we can consciously alter our behaviour in the presence of contextual inputs, and so on.
Being wrong is not the same as a hallucination. It's a natural step on a journey to being more right. This feels a bit like Andreesen proudly stating he avoids reflection - you can act like that, but the human brain doesn't have to. LLMs have no choice in the matter.
iamjackg 1 hours ago [-]
The problem, unfortunately, is the scale. It's always scale. Humans make all the kinds of mistakes that we ascribe to LLMs, but LLMs can make them much faster and at much larger scale.
Models have gotten ridiculously better, they really have, but the scale has increased too, and I don't think we're ready to deal with the onslaught.
SkyBelow 1 hours ago [-]
Scale is very different, but I wonder if human trust isn't the real issue. We trust technology too much as a group. We expect perfection, but we also assume perfection. This might be because the machines output confident sounding answers and humans default to trusting confidence as an indirect measure for accuracy, but I think there is another level where people just blindly trust machines because they are so use to using them for algorithms that trend towards giving correct responses.
Even before LLMs where in the public's discourse, I would have business ask about using AI instead of building some algorithm manually, and when I asked if they had considered the failure rate, they would return either blank stares or say that would count as a bug. To them, AI meant an algorithm just as good as one built to handle all edge cases in business logic, but easier and faster to implement.
We can generally recognize the AIs being off when they deal in our area of expertise, but there is some AI variant of Gell-Mann Amnesia at play that leads us to go back to trusting AI when it gives outputs in areas we are novices in.
nyeah 54 minutes ago [-]
"Lies are all we have."
If so, how do we distinguish between code that works and code that doesn't work? Why should we even care?
ajross 15 minutes ago [-]
> If so, how do we distinguish between code that works and code that doesn't work?
Hilariously, not by using our brains, that's for sure. You have to have an external machine. We all understand that "testing" and "code review" are different processes, and that's why.
nothinkjustai 25 minutes ago [-]
So your logic is humans and LLMs are the same because humans are wrong sometimes?
ajross 14 minutes ago [-]
Pretty much, yeah. Or rather, the fact that we're both reliably wrong in identifiably similar ways makes "we're more alike than different" an attractive prior to me.
nothinkjustai 1 minutes ago [-]
“More alike than different” is reasonable I think, as long as we’re talking about how we have some of the same failure modes. Although the way we get there is quite different.
I’m still not a big fan of comparing humans and LLMs because LLMs lack so much of what actually makes us human. We might bullshit or be wrong because of many reasons that just don’t apply to LLMs.
AnimalMuppet 46 minutes ago [-]
Humans are different. Humans - at least thoughtful humans - know the difference between knowing something and not knowing something. Humans are capable of saying "I don't know" - not just as a stream of tokens, but really understanding what that means.
ajross 13 minutes ago [-]
> Humans - at least thoughtful humans - know the difference between knowing something and not knowing something.
Your no-true-scotsman clause basically falsifies that statement for me. Fine, LLMs are, at worst I guess, "non-thoughtful humans". But obviously LLMs are right an awful lot (more so than a typical human, even), and even the thoughtful make mistakes.
So yeah, to my eyes "Humans are NOT different" fits your argument better than your hypothesis.
(Also, just to be clear: LLMs also say "I don't know", all the time. They're just prompted to phrase it as a criticism of the question instead.)
bitwize 33 minutes ago [-]
The fact that these "bullshit machines" have already proven themselves relatively competent at programming, with upcoming frontier models coming close to eliminating it as a human activity, probably says a lot about the actual value and importance of programming in the scheme of things.
ambicapter 1 hours ago [-]
The recent article of Sam Altman described pretty much as a compulsive liar. Would it be any surprise if his most impactful contribution to the world was a machine that compulsively lies?
embedding-shape 42 minutes ago [-]
How could it be that we humans hardly even agree on what "knowledge" truly is, yet somehow this machine learning algorithm somehow "compulsively lies"? How would it even know what is a lie, and how could something lacking autonomy in the first place do anything compulsively?
quantummagic 34 minutes ago [-]
This is a good point. As much as there is too much breathless enthusiasm for AI, there is also a lot of emotionally manipulative and hyperbolic language used by skeptics. We're warned not to anthropomorphize, and then hear about AI's compulsive lying, or "hallucinations", in the next.
sph 45 minutes ago [-]
He sought to create God in his image, that's a narcissist's wet dream.
josefritzishere 1 hours ago [-]
I appreciate the directness of calling LLMs "Bullshit machines." This terminology for LLMs is well established in academic circles and is much easier for laypeople to understand than terms like "non-deterministic." I personally don't like the excessive hype on the capabilities of AI. Setting realistic expectations will better drive better product adoption than carpet bombing users with marketing.
AStrangeMorrow 59 minutes ago [-]
I have still mixed feelings about LLMs.
If I take the example of code, but that extends to many domains, it can sometimes produce near perfect architecture and implementation if I give it enough details about the technical details and fallpits. Turning a 8h coding job into a 1h review work.
On the other hand, it can be very wrong while acting certain it is right. Just yesterday Claude tried gaslighting me into accepting that the bug I was seeing was coming from a piece of code with already strong guardrails, and it was adamant that the part I was suspecting could in no way cause the issue. Turns out I was right, but I was starting to doubt myself
simianwords 28 minutes ago [-]
If it bullshits so much, you wouldn't have a problem giving me an example of it bullshitting on ChatGPT (paid version)? Lets take any example of a text prompt fitting a few pages - it may be a question in science or math or any domain. Can you get it to bullshit?
nilslice 8 minutes ago [-]
[dead]
Rendered at 17:05:05 GMT+0000 (Coordinated Universal Time) with Vercel.
I’m not even sure whether this is possible. The current corpus used for training includes virtually all known material. If we make it illegal for these companies to use copyrighted content without remuneration, either the task gets very expensive, indeed, or the corpus shrinks. We can certainly make the models larger, with more and more parameters, subject only to silicon’s ability to give us more transistors for RAM density and GPU parallelism. But it honestly feels like, without another “Attention is All You Need” level breakthrough, we’re starting to see the end of the runway.
Neuroplasticity is hard to simulate in a few hundred thousand tokens.
How many humans seriously have the attention span to have a million "token" conversation with someone else and get every detail perfect without misremembering a single thing?
I keep explaining to my peers, friends and family that what actually is happening inside an LLM has nothing to do with conscience or agency and that the term AI is just completely overloaded right now.
I consider it highly plausible that confabulation is inherent to scaling intelligence. In order to run computation on data that due to dimensionality is computationally infeasible, you will most likely need to create a lower dimensional representation and do the computation on that. Collapsing the dimensionality is going to be lossy, which means it will have gaps between what it thinks is the reality and what is.
No. LLMs do not confabulate they bullshit. There is a big difference. AIs do not care, cannot care, have not capacity to care about the output. String tokens in, string tokes out. Even if they have all the data perfectly recorded they will still fail to use it for a coherent output.
> Collapsing the dimensionality is going to be lossy, which means it will have gaps between what it thinks is the reality and what is.
Confabulation has to do with degradation of biological processes and information storage.
There is no equivalent in a LLM. Once the data is recorded it will be recalled exactly the same up to the bit. A LLM representation is immutable. You can download a model a 1000 times, run it for 10 years, etc. and the data is the same. The closes that you get is if you store the data in a faulty disk, but that is not why LLMs output is so awful, that would be a trivial problem to solve with current technology. (Like having a RAID and a few checksums).
The neat thing about LLMs is they are very general models that can be used for lots of different things. The downside is they often make incorrect predictions, and what's worse, it isn't even very predictable to know when they make incorrect predictions.
Isn't "caring" a necessary pre-requisite for bullshitting? One either bullshits because they care, or don't care, about the context.
I haven't seen any counter examples, so you may give some examples to start with.
Is it something we want to emulate?
It's like saying, computation requires nonzero energy. Is that a feature or a bug? Neither, it's irrelevant, because it's a physical constant of the universe that computation will always require nonzero energy.
If confabulation is a physical constant of intelligence, then like energy per computation, all we can do is try to minimize it, while knowing it can never go to zero.
I'm extremely skeptical that all of life evolved intelligence to be closer to truth only for us to digitize intelligence and then have the opposite happen. Makes no sense.
Now imagine a high-skilled software engineer with dementia coding safety-critical software...
[0] https://www.medicalnewstoday.com/articles/confabulation-deme...
Are you seriously making the argument that AI "hallucinations" are comparable and interchangeable to mistakes, omissions and lies made by humans?
You understand that calling AI errors "hallucinations" and "confabulations" is a metaphor to relate them to human language? The technical term would be "mis-prediction", which suddenly isn't something humans ever do when talking, because we don't predict words, we communicate with intent.
https://www.nature.com/articles/nrn2787
To some extent. It's not clear where specifically the boundaries are, but it seems to fail to approach problems in ways that aren't embedded in the training set. I certainly would not put money on it solving an arbitrary logical problem.
https://genai-showdown.specr.net/image-editing
There's been a lot of progress there, it's just that an LLM that's best for, say coding, isn't going to be also the best for image edit.
For an article five years in the making, this is what I expected it to be about. Instead, we got a ramble about how imperfect LLMs are right now.
I wager this is a point that needs beaten into the common psyche. After all, it's been sold that it is not an imperfect tool, but the solution to all of our problems in every field forever. That's why these companies need billions upon billions of dollars of public subsidies and investments that would otherwise find their way to more pragmatic ends.
https://archive.ph/I5cAE
I am not trying to be snarky; I used to think that intelligence was intrinsically tied to or perhaps identical with language, and found deep and esoteric meaning in religious texts related to this (i.e. "in the beginning was the Word"; logos as soul as language-virus riding on meat substrate).
The last ~three years of LLM deployment have disabused me of this notion almost entirely, and I don't mean in a "God of the gaps" last-resort sort of way. I mean: I see the output of a purely-language-based "intelligence", and while I agree humans can make similar mistakes/confabulations, I overwhelmingly feel that there is no "there" there. Even the dumbest human has a continuity, a theory of the world, an "object permanence"... I'm struggling to find the right description, but I believe there is more than language manipulation to intelligence.
(I know this is tangential to the article, which is excellent as the author's usually are; I admire his restraint. However, I see exemplars of this take all over the thread so: why not here?)
An LLM is a statistical next token machine trained on all stuff people wrote/said. It blends texts together in a way that still makes sense (or no sense at all).
Imagine you made a super simple program which would answer yes/no to any questions by generating a random number. It would get things right 50% of the times. You can them fine-tune it to say yes more often to certain keywords and no to others.
Just with a bunch of hardcoded paths you'd probably fool someone thinking that this AI has superhuman predictive capabilities.
This is what it feels it's happening, sure it's not that simple but you can code a base GPT in an afternoon.
Can you find an example and test it out?
Another perspective: cetaceans are considered to be as conscious as humans, but any attempts to interpret their communication as a language failed so far. They can be taught simple languages to communicate with humans, as can be chimps. But apparently it's not how they process the world inside.
Both of those aspects are called "intelligence", and thus these two groups cannot understand each other.
I think you're circling the concept of a "soul". It is the reason that, in non-communicative disabled people, we still see a life.
I've wanted to make an art piece. It would be a chatbox claiming to connect you to the first real intelligence, but that intelligence would be non-communicative. I'd assure you that it is the most intelligent being, that it had a soul, but that it just couldn't write back.
Intelligence and Soul is not purely measurable phenomenon. A man can do nothing but stupid things, say nothing but outright lies, and still be the most intelligent person. Intelligence is within.
I love that it ends with such a positive note, even though it's generally a critical article, at least it's well reasoned and not utterly hyping/dooming something.
Thanks yet again Kyle!
... I still think there is an interesting question to be investigated about whether, by building immensely complex models of language, one of our primary ways that we interact with, reason about and discuss the world, we may not have accidentally built something with properties quite different than might be guessed from the (otherwise excellent) description of how they work in TFA.
I agree with pretty much everything in TFA, so this is supplemental to the points made there, not contesting them or trying to replace them.
Doesn't it get boring?
I like using these models a lot more than I stand hearing people talk about them, pro or contra. Just slop about slop. And the discussions being artisanal slop really doesn't make them any better.
Every time I hear some variation of bullshitting or plagiarizing machines, my eyes roll over. Do these people think they're actually onto something? I've been seeing these talking points for literal years. For people who complain about no original thoughts, these sure are some tired ones.
When you see a pattern like this, you know that its not coming from any place of truth but rather ideology
I'd much rather read articles about what LLMs can/can't do, or stuff people have built with LLMs, than read how everything LLMs touch turns to shit.
It's true that people don't have a good intuitive sense of what the models are good or bad at (see: counting the Rs in "strawberry"), but this is more a human limitation than a fundamental problem with the technology.
I stress test commercially deployed LLMs like Gemini and Claude with trivial tasks: sports trivia, fixing recipes, explaining board game rules, etc. It works well like 95% of the time. That's fine for inconsequential things. But you'd have to be deeply irresponsible to accept that kind of error rate on things that actually matter.
The most intellectually honest way to evaluate these things is how they behave now on real tasks. Not with some unfalsifiable appeal to the future of "oh, they'll fix it."
This is a broad statement that assumes we agree on the purpose.
For my purpose, which is software development, the technology has reached a level that is entirely adequate.
Meanwhile, sports trivia represents a stress test of the model's memorized world knowledge. It could work really well if you give the model a tool to look up factual information in a structured database. But this is exactly what I meant above; using the technology in a suboptimal way is a human problem, not a model problem.
If the purpose is indeed software development with review, then there's nothing stopping multi-billion dollar companies from putting friction into these sytems to direct users towards where the system is at its strongest.
That exposes me to when the models are objectively wrong and helps keep me grounded with their utility in spaces I can check them less well. One of the most important things you can put in your prompt is a request for sources, followed by you actually checking them out.
And one of the things the coding agents teach me is that you need to keep the AIs on a tight leash. What is their equivalent in other domains of them "fixing" the test to pass instead of fixing the code to pass the test? In the programming space I can run "git diff *_test.go" to ensure they didn't hack the tests when I didn't expect it. It keeps me wondering what the equivalent of that is in my non-programming questions. I have unit testing suites to verify my LLM output against. What's the equivalent in other domains? Probably some other isolated domains here and there do have some equivalents. But in general there isn't one. Things like "completely forged graphs" are completely expected but it's hard to catch this when you lack the tools or the understanding to chase down "where did this graph actually come from?".
The success with programming can't be translated naively into domains that lack the tooling programmers built up over the years, and based on how many times the AIs bang into the guardrails the tools provide I would definitely suggest large amounts of skepticism in those domains that lack those guardrails.
95% is not my experience and frankly dishonest.
I have ChatGPT open right now, can you give me examples where it doesn't work but some other source may have got it correct?
I have tested it against a lot of examples - it barely gets anything wrong with a text prompt that fits a few pages.
> The most intellectually honest way to evaluate these things is how they behave now on real tasks
A falsifiable way is to see how it is used in real life. There are loads of serious enterprise projects that are mostly done by LLMs. Almost all companies use AI. Either they are irresponsible or you are exaggerating.
Lets be actually intellectually honest here.
Quite frankly, this is exactly like how two people can use the same compression program on two different files and get vastly different compression ratios (because one has a lot of redundancy and the other one has not).
Fake content and lies. To drive outrage. To influence elections. To distract from real crimes. To overload everyone so they're too tired to fight or to understand. To weaken the concept that anything's true so that you can say anything. Because who cares if the world dies as long as you made lots of money on the way.
Guiding principle of the AI industry
Another way of saying that is that capitalism is the real problem, but I was never anti-capitalist in principle, it's just gotten out of hand in the last 5-10 years. (Not that it hadn't been building to that.)
Capitalism is a tool and it's fine as a tool, to accomplish certain goals while subordinated to other things. Unfortunately it's turned into an ideology (to the point it's worshiped idolatrously by some), and that's where things went off the rails.
> One way to understand an LLM is as an improv machine. It takes a stream of tokens, like a conversation, and says “yes, and then…” This yes-and behavior is why some people call LLMs bullshit machines. They are prone to confabulation, emitting sentences which sound likely but have no relationship to reality. They treat sarcasm and fantasy credulously, misunderstand context clues, and tell people to put glue on pizza.
Yes, there have been improvements on them, but none of those improvements mitigate the core flaw of the technology. The author even acknowledges all of the improvements in the last few months.
[1]: https://link.springer.com/article/10.1007/s10676-024-09775-5
[2]: https://en.wikipedia.org/wiki/On_Bullshit
https://philosophersmag.com/large-language-models-and-the-co...
This is true, but I prefer to think of it as "It's delusional to pretend as if human beings are not bullshit machines too".
Lies are all we have. Our internal monologue is almost 100% fantasy. Even in serious pursuits, that's how it works. We make shit up and lie to ourselves, and then only later apply our hard-earned[1] skill prompts to figure out whether or not we're right about it.
How many times have the nerds here been thinking through a great new idea for a design and how clever it would be before stopping to realize "Oh wait, that won't work because of XXX, which I forgot". That's a hallucination right there!
[1] Decades of education!
Being wrong is not the same as a hallucination. It's a natural step on a journey to being more right. This feels a bit like Andreesen proudly stating he avoids reflection - you can act like that, but the human brain doesn't have to. LLMs have no choice in the matter.
Models have gotten ridiculously better, they really have, but the scale has increased too, and I don't think we're ready to deal with the onslaught.
Even before LLMs where in the public's discourse, I would have business ask about using AI instead of building some algorithm manually, and when I asked if they had considered the failure rate, they would return either blank stares or say that would count as a bug. To them, AI meant an algorithm just as good as one built to handle all edge cases in business logic, but easier and faster to implement.
We can generally recognize the AIs being off when they deal in our area of expertise, but there is some AI variant of Gell-Mann Amnesia at play that leads us to go back to trusting AI when it gives outputs in areas we are novices in.
If so, how do we distinguish between code that works and code that doesn't work? Why should we even care?
Hilariously, not by using our brains, that's for sure. You have to have an external machine. We all understand that "testing" and "code review" are different processes, and that's why.
I’m still not a big fan of comparing humans and LLMs because LLMs lack so much of what actually makes us human. We might bullshit or be wrong because of many reasons that just don’t apply to LLMs.
Your no-true-scotsman clause basically falsifies that statement for me. Fine, LLMs are, at worst I guess, "non-thoughtful humans". But obviously LLMs are right an awful lot (more so than a typical human, even), and even the thoughtful make mistakes.
So yeah, to my eyes "Humans are NOT different" fits your argument better than your hypothesis.
(Also, just to be clear: LLMs also say "I don't know", all the time. They're just prompted to phrase it as a criticism of the question instead.)
If I take the example of code, but that extends to many domains, it can sometimes produce near perfect architecture and implementation if I give it enough details about the technical details and fallpits. Turning a 8h coding job into a 1h review work.
On the other hand, it can be very wrong while acting certain it is right. Just yesterday Claude tried gaslighting me into accepting that the bug I was seeing was coming from a piece of code with already strong guardrails, and it was adamant that the part I was suspecting could in no way cause the issue. Turns out I was right, but I was starting to doubt myself