The wall confronting large language models (arxiv.org)
measurablefunc 12 hours ago [-]
There is a formal extensional equivalence between Markov chains & LLMs, but the only person who seems to be saying anything about this is Gary Marcus. He constantly makes the point that symbolic understanding can not be reduced to a probabilistic computation: regardless of how large the graph gets, it will still be missing basic stuff like backtracking (which is available in programming languages like Prolog). I think Gary is right on basically all counts. Probabilistic generative models are fun, but no amount of probabilistic sequence generation can be a substitute for logical reasoning.
Certhas 12 hours ago [-]
I don't understand what point you're hinting at.

Either way, I can get arbitrarily good approximations of arbitrary nonlinear differential/difference equations using only linear probabilistic evolution at the cost of a (much) larger state space. So if you can implement it in a brain or a computer, there is a sufficiently large probabilistic dynamic that can model it. More really is different.

So I view all deductive ab-initio arguments about what LLMs can/can't do due to their architecture as fairly baseless.

(Note that the "large" here is doing a lot of heavy lifting. You need _really_ large. See https://en.m.wikipedia.org/wiki/Transfer_operator)
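To make the trade-off concrete, here is a minimal sketch of Ulam's method (Python/numpy; the map, bin count and sample sizes are arbitrary choices for illustration): the nonlinear logistic map is replaced by a linear, row-stochastic transition matrix acting on densities over N bins, and the approximation improves as N grows.

  import numpy as np

  # Ulam's method: approximate the nonlinear logistic map x -> 4x(1-x) by a
  # linear Markov transition matrix acting on densities over N bins.
  N = 200
  f = lambda x: 4 * x * (1 - x)

  P = np.zeros((N, N))
  samples_per_bin = 1000
  for i in range(N):
      xs = np.random.uniform(i / N, (i + 1) / N, samples_per_bin)
      js = np.minimum((f(xs) * N).astype(int), N - 1)
      for j in js:
          P[i, j] += 1 / samples_per_bin      # row-stochastic by construction

  # Evolve a density linearly and compare against brute-force simulation of points.
  k = N // 3
  rho = np.zeros(N); rho[k] = 1.0             # density concentrated in one bin
  pts = np.random.uniform(k / N, (k + 1) / N, 100_000)
  for _ in range(10):
      rho = rho @ P                           # linear, probabilistic evolution
      pts = f(pts)                            # nonlinear evolution of actual points
  hist = np.histogram(pts, bins=N, range=(0, 1))[0] / len(pts)
  print(np.abs(rho - hist).sum())             # modest total-variation gap

All the nonlinearity has been traded for dimension; the evolution of the density itself is linear.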

measurablefunc 12 hours ago [-]
What part about backtracking is baseless? Typical Prolog interpreters can be implemented in a few MBs of binary code (the high level specification is even simpler & can be in a few hundred KB)¹ but none of the LLMs (open source or not) are capable of backtracking even though there is plenty of room for a basic Prolog interpreter. This seems like a very obvious shortcoming to me that no amount of smooth approximation can overcome.

If you think there is a threshold at which point some large enough feedforward network develops the capability to backtrack then I'd like to see your argument for it.

¹https://en.wikipedia.org/wiki/Warren_Abstract_Machine

Certhas 58 minutes ago [-]
I know that if you go large enough you can do any finite computation using only fixed transition probabilities. This is a trivial observation. To repeat what I posted elsewhere in this thread:

Take a finite-tape Turing machine with N symbols per cell and tape length T, giving N^T possible tape contents.

Now consider that you have a probability for each state instead of a definite state. The transitions of the Turing machine induce transitions of the probabilities. These transitions define a Markov chain on an N^T-dimensional probability space.

Is this useful? Absolutely not. It's just a trivial rewriting. But it shows that high dimensional spaces are extremely powerful. You can trade off sophisticated transition rules for high dimensionality.
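A toy version of this rewriting, just to show the bookkeeping (Python; the machine and its transition rule are arbitrary examples, not any particular computation):

  from itertools import product

  # A tiny 2-state machine on a 4-cell binary tape. Every full configuration
  # (machine state, head position, tape contents) becomes one Markov-chain
  # state; the machine's transition rule becomes a 0/1 transition matrix.
  STATES, TAPE_LEN, SYMBOLS = ("A", "B"), 4, (0, 1)
  # delta[(state, symbol)] = (write, move, next_state)
  delta = {("A", 0): (1, +1, "B"), ("A", 1): (0, +1, "A"),
           ("B", 0): (1, -1, "A"), ("B", 1): (1, +1, "B")}

  configs = [(q, h, tape) for q in STATES for h in range(TAPE_LEN)
             for tape in product(SYMBOLS, repeat=TAPE_LEN)]
  index = {c: i for i, c in enumerate(configs)}

  def step(c):
      q, h, tape = c
      write, move, q2 = delta[(q, tape[h])]
      tape2 = tape[:h] + (write,) + tape[h + 1:]
      return (q2, min(max(h + move, 0), TAPE_LEN - 1), tape2)  # clamp at the ends

  # The "transition matrix": row i has a single 1 in column index[step(configs[i])].
  P = {i: index[step(c)] for i, c in enumerate(configs)}
  print(len(configs), "configurations,", len(P), "deterministic transitions")

Even this toy machine needs 2 x 4 x 2^4 = 128 chain states; that exponential blow-up in the tape length is the price of making the dynamics linear.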

You _can_ continue this line of thought though in more productive directions. E.g. what if the input of your machine is genuinely uncertain? What if the transitions are not precise but slightly noisy? You'd expect that the fundamental capabilities of a noisy machine wouldn't be that much worse than those of a noiseless one (over finite time horizons). What if the machine was built to be noise resistant in some way?

All of this should regularize the Markov chain above. If it's more regular you can start thinking about approximating it using a lower rank transition matrix.

The point of this is not to say that this is really useful. It's to say that there is no reason in my mind to dismiss the purely mathematical rewriting as entirely meaningless in practice.

skissane 9 hours ago [-]
> but none of the LLMs (open source or not) are capable of backtracking even though there is plenty of room for a basic Prolog interpreter. This seems like a very obvious shortcoming to me that no amount of smooth approximation can overcome.

The fundamental autoregressive architecture is absolutely capable of backtracking… we generate next token probabilities, select a next token, then calculate probabilities for the token thereafter.

There is absolutely nothing stopping you from “rewinding” to an earlier token, making a different selection and replaying from that point. The basic architecture absolutely supports it.

Why then has nobody implemented it? Maybe this kind of backtracking isn't really that useful.
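For concreteness, the kind of rewind-and-resample loop I mean would look roughly like this (a sketch only; next_token_probs and looks_bad are hypothetical stand-ins for a real model's sampling interface and some dead-end detector):

  import random

  def sample_with_backtracking(next_token_probs, looks_bad, prompt, max_len):
      tokens = list(prompt)
      tried = {}                                  # position -> tokens already rejected there
      while len(tokens) < max_len:
          pos = len(tokens)
          probs = {t: p for t, p in next_token_probs(tokens).items()
                   if t not in tried.get(pos, set())}
          if not probs:                           # every option here failed: rewind further
              if pos == len(prompt):
                  break                           # nowhere left to rewind to
              tried.pop(pos, None)                # this prefix is about to change
              bad = tokens.pop()
              tried.setdefault(len(tokens), set()).add(bad)
              continue
          tok = random.choices(list(probs), weights=list(probs.values()))[0]
          tokens.append(tok)
          if looks_bad(tokens):                   # dead end: undo the choice, remember it
              tokens.pop()
              tried.setdefault(pos, set()).add(tok)
      return tokens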

versteegen 1 hours ago [-]
Yes, but anyway, LLMs themselves are perfectly capable of backtracking in their reasoning even though sampling only runs forwards, in the same way humans do: by deciding something doesn't work and trying something else. Humans DON'T travel backwards in time so as to never have had the erroneous thought in the first place.
measurablefunc 9 hours ago [-]
Where is this spelled out formally and proven logically?
skissane 9 hours ago [-]
LLM backtracking is an active area of research, see e.g.

https://arxiv.org/html/2502.04404v1

https://arxiv.org/abs/2306.05426

And I was wrong that nobody has implemented it, as these papers prove people have… it is just that the results haven't been sufficiently impressive to support the transition from the research lab to industrial use - or at least, not yet

measurablefunc 8 hours ago [-]
> Empirical evaluations demonstrate that our proposal significantly enhances the reasoning capabilities of LLMs, achieving a performance gain of over 40% compared to the optimal-path supervised fine-tuning method.
afiori 6 hours ago [-]
I would expect to see something like this soonish, as we are now seeing the end of training-time scaling and the beginning of inference-time scaling
foota 5 hours ago [-]
This is a neat observation, training has been optimized to hell and inference is just beginning.
bondarchuk 12 hours ago [-]
Backtracking makes sense in a search context which is basically what prolog is. Why would you expect a next-token-predictor to do backtracking and what should that even look like?
PaulHoule 11 hours ago [-]
If you want general-purpose generation then it has to be able to respect constraints (e.g. the unspoken rule that figure art of a person has 0..1 belly buttons and 0..2 legs). As it is, generative models usually get those things right but not always: they can stick together the tiles they use internally in some combination that makes sense locally but not globally.

General intelligence may not be SAT/SMT solving but it has to be able to do it, hence, backtracking.

Today I had another of those experiences of the weaknesses of LLM reasoning, one that happens a lot when doing LLM-assisted coding. I was trying to figure out how to rebuild some CSS after the HTML changed for accessibility purposes, and I got a good idea for how to do it from talking to the LLM. But at that point the context was poisoned, probably because there was a lot of content in the context describing what we were thinking at different stages of a conversation that had evolved considerably. It lost its ability to follow instructions: I'd tell it specifically to do this or do that and it just wouldn't do it properly. This happens a lot if a session goes on too long.

My guess is that the attention mechanism is locking on to parts of the conversation which are no longer relevant to where I think we're at. In general, reasoning about how either a practice (instances) or a theory varies over time is a very tricky problem, and 'backtracking' is one specific answer to maintaining your knowledge base across a search process.

photonthug 3 hours ago [-]
> General intelligence may not be SAT/SMT solving but it has to be able to do it, hence, backtracking.

Just to add some more color to this. For problems that completely reduce to formal methods or have significant subcomponents that involve it, combinatorial explosion in state-space is a notorious problem and N variables is going to stick you with 2^N at least. It really doesn't matter whether you think you're directly looking at solving SAT/search, because it's too basic to really be avoided in general.

When people talk optimistically about hallucinations not being a problem, they generally mean something like "not a problem in the final step" because they hope they can evaluate/validate something there, but what about errors somewhere in the large middle? So even with a very tiny chance of hallucinations in general, we're talking about an exponential number of opportunities in implicit state-transitions to trigger those low-probability errors.
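Back-of-envelope, with purely illustrative numbers: if each implicit step independently carries a small error probability p, the chance an N-step chain stays error-free is (1-p)^N, which collapses quickly:

  for p in (0.001, 0.01):
      for n in (10, 100, 1000):
          print(f"p={p}, n={n}: P(no error) ~ {(1 - p) ** n:.3g}")
  # e.g. p=0.01, n=1000 gives roughly 4e-5: a "tiny" per-step error rate still
  # dooms long implicit chains of state transitions.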

The answer to stuff like this is supposed to be "get LLMs to call out to SAT solvers". Fine, definitely moving from state-space to program-space is helpful, but it also kinda just pushes the problem around as long as the unconstrained code generation is still prone to hallucination.. what happens when it validates, runs, and answers.. but the spec was wrong?

Personally I'm most excited about projects like AlphaEvolve that seem fearless about hybrid symbolics / LLMs and embracing the good parts of GOFAI that LLMs can make tractable for the first time. Instead of the "reasoning is dead, long live messy incomprehensible vibes", those guys are talking about how to leverage earlier work, including things like genetic algorithms and things like knowledge-bases.[0] Especially with genuinely new knowledge-discovery from systems like this, I really don't get all the people who are still staunchly in either an old-school / new-school camp on this kind of thing.

[0]: MLST on the subject: https://www.youtube.com/watch?v=vC9nAosXrJw

XenophileJKO 10 hours ago [-]
What if you gave the model a tool to "willfully forget" a section of context? That would be easy to make. Hmm, I might be onto something.
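Something like this, maybe (a hypothetical tool schema in the usual JSON-schema style, not any vendor's actual API); the harness would then drop or attention-mask the named span before the next forward pass:

  forget_tool = {
      "name": "forget_span",
      "description": "Mask a span of earlier context so it no longer influences attention.",
      "parameters": {
          "type": "object",
          "properties": {
              "start_message": {"type": "integer", "description": "index of first message to mask"},
              "end_message": {"type": "integer", "description": "index of last message to mask"},
              "reason": {"type": "string", "description": "why this span is no longer relevant"},
          },
          "required": ["start_message", "end_message"],
      },
  }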
PaulHoule 10 hours ago [-]
I guess you could have some kind of mask that would let you suppress some of the context from matching, but my guess is that kind of thing might cause problems as often as it solves them.

Back when I was thinking about commonsense reasoning with logic, it was obviously a much more difficult problem to add things like "P was true before time t", "there will be some time t in the future such that P is true", "John believes Mary believes that P is true", "It is possible that P is true", "there is some person q who believes that P is true", particularly when you combine these qualifiers. For one thing you don't even have a sound and complete strategy for reasoning over first-order logic + arithmetic, but you also have a combinatorial explosion over the qualifiers.

Back in the day I thought it was important to have sound reasoning procedures, but one of the reasons none of my foundation models ever became ChatGPT was that I cared about that, when I really needed to ask "does change C cause an unsound procedure to get the right answer more often?" and not care whether the reasoning procedure was sound or not.

measurablefunc 12 hours ago [-]
I don't expect a Markov chain to be capable of backtracking. That's the point I am making. Logical reasoning as it is implemented in Prolog interpreters is not something that can be done w/ LLMs regardless of the size of their weights, biases, & activation functions between the nodes in the graph.
Certhas 1 hours ago [-]
Take a finite-tape Turing machine with N symbols per cell and tape length T, giving N^T possible tape contents.

Now consider that you have a probability for each state instead of a definite state. The transitions of the Turing machine induce transitions of the probabilities. These transitions define a Markov chain on an N^T-dimensional probability space.

Is this useful? Absolutely not. It's just a trivial rewriting. But it shows that high dimensional spaces are extremely powerful. You can trade off sophisticated transition rules for high dimensionality.

bondarchuk 11 hours ago [-]
Imagine the context window contains A-B-C, C turns out a dead end and we want to backtrack to B and try another branch. Then the LLM could produce outputs such that the context window would become A-B-C-[backtrack-back-to-B-and-don't-do-C] which after some more tokens could become A-B-C-[backtrack-back-to-B-and-don't-do-C]-D. This would essentially be backtracking and I don't see why it would be inherently impossible for LLMs as long as the different branches fit in context.
measurablefunc 11 hours ago [-]
If you think it is possible then I'd like to see an implementation of a sudoku puzzle solver as Markov chain. This is a simple enough problem that can be implemented in a few dozen lines of Prolog but I've never seen a solver implemented as a Markov chain.
Ukv 11 hours ago [-]
> If you think it is possible then I'd like to see an implementation of a sudoku puzzle solver as Markov chain

Have each of the Markov chain's states be one of 10^81 possible sudoku grids (a 9x9 grid of digits 1-9 and blank), then calculate the 10^81-by-10^81 transition matrix that takes each incomplete grid to the valid complete grid containing the same numbers. If you want you could even have it fill one square at a time rather than jump right to the solution, though there's no need to.

Up to you what you do for ambiguous inputs (select one solution at random to give 1.0 probability in the transition matrix? equally weight valid solutions? have the states be sets of boards and map to set of all valid solutions?) and impossible inputs (map to itself? have the states be sets of boards and map to empty set?).

Could say that's "cheating" by pre-computing the answers and hard-coding them in a massive input-output lookup table, but to my understanding that's also the only sense in which there's equivalence between Markov chains and LLMs.

measurablefunc 11 hours ago [-]
There are multiple solutions for each incomplete grid so how are you calculating the transitions for a grid w/ a non-unique solution?

Edit: I see you added questions for the ambiguities. Modulo those choices your solution will almost work, but it is not entirely extensionally equivalent. The transition graph & the solver are almost extensionally equivalent, but whereas the Prolog solver will backtrack, there is no backtracking in the Markov chain & you have to re-run the chain multiple times to find all the solutions.

Ukv 10 hours ago [-]
> but whereas the Prolog solver will backtrack there is no backtracking in the Markov chain and you have to re-run the chain multiple times to find all the solutions

If you want it to give all possible solutions at once, you can just expand the state space to the power-set of sudoku boards, such that the input board transitions to the state representing the set of valid solved boards.

measurablefunc 10 hours ago [-]
That still won't work b/c there is no backtracking. The point is that there is no way to encode backtracking/choice points like in Prolog w/ a Markov chain. The argument you have presented is not extensionally equivalent to the Prolog solver. It is almost equivalent but it's missing choice points for starting at a valid solution & backtracking to an incomplete board to generate a new one. The typical argument for absorbing states doesn't work b/c sudoku is not a typical deterministic puzzle.
Ukv 10 hours ago [-]
> That still won't work b/c there is no backtracking.

It's essentially just a lookup table mapping from input board to the set of valid output boards - there's no real way for it not to work (obviously not practical though). If board A has valid solutions B, C, D, then the transition matrix cell mapping {A} to {B, C, D} is 1.0, and all other entries in that row are 0.0.

> The point is that there is no way to encode backtracking/choice points

You can if you want, keeping the same variables as a regular sudoku solver as part of the Markov chain's state and transitioning instruction-by-instruction, rather than mapping directly to the solution - just that there's no particular need to when you've precomputed the solution.

measurablefunc 10 hours ago [-]
My point is that your initial argument was missing several key pieces & if you specify the entire state space you will see that it's not as simple as you thought initially. I'm not saying it can't be done but that it's actually much more complicated than simply saying just take an incomplete board state s & uniform transitions between s, s' for valid solutions s' that are compatible with s. In fact, now that I spelled out the issues I still don't think this is a formal extensional equivalence. Prolog has interactive transitions between the states & it tracks choice points so compiling a sudoku solver to a Markov chain requires more than just tracking the board state in the context.
Ukv 9 hours ago [-]
> My point is that your initial argument was missing several key pieces

My initial example was a response to "If you think it is possible then I'd like to see an implementation of a sudoku puzzle solver as Markov chain", describing how a Sudoku solver could be implemented as a Markov chain. I don't think there's anything missing from it - it solves all proper Sudokus, and I only left open the choice of how to handle improper Sudokus because that was unspecified (but trivial regardless of what's wanted).

> I'm not saying it can't be done but that it's actually much more complicated

If that's the case, then I did misinterpret your comments as saying it can't be done. But I don't think it's really complicated regardless of whatever "ok but now it must encode choice points in its state" requirements are thrown at it - it's just a state-to-state transition look-up table.

> so compiling a sudoku solver to a Markov chain requires more than just tracking the board state in the context.

As noted, you can keep all the same variables as a regular Sudoku solver as part of the Markov chain's state and transition instruction-by-instruction, if that's what you want.

If you mean inputs from a user, the same is true of LLMs which are typically ran interactively. Either model the whole universe including the user as part of state transition table (maybe impossible, depending on your beliefs about the universe), or have user interaction take the current state, modify it, and use it as initial state for a new run of the Markov chain.

measurablefunc 9 hours ago [-]
> As noted, you can keep all the same variables as a regular Sudoku solver

What are those variables exactly?

Ukv 8 hours ago [-]
For a depth-first solution (backtracking), I'd assume mostly just the partial solutions and a few small counters/indices/masks - like for tracking the cell we're up to and which cells were prefilled. Specifics will depend on the solver, but can be made part of Markov chain's state regardless.
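A sketch of what I mean (Python; the whole solver state is one tuple, each call to step is one deterministic transition, i.e. one row of an enormous 0/1 transition matrix, and the backtracking lives entirely inside that state):

  def ok(board, i, d):                    # board: tuple of 81 digits, 0 = blank
      r, c = divmod(i, 9)
      if d in board[r * 9:(r + 1) * 9] or d in board[c::9]:
          return False
      br, bc = 3 * (r // 3), 3 * (c // 3)
      return all(board[rr * 9 + cc] != d
                 for rr in range(br, br + 3) for cc in range(bc, bc + 3))

  def step(state):
      """One transition; state = (board, empty cells, depth, next digit to try)."""
      board, empties, depth, d = state
      if depth == len(empties) or depth < 0:      # solved / unsatisfiable: absorbing
          return state
      i = empties[depth]
      if d > 9:                                   # candidates exhausted: backtrack
          board = board[:i] + (0,) + board[i + 1:]
          if depth == 0:
              return (board, empties, -1, 0)
          j = empties[depth - 1]
          return (board, empties, depth - 1, board[j] + 1)
      if ok(board, i, d):                         # place d and move one cell deeper
          return (board[:i] + (d,) + board[i + 1:], empties, depth + 1, 1)
      return (board, empties, depth, d + 1)       # try the next digit at this cell

  def solve(puzzle):                              # run the chain to its fixed point
      board = tuple(puzzle)
      state = (board, tuple(i for i, v in enumerate(board) if v == 0), 0, 1)
      while (nxt := step(state)) != state:
          state = nxt
      return state[0] if state[2] >= 0 else None

Utterly impractical as an explicit matrix, of course, but the choice points are just more state, not something the formalism lacks.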
bboygravity 11 hours ago [-]
The LLM can just write the Prolog and solve the sudoku that way. I don't get your point. LLMs like Grok 4 can probably one-shot this today with the current state of the art. You can likely just ask it to solve any sudoku and it will do it (by writing code in the background, running it, and returning the result). And this is still very early stage compared to what will be out a year from now.

Why does it matter how it does it or whether this is strictly LLM or LLM with tools for any practical purpose?

PhunkyPhil 6 hours ago [-]
The point isn't whether the output is correct or not, it's whether the actual net is doing "logical computation" à la Prolog.

What you're suggesting is akin to me saying you can't build a house, then you go and hire someone to build a house. _You_ didn't build the house.

kaibee 1 hours ago [-]
I feel like you're kinda proving too much. By the same reasoning, humans/programmers aren't generally intelligent either, because we can only mentally simulate relatively small state spaces of programs, and when my boss tells me to go build a tool, I'm not exactly writing raw x86 assembly. I didn't _build_ the tool, I just wrote text that instructed a compiler how to build the tool. Like, the whole reason we invented SAT solvers is that we're not smart in that way. But I feel like you're trying to argue that LLMs at any scale are going to be less capable than an average person?
lelanthran 10 hours ago [-]
> If you think it is possible then I'd like to see an implementation of a sudoku puzzle solver as Markov chain. This is a simple enough problem that can be implemented in a few dozen lines of Prolog but I've never seen a solver implemented as a Markov chain.

I think it can be done. I started a chatbot that works like this some time back (2024) but paused work on it since January.

In brief, you shorten the context by discarding the context that didn't work out.

sudosysgen 11 hours ago [-]
You can do that pretty trivially for any fixed size problem (as in solvable with a fixed-sized tape Turing machine), you'll just have a titanically huge state space. The claim of the LLM folks is that the models have a huge state space (they do have a titanically huge state space) and can navigate it efficiently.

Simply have a deterministic Markov chain where each state is a possible value of the tape+state of the TM and which transitions accordingly.

measurablefunc 11 hours ago [-]
How are you encoding the state spaces for the sudoku solver specifically?
vidarh 10 hours ago [-]
A (2,3) Turing machine can be trivially implemented with a loop around an LLM that treats the context as an IO channel, and a Prolog interpreter runs on a Turing complete computer, so per Turing equivalence you can run a Prolog interpreter on an LLM.

Of course this would be pointless, but it demonstrates that a system where an LLM provides the logic can backtrack, as there's nothing computationally special about backtracking.

That current UIs to LLMs are set up for conversation-style use that makes this harder isn't an inherent limitation of what we can do with LLMs.

measurablefunc 10 hours ago [-]
Loop around an LLM is not an LLM.
vidarh 10 hours ago [-]
Then no current systems you are using are LLMs
measurablefunc 10 hours ago [-]
Choice-free feedforward graphs are LLMs. The inputs/outputs are extensionally equivalent to context and transition probabilities of a Markov chain. What exactly is your argument b/c what it looks like to me is you're simply making a Turing tarpit argument which does not address any of my points.
vidarh 9 hours ago [-]
My argument is that artificially limiting your argument to a subset of the systems people are actually using, and then arguing about the limitations of that subset, makes your argument irrelevant to what people are actually using.
baselessness 2 hours ago [-]
That's what this debate has been reduced to. People point out the logical and empirical, by now very obvious limitations of LLMs. And boosters are the equivalent of Chopra's "quantum physics means anything is possible", saying "if you add enough information to a system anything is possible".
arduanika 12 hours ago [-]
What hinting? The comment was very clear. Arbitrarily good approximation is different from symbolic understanding.

"if you can implement it in a brain"

But we didn't. You have no idea how a brain works. Neither does anyone.

mallowdram 12 hours ago [-]
We know the healthy brain is unpredictable. We suspect error minimization and prediction are not central tenets. We know the brain creates memory via differences in sharp wave ripples. That it's oscillatory. That it neither uses symbols nor represents. That words are wholly external to what we call thought. The authors deal with molecules which are neither arbitrary nor specific. Yet tumors ARE specific, while words are wholly arbitrary. Knowing these things should offer a deep suspicion of ML/LLMs. They have so little to do with how brains work and the units brains actually use (all oscillation is specific, all stats emerge from arbitrary symbols and worse: metaphors) that mistaking LLMs for reasoning/inference is less lexemic hallucination and more eugenic.
Zigurd 11 hours ago [-]
"That words are wholly external to what we call thought." may be what we should learn, or at least hypothesize, based on what we see LLMs doing. I'm disappointed that AI isn't more of a laboratory for understanding brain architecture, and precisely what is this thing called thought.
mallowdram 10 hours ago [-]
The question is how to model the irreducible. And then to concatenate between spatiotemporal neuroscience (the oscillators) and neural syntax (what's oscillating) and add or subtract what the fields are doing to bind that to the surroundings.
quantummagic 11 hours ago [-]
What do you think about the idea that LLMs are not reasoning/inferring, but are rather an approximation of the result? Just like you yourself might have to spend some effort reasoning, on how a plant grows, in order to answer questions about that subject. When asked, you wouldn't replicate that reasoning, instead you would recall the crystallized representation of the knowledge you accumulated while previously reasoning/learning. The "thinking" in the process isn't modelled by the LLM data, but rather by the code/strategies used to iterate over this crystallized knowledge, and present it to the user.
mallowdram 10 hours ago [-]
This is toughest part. We need some kind of analog external that concatenates. It's software, but not necessarily binary, it uses topology to express that analog. It somehow is visual, ie you can see it, but at the same time, it can be expanded specifically into syntax, which the details of are invisible. Scale invariance is probably key.
suddenlybananas 1 hours ago [-]
We don't know those things about the brain. I don't know why you keep going around HN making wildly false claims about the state of contemporary neuroscience. We know very very little about how higher order cognition works in the brain.
Certhas 12 hours ago [-]
We didn't, but something did, so it's possible; so probabilistic dynamics in high enough dimensions can do it.

We don't understand what LLMs are doing. You can't go from understanding what a transformer is to understanding what an LLM does any more than you can go from understanding what a Neuron is to what a brain does.

jjgreen 11 hours ago [-]
You can look at it, from the inside.
patrick451 4 hours ago [-]
> Either way, I can get arbitrarily good approximations of arbitrary nonlinear differential/difference equations using only linear probabilistic evolution at the cost of a (much) larger state space.

This is impossible. When driven by a sinusoid, a linear system will only ever output a sinusoid with exactly the same frequency but a different amplitude and phase regardless of how many states you give it. A non-linear system can change the frequency or output multiple frequencies.

diffeomorphism 3 hours ago [-]
As far as I understand, the terminology says "linear" but means compositions of affine maps (with cutoffs etc). That gives you arbitrary polynomials and piecewise affine functions, which are dense in most classes of interest.

Of course, in practice you don't actually get arbitrary degree polynomials but some finite degree, so the approximation might still be quite bad or inefficient.

awesome_dude 12 hours ago [-]
I think that the difference can be best explained thus:

I guess that you are most likely going to have cereal for breakfast tomorrow, I also guess that it's because it's your favourite.

vs

I understand that you don't like cereal for breakfast, and I understand that you only have it every day because a Dr told you that it was the only way for you to start the day in a way that aligns with your health and dietary needs.

Meaning, I can guess based on past behaviour and be right, but understanding the reasoning for those choices, that's a whole other ballgame. Further, if we do end up with an AI that actually understands, well, that would really open up creativity, and problem solving.

quantummagic 11 hours ago [-]
How are the two cases you present fundamentally different? Aren't they both the same _type_ of knowledge? Why do you attribute "true understanding" to the case of knowing what the Dr said? Why stop there? Isn't true understanding knowing why we trust what the doctor said (all those years of schooling, and a presumption of competence, etc)? And why stop there? Why do we value years of schooling? Understanding, can always be taken to a deeper level, but does that mean we didn't "truly" understand earlier? And aren't the data structures needed to encode the knowledge, exactly the same for both cases you presented?
awesome_dude 10 hours ago [-]
When you ask that question, why don't you just use a corpus of the previous answers to get some result?

Why do you need to ask me, isn't a guess based on past answers good enough?

Or, do you understand that you need to know more, you need to understand the reasoning based on what's missing from that post?

quantummagic 9 hours ago [-]
I asked that question in an attempt to not sound too argumentative. It was rhetorical. I'm asking you to consider the fact that there isn't actually any difference between the two examples you provided. They're fundamentally the same type of knowledge. They can be represented by the same data structures.

There's _always_ something missing, left unsaid in every example, it's the nature of language.

As for your example, the LLM can be trained to know the underlying reasons (doctor's recommendation, etc.). That knowledge is not fundamentally different from the knowledge that someone tends to eat cereal for breakfast. My question to you, was an attempt to highlight that the dichotomy you were drawing, in your example, doesn't actually exist.

awesome_dude 8 hours ago [-]
> They're fundamentally the same type of knowledge. They can be represented by the same data structures.

Maybe, maybe one is based on correlation, the other causation.

quantummagic 7 hours ago [-]
What if the causation had simply been that he enjoyed cereal for breakfast?

In either case, the results are the same, he's eating cereal for breakfast. We can know this fact without knowing the underlying cause. Many times, we don't even know the cause of things we choose to do for ourselves, let alone what others do.

On top of which, even if you think the "cause" is that the doctor told him to eat a healthy diet, do you really know the actual cause? Maybe the real cause, is that the girl he fancies, told him he's not in good enough shape. The doctor telling him how to get in shape is only a correlation, the real cause is his desire to win the girl.

These connections are vast and deep, but they're all essentially the same type of knowledge, representable by the same data structures.

awesome_dude 5 hours ago [-]
> In either case, the results are the same, he's eating cereal for breakfast. We can know this fact without knowing the underlying cause. Many times, we don't even know the cause of things we choose to do for ourselves, let alone what others do.

Yeah, no.

Understanding the causation allows the system to provide a better answer.

If they "enjoy" cereal, what about it do they enjoy, and what other possible things can be had for breakfast that also satisfy that enjoyment.

You'll never find that by looking only at the fact that they have eaten cereal for breakfast.

And the fact that that's not obvious to you is why I cannot be bothered going into any more depth on the topic any more. It's clear that you don't have any understanding on the topic beyond a superficial glance.

Bye :)

Anon84 11 hours ago [-]
There definitely is, but Marcus is not the only one talking about it. For example, we covered this paper in one of our internal journal clubs a few weeks ago: https://arxiv.org/abs/2410.02724
godelski 6 hours ago [-]
I just want to highlight this comment and stress how big of a field ML actually is. I think even much bigger than most people in ML research even know. It's really unfortunate that the hype has grown so much that even in the research community these areas are being overshadowed and even dismissed[0]. It's been interesting watching this evolution and how we're reapproaching symbolic reasoning while avoiding that phrase.

There's lots of people doing theory in ML and a lot of these people are making strides which others stand on (ViT and DDPM are great examples of this). But I never expect these works to get into the public eye as the barrier to entry tends to be much higher[1]. But they certainly should be something more ML researchers are looking at.

That is to say: Marcus is far from alone. He's just loud

[0] I'll never let go of how Yi Tay said "fuck theorists" and just spent his time on Twitter calling the KAN paper garbage instead of making any actual critique. There seem to be too many who are happy to let the black box remain a black box because low level research has yet to accumulate to the point where it can fully explain an LLM.

[1] You get tons of comments like this (the math being referenced is pretty basic, comparatively. Even if more advanced than what most people are familiar with) https://news.ycombinator.com/item?id=45052227

calf 3 hours ago [-]
I hazard to imagine that LLMs are a special subset of Markov chains, and this subset has interesting properties; it seems a bit reductive to dismiss LLMs as "merely" Markov chains. It's what we can do with this unusual subset (e.g. maybe incorporate it in a larger AI system) that is the interesting question.
measurablefunc 2 hours ago [-]
You don't have to imagine, there is a logically rigorous argument¹ that establishes the equivalence. There is also nothing unusual about neural networks or Markov chains. You've just been mystified by the marketing around them so you think there is something special about them when they're just another algorithm for approximating different kinds of compressible signals & observations about the real world.

¹https://markov.dk.workers.dev/

bubblyworld 3 hours ago [-]
If you want to understand SOTA systems then I don't think you should study their formal properties in isolation, i.e. it's not useful to separate them from their environment. Every LLM-based tool has access to code interpreters these days which makes this kind of a moot point.
measurablefunc 3 hours ago [-]
I prefer logic to hype. If you have a reason to think the hype nullifies basic logical analysis then you're welcome to your opinion but I'm going to stick w/ logic b/c so far no one has presented an actual counter-argument w/ enough rigor to justify their stance.
bubblyworld 3 hours ago [-]
I think you are applying logic and demand for rigour selectively, to be honest. Not all arguments require formalisation. I have presented mine - your linked logical analyses just aren't relevant to modern systems. I said nothing about the logical steps being wrong, necessarily.
wolvesechoes 1 hours ago [-]
> I have presented mine - your linked logical analyses just aren't relevant to modern systems

Assertion is not an argument

bubblyworld 34 minutes ago [-]
That assertion is not what I was referring to. Anyway, I'm not really interested in nitpicking this stuff. Engage with my initial comment if you actually care to discuss it.
measurablefunc 2 hours ago [-]
If there are no logical errors then you're just waving your hands which, again, you're welcome to do but it doesn't address any of the points I've made in this thread.
bubblyworld 2 hours ago [-]
Lol, okay. Serves me right for feeding the trolls.
vidarh 10 hours ago [-]
> Probabilistic generative models are fun but no amount of probabilistic sequence generation can be a substitute for logical reasoning.

Unless you either claim that humans can't do logical reasoning, or claim that humans exceed the Turing computable, then, given that you can trivially wire an LLM into a Turing complete system, this reasoning is illogical due to Turing equivalence.

And either of those two claims lack evidence.

11101010001100 8 hours ago [-]
So we just need a lot of monkeys at computers?
godelski 6 hours ago [-]

  > you can trivially wire an LLM into a Turing complete system
Please don't do the "the proof is trivial and left to the reader"[0].

If it is so trivial, show it. Don't hand wave, "put up or shut up". I think if you work this out you'll find it isn't so trivial...

I'm aware of some works but at least every one I know of has limitations that would not apply to LLMs. Plus, none of those are so trivial...

[0] https://en.wikipedia.org/wiki/Proof_by_intimidation

jules 11 hours ago [-]
What does this predict about LLMs ability to win gold at the International Mathematical Olympiad?
measurablefunc 11 hours ago [-]
Same thing it does about their ability to drive cars.
jules 5 hours ago [-]
So, nothing.
measurablefunc 5 hours ago [-]
It's definitely something but it might not be apparent to those who do not understand the distinctions between intensionality & extensionality.
godelski 6 hours ago [-]
Depends which question you're asking.

Ability to win a gold medal as if they were scored similarly to how humans are scored?

or

Ability to win a gold medal as determined by getting the "correct answer" to all the questions?

These are two subtly different questions. In these kinds of math exams how you get to the answer matters more than the answer itself, i.e. you could not get high marks through divination. To add some clarity, the latter would be like testing someone's ability to code by only looking at their results on some test functions (oh wait... that's how we evaluate LLMs...). It's a good signal but it is far from a complete answer. It very much matters how the code generates the answer. Certainly you wouldn't accept code if it does a bunch of random computations before divining an answer.

The paper's answer to your question (assuming scored similarly to humans) is "Don’t count on it". Not a definitive "no" but they strongly suspect not.

jules 6 hours ago [-]
The type of reasoning by the OP and the linked paper obviously does not work. The observable reality is that LLMs can do mathematical reasoning. A cursory interaction with state of the art LLMs makes this evident, as does their IMO gold medal, scored the way humans are. You cannot counter observable reality with generic theoretical considerations about Markov chains or pretraining scaling laws or floating point precision. The irony is that LLMs can explain why that type of reasoning is faulty:

> Any discrete-time computation (including backtracking search) becomes Markov if you define the state as the full machine configuration. Thus “Markov ⇒ no reasoning/backtracking” is a non sequitur. Moreover, LLMs can simulate backtracking in their reasoning chains. -- GPT-5

tim333 10 hours ago [-]
Humans can do symbolic understanding that seems to rest on a rather flakey probabilistic neural network in our brains, or at least mine does. I can do maths and the like but there's quite a lot of trial and error and double checking things involved.

GPT5 said it thinks it's fixable when I asked it:

>Marcus is right that LLMs alone are not the full story of reasoning. But the evidence so far suggests the gap can be bridged—either by scaling, better architectures, or hybrid neuro-symbolic approaches.

afiori 6 hours ago [-]
I sorta agree with you, but replying to "LLM can't reason" with "an LLM says they do" is wild
JohnKemeny 2 hours ago [-]
I asked ChatGPT and it agrees with the statement that it is indeed wild
wolvesechoes 1 hours ago [-]
And I thought that the gap is bridged by giving more billions to Sam Altman
boznz 12 hours ago [-]
Logical reasoning is also based on probability weights; most of the time that probability is so close to 100% that it can be assumed to be true without consequence.
AaronAPU 11 hours ago [-]
Stunningly, though I have been saying this for 20 years, I've never come across someone else mentioning it until now.
logicchains 12 hours ago [-]
LLMs are not formally equivalent to Markov chains, they're more powerful; transformers with sufficient chain of thought can solve any problem in P: https://arxiv.org/abs/2310.07923.
measurablefunc 12 hours ago [-]
If you think there is a mistake in this argument then I'd like to know where it is: https://markov.dk.workers.dev/.
logicchains 2 hours ago [-]
It assumes the LLM only runs once, i.e. it doesn't account for chain of thought, which makes the program not memoryless.
measurablefunc 2 hours ago [-]
There is no such assumption. You can run/sample the LLM & the equivalent Markov chain as many times as you want & the logical analysis remains the same b/c the extensional equivalence between the LLM & Markov chain has nothing to do w/ how many times the trajectories are sampled from each one.
CamperBob2 6 hours ago [-]
A Markov chain is memoryless by definition. A language model has a context, not to mention state in the form of the transformer's KV store.

The whole analogy is just pointless. You might as well call an elephant an Escalade because they weigh the same.

measurablefunc 5 hours ago [-]
Where is the logical mistake in the linked argument? If there is a mistake then I'd like to know what it is & the counter-example that invalidates the logical argument.
versteegen 58 minutes ago [-]
A Transformer with a length n context window implements an order 2n-1 Markov chain¹. That is correct. That is also irrelevant in the real world, because LLMs aren't run for that many tokens (as results are bad). Before it hits that limit, there is nothing requiring it to have any of the properties of a Markov chain. In fact, because the state space is k^n (alphabet size k), you might not revisit a state until generating k^n tokens.

¹ Depending on context window implementation details, but that is the maximum, because the states n tokens back were computed from the n tokens before that. The minimum of course is an order n-1 Markov chain.

lowbloodsugar 4 hours ago [-]
1. You are a neural net and you can backtrack. But unlike an algorithmic search over a state space, you'll go "Hmm. That doesn't look right. Let me try it another way."

2. Agentic AI already does this in the way that you do it.

Straw 7 hours ago [-]
This is utter nonsense.

There's a formal equivalence between Markov chains and literally any system. The entire world can be viewed as a Markov chain. This doesn't tell you anything of interest, just that if you expand state without bound you eventually get the Markov property.

Why can't an LLM do backtracking? Not only within its multiple layers but across tokens, as reasoning models already do.

You are a probabilistic generative model (If you object, all of quantum mechanics is). I guess that means you can't do any reasoning!

Animats 10 hours ago [-]
That article is weird. They seem obsessed with nuclear reactors. Also, they misunderstand how floating point works.

> As one learns at high school, the continuous derivative is the limit of the discrete version as the displacement h is sent to zero. If our computers could afford infinite precision, this statement would be equally good in practice as it is in continuum mathematics. But no computer can afford infinite precision, in fact, the standard double-precision IEEE representation of floating numbers offers an accuracy around the 16th digit, meaning that numbers below 10^-16 are basically treated as pure noise. This means that upon sending the displacement h below machine precision, the discrete derivatives start to diverge from the continuum value as roundoff errors then dominate the discretization errors.

Yes, differentiating data has a noise problem. This is where gradient followers sometimes get stuck. A low pass filter can help by smoothing the data so the derivatives are less noisy. But is that relevant to LLMs? A big insight in machine learning optimization was that, in a high dimensional space, there's usually some dimension with a significant signal, which gets you out of local minima. Most machine learning is in high dimensional spaces but with low resolution data points.
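The effect in the quoted passage is easy to reproduce with the standard forward-difference example: truncation error shrinks like h while roundoff error grows like eps/h, so the sweet spot sits near sqrt(eps) ~ 1e-8 and everything below that gets worse, not better.

  import numpy as np

  x = 1.0
  for h in (1e-2, 1e-4, 1e-6, 1e-8, 1e-10, 1e-12, 1e-14, 1e-16):
      approx = (np.sin(x + h) - np.sin(x)) / h       # forward difference
      print(f"h={h:.0e}  error={abs(approx - np.cos(x)):.2e}")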

JohnKemeny 2 hours ago [-]
They are physicists and will therefore talk about things that make sense to them. What they aren't: computational linguists and deep learning experts.
godelski 7 hours ago [-]

  > A big insight in machine learning optimization was that
I think the big insight was how useful this low order method still is. I think many people don't appreciate how new the study of high dimensional mathematics (let alone high dimensional statistics) actually is. I mean metric theory didn't really start till around the early 1900's. The big reason these systems are still mostly black boxes is because we still have a long way to go when it comes to understanding these spaces.

But I think it is worth mentioning that low order approximations can still lock you out of different optima. While I agree the (Latent) Manifold Hypothesis pretty likely applies to many problems, this doesn't change the fact that even relatively low dimensional spaces (like 10D) are quite complex and have lots of properties that are unintuitive. With topics like language and images, I think it is safe to say that these still require operating in high dimensions. You're still going to have to contend with the complexities of the concentration of measure (an idea from the 70's).

Still, I don't think anyone expected things to have worked out as well as they have. If anything I think it is more surprising we haven't run into issues earlier! I think there are still some pretty grand problems for AI/ML left. Personally this is why I push back against much of the hype. The hype machine is good if the end is in sight. But a hype machine creates a bubble. The gamble is if you call fill the bubble before it pops. But the risk is that if it pops before then, then it all comes crashing down. It's been a very hot summer but I'm worried that the hype will lead to a winter. I'd rather have had a longer summer than a hotter summer and a winter.

hatmanstack 10 hours ago [-]
Have no empirical feedback, but subjectively it reads as though the authors are trying to prove their own intelligence through convolution and confusion. Pure AI slop IMHO.
pama 2 hours ago [-]
Sauro, if you read this, please refrain from such low-content speculative statements:

“On a loose but telling note, this is still three decades short of the number of neural connections in the human brain, 10^15, and yet they consume some one hundred million times more power (GWatts as compared to the very modest 20 Watts required by our brains).”

No human brain could have time to read all the materials of a modern LLM training run even if they lived and read eight hours a day since humans first appeared over 300,000 years ago. More to the point, inference of an LLM is way more energy efficient than human inference (see the energy costs of a B200 decoding a 671B parameter model and estimate the energy needed to write the equivalent of a human book worth of information as part of a larger batch). The main reason for the large energy costs of inference is that we are serving hundreds of millions of people with the same model. No humans have this type of scaling capability.
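Back-of-envelope version of the reading-time claim, with loose assumptions for reading speed and corpus size:

  years, hours_per_day, words_per_min = 300_000, 8, 250     # loose assumptions
  words = years * 365 * hours_per_day * 60 * words_per_min
  print(f"{words:.1e} lifetime words")                      # ~1.3e13, about 13 trillion
  # Frontier training runs are commonly reported in the 1e13-1e14 token range,
  # so a single reader going nonstop since the species appeared barely reaches
  # the low end of that.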

wolvesechoes 1 hours ago [-]
I didn't have to read all the textbooks, web articles or blog posts about numerical methods, yet I am capable of implementing a production-ready ODE solver, and LLMs are not (I use this example as this is what I experienced). Clearly human supremacy.
vrighter 2 hours ago [-]
And yet, the human brain is still (way way wayyyyyyyyy) more capable than the LLMs at the actual thinking. They're as wide as an ocean and as shallow as a puddle in a pothole. And we didn't need to read all of the internet to do it.

As for the "write a book" part, the LLM will write a book quickly sure, but a significant chunk of it will be bullshit. It will all be hallucinated, but the stopped clock will be right some of the time.

No humans have this scaling capability? What do you call the reproductive cycle then? Lots of smaller brains, each one possibly specialized in a few fields, together containing all of human knowledge. And you might say that's not the same thing!, to which I reply with "let's not kid ourselves, Mixture-of-Experts describes exactly this".

throwaway314155 2 hours ago [-]
Agency may be better understood as Michael Levin's approach where e.g. a lifeform is something that can achieve the same goal using various methods (robust).

Having said that, you can now simply move the goal posts to say that while one human cannot read that much in that amount of time - the collective of all humans certainly can - or at least they can approximate it in a similar fashion to LLM's.

Since each of us can reap the benefits of the collective then the benefits are distributed back to the individuals as needed.

awanderingmind 1 hours ago [-]
There is a lot of focus in the comments on the authors' credentials and, apparently, their writing style. It is a pity, because I think their discussion of scaling is interesting, even if comparing LLMs to grid-based differential equation solvers might be unconventional (I haven't convinced myself whether the analogy is entirely apt/valid yet, but it could conceivably be).
Scene_Cast2 15 hours ago [-]
The paper is hard to read. There is no concrete worked-through example, the prose is over the top, and the equations don't really help. I can't make head or tail of this paper.
lumost 15 hours ago [-]
This appears to be a position paper written by authors outside of their core field. The presentation of "the wall" is only through analogy to derivatives on the discrete values computers operate on.
jibal 13 hours ago [-]
If you look at their other papers, you will see that this is very much within their core field.
lumost 13 hours ago [-]
Their other papers are on simulation and applied chemistry. Where does their expertise in Machine Learning, or Large Language Models derive from?

While it's not a requirement to have published in a field before publishing in it, having a coauthor who is from the target field, or a peer review venue in that field as an entry point, certainly raises credibility.

From my limited claim to be in either Machine Learning or Large Language Models, the paper does not appear to demonstrate what it claims. The authors' language addresses the field of Machine Learning and LLM development as you would a young student, which does not help make their point.

JohnKemeny 12 hours ago [-]
He's a chemist. Lots of chemists and physicists like to talk about computation without having any background in it.

I'm not saying anything about the content, merely making a remark.

chermi 11 hours ago [-]
You're really not saying anything? Just a random remark with no bearing?

Seth Lloyd, Wolpert, Landauer, Bennett, Fredkin, Feynman, Sejnowski, Hopfield, Zecchina, Parisi, Mézard, Zdeborová, Crutchfield, Preskill, Deutsch, Manin, Szilard, MacKay....

I wish someone told them to shut up about computing. And I wouldn't dare claim von Neumann as merely a physicist, but that's where he was coming from. Oh and as much as I dislike him, Wolfram.

JohnKemeny 2 hours ago [-]
As you note, some physicists do have computing backgrounds. I'm not suggesting they can't do computer science.

But today, most people hold opinions about LLMs, both as to their limits and their potential, without any real knowledge of computational linguistics nor of deep learning.

11101010001100 8 hours ago [-]
Succi is no slouch; hardcore multiscale physics guy, among other things.
godelski 7 hours ago [-]

  > Lots of chemists and physicists like to talk about computation without having any background in it.
I'm confused. Physicists deal with computation all the time. Are you confusing computation with programming? There's a big difference. Physicists and chemists are frequently at odds with the limits of computability. Remember, Turing, Church, and even Knuth obtained degrees in mathematics. The divide isn't so clear cut and there's lots of overlap. I think if you go look at someone doing their PhD in Programming Languages you could easily mistake them for a mathematician.

Looking at the authors I don't see why this is out of their domain. Succi[0] looks like he deals a lot with fluid dynamics and has a big focus on Lattice Boltzmann. Modern fluid dynamics is all about computability and its limits. There's a lot of this that goes into the Navier–Stokes problem (even Terry Tao talks about this[1]), which is a lot about computational reproducibility.

Coveney[2] is a harder read for me, but doesn't seem suspect. Lots of work in molecular dynamics, so shares a lot of tools with Succi (seems like they like to work together too). There's a lot of papers there, but sorting by year there's quite a few that scream "limits of computability" to me.

I can't make strong comments without more intimate knowledge of their work, but nothing here is a clear red flag. I think you're misinterpreting because this is a position paper, written in the style you'd expect from a more formal field, but also kinda scattered. I've only done a quick read (don't get me wrong, I have critiques) but there are no red flags that warrant quick dismissal. (My background: physicist -> computational physics -> ML) There are things they are pointing to that are more discussed within the more mathematically inclined sides of ML (it's a big field... even if only a small subset are most visible). I'll at least look at some of their other works on the topic as it seems they've written a few papers.

[0] https://scholar.google.com/citations?user=XrI0ffIAAAAJ

[1] I suspect this is well above the average HN reader, but pay attention to what they mean by "blowup" and "singularity" https://terrytao.wordpress.com/tag/navier-stokes-equations/

[2] https://scholar.google.com/citations?user=_G6FZ6YAAAAJ

JohnKemeny 2 hours ago [-]
Turing, Church, and even Knuth got their degrees before CS was an academic discipline. At least I don't think Turing studied Turing machines in his undergrad.

I'm saying that lots of people like to post their opinions of LLMs regardless of whether or not they actually have any competence in either computational linguistics or deep learning.

calf 3 hours ago [-]
There are some good example posts on Scott Aaronson's blog where he eviscerates shoddy physicists' takes on quantum complexity theory. Physicists today aren't like Turing et al.; most never picked up a theoretical computer science book and actually worked through the homework exercises. With the AI pivot and paper spawning this is kind of a general problem (arguably more interdisciplinary expertise is needed, but people need to actually take the time to learn the material and internalize it without making sophomore mistakes, etc.).
JohnKemeny 2 hours ago [-]
Look at their actual papers before making a comment of what is or isn't their core field: https://dblp.org/pid/35/3081.html
joe_the_user 14 hours ago [-]
The paper seems to involve a series of analogies and equations. However, I think that if the equations are accepted, the "wall" is actually derived.

The authors are computer scientists and people who work with large scale dynamic systems. They aren't people who've actually produced an industry-scale LLM. However, I have to note that despite lots of practical progress in deep learning/transformers/etc. systems, all the theory involved is just analogies and equations of a similar sort; it's all alchemy, and the people really good at producing these models seem to be using a bunch of effective rules of thumb and not any full or established models (despite books claiming to offer a mathematical foundation for enterprise, etc).

Which is to say, "outside of core competence" doesn't mean as much as it would for medicine or something.

ACCount37 13 hours ago [-]
No, that's all the more reason to distrust major, unverified claims made by someone "outside of core competence".

Applied demon summoning is ruled by empiricism and experimentation. The best summoners in the field are the ones who have a lot of practical experience and a sharp, honed intuition for the bizarre dynamics of the summoning process. And even those very summoners, specialists worth their weight in gold, are slaves to the experiment! Their novel ideas and methods and refinements still fail more often than they succeed!

One of the first lessons you have to learn in the field is that of humility. That your "novel ideas" and "brilliant insights" are neither novel nor brilliant - and the only path to success lies through things small and testable, most of which do not survive the test.

With that, can you trust the demon summoning knowledge of someone who has never drawn a summoning diagram?

jibal 13 hours ago [-]
Somehow the game of telephone took us from "outside of their core field" (which wasn't true) to "outside of core competence" (which is grossly untrue).

> One of the first lessons you have to learn in the field is that of humility.

I suggest then that you make your statements less confidently.

cwmoore 12 hours ago [-]
Your passions may have run away with you.

https://news.ycombinator.com/item?id=45114753

ForHackernews 12 hours ago [-]
The freshly-summoned Gaap-5 was rumored to be the most accursed spirit ever witnessed by mankind, but so far it seems not dramatically more evil than previous demons, despite having been fed vastly more human souls.
lazide 12 hours ago [-]
Perhaps we’re reaching peak demon?
klawed 12 hours ago [-]
> avoidance, which we also discuss in this paper, necessitates putting a much higher premium on insight and understanding of the structural characteristics of the problems being investigated.

I wonder if the authors are aware of The Bitter Lesson

phoenixhaber 6 hours ago [-]
I don't get it. Explain in layman's terms please? Without getting into the math, which looks quite complicated, it appears that they are simply assuming scaling continues with what is currently known, without model improvements.
CuriouslyC 11 hours ago [-]
This article is accurate. That's why I'm investigating a Bayesian symbolic Lisp reasoner. It's incapable of hallucinating, it provides auditable traces which are actual programs, and it kicks the crap out of LLMs at stuff like ARC-AGI, symbolic reasoning, logic programs, game playing, etc. I'm working on a paper where I show that the same model can break 80 on ARC-AGI, run the house by counting cards at blackjack, and solve complex mathematical word problems.
leptons 10 hours ago [-]
LLMs are also incapable of "hallucinating", so maybe that isn't the buzzword you should be using.
18cmdick 14 hours ago [-]
Grifters in shambles.
dcre 13 hours ago [-]
Always fun to see a theoretical argument that something clearly already happening is impossible.
ahartmetz 12 hours ago [-]
So where are the recent improvements in LLMs proportional to the billions invested?
dcre 12 hours ago [-]
Value for the money is not at issue in the paper!
ahartmetz 12 hours ago [-]
I believe it is. They are saying that LLMs don't improve all that much from giving them more resources - and computing power (and input corpus size) is pretty proportional to money.
42lux 11 hours ago [-]
It's not about value, it's about the stagnation while throwing compute at the problem.
dcre 11 hours ago [-]
Exactly.
crowbahr 12 hours ago [-]
Really? It sure seems like we're at the top of the S curve with LLMs. Wiring them up to talk to themselves as "reasoning" isn't scaling the core models, which have only made incremental gains for all the billions invested.

There's plenty more room to grow with agents and tooling, but the core models are only slightly bumping YoY rather than the rocketship changes of 2022/23.

dangus 5 hours ago [-]
And relevant to the summary of this paper, LLM incremental improvement doesn't really seem to include the described wall.

If work produced by LLMs forever has to be checked for accuracy, the applicability will be limited.

This is perhaps analogous to all the "self-driving cars" that still have to be monitored by humans, and in that case the self-driving system might as well not exist at all.

EMM_386 10 hours ago [-]
> the core models are only slightly bumping YoY rather than the rocketship changes of 2022/23

From Anthropic's press release yesterday after raising another $13 billion:

"Anthropic has seen rapid growth since the launch of Claude in March 2023. At the beginning of 2025, less than two years after launch, Anthropic’s run-rate revenue had grown to approximately $1 billion. By August 2025, just eight months later, our run-rate revenue reached over $5 billion—making Anthropic one of the fastest-growing technology companies in history."

$4 billion increase in 8 months. $1 billion every two months.

dcre 9 hours ago [-]
They’re talking about model quality. I still think they’re wrong, but the revenue is only indirectly relevant.