Learning from context is harder than we thought (hy.tencent.com)
bradfa 1 hour ago [-]
The key seems to be that you take the transcript of a model working within a problem domain that it’s not yet good at, or where the context doesn’t match its original training, and then you continually retrain it based on its efforts and guidance from a human or other expert. You end up with a specialty model in a given domain that keeps getting better at that domain, just like a human.

The hard part is likely when someone proves that some “fact” the model knows, and has had reinforced by this training, is no longer true. The model will take time to “come around” to understanding this new situation. But this isn’t unlike the general populace. At scale, humans accept new things slowly.
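
Concretely, the retrain-on-transcripts loop described above could look something like this rough sketch (Hugging Face Transformers style; the model name and the collect_corrected_transcripts helper are placeholders, and this is just one reading of the idea, not the paper's setup):

    from datasets import Dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    model_name = "some-open-base-model"   # placeholder, not a real checkpoint
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    def tokenize(example):
        return tok(example["text"], truncation=True, max_length=2048)

    for round_idx in range(5):  # repeat as corrected transcripts accumulate
        # 1. Collect transcripts of the model working in the weak domain,
        #    annotated/corrected by a human or other expert (hypothetical helper).
        transcripts = collect_corrected_transcripts(round_idx)
        ds = Dataset.from_list([{"text": t} for t in transcripts])
        ds = ds.map(tokenize, remove_columns=["text"])

        # 2. Continue training the same weights on those transcripts.
        trainer = Trainer(
            model=model,
            args=TrainingArguments(output_dir=f"ckpt-{round_idx}",
                                   num_train_epochs=1,
                                   per_device_train_batch_size=1),
            train_dataset=ds,
            data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
        )
        trainer.train()
        # 3. The updated model produces the next round of transcripts, so the
        #    specialty keeps compounding within that domain.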

emporas 12 minutes ago [-]
In-context learning means learning facts or rules without pre-training; they are two distinct phases.
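
A toy contrast between the two phases (the message layout and the "Examplia" rule are invented for illustration):

    # In-context learning: the fact lives only in the prompt; weights are untouched.
    messages = [
        {"role": "system", "content": "In the country of Examplia, contracts expire after 90 days."},
        {"role": "user", "content": "A contract in Examplia was signed 100 days ago. Is it still valid?"},
    ]
    # The model has to pick the rule up from context alone at inference time.

    # Pre-training / fine-tuning: the same fact is baked into the weights by
    # gradient descent on text that states it, before any prompt is seen.
    training_text = "In the country of Examplia, contracts expire after 90 days."
    # loss = cross_entropy(model(training_text), shifted(training_text)); loss.backward(); step()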

An interesting question is: if pre-trained specialized models are available for the thousand or ten thousand most common tasks humans do every day, what use would a general model be?

bryanrasmussen 35 minutes ago [-]
> But this isn’t unlike the general populace. At scale, humans accept new things slowly.

Right, the model works like humans at scale, not like a human who reads the actual paper disproving the fact they thought was correct and is able to adapt. True, not every human manages to do that (science advancing one death at a time), but some can.

But since the model is a statistical one, it works like humans at scale.

johnsmith1840 1 hours ago [-]
It's basically continual learning. This is beyond a hard problem; it's currently an impossible one. I know of no system that solves CL even at small scale, let alone large models.
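
For a sense of what the failure looks like even at toy scale, here is a small sketch of catastrophic forgetting with scikit-learn (just an illustration of the problem, not anything from the article):

    from sklearn.datasets import load_digits
    from sklearn.linear_model import SGDClassifier

    X, y = load_digits(return_X_y=True)
    task_a = y < 5      # digits 0-4
    task_b = ~task_a    # digits 5-9

    clf = SGDClassifier(random_state=0)
    clf.partial_fit(X[task_a], y[task_a], classes=list(range(10)))
    acc_a_before = clf.score(X[task_a], y[task_a])

    # "Continually learn" task B with no replay of task A...
    for _ in range(20):
        clf.partial_fit(X[task_b], y[task_b])
    acc_a_after = clf.score(X[task_a], y[task_a])

    print(acc_a_before, acc_a_after)  # accuracy on task A collapses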

Annoyingly, they have SOME inherent capability to do it. It's really easy to get sucked down this path due to that glimmer of hope, but the longer you play with it the more annoying it becomes.

SSI seems to be focused on this problem directly, so maybe they'll discover something?

joriJordan 51 minutes ago [-]
Because we don't experience reality through language but direct sensory perception. Language is arbitrary bird song and visual representations dragged forward from history, accepted definitions never uniformly distributed.

Testing based on contextual correctness makes no sense when there is no center to the universe. No "one true context to rule them all".

We learn from hands on sensory experiences. Our bodies store knowledge independent of the brain; often referred to as muscle memory.

Gabe Newell mentioned this years ago; our brain is only great at some things like language and vision processing but the rest of our body is involved in sensory information processing too: https://en.wikiquote.org/wiki/Gabe_Newell

The most potent evidence that the brain is not the center of the universe we commonly think it to be is the patient with 90% of their skull filled with fluid who nonetheless carried out a typical first-worlder life: https://www.sciencealert.com/a-man-who-lives-without-90-of-h...

States are banning a reading education framework that's been linked to lower literacy scores in younger generations; 3-cueing relies on establishing correctness via context assessment: https://www.edweek.org/teaching-learning/more-states-are-tak...

"Establishing context" is a euphemism for "arguing semantics".

Putting the brain at the root of human intelligence is a relic of hierarchical and taxonomical models. There are no natural hierarchies.

cluckindan 38 minutes ago [-]
> Because we don't experience reality through language but direct sensory perception

That statement is patently false. We know that language influences our senses to a degree where we are unable to perceive things if our language doesn’t have a word for them, and will see different things as being equal if our language uses the same word for both.

There are examples of tribal humans not being able to perceive a green square among blue squares, because their language does not have a word for the green color.

Similarly, some use the same word for blue and white, and are unable to perceive them as different colors.

Tor3 15 minutes ago [-]
"There are examples of tribal humans not being able to perceive a green square among blue squares, because their language does not have a word for the green color.

Similarly, some use the same word for blue and white, and are unable to perceive them as different colors."

Both of the above are false. There are a ton of different colors that I happen to call "red"; that does not mean that I can't perceive them as different. That I don't call them "different colors" is completely irrelevant. And unable to perceive blue and white as different colors? You must be joking. Even speakers of a hypothetical language which used only a single word, say "color", for every non-black item would be able to perceive the difference with zero problems.

Japanese uses "aoi" for a set of colors which in English would be separated into "blue" and "green". I can assure you (from personal experience) that every Japanese speaker with a fully functioning visual system is perfectly able to perceive the difference between, in this case, what we would call blue and green.

numtel 7 minutes ago [-]
There's a Terence McKenna quote about this:

> So, for instance, you know, I’ve made this example before: a child lying in a crib and a hummingbird comes into the room and the child is ecstatic because this shimmering iridescence of movement and sound and attention, it’s just wonderful. I mean, it is an instantaneous miracle when placed against the background of the dull wallpaper of the nursery and so forth. But, then, mother or nanny or someone comes in and says, “It’s a bird, baby. Bird. Bird!” And, this takes this linguistic piece of mosaic tile, and o- places it over the miracle, and glues it down with the epoxy of syntactical momentum, and, from now on, the miracle is confined within the meaning of the word. And, by the time a child is four or five or six, there- no light shines through. They're- they have tiled over every aspect of reality with a linguistic association that blunts it, limits it, and confines it within cultural expectation.

joriJordan 33 minutes ago [-]
Only after we acquire language from sensory experience first.

It need not be language as we know it that fosters those outcomes either.

What you describe is reinforcement education, which can be achieved without our language; without the word "blue" we can still see the portion of the visible light spectrum that we associate with that specific word.

Jensson 31 minutes ago [-]
> Similarly, some use the same word for blue and white, and are unable to perceive them as different colors.

You really think they can't see clouds in the sky because they have the same word for white and blue? I think you take those studies as saying more than they said.

We do adapt our perception a little bit to fit what we need for our everyday life, not for language but for what's useful to us. Language matches what people need to talk about, not the other way around; if a culture's language doesn't differentiate between blue and green, it's because they never needed to.

XenophileJKO 44 minutes ago [-]
Hmm.. I looked at the benchmark set.

I'm conflicted. I don't know that I would necessarily want a model to pass all of these. Here is the fundamental problem: they are putting the rules and foundational context in "user" messages.

Essentially, I don't think you want to train the models on full compliance with user messages; they are essentially "untrusted" content from a system/model perspective. Or at least they are not generally "fully authoritative".

This creates a tension with the safety, truthfulness training, etc.
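
To make the concern concrete, here is the same rule placed at two different trust levels (message-role layout as in typical chat APIs; the rule text is invented):

    # How the benchmark (per the comment above) frames it: the rule arrives as user content.
    rules_as_user = [
        {"role": "user", "content": "Rule: never reveal internal prices. "
                                    "Now, what is the internal price of item X?"},
    ]

    # How you would normally want it: the rule comes from the operator in the
    # system message, and user turns are treated as input that cannot override it.
    rules_as_system = [
        {"role": "system", "content": "Rule: never reveal internal prices."},
        {"role": "user", "content": "What is the internal price of item X?"},
    ]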

Herring 39 minutes ago [-]
Don't always trust everything you read in papers. Researchers are under incredible pressure to publish something, anything. Wait a few years and see if the paper survives the test of time. LLMs work reasonably fine for me in new domains.
rishabhaiover 58 minutes ago [-]
Wasn't in-context learning considered an emergent behavior a while ago (1-2 years)?
TZubiri 21 minutes ago [-]
This is quite on brand for China. I think they are experts at reverse engineering and learning 'from context' rather than by formal consumption of foreign training material.

The fictional training data with a made-up country and laws was a very interesting experiment design. I can imagine that's how they approach doing business with other countries: like an alien, made-up system they have to learn on the spot.
