Nested Learning: A new ML paradigm for continual learning (research.google)
heavymemory 5 minutes ago [-]
The idea is interesting, but I still don’t understand how this is supposed to solve continual learning in practice.

You’ve got a frozen transformer and a second module still trained with SGD, so how exactly does that solve forgetting instead of just relocating it?

Bombthecat 21 minutes ago [-]
Damn, and before that, Titans from Google: https://research.google/blog/titans-miras-helping-ai-have-lo...

We are not at the end of AI :)

Also, someone claimed that NVIDIA combined diffusion and autoregression, making it 6 times faster, but I couldn't find a source. Big if true!

heavymemory 8 minutes ago [-]
Do you have a source for the NVIDIA “diffusion plus autoregression 6x faster” claim? I can’t find anything credible on that.
abracos 15 hours ago [-]
Someone's trying to reproduce it in the open: https://github.com/kmccleary3301/nested_learning
NitpickLawyer 2 hours ago [-]
Surprised this isn't by lucidrains; they usually have the first repro attempts.

This tidbit from a discussion on that repo sounds really interesting:

> You can load a pretrained transformer backbone, freeze it, and train only the HOPE/TITAN/CMS memory pathways.

In principle, you would:

- Freeze the shared transformer spine (embeddings, attention/MLP blocks, layer norms, lm_head) and keep lm_head.weight tied to embed.weight.

- Train only the HOPE/TITAN memory modules (TITAN level, CMS levels, self-modifier projections, inner-optimizer state).

- Treat this like an adapter-style continual-learning finetune: the base model provides stable representations, while HOPE/CMS learn to adapt/test-time-learn on top (see the sketch below).
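
To make the recipe concrete, here's a minimal PyTorch-style sketch of that setup. It's not the paper's or the repo's actual code: MemoryModule is just a placeholder adapter, and a tiny randomly initialized TransformerEncoder stands in for a real pretrained backbone.

    import torch
    import torch.nn as nn

    # Placeholder adapter standing in for a TITAN/CMS memory pathway
    # (NOT the actual HOPE/TITAN/CMS modules from the paper or the repo).
    class MemoryModule(nn.Module):
        def __init__(self, d_model: int, d_mem: int = 64):
            super().__init__()
            self.down = nn.Linear(d_model, d_mem)
            self.up = nn.Linear(d_mem, d_model)

        def forward(self, h: torch.Tensor) -> torch.Tensor:
            # Residual update on top of the frozen backbone's representations.
            return h + self.up(torch.tanh(self.down(h)))

    d_model, vocab = 256, 1000

    # Stand-in for a pretrained transformer spine (embeddings, blocks, lm_head).
    embed = nn.Embedding(vocab, d_model)
    backbone = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
        num_layers=2,
    )
    lm_head = nn.Linear(d_model, vocab, bias=False)
    lm_head.weight = embed.weight  # keep lm_head tied to embed, per the note above

    # 1. Freeze the shared spine (the tied lm_head is frozen along with embed).
    for p in list(embed.parameters()) + list(backbone.parameters()):
        p.requires_grad = False

    # 2. Train only the memory pathway.
    memory = MemoryModule(d_model)
    opt = torch.optim.AdamW(memory.parameters(), lr=1e-4)

    # 3. Adapter-style continual-learning step: frozen spine, trainable memory.
    tokens = torch.randint(0, vocab, (2, 16))
    with torch.no_grad():
        h = backbone(embed(tokens))      # stable representations from the base
    logits = lm_head(memory(h))          # memory pathway adapts on top
    loss = nn.functional.cross_entropy(
        logits[:, :-1].reshape(-1, vocab), tokens[:, 1:].reshape(-1)
    )
    loss.backward()
    opt.step()

The point being: the optimizer never touches the spine, so the pretrained weights stay intact and only the memory pathway accumulates task-specific updates.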

----

Pretty cool if this works. I'm hopeful more research will go into reusing already-trained models (beyond just freezing the existing parts and training the rest) so all that training effort doesn't get lost. Something that can reuse those weights alongside architecture enhancements would be truly revolutionary.

aktuel 2 hours ago [-]
There is also a related YouTube video: Ali Behrouz of Google Research explaining his poster paper, "Nested Learning: The Illusion of Deep Learning Architecture", at NeurIPS 2025. https://www.youtube.com/watch?v=uX12aCdni9Q
panarchy 13 hours ago [-]
I've been waiting for someone to make this since about 2019; it seemed pretty self-evident. It will be interesting when they get to mixed heterogeneous-architecture networks with a meta-network that optimizes for specific tasks.