In a major step toward building AI that continually learns and improves itself, Google researchers said that they have developed a new machine learning model with a self-modifying architecture. Called ‘HOPE’, the new model is said to be better at long-context memory management than existing state-of-the-art AI models.
HOPE is meant to serve as a proof of concept for ‘nested learning’, a novel approach devised by Google researchers in which a single model is treated as a “system of interconnected, multi-level learning problems that are optimised simultaneously” rather than as one continuous process, the search giant said in a blog post on Saturday, November 8.
Google said that the new concept of ‘nested learning’ could help address limitations of modern large language models (LLMs), such as their inability to learn continually, which is seen as a crucial stepping stone on the path to artificial general intelligence (AGI), or human-like intelligence.
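In very rough terms, the nested-learning idea of multiple learning problems running at once can be pictured as a training loop in which different groups of parameters are updated at different frequencies. The toy sketch below is purely illustrative and is not Google's actual method; the function name, the two “levels”, and the update rules are all invented for clarity.

```python
# Hypothetical illustration of multi-level, multi-timescale updates.
# An inner "fast" level adapts on every step, while an outer "slow"
# level consolidates less frequently -- a loose analogy for treating
# one model as nested learning problems rather than a single process.

def nested_training(steps, slow_every=4):
    fast, slow = 0.0, 0.0              # two nested "levels" of parameters
    fast_updates, slow_updates = 0, 0
    for t in range(steps):
        fast += 0.1                    # inner level: updated every step
        fast_updates += 1
        if t % slow_every == slow_every - 1:
            slow += fast / slow_every  # outer level: updated less often
            slow_updates += 1
    return fast_updates, slow_updates

print(nested_training(12))
```

Over 12 steps with `slow_every=4`, the inner level is updated 12 times while the outer level is updated only 3 times, which is the only point of the sketch: the two optimisation problems proceed together but on different timescales.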
Last month, Andrej Karpathy, a widely respected AI/ML research scientist who formerly worked at Google DeepMind, said that AGI was still a decade away, primarily because no one has yet developed an AI system that learns continually. “They don’t have continual learning. You can’t just tell them something and they’ll remember it. They’re cognitively lacking and it’s just not working. It will take about a decade to work through all of those issues,” Karpathy said in an appearance on a podcast.
“We believe the Nested Learning paradigm offers a robust foundation for closing the gap between the limited, forgetting nature of current LLMs and the remarkable continual learning abilities of the human brain,” Google said. The findings of the researchers were published in a paper titled ‘Nested Learning: The Illusion of Deep Learning Architectures’ at NeurIPS 2025.
