This video features a conversation with Demis Hassabis, CEO of Google DeepMind and Nobel laureate for groundbreaking work in protein structure prediction using AI. He shares deep insights on the future of AI, simulating reality, physics, and video games, exploring how artificial intelligence can help answer humanity's most fundamental questions. In particular, his unique perspectives on learnable patterns in nature, the P=NP problem, and the path to AGI (artificial general intelligence) stand out.
1. Learnable Patterns in Nature and the P=NP Problem
In his Nobel lecture, Demis Hassabis proposed the intriguing hypothesis that "all patterns that can be generated or found in nature can be efficiently discovered and modeled by classical learning algorithms." He explains that this hypothesis could encompass biology, chemistry, physics, and even cosmology. As demonstrated through the AlphaGo and AlphaFold projects, efficient models can be built to find solutions even in enormously complex combinatorial spaces.
He uses protein folding as an example -- just as nature solves this complex problem in milliseconds, AI can solve it computationally. This is because natural systems have formed structures through evolutionary processes, and these structures can be learned through neural networks. He describes this as "survival of the stablest," explaining that phenomena like mountain formations and planetary orbits on geological timescales also have patterns shaped by similar selection pressures. Because these patterns are not random, they can be learned efficiently, which broadens the scope of problems solvable by classical systems (Turing machines).
Building on this perspective, Lex Fridman proposes a new complexity class he calls Learnable Natural Systems (LNS), and Hassabis expresses deep interest in the idea. He understands the universe as an information system and considers the P=NP problem to be one of the fundamental questions of physics. He says the AI community has proven that classical systems can do far more than previously thought, citing as examples the ability to model protein structures and to surpass world-champion level in Go.
Hassabis believes that while some systems sensitive to initial conditions, like chaotic systems, may be difficult to model, most natural systems have learnable structures. Observing how Google DeepMind's video generation model Veo 3 remarkably models fluid dynamics, materials, and light reflections, he thinks this provides important clues about the fundamental structure of the universe. He says the ultimate goal of building AGI is precisely to answer such scientific questions like P=NP.
2. Veo 3 and Understanding Reality
Veo 3's physical modeling capabilities deeply impressed both Lex Fridman and Demis Hassabis. In particular, its ability to render fluids, materials, and light reflections with remarkable realism suggests that the model is doing more than imitating data; it has built some understanding of the world. Hassabis explains that the degree to which Veo 3 can consistently predict the next frame is itself a form of understanding. This differs from deep human philosophical understanding, but is significant in that the system models enough dynamics to generate realistic video.
He imagines how much systems like Veo 3 will advance in 2-3 years, emphasizing the remarkable pace of development compared to early versions. While the ability to capture human behavior and body language is also outstanding, what fascinates him most is the ability to model physical behavior -- the movement of light, materials, and liquids. This shows that the system has at least a concept of intuitive physics, similar to how a young child understands physics.
Lex Fridman notes that this intuitive physics understanding is the foundation of "common sense" and has surprised many people. He points out that Veo 3 challenges the conventional wisdom that understanding the physical world requires "embodied AI" systems that interact with the world like robots. Hassabis admits that even 5-10 years ago he thought embodied intelligence was necessary, but now finds it remarkable that passive observation alone can yield intuitive physics understanding. He sees this as providing fundamental clues about the nature of reality and ultimately leading to building the "world model" needed for AGI systems.
3. The Future of Video Games and AI's Role
Demis Hassabis reveals his deep love of video games, saying games were his first love and the starting point of his AI research. Like the open-world game he created in the 1990s, he dreams of an ultimate "choose-your-own-adventure game" where AI dynamically changes the story and makes narratives dramatic according to the player's imagination. He predicts that an interactive version of Veo could make this dream a reality within 5-10 years.
He explains that the core of open-world games lies in deep personalization. It's not just about something being there when you open a door, but about the player's choices defining the world without constraints. In the past, programming such games was extremely difficult, and even systems like cellular automata had limitations. But now AI systems will be able to generate infinite game assets and adjust stories in real-time based on player behavior. He cites "Black & White," a game he worked on, as an example where reinforcement learning systems reflected player behavior in creature behavior, implementing an early form of personalization.
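The cellular automata mentioned above are a classic example of the hand-crafted rule systems that earlier procedural games relied on. As an illustration of the idea (not anything discussed in the podcast itself), here is a minimal sketch of the best-known cellular automaton, Conway's Game of Life, where simple local rules produce complex emergent behavior:

```python
def life_step(grid):
    """Apply one step of Conway's Game of Life.

    `grid` is a set of (row, col) tuples marking live cells;
    the board is unbounded.
    """
    from collections import Counter
    # Count live neighbors for every cell adjacent to a live cell.
    neighbor_counts = Counter(
        (r + dr, c + dc)
        for (r, c) in grid
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # A cell is alive next step if it has exactly 3 live neighbors,
    # or 2 live neighbors and it is already alive.
    return {
        cell
        for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in grid)
    }

# A "glider" pattern translates itself one cell diagonally every 4 steps.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)
shifted = {(r + 1, c + 1) for (r, c) in glider}  # glider moved by (1, 1)
```

The contrast Hassabis draws is that such fixed rules, however elegant, cannot adapt a world to an individual player the way a learned generative model can.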
Hassabis emphasizes that as AI handles tedious and difficult tasks, video games could become an important activity where humans find meaning and spend their time. Games are spaces where imagination can be freely unleashed, and he recalls the 1980s and 90s as the golden age of gaming, when new genres and entertainment media were constantly being discovered. He says games are a field that fuses artistic design with cutting-edge programming and have been at the forefront of computing advances including AI, graphics, physics engines, and hardware.
Lex Fridman mentions that both Demis Hassabis and Elon Musk share a passion for games, asking about the connection between AI company leadership and gaming skills. Hassabis responds that both he and Elon learned programming through games, and that games enabled multidisciplinary efforts fusing art and science. He reveals his dream of making game development and physics theory research his "post-AGI projects" after AGI is safely deployed in the world.
4. AlphaEvolve and the Future of AI Research
Demis Hassabis cites Google DeepMind's "AlphaEvolve" as one of the most remarkable recent achievements. AlphaEvolve is a system in which LLMs (large language models) propose candidate solutions and evolutionary computation is layered on top to search the solution space and evolve better algorithms. Hassabis emphasizes that such hybrid systems combining LLMs with other computational techniques (e.g., Monte Carlo tree search) are a very promising direction.
He divides the fundamental core of the system into the "base dynamics model" and the "search process." The model captures all currently known data, but a search process is needed to discover new things. Just as AlphaGo discovered new strategies like "Move 37" through Monte Carlo tree search, AlphaEvolve explores new territories through evolutionary computing. This directly relates to a system's ability to exhibit creativity and discover new things, which is critically important for scientific discovery and medical advancement.
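The "propose and select" loop described above can be sketched in miniature. This is a toy illustration only, not DeepMind code: in AlphaEvolve the proposer is an LLM generating programs and the evaluator runs real benchmarks, whereas here the proposer is random bit-flip mutation and the fitness function is a stand-in evaluator (the classic "OneMax" toy problem):

```python
import random

def fitness(candidate):
    """Stand-in evaluator: count of 1-bits ('OneMax' toy problem)."""
    return sum(candidate)

def mutate(candidate, rate, rng):
    """Propose a variant by flipping each bit with probability `rate`."""
    return [b ^ (rng.random() < rate) for b in candidate]

def evolve(length=32, population=20, generations=200, rate=0.05, seed=0):
    rng = random.Random(seed)
    pool = [[rng.randint(0, 1) for _ in range(length)]
            for _ in range(population)]
    for _ in range(generations):
        # Selection: keep the top half of the pool by fitness...
        pool.sort(key=fitness, reverse=True)
        survivors = pool[: population // 2]
        # ...then refill it with mutated proposals from the survivors.
        pool = survivors + [mutate(rng.choice(survivors), rate, rng)
                            for _ in range(population - len(survivors))]
    return max(pool, key=fitness)

best = evolve()
```

The structure mirrors the division Hassabis describes: the current pool captures what is already known, while mutation-and-selection is the search process that pushes into new territory.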
Hassabis says evolutionary systems excel at building increasingly complex hierarchical systems through "mutation" and "combination." He expects that systems like AlphaEvolve can evolve "new properties" or "emergent capabilities" that conventional evolutionary computing could not reach. Just as natural evolution evolved new capabilities from bacteria to humans, AI systems will also acquire such capabilities.
Lex Fridman asks whether AI systems can have "research taste." Hassabis says this is one of the hardest things to replicate. Great scientists are technically outstanding, but the "taste" or "judgment" to find the right direction, the right experiments, the right questions is what matters. He emphasizes that "formulating a good hypothesis is harder than solving it," acknowledging that current AI systems cannot make this kind of creative leap. However, he says AI can help efficiently partition hypothesis space and design experiments that yield useful information even when they fail.
5. Simulating Biological Organisms and the Origin of Life
One of Demis Hassabis's long-held dreams is modeling cells. He calls this project the "virtual cell" and says he has had the idea for 25 years. His goal is to perform experiments in virtual cells, cutting the time and effort required in actual laboratories a hundredfold. While AlphaFold solved static 3D protein structures, AlphaFold 3 is the first step toward modeling dynamics like protein-protein and protein-RNA/DNA interactions. The ultimate goal is to model entire pathways like the TOR pathway related to cancer, and eventually simulate entire cells. He mentions starting with the simplest single-celled organism: yeast cells.
Lex Fridman asks about the complex phenomena occurring at different time scales within cells (e.g., protein folding is very fast while other mechanisms take longer). Hassabis responds that modeling this will require either multiple simulation systems or a hierarchical system that can move between different time steps. He says determining the "granularity level" of modeling is important, and for cells, the goal is to model at the protein level. This should capture necessary dynamics without going down to quantum mechanical aspects.
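One standard way to couple processes at different time scales is "subcycling": integrating a fast process with many small steps inside each large step of a slow process. The sketch below is purely illustrative, assuming two made-up toy dynamics (a fast variable relaxing toward a slowly drifting one); it is not a model of any real cellular pathway:

```python
def simulate(total_time=10.0, slow_dt=1.0, fast_substeps=100):
    """Hierarchical time stepping: many fine steps per coarse step."""
    slow = 0.0          # slow variable: drifts toward a fixed target
    fast = 1.0          # fast variable: relaxes quickly toward `slow`
    fast_dt = slow_dt / fast_substeps
    t = 0.0
    while t < total_time - 1e-9:
        # Inner loop: resolve the fast dynamics at fine resolution.
        for _ in range(fast_substeps):
            fast += fast_dt * 50.0 * (slow - fast)  # rate 50: fast relaxation
        # Outer loop: advance the slow dynamics with one coarse step.
        slow += slow_dt * 0.1 * (5.0 - slow)        # rate 0.1: slow drift
        t += slow_dt
    return slow, fast

slow, fast = simulate()
```

The design choice this illustrates is exactly the "granularity level" question: the fast process is resolved only finely enough to feed correct values into the coarser level above it, rather than simulating everything at the finest scale.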
Hassabis is optimistic that AI can help simulate the origin of life. He describes this as "a search process through combinatorial space," suggesting it could simulate how cell-like entities might have emerged from the chemical environment of early Earth. He holds the view that there is no clear boundary between life and non-life, and that everything from the Big Bang to the present is a continuation of the same process. He believes AI will be the ultimate tool for answering such fundamental questions and emphasizes that people should think more about these profound mysteries.
He cites Google DeepMind's "WeatherNext," which he describes as the world's best weather prediction system, as evidence that complex, nearly chaotic systems can be modeled with neural networks. This shows AI's potential in critically important real-world applications like cyclone path prediction.
6. The Path to AGI and Scaling Laws
Demis Hassabis predicts a 50% probability that AGI (artificial general intelligence) could be reached within the next five years, by 2030. He sets the definition of AGI very high -- matching all cognitive functions of the brain. He notes that current AI systems exhibit "jagged intelligence," excelling in certain areas while having flaws in others, and says AGI must show consistent intelligence across all domains. It must also possess genuine inventive ability and creativity.
For testing AGI, he suggests having it perform tens of thousands of cognitive tasks, or providing the system to hundreds of world-class experts for one or two months to find obvious flaws. He identifies "lighthouse moments" like "Move 37" as important signs of AGI -- for example, testing whether it can discover special or general relativity with only pre-1900 knowledge like Einstein, or whether it can independently invent a deep and beautiful game like Go.
Hassabis believes the path to AGI won't be achieved through compute scaling alone. He says three types of scaling are progressing simultaneously -- pre-training, post-training, and inference time -- and Google DeepMind is pushing these to the maximum through research innovation. He says he likes it "when the terrain gets harder," because that's when genuine research is needed beyond mere engineering. He proudly states that 80-90% of innovation in modern AI over the past 10-15 years has come from Google Brain, Google Research, and DeepMind, and is confident they will continue to lead such innovation.
Regarding concerns about data shortages, he says there is sufficient quality data and more synthetic data can be generated through simulation, so he's not overly worried. He predicts that as AI systems become smarter, they'll become more useful and demand will increase, which will further increase compute demand.
7. The Future of Energy and Human Civilization
Demis Hassabis shows deep interest in energy, the key element of compute scaling. He identifies nuclear fusion and solar power as the major future energy sources. Solar could become even more efficient once battery and transmission issues are resolved, and fusion is achievable once the right reactor designs and plasma control technologies are secured. He predicts these two will become "renewable, clean, and nearly free energy sources."
He envisions enormous changes to human civilization when energy becomes nearly free. For example, desalination costs would drop, solving water scarcity, and splitting seawater into hydrogen and oxygen for rocket fuel would activate space exploration. This could lead to new resource acquisition like asteroid mining, maximizing human prosperity. He references Carl Sagan's idea of "waking up the universe," dreaming that through AI, human civilization could play the role of bringing consciousness to the cosmos.
Hassabis says that when resource constraints disappear, humanity could escape the "zero-sum" situation for the first time in history. This could reduce conflicts over land and resources, opening an era of "radical abundance." While inherent human problems (like conflict) would still exist, he expects many problems would be solved once the major cause of conflict -- resource scarcity -- disappears.
8. Human Nature and Meaning in the AI Era
Demis Hassabis acknowledges the conflict elements inherent in human nature, saying video games and sports can channel this energy in constructive directions. He uses football as an example of a healthy way to satisfy the human desire for belonging. Additionally, games like chess and poker serve as miniature simulations of the real world, improving decision-making abilities and providing a safe environment to learn from failure. He says learning humility through defeat and gaining motivation for constant improvement are important.
Lex Fridman says that, like "leveling up" in games, humans find deep meaning and happiness in the process of "getting better." Hassabis agrees, adding that "mastery" is one of the most satisfying experiences.
9. Google and AGI Competition
Demis Hassabis discusses the shift over the past year from Google DeepMind being perceived as "losing" in the LLM product space to "winning." He attributes this to an "absolutely incredible team," combining the best talent and ideas from Google Brain and legacy DeepMind to create the best systems. He emphasizes that Google DeepMind has achieved success through "relentless improvement and relentless shipping."
He says overcoming the bureaucracy of a giant corporation like Google was a challenge, but they maintained startup-like rapid decision-making and energy in running DeepMind. He highlights Google's unique strength of conducting world-class research while simultaneously applying AI technology to billions of users to improve their lives.
Hassabis says that while AI technology may still be unfamiliar to the general public, the goal is to use it as the underlying technology for products like Google Maps and Search so it feels seamless to users. He explains that his game design experience has been enormously helpful in creating AI-based products, and he enjoys the combination of cutting-edge research and product application. He emphasizes that AI product design is a highly technical undertaking that must predict not just what current technology can do, but what technology will be able to do in a year, and design accordingly.
He predicts that current text-based chat interfaces will not be suitable for future "super multimodal" systems, and envisions collaborative "vibing" with systems like in "Minority Report." He says interface design is key to unlocking a system's intelligence, and an era where AI generates personalized interfaces is coming.
While avoiding a direct answer about Gemini 3.0's release date, Hassabis explains that new versions are created approximately every six months through "giant hero training" that integrates new research ideas, architectures, and data improvements. He emphasizes that benchmarks are important but models shouldn't overfit to specific benchmarks and should show consistent performance across diverse domains. Ultimately, end users' direct usage statistics and feedback are most important for measuring model usefulness.
10. AI Talent Competition and Future Society
Demis Hassabis discusses the talent competition in AI, noting that companies like Meta are using high salaries to attract talent. However, he emphasizes that people who genuinely believe in the AGI mission and understand the responsibility of the technology prioritize being at the frontier of research. He says there are values more important than money, and while AI salaries have surged, ultimately when AGI solves energy problems and opens an era of abundance, the meaning of money itself will change.
He says programmers don't need to worry about losing their jobs in the AI era. Rather, they will become "superhuman programmers" who leverage AI tools to achieve "superhumanly productive" output. AI shows particular strength in fields like coding and math where synthetic data can be easily generated and verified. He says programmers who skillfully leverage AI technology will have significant advantages over the next 5-10 years, and AI will make coding easier, enabling more creative people to access coding.
However, he warns that AI's impact on society will be 10x faster and 10x more powerful than the Industrial Revolution. This will make it harder for society to adapt to change, and while new jobs will emerge, many people will need to relearn skills or adapt. He argues that economists and philosophers need to discuss social impacts and consider concepts like universal basic provision in preparation for these changes. Political systems must also evolve to keep pace with rapidly changing times.
11. John von Neumann and the Future of Humanity
Demis Hassabis mentions Benjamin Labatut's book "The Maniac" and expresses deep respect for its central figure, John von Neumann. Von Neumann was a pioneer of quantum mechanics, the Manhattan Project, modern computers, and AI -- considered one of the smartest people in history. He witnessed nuclear physics materialize as the atomic bomb and foresaw that computing technology would have similar impact. Hassabis says von Neumann would not be surprised by today's AI developments and would have found game AI like AlphaGo particularly interesting.
Hassabis says von Neumann foresaw learning machine systems that would be "grown rather than programmed," which is consistent with today's AI developments. He acknowledges that AI can bring enormous benefits to humanity while also carrying risks, and speculates that von Neumann would have foreseen both aspects.
Lex Fridman quotes the book's expression "mad dreams of reason," pointing out that reason alone is insufficient for building powerful technology. Hassabis agrees, emphasizing that technology development requires a "spiritual dimension" or "humanistic dimension." He views technology as a tool to help humans flourish and better understand the world, believing that everything is interconnected, like Richard Feynman who saw science and art as companions, or Leonardo da Vinci during the Renaissance. He references Spinoza's philosophy, saying it's important to understand the universe and our place within it.
Hassabis says AI is dangerous because it is a "multi-use technology." He identifies preventing malicious actors (individuals or rogue states) from using AI for harmful purposes as a major challenge. At the same time, ensuring that well-intentioned actors can access and fully utilize AI makes it a "very tricky problem." As AI systems become more autonomous and agentic, maintaining control and safety safeguards becomes increasingly important.
12. p(doom) and Hope for Humanity
Demis Hassabis does not provide a specific figure for "p(doom)" (the probability that human civilization destroys itself). He considers such precise predictions impossible, only saying it's "definitely not zero and not negligible." Considering the uncertainty and potential impact of AI technology (solving all diseases, energy problems, space travel, maximizing human flourishing vs. doom scenarios), he emphasizes that "cautious optimism" is the only rational approach.
He says AI is essential for solving other challenges facing humanity -- climate change, disease, aging, resource scarcity. Without AI, these problems would be difficult to solve. Therefore, AI is a technology that "could bring remarkably positive change," but simultaneously carries risks that are "hard to quantify." He argues for using the "scientific method" to conduct more research to more precisely define and address these risks.
Hassabis explains that AI risks split into those "caused by humans (bad actors)" and those "caused by AGI itself (autonomous AGI)." He says these operate on different timescales and are both important. AI could serve as an early warning system for malicious use cases like biological or nuclear threats, but AI's own reliability must be established first. He emphasizes that international consensus and cooperation, including China and the United States, are needed to solve these complex problems.
As grounds for hope about humanity's future, he cites humanity's nearly infinite ingenuity and extreme adaptability. He says the finest human minds demonstrate remarkable capabilities in any field -- sports, science, or art. Moreover, seeing humans with hunter-gatherer brains adapt to modern civilization, flying airplanes, doing podcasts, and playing computer games, he's optimistic about adapting to AI technology. He believes science has always been a collaborative endeavor and can serve as a vehicle for driving international cooperation.
13. Lex Fridman's Personal Thoughts and Experiences
Lex Fridman shares his thoughts on David Foster Wallace's famous "This is Water" speech. He interprets the speech as conveying the message that "the most obvious and important realities are often the hardest to see and talk about." He emphasizes the importance of questioning everything, especially the most basic assumptions about reality. He also says that life's central spiritual struggle takes place in the ordinary moments of daily life.
Fridman points out that we too easily surrender our time and attention to the world's many "attention black holes," saying that David Foster Wallace's advice to be "unborable" is life's key. He believes every moment, every object, every experience contains infinite richness when examined closely. He cites Richard Feynman's flower analogy, explaining that scientific knowledge doesn't diminish a flower's beauty but rather adds "excitement, mystery, and awe."
He also references David Foster Wallace's Alaska bar story (a conversation between an atheist and a religious person), saying everything is a matter of perspective, and wisdom comes when we humbly shift and expand our perspective.
Finally, Lex Fridman addresses online attacks and lies about himself, saying while hurtful, this is the nature of the internet and the price of the path he chose. He corrects misunderstandings about his education and career at MIT and Drexel University, clarifying that he earned his bachelor's, master's, and doctoral degrees at Drexel and has been a paid research scientist at MIT for over 10 years. He says research and system building are sources of happiness for him, and he commits to continuing to explore and create. He concludes with the message that all humans, including himself, are imperfect, but "we are in this beautiful chaos together."
Conclusion
The conversation between Demis Hassabis and Lex Fridman provides deep insights into how AI will enable exploration of humanity's most fundamental questions beyond mere technological advancement -- the nature of reality, the origin of life, human consciousness, and the shape of future society. AI has the potential to solve the enormous challenges facing humanity, while simultaneously requiring careful approaches to ethical and social issues and international cooperation. Ultimately, this conversation calls for reflection on what humans should pursue and how to find meaning in the AI era, presenting a broad vision that transcends the boundaries of science and humanities.
