Demis Hassabis, CEO of Google DeepMind, is no stranger to AI’s transformative potential. When we last spoke in November 2022, just before the launch of ChatGPT, Hassabis expressed concern about the rapid pace of AI development. He warned against a reckless approach, likening some in the field to experimentalists handling dangerous material. Fast forward to 2025, and much has changed. Hassabis recently won a share of the 2024 Nobel Prize in Chemistry for his work on AlphaFold, a system that has revolutionized biomedical research. Today, many experts, including Hassabis, believe Artificial General Intelligence (AGI) could be within reach by the end of this decade.
The Promising Future of AGI
Hassabis has dedicated his career to creating AGI, which he believes could be humanity’s most beneficial invention if developed responsibly. In his view, AGI could solve many of the world’s biggest problems. He envisions a future where AI helps cure diseases, develop new energy sources, and address climate change. “If we build AI correctly,” he says, “it will help with some of our most pressing problems. It’s almost like the cavalry we need today.”
Hassabis is particularly optimistic about AI’s potential to tackle climate change. He argues that technological solutions, including AI-driven innovations such as new energy sources, offer the only realistic path to meaningful progress; collective action on global issues, in his view, will fall short without them.
Addressing the Risks of AGI
Despite his optimism, Hassabis is acutely aware of the risks AGI poses. He believes AI’s dual-use nature—its potential for both positive and harmful applications—requires careful consideration. One of the key challenges is ensuring that AI is used for good, such as curing diseases, while preventing bad actors from repurposing the technology for harmful purposes.
The second major concern is the inherent risk of AGI itself. As AGI becomes more autonomous and self-improving, ensuring control and safety will become increasingly difficult. Hassabis emphasizes the importance of establishing strong safeguards and oversight to prevent unintended consequences.
The Worst-Case Scenario
While the potential of AGI is immense, Hassabis warns that if not managed correctly, the technology could be repurposed for destructive purposes. He fears that AI could be used to create harmful substances instead of cures, reversing the progress society hopes to make. To mitigate this, Hassabis advocates for international cooperation and global standards in AI development. He stresses that without these safeguards, the consequences could be catastrophic.
Google’s Changing Approach to Military AI
When Google acquired DeepMind in 2014, Hassabis secured a promise that DeepMind’s technology would not be used for military purposes. However, the landscape has changed. Google now sells DeepMind’s AI technology to military forces, including those of the United States and Israel.
Hassabis does not view this as a compromise of his principles. Instead, he believes rising geopolitical tensions make partnerships with governments necessary to address emerging threats. He acknowledges that AI has a role to play in critical areas such as cybersecurity and biosecurity, where DeepMind excels. With the technology now widely available through open-source models, DeepMind continues to focus on the areas where its expertise is most needed.
Controlling AI’s Power and Preventing Misuse
As AI systems become more powerful, the concern that they might develop harmful tendencies, such as power-seeking behavior, has gained attention. Hassabis believes the scale of these risks is still unknown: some argue that controlling AGI will be straightforward, but he stresses that this very uncertainty demands ongoing research to understand and mitigate the dangers. He emphasizes proactive work now to ensure that AGI’s development is both safe and beneficial.
The Need for Global Cooperation
Hassabis highlights the urgency of international collaboration, not only between nations but also between companies and researchers. He believes that as AGI approaches, society is not yet prepared for its arrival. The rapid progress in AI requires careful planning, global standards, and shared responsibility to avoid potential pitfalls.
The Role of Scientists and Technologists in AI’s Future
Asked how he sees himself, Hassabis answers first as a scientist. A lifelong pursuit of knowledge and understanding of the world has driven his work in AI. He sees the development of AI as a way to address fundamental questions about intelligence and consciousness while also making a tangible impact on society. He considers himself an entrepreneur second, mainly because entrepreneurship is the fastest route to achieving those goals.
Looking Ahead: The Timeline for AGI
As AGI draws nearer, opinions on when it will arrive vary widely. Some believe it could be realized within a few years. Hassabis remains more cautious, noting that any timeline depends on how AGI is defined. For him, AGI means systems capable of performing complex cognitive tasks on par with humans, such as generating groundbreaking theories like general relativity.
Whatever the timeline, Hassabis remains focused on building AI that is not just useful but capable of advancing human knowledge. He acknowledges that both the engineering and the science of AI are critical to societal progress, but maintains that it is the scientists who truly push the boundaries of what is possible.