The AI revolution is unfolding before our eyes, and there is considerable excitement about how its magical powers may change our lives, for better and worse. Stuart Russell, a professor of computer science at the University of California, Berkeley, warns us about the dangers of this technology and the need for strong safety and ethical guardrails during the ongoing development process.
One of the leading researchers on AI, Russell has worked in almost every area of theoretical AI. His book “Artificial Intelligence: A Modern Approach” (with Peter Norvig) is the standard text in universities around the world. He is also one of the most influential voices on the safety and ethics of AI, having founded the International Association on Safe and Ethical AI.
In the second of a two-part series, Russell speaks about the possibility of the emergence of artificial general intelligence (AGI) and the state of AI research in India. He spoke to Amitabh Sinha. You can read the first part of the interview here.
Q: When large language models like ChatGPT first appeared, there was massive excitement. However, subsequent releases, though more powerful, have felt like more of the same. What is the next big thing in the development of AI?
Russell: If I knew the answer to that, I would be much richer than I am. We have already taken one major step—maybe one and a half—since ChatGPT arrived in November 2022. ChatGPT was GPT-3.5; then GPT-4 was released in March 2023, representing a significant step forward. It was able to reason about things that clearly were not in the training data, coming up with novel formulations and solving tricky problems. That was one step.
The other half-step is what we call agentic AI. In the context of large language models, this means the system can not only print text on a screen but also take action in the real world. It can actually send an email rather than just telling you to send one, log on to a website to make a transaction, post on social media, or buy and sell stocks on the stock market.
The “next thing” that big AI companies are investing in heavily is Artificial General Intelligence (AGI)—AI systems that match or exceed human capabilities in every area. Such a system would not only be a world champion at chess, Go, and poker, but also amazing at writing scientific articles, solving problems, doing research, writing legislation, diagnosing diseases, and writing poetry or love letters. It would be, essentially, a superhuman brain.
We often hear that we are very close to AGI. Are we?
That is a big question. The scale of investment is enormous—50 or 100 times bigger than the Manhattan Project—making it by far the biggest technology project the human race has ever undertaken.
However, I actually do not think it is going to succeed; my current estimate is about a 75 per cent chance that it fails due to current technical limitations. On the other hand, if those limitations are overcome, it becomes even more scary. Mythology in every culture contains examples of creations that we cannot control. An AI we cannot control would be in direct conflict with us, and we would lose that conflict just as I lose when I play chess against an AI. It wipes the floor with me on the chessboard, and it would wipe the floor with humanity in the real world.
Is the evolution toward AGI inevitable? Can we change course?
We have certainly decided not to follow paths once thought inevitable. With cloning, we moved from single-celled animals to sheep, but we chose not to clone humans because it would be extremely harmful.
Can nuclear technology be another example?
Yes, nuclear is highly constrained. We also phased out CFCs because they were destroying the ozone layer. However, the upside value of AGI is immense; a “back of the envelope” calculation suggests it could deliver a tenfold increase in global GDP. This potential acts as a “very, very powerful magnet” pulling us into the future. Regarding inevitability, if the current push fails, I think there will be a “mini ice age” in AI research for another decade.
The idea that AGI will eventually develop is based on the premise that human brains essentially work like calculating machines. Has confidence in that premise changed over the decades?
In the five decades I have worked in AI, I have developed a much greater appreciation for the human mind, but I have seen nothing to convince me we must look elsewhere. In neuroscience, nothing disconfirms the theory that what matters are the neurons and the signals passing between them. As we piece together the story of functions like hearing, vision, and motor control, it remains consistent with the century-old view of signals and neurons.
It would be a huge shock to the scientific and mathematical establishment if something entirely different were happening in the brain. It would be absurd to think there is no arrangement of atoms in the universe that can do a better job than the human brain; that would be ridiculously arrogant to assert.
What is your assessment of the AI research scene in India? Does the country need to develop its own Large Language Models (LLMs)?
There are several considerations. One is data bias: training data is often tailored to specific markets, such as OECD countries, and mostly comes from Westerners writing in English, so a model trained on it may not serve Indian users well.

Then there is economic significance. Many fear falling behind without a proprietary giant model, but it is less clear that you need your own model for business applications.
I would rather focus on training people in the universal, foundational, and permanent skills: data analysis, statistics, the mathematics of machine learning, and the mathematics of reasoning and decision-making.
When Indian officials seek you out for your advice on how to take this idea forward, what do you advise?
My understanding is that they are not committed to the path of trying to create AGI. They are more interested in creating specific, narrow-application systems that deliver value in healthcare, education, engineering, and construction, which I think makes sense. It is also not clear that you need to develop your own large language model capabilities.
I think the most valuable contribution that AI has made in recent years to the world is a system called AlphaFold, a remarkable achievement that won the Nobel Prize in Chemistry last year. AlphaFold is a system that predicts the structure of a protein from its molecular sequence, and it’s incredibly valuable for all kinds of biology, pharmacology, biotechnology.
Why haven’t we built more products like AlphaFold when AI makes it possible?
Because we are spending trillions of dollars on LLMs. LLMs are “fancier”, but practical and useful systems like AlphaFold do not get the same priority.
Similarly, machine learning has improved weather forecasting using mathematically sophisticated technology that has nothing to do with LLMs. We need more things like this; you cannot simply throw weather data at a large language model and hope for the best.
Would it be better for India to focus on these utility-based products and “leapfrog” the technology by integrating safety and ethical standards?
Absolutely; I believe that is the right strategy. In the long run, there are different possible futures. In those where human beings still exist, there is either safe AI or no AI; there are no futures with humans and unsafe, superintelligent AI. Investing in safe AI is the only logical path because, ultimately, people will not use AI that is not safe to use.














