Community Perspective – Stuart Russell

Q&A with Stuart Russell, AI2050 Senior Fellow

Stuart Russell is a Professor of Computer Science and Director of the Center for Human-Compatible Artificial Intelligence at UC Berkeley. Dr. Russell’s textbook Artificial Intelligence: A Modern Approach (co-authored with Peter Norvig), now in its fourth edition, has been translated into 14 languages and is used in over 1,500 universities in 135 countries. His most recent book is Human Compatible: AI and the Problem of Control, which has been translated into German, Chinese, Ukrainian, Russian, Japanese, Korean, Turkish, Greek, Portuguese and Croatian.

He received the AI2050 Senior Fellowship in 2022. Stuart’s work with Berkeley colleagues Sanjit Seshia and Alvin Cheung will address Hard Problem 1 (develop more capable and more general AI that is safe and earns public trust). In his words, “Our core hypothesis is that general-purpose AI systems need to know things and be able to reason with that knowledge.” Central to this project is probabilistic programming, which Stuart regards as the best tool for addressing real-world problems, as it is well equipped to cope with complex events and with partial, ambiguous knowledge about the world.
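
For readers who want a concrete feel for the idea, the sketch below is a toy illustration of probabilistic programming, not code from Stuart’s project: a generative model is written as ordinary code that makes random choices, and a generic inference routine weights many simulated runs by how well each one explains the observed evidence. The scenario (an unknown number of aircraft producing noisy radar blips) and every name in the code are hypothetical, chosen only to show how such a program can reason under partial, ambiguous knowledge.

```python
# Toy sketch of the probabilistic-programming idea (hypothetical example,
# not Russell's actual system): the model is ordinary code with random
# choices; inference weights simulated runs by how well they explain the data.

import math
import random


def likelihood_weighted_posterior(observed_blips, num_samples=100_000):
    """Estimate P(number of aircraft | observed blip count) by simulation."""
    weights = {}  # number of aircraft -> accumulated likelihood weight
    for _ in range(num_samples):
        # Prior: partial knowledge -- anywhere from 0 to 10 aircraft.
        n_aircraft = random.randint(0, 10)
        # Each aircraft is detected with probability 0.9; false alarms add ~2 blips.
        expected_blips = 0.9 * n_aircraft + 2.0
        # Likelihood of the evidence under a Poisson observation model.
        weight = (math.exp(-expected_blips)
                  * expected_blips ** observed_blips
                  / math.factorial(observed_blips))
        weights[n_aircraft] = weights.get(n_aircraft, 0.0) + weight
    total = sum(weights.values())
    return {n: w / total for n, w in sorted(weights.items())}


if __name__ == "__main__":
    posterior = likelihood_weighted_posterior(observed_blips=6)
    for n, p in posterior.items():
        print(f"P(aircraft = {n} | 6 blips) = {p:.3f}")
```

The design point the sketch is meant to convey is the separation of concerns: the program states what might be true about the world, and a generic inference procedure (here, simple likelihood weighting) turns noisy evidence into a posterior belief, without the modeler writing any inference code by hand.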




What are you doing for your AI2050 Senior Fellowship?
Stuart Russell, AI2050 Senior Fellow

I am exploring an approach to the creation of well-founded, interpretable, and provably safe general-purpose AI, in contrast to the black-box deep learning approach prevalent in today’s AI. “Well-founded” means that we can understand in a rigorous way how the AI system works, what it knows, what it wants, and what it will do.


What do you see as the potential lasting impact of your work?
Stuart Russell, AI2050 Senior Fellow

The goal is to show that this approach to AI is both powerful and safe, and to redirect efforts away from the gargantuan and completely mysterious systems that the big tech firms are currently building. If it succeeds, humanity will be able to reap the benefits of general-purpose AI while retaining absolute confidence that the AI systems will continue to act in the best interests of humans.


Can you point me to any videos or articles that might be understandable for a person who doesn't have a background in computer science or AI?
Stuart Russell, AI2050 Senior Fellow

Some of the general questions of AI safety are covered in my 2021 Reith Lectures, which are intended for a general audience. The basic ideas of probabilistic programming are unavoidably technical, but there is a reasonable non-technical summary in my 2015 article “Unifying Logic and Probability” and this accompanying video.


You're one of the people behind the “Slaughterbots” video, which has been widely viewed and cited. Can you tell me how it came about? What was the impact?
Stuart Russell, AI2050 Senior Fellow

I began working to curtail the threat of lethal autonomous weapons [which can attack humans without any human supervision] in 2013. I wrote articles, gave talks (including one at CCW, the UN Convention on Certain Conventional Weapons in Geneva, which is where many arms control treaties are hammered out), helped create an open letter signed by tens of thousands of scientists, and led a delegation of scientists to the White House. The main point, which we have reiterated many times, is that lethal autonomous weapons will become weapons of mass destruction. One person can press a button and launch millions of lethal weapons that will hunt down and kill millions of people.

Then in 2016 I went to a meeting at West Point where [a senior US defense official] said, “We’ve listened carefully to these arguments and objections to autonomous weapons, and my experts have assured me that there is no risk of accidentally creating Skynet [the hostile artificial general intelligence imagined in the Terminator films].” He was deadly serious.

It was clear that all these articles and PowerPoint presentations were not getting the message through. By chance, I had just watched a short and very effective film on video game addiction called “Uncanny Valley.” So I decided we needed something similar for lethal autonomous weapons: a fictional and highly watchable account that would convey the main ideas in a way no one could misunderstand.

I wrote a short treatment that was really not very good, and then found some brilliant writers and filmmakers at Space Digital in Manchester, England, who made it much, much better. It had two storylines: one, a sales pitch by the CEO of an arms manufacturer, demonstrating the tiny quadcopter and its use in targeted mass attacks; the other, a series of unattributed atrocities including the assassination of hundreds of students at the University of Edinburgh.

The film premiered at the CCW in November 2017. The reactions elsewhere were mostly positive: the film had about 75 million views on the web and I’m pleased to say that CNN called it “the most nightmarish, dystopian film of 2017.”

Many of my AI colleagues thought the CEO’s presentation was real, not fictional, which tells you something about where the technology is.


If you could ask an AI from the year 2050 one question and get a response today, in 2023, what would that question be?
Stuart Russell, AI2050 Senior Fellow

“What is the precise nature of harmonious coexistence between humanity and powerful AI systems in 2050, and what were the steps taken to get there?”