Stuart Russell 2022 Senior Fellow
Hard Problem: Develop more capable and more general AI that is useful, safe, and earns public trust

Stuart Russell received his B.A. with first-class honours in physics from Oxford University in 1982 and his Ph.D. in computer science from Stanford in 1986. He then joined the faculty of the University of California at Berkeley, where he is Professor (and formerly Chair) of Electrical Engineering and Computer Sciences and holder of the Smith-Zadeh Chair in Engineering. He is also an Adjunct Professor of Neurological Surgery at UC San Francisco and Vice-Chair of the World Economic Forum’s Council on AI and Robotics.

Russell is a recipient of the Presidential Young Investigator Award of the National Science Foundation, the IJCAI Computers and Thought Award, the World Technology Award (Policy category), the Mitchell Prize of the American Statistical Association and the International Society for Bayesian Analysis, the ACM Karlstrom Outstanding Educator Award, and the AAAI/EAAI Outstanding Educator Award. In 1998, he gave the Forsythe Memorial Lectures at Stanford University and from 2012 to 2014 he held the Chaire Blaise Pascal in Paris. He is a Fellow of the American Association for Artificial Intelligence, the Association for Computing Machinery, and the American Association for the Advancement of Science.

His research covers a wide range of topics in artificial intelligence including machine learning, probabilistic reasoning, knowledge representation, planning, real-time decision making, multitarget tracking, computer vision, computational physiology, global seismic monitoring, and philosophical foundations. His books include “The Use of Knowledge in Analogy and Induction”, “Do the Right Thing: Studies in Limited Rationality” (with Eric Wefald), and “Artificial Intelligence: A Modern Approach” (with Peter Norvig). His current concerns include the threat of autonomous weapons and the long-term future of artificial intelligence and its relation to humanity.

AI2050 Project

General-purpose AI systems need to know things and be able to reason with that knowledge. For example, an AI system must know physics to invent the Laser Interferometer Gravitational-Wave Observatory, and that knowledge comes from processes other than designing or seeing billions of laser interferometer gravitational-wave observatories. In his project, Stuart will explore probabilistic programming as a route to the creation of well-founded, interpretable, and provably safe general-purpose AI. His work will combine expressive probabilistic programming languages, a rigorous theory of component-based intelligent agent design, and formal verification methods to develop an approach that is an interpretable, safer, and more controllable alternative to modern deep learning.
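To make the probabilistic-programming idea concrete: a model is written as ordinary code containing random choices, and inference asks which latent values best explain observed data. The sketch below is purely illustrative (it is not Russell's system or any particular language from the project); it uses likelihood weighting, a simple Monte Carlo inference method, to estimate an unknown coin bias from observed flips.

```python
import random

def sample_prior():
    # Latent variable: unknown coin bias, uniform prior on [0, 1].
    return random.random()

def likelihood(bias, flips):
    # Probability of the observed heads/tails sequence given the bias.
    p = 1.0
    for heads in flips:
        p *= bias if heads else (1.0 - bias)
    return p

def posterior_mean(flips, n_samples=50_000):
    # Likelihood weighting: draw from the prior, weight each draw by
    # how well it explains the data, then take the weighted average.
    total_weight = 0.0
    weighted_sum = 0.0
    for _ in range(n_samples):
        bias = sample_prior()
        w = likelihood(bias, flips)
        total_weight += w
        weighted_sum += w * bias
    return weighted_sum / total_weight

flips = [True] * 8 + [False] * 2   # 8 heads, 2 tails
print(posterior_mean(flips))       # close to the exact posterior mean, 9/12 = 0.75
```

Because the model is explicit code rather than learned weights, both the assumptions (the prior, the likelihood) and the inference procedure are inspectable, which is one sense in which this style of system can be more interpretable than a deep network.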

Project Artifacts

E. Jenner, S. Kapur, V. Georgiev, C. Allen, S. Emmons, S. Russell. Evidence of Learned Look-Ahead in a Chess-Playing Neural Network. arXiv. 2024.

S. Kapur, E. Jenner, S. Russell. Diffusion On Syntax Trees For Program Synthesis. arXiv. 2024.

N. Lauffer, A. Shah, M. Carroll, M. Dennis, and S. Russell. Who Needs to Know? Minimal Knowledge for Optimal Coordination. ICML. 2023.

C. Laidlaw, S. Russell, and A. Dragan. Bridging RL Theory and Practice with the Effective Horizon. NeurIPS. 2023.

A. Lew, G. Matheos, T. Zhi-Xuan, M. Ghavamizadeh, N. Gothoskar, S. Russell, and V.K. Mansinghka. SMCP3: Sequential Monte Carlo with Probabilistic Program Proposals. AISTATS. 2023.

AI2050 Community Perspective — Stuart Russell (2023)