Daniela Rus is the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science; Director of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and Deputy Dean of Research for the Schwarzman College of Computing at MIT. Rus’ research interests are in robotics, artificial intelligence, and data science.
The focus of her work is developing the science and engineering of autonomy, toward the long-term objective of enabling a future with machines pervasively integrated into the fabric of life, supporting people with cognitive and physical tasks. Her research addresses some of the gaps between where robots are today and the promise of pervasive robots: increasing the ability of machines to reason, learn, and adapt to complex tasks in human-centered environments, developing intuitive interfaces between robots and people, and creating the tools for designing and fabricating new robots quickly and efficiently. The applications of this work are broad and include transportation, manufacturing, agriculture, construction, monitoring the environment, underwater exploration, smart cities, medicine, and in-home tasks such as cooking.
Rus serves as Director of the Toyota-CSAIL Joint Research Center, whose focus is the advancement of AI research and its applications to intelligent vehicles. She is a MITRE senior visiting fellow, a USA expert member of GPAI (the Global Partnership on Artificial Intelligence), a member of the board of advisers for Scientific American, a member of the Defense Innovation Board, and a member of several other boards of technology companies.
Rus is a Class of 2002 MacArthur Fellow; a fellow of ACM, AAAI, and IEEE; and a member of the National Academy of Engineering and the American Academy of Arts and Sciences. She is the recipient of the 2017 Engelberger Robotics Award from the Robotic Industries Association. She earned her PhD in Computer Science from Cornell University.
AI2050 Project
Despite their remarkable capacity for representation learning, today’s deep learning models are growing uncontrollably in size while leaving fundamental sociotechnical challenges unresolved, including causality, interpretability, fairness, accountability, and out-of-distribution generalization. Daniela’s project will rethink modern AI’s algorithmic design choices and develop a class of models inspired by neuroscience: liquid neural networks. If successful, this approach will contribute models that are performant, causal, compact, and understandable, and that can give rise to goal-oriented adaptive behavior for the autonomous agents of our future society.
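For context, liquid neural networks build on the liquid time-constant (LTC) formulation published by Rus and collaborators, in which each neuron follows an ordinary differential equation whose effective time constant changes with the input. The sketch below is a minimal, illustrative Euler-step implementation of that update; the class and parameter names (LTCCell, W_in, W_rec, tau, A, dt) and all sizes are hypothetical choices for illustration, not the project's own code.

# Minimal sketch of a liquid time-constant (LTC) neuron layer, following the
# published LTC dynamics: dx/dt = -(1/tau + f(x, u)) * x + f(x, u) * A.
# All names and sizes here are hypothetical and chosen only for illustration.
import numpy as np

class LTCCell:
    def __init__(self, n_inputs, n_units, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(scale=0.1, size=(n_units, n_inputs))   # input weights
        self.W_rec = rng.normal(scale=0.1, size=(n_units, n_units))   # recurrent weights
        self.bias = np.zeros(n_units)
        self.tau = np.ones(n_units)   # per-neuron base time constants
        self.A = np.ones(n_units)     # per-neuron bias (target) states

    def f(self, x, u):
        # Nonlinear conductance term driven by the current state and input.
        return np.tanh(self.W_rec @ x + self.W_in @ u + self.bias)

    def step(self, x, u, dt=0.05):
        # One explicit-Euler step of the LTC ODE; the effective time constant
        # 1 / (1/tau + f) varies with the input, hence "liquid".
        g = self.f(x, u)
        dxdt = -(1.0 / self.tau + g) * x + g * self.A
        return x + dt * dxdt

# Usage: roll the cell over a short random input sequence.
cell = LTCCell(n_inputs=3, n_units=8)
x = np.zeros(8)
for u in np.random.default_rng(1).normal(size=(20, 3)):
    x = cell.step(x, u)

Because the time constants depend on the input, the same small network can adapt its dynamics across operating conditions, which is what motivates the compactness and out-of-distribution claims in the project description.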
Project Artifacts
W. Xiao, T. Wang, and D. Rus. ABNet: attention BarrierNet for safe and scalable robot learning. arXiv. 2024.
T. K. Rusch, N. Kirk, M. M. Bronstein, C. Lemieux, and D. Rus. Message-passing Monte Carlo: generating low-discrepancy point sets via graph neural networks. arXiv. 2024.
AI2050 Community Perspective — Daniela Rus (2024)
T. Wang, W. Xiao, T. Seyde, R. Hasani, and D. Rus. Measuring interpretability of neural policies of robots with disentangled representation. CoRL. 2023.
W. Xiao, R. Allen, and D. Rus. Safe neural control for non-affine control systems with differentiable control barrier functions. arXiv. 2023.
A. Maalouf, Y. Gurfinkel, B. Diker, O. Gal, D. Rus, and D. Feldman. Deep learning on home drone: searching for the optimal architecture. IEEE ICRA. 2023.
M. Lechner, D. Žikelić, K. Chatterjee, T. Henzinger, and D. Rus. Quantization-aware interval bound propagation for training certifiably robust quantized neural networks. AAAI. 2023.
W. Xiao, T. Wang, C. Gan, and D. Rus. SafeDiffuser: safe planning with diffusion probabilistic models. arXiv. 2023.
W. Xiao, T. Wang, R. Hasani, M. Lechner, Y. Ban, C. Gan, and D. Rus. On the forward invariance of neural ODEs. ICML. 2023.
A. Maalouf, M. Tukan, V. Braverman, and D. Rus. AutoCoreset: an automatic practical coreset construction framework. arXiv. 2023.
M. Chahine, R. Hasani, P. Kao, A. Ray, R. Shubert, M. Lechner, A. Amini, and D. Rus. Robust flight navigation out of distribution with liquid neural networks. Science Robotics. 2023.
N. Loo, R. Hasani, M. Lechner, and D. Rus. Dataset distillation with convexified implicit gradients. arXiv. 2023.
N. Loo, R. Hasani, M. Lechner, and D. Rus. Dataset distillation fixes dataset reconstruction attacks. arXiv. 2023.
M. Lechner, A. Amini, D. Rus, and T. Henzinger. Revisiting the adversarial robustness-accuracy tradeoff in robot learning. IEEE Robotics and Automation Letters. 2023.
R. Hasani, M. Lechner, A. Amini, L. Liebenwein, A. Ray, M. Tschaikowski, G. Teschl, and D. Rus. Closed-form continuous-time neural networks. Nature Machine Intelligence. 2022.