Anima Anandkumar saw the potential for AI to revolutionize scientific domains and focused her research on this area. Her AI algorithms enable and accelerate a wide range of scientific applications, including weather forecasting, autonomous drone flight, and drug design. To enable these applications, she proposed neural operators, which learn in function spaces and can simulate complex multi-scale processes, such as fluid dynamics and material properties, orders of magnitude faster than traditional numerical solvers. She also did seminal work on tensor methods for the unsupervised learning of latent-variable probabilistic models that capture structure in text and social networks.
Anima is a fellow of the IEEE and the ACM and is part of the World Economic Forum's Expert Network. She has received several honors, including Guggenheim and Alfred P. Sloan fellowships, the NSF CAREER award, best-paper awards at venues such as Neural Information Processing Systems (NeurIPS), and the ACM Gordon Bell Special Prize for HPC-Based COVID-19 Research.
She received her B.Tech. from the Indian Institute of Technology Madras and her Ph.D. from Cornell University, and completed postdoctoral research at MIT. She was a principal scientist at Amazon Web Services and is now senior director of AI research at NVIDIA and Bren Professor at Caltech.
AI2050 Project
In her AI2050 project, Anandkumar will utilize a principled AI approach for modeling multi-scale processes in a wide range of scientific domains, e.g., fluid dynamics, wave propagation, and material properties. Her recent framework, termed neural operators, learns mappings between function spaces. She will tackle the following outstanding challenges: (1) building the foundations for a cross-domain model that can simulate complex multi-physics systems through hierarchical meta-learning approaches, (2) developing uncertainty-aware neural operators that are calibrated for the risk assessment needed in applications such as extreme-weather prediction, and (3) making neural operators hardware-efficient for sustainable and scalable deployment in hybrid AI-HPC systems.
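As a rough illustration of the core idea behind neural operators (and not code from the project itself), the sketch below shows a minimal Fourier-style layer in PyTorch: a function sampled on a grid is mapped to Fourier space, its lowest modes are mixed by learned complex weights, and the result is mapped back to the grid. The names and shapes used here (SpectralConv1d, width, n_modes) are illustrative assumptions, not the project's actual implementation.

```python
# Minimal sketch of one spectral (Fourier) layer, assuming a 1D grid and
# PyTorch; illustrative only, not the official neural-operator codebase.
import torch
import torch.nn as nn


class SpectralConv1d(nn.Module):
    """FFT -> keep the lowest n_modes -> learned complex channel mixing -> inverse FFT."""

    def __init__(self, width: int, n_modes: int):
        super().__init__()
        self.n_modes = n_modes
        scale = 1.0 / (width * width)
        # Complex weights that act on the retained low-frequency modes.
        self.weights = nn.Parameter(
            scale * torch.randn(width, width, n_modes, dtype=torch.cfloat)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, width, n_grid) -- channels of a function sampled on a grid.
        x_ft = torch.fft.rfft(x)                      # Fourier coefficients
        out_ft = torch.zeros_like(x_ft)
        modes = min(self.n_modes, x_ft.shape[-1])
        # Mix channels mode-by-mode with the learned complex weights.
        out_ft[:, :, :modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, :modes], self.weights[:, :, :modes]
        )
        return torch.fft.irfft(out_ft, n=x.shape[-1])  # back to the spatial grid


if __name__ == "__main__":
    layer = SpectralConv1d(width=16, n_modes=8)
    u = torch.randn(4, 16, 64)   # a batch of 4 functions on a 64-point grid
    v = layer(u)                 # same shape as the input sampling
    print(v.shape)               # torch.Size([4, 16, 64])
```

Because the learned weights act on Fourier modes rather than on individual grid points, the same layer can be evaluated on coarser or finer discretizations, which is what lets such models be viewed as maps between functions rather than between fixed-size arrays.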
Project Artifacts
F. Shah, T.L. Patti, J. Berner, B. Tolooshams, J. Kossaifi, and A. Anandkumar. Fourier Neural Operators for Learning Dynamics in Quantum Spin Systems. arXiv. 2024.
C. Wang, J. Berner, Z. Li, D. Zhou, J. Wang, J. Bae, and A. Anandkumar. Beyond Closure Models: Learning Chaotic-Systems via Physics-Informed Neural Operators. arXiv. 2024.
B. Zhang, W. Chu, J. Berner, C. Meng, A. Anandkumar, and Y. Song. Improving Diffusion Inverse Problem Solving with Decoupled Noise Annealing. arXiv. 2024.
H.C. Nam, J. Berner, and A. Anandkumar. Solving Poisson Equations using Neural Walk-on-Spheres. arXiv. 2024.
K. Azizzadenesheli, N. Kovachki, Z. Li, M. Liu-Schiaffini, J. Kossaifi, and A. Anandkumar. Neural operators for accelerating scientific simulations and design. Nature Reviews Physics. 2024.
M. Liu-Schiaffini, J. Berner, B. Bonev, T. Kurth, K. Azizzadenesheli, and A. Anandkumar. Neural operators with localized integral and differential kernels. arXiv. 2024.
Z. Ma, K. Azizzadenesheli, and A. Anandkumar. Calibrated uncertainty quantification for operator learning via conformal prediction. arXiv. 2024.