Aditi Raghunathan
2022 Early Career Fellow
Hard Problem: Develop more capable and more general AI that is useful, safe, and earns public trust

Aditi Raghunathan is an Assistant Professor at Carnegie Mellon University. She is interested in building robust ML systems with guarantees for trustworthy real-world deployment. Previously, she was a postdoctoral researcher at Berkeley AI Research, and she received her PhD from Stanford University in 2021. Her research has been recognized with the Arthur Samuel Best Thesis Award at Stanford, a Google PhD Fellowship in Machine Learning, and an Open Philanthropy AI Fellowship.

AI2050 Project

For AI systems to be beneficial in the real world, they need to work well under a wide range of conditions beyond the controlled settings in which they were developed. For example, self-driving cars encounter unexpected construction zones, unpredictable or distracted drivers, and inclement weather; predictive healthcare systems run into unforeseen changes in demographics or medical equipment. Aditi's work aims to create robust AI systems that are guaranteed to work under a wide range of conditions: systems that do not make unexpected errors, do not encode and amplify harmful biases and spurious correlations, and degrade gracefully rather than precipitously when faced with adversaries.

Project Artifacts

C. Baek, Z. Kolter, and A. Raghunathan. Why is SAM robust to label noise? ICLR, 2024.

S. Goyal, P. Maini, Z. C. Lipton, A. Raghunathan, and J. Z. Kolter. Scaling laws for data filtering: data curation cannot be compute agnostic. arXiv, 2024.

T. Kim, S. Kotha, and A. Raghunathan. Jailbreaking is best solved by definition. arXiv, 2024.

J. M. Springer, S. Kotha, D. Fried, G. Neubig, and A. Raghunathan. Repetition improves language model embeddings. arXiv, 2024.

S. Garg, A. Setlur, Z. C. Lipton, S. Balakrishnan, V. Smith, and A. Raghunathan. Complementary benefits of contrastive learning and self-training under distribution shift. NeurIPS, 2023.

E. Kim, M. Sun, A. Raghunathan, and Z. Kolter. Reliable test-time adaptation via agreement-on-the-line. NeurIPS, 2023.

AI2050 Community Perspective — Aditi Raghunathan (2023)

S. Kotha, J. M. Springer, and A. Raghunathan. Understanding catastrophic forgetting in language models via implicit inference. NeurIPS, 2023.

P. Maini, S. Goyal, Z. C. Lipton, J. Z. Kolter, and A. Raghunathan. T-MARS: improving visual representations by circumventing text feature learning. arXiv, 2023.