Percy Liang is an Associate Professor of Computer Science at Stanford University (B.S. from MIT, 2004; Ph.D. from UC Berkeley, 2011). His two research goals are (i) to make machine learning more robust, fair, and interpretable; and (ii) to make computers easier to communicate with through natural language. His awards include the Presidential Early Career Award for Scientists and Engineers (2019), the IJCAI Computers and Thought Award (2016), an NSF CAREER Award (2016), a Sloan Research Fellowship (2015), and a Microsoft Research Faculty Fellowship (2014).
AI2050 Project
We are entering a new era of AI dominated by foundation models (e.g., GPT-3), which are trained on broad data and can be adapted to a wide range of downstream tasks. Percy’s project will unpack foundation models, with a focus on language models. He will develop metrics to characterize models from a sociotechnical point of view, conduct experiments to understand how and why capabilities emerge during training, and create new foundation models that are more reliable, interpretable, modular, and efficient. Finally, Percy will reimagine what foundation models should look like from first principles, with an eye towards the implications for centralization of power.
Project Artifacts
S. Kapoor et al. On the societal impact of open foundation models. Stanford University Center for Research on Foundation Models. 2024.
R. Bommasani, K. Klyman, S. Longpre, B. Xiong, S. Kapoor, N. Maslej, A. Narayanan, and P. Liang. Foundation model transparency reports. arXiv. 2024.
T. Lee et al. Holistic evaluation of text-to-image models. NeurIPS. 2023.
D. Narayanan, K. Santhanam, P. Henderson, R. Bommasani, T. Lee, and P. Liang. Cheaply evaluating inference efficiency metrics for autoregressive transformer APIs. NeurIPS. 2023.
R. Bommasani, K. Klyman, S. Longpre, S. Kapoor, N. Maslej, B. Xiong, D. Zhang, and P. Liang. The foundation model transparency index. Stanford University Center for Research on Foundation Models. 2023.
P. Liang et al. Holistic evaluation of language models. arXiv. 2023.
C. Toups, R. Bommasani, K. Creel, S. Bana, D. Jurafsky, and P. Liang. Ecosystem-level analysis of deployed machine learning reveals homogeneous outcomes. arXiv. 2023.
R. Bommasani, P. Liang, and T. Lee. Holistic evaluation of language models. Annals of the New York Academy of Sciences. 2023.
N. Liu, T. Zhang, and P. Liang. Evaluating verifiability in generative search engines. arXiv. 2023.
R. Bommasani, P. Liang, and T. Lee. Language models are changing AI: the need for holistic evaluation. Stanford University Center for Research on Foundation Models. 2022.
AI2050 Community Perspective — Percy Liang (2023)