Yejin Choi
2024 Senior Fellow
Affiliation: Incoming Professor, Stanford University
Hard Problem: Solve challenges of safety and control, human alignment, and compatibility with increasingly powerful and capable AI, and eventually AGI.

Yejin Choi is an incoming professor and senior fellow at Stanford University and a MacArthur Fellow. She was formerly the Wissner-Slivka Professor at the Paul G. Allen School of Computer Science & Engineering at the University of Washington. Her research investigates a wide variety of problems across NLP and AI, including commonsense knowledge and reasoning, neural language (de-)generation, language grounding with vision and experience, and AI for social good. She was named among the Time100 Most Influential People in AI in 2023, and is a co-recipient of two test-of-time awards (ACL 2021 and CVPR 2021) and eight best and outstanding paper awards at ACL, EMNLP, NAACL, ICML, NeurIPS, and AAAI. She also won the Borg Early Career Award (BECA) in 2018, the inaugural Alexa Prize Challenge in 2017, and IEEE AI's 10 to Watch in 2016. She received her Ph.D. in Computer Science from Cornell University and her BS in Computer Science and Engineering from Seoul National University in Korea.

AI2050 Project

With the increasing power and prevalence of AI systems, it is ever more critical that they are designed to serve everyone, i.e., people with diverse values and perspectives. However, aligning models to serve pluralistic human values remains an open research question. Yejin's AI2050 project proposes an ambitious research program that aims to address this fundamental limitation of AI systems by pursuing five synergistic research threads: (1) a theoretical framework of pluralism, (2) pluralistic benchmarks and metrics, (3) pluralistic alignment methods, (4) ValueGenome, a catalog of diverse human values, and (5) an interpretable reflection process.