Nicolas Papernot is an Assistant Professor of Computer Engineering and Computer Science at the University of Toronto. He also holds a Canada CIFAR AI Chair at the Vector Institute and is a faculty affiliate at the Schwartz Reisman Institute. His research interests span the security and privacy of machine learning. Some of his group’s recent projects include generative model collapse, cryptographic auditing of ML, private learning, proof-of-learning, and machine unlearning. Nicolas is an Alfred P. Sloan Research Fellow in Computer Science and a Member of the Royal Society of Canada’s College of New Scholars. His work on differentially private machine learning received an Outstanding Paper Award at ICLR 2022 and a Best Paper Award at ICLR 2017. He co-created the IEEE Conference on Secure and Trustworthy Machine Learning (SaTML) and is co-chairing its first two editions in 2023 and 2024. He previously served as an associate chair of the IEEE Symposium on Security and Privacy (Oakland) and as an area chair of NeurIPS. Nicolas earned his Ph.D. at the Pennsylvania State University, working with Prof. Patrick McDaniel and supported by a Google PhD Fellowship. Upon graduating, he spent a year at Google Brain, where he still spends some of his time.
AI2050 Project
Companies and countries training AI models increasingly face scrutiny from end users over the risks of deploying AI. Because AI poses risks to society, it is reasonable to expect regulatory bodies to produce technical specifications that curb the societal risks of AI models. Yet beyond the difficulty of defining the properties AI systems should satisfy, auditing those properties remains out of reach at the scale needed to regulate AI internationally. Nicolas Papernot’s AI2050 project addresses this hard problem through a combination of advances in AI and cryptography that will lay the foundations for verifiable AI treaties that benefit all.
Project Artifacts
D. Glukhov, Z. Han, I. Shumailov, V. Papyan, N. Papernot. A False Sense of Safety: Unsafe Information Leakage in ‘Safe’ AI Responses. arXiv. 2024.
P. Maini, H. Jia, N. Papernot, A. Dziedzic. LLM Dataset Inference: Did you train on my dataset? arXiv. 2024.