Nicolas Papernot
Appointment
Canada CIFAR AI Chair
Pan-Canadian AI Strategy
About
Appointed Canada CIFAR AI Chair – 2019
Nicolas Papernot is a Canada CIFAR AI Chair at the Vector Institute, an assistant professor in the Department of Electrical and Computer Engineering, the Department of Computer Science, and the Faculty of Law at the University of Toronto, and a faculty affiliate at the Schwartz Reisman Institute.
Papernot’s research interests span computer security and privacy in machine learning. Together with his collaborators, he demonstrated the first practical black-box attacks against deep neural networks. His work on differential privacy for machine learning, including the development of a family of algorithms called Private Aggregation of Teacher Ensembles (PATE), has lowered the barrier for machine learning researchers to contribute to differential privacy research. To learn more about his group’s research, the following blog posts on cleverhans.io are a good reference: proof-of-learning, collaborative learning beyond federation, dataset inference, machine unlearning, differentially private ML, and adversarial examples.
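At the core of PATE is a noisy-max aggregation step: an ensemble of "teacher" models, each trained on a disjoint data partition, votes on a label, and Laplace noise added to the vote counts provides a differential privacy guarantee. The sketch below is purely illustrative (the function name, parameters, and teacher setup are hypothetical, not the published implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_aggregate(teacher_votes, num_classes, epsilon):
    """Noisy-max aggregation in the style of PATE: tally each
    teacher's vote per class, add Laplace noise with scale 1/epsilon,
    and return the class with the highest noisy count."""
    counts = np.bincount(teacher_votes, minlength=num_classes)
    noisy_counts = counts + rng.laplace(scale=1.0 / epsilon, size=num_classes)
    return int(np.argmax(noisy_counts))

# 250 hypothetical teachers; a strong majority votes for class 3,
# so the noise is very unlikely to change the aggregated label.
votes = np.array([3] * 200 + [1] * 30 + [0] * 20)
label = noisy_aggregate(votes, num_classes=10, epsilon=0.5)
```

Because only the noisy winning label is released per query, a "student" model can be trained on these aggregated labels without exposing any single teacher's training data.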
Awards
- McCharles Prize for Early Career Research Distinction, 2024
- AI2050 Early Career Fellow, Schmidt Sciences, 2024
- Spotlight Paper Award, ICLR, 2024
- Oral Paper Award, ICLR, 2023
- College of New Scholars, Royal Society of Canada, 2023
- Alfred P. Sloan Research Fellow, 2022
- Early Career Research Award, Ministry of Colleges and Universities, 2022
- Outstanding Paper Award, ICLR, 2022
- Best Paper Award, ICLR, 2017
- Google PhD Fellowship in Security, 2016
Relevant Publications
- Thudi, A., Jia, H., Meehan, C., Shumailov, I., & Papernot, N. (2023). Gradients look alike: Sensitivity is often overestimated in DP-SGD. In Proceedings of the 33rd USENIX Security Symposium.
- Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., & Anderson, R. (2023). The curse of recursion: Training on generated data makes models forget.
- Boenisch, F., Dziedzic, A., Schuster, R., Shahin Shamsabadi, A., Shumailov, I., & Papernot, N. (2021). When the curious abandon honesty: Federated learning is not private. In Proceedings of the 8th IEEE European Symposium on Security and Privacy, Delft, Netherlands.
- Maini, P., Yaghini, M., & Papernot, N. (2021). Dataset inference: Ownership resolution in machine learning. In Proceedings of the 9th International Conference on Learning Representations.
- Bourtoule, L., Chandrasekaran, V., Choquette-Choo, C. A., Jia, H., Travers, A., Zhang, B., Lie, D., & Papernot, N. (2020). Machine unlearning. In Proceedings of the 42nd IEEE Symposium on Security and Privacy, San Francisco, CA.
- Papernot, N., Abadi, M., Erlingsson, U., Goodfellow, I., & Talwar, K. (2017). Semi-supervised knowledge transfer for deep learning from private training data. In Proceedings of the 5th International Conference on Learning Representations, Toulon, France.