Nicolas Papernot

Appointment

Co-Director, Canadian AI Safety Institute Research Program (CAISI) at CIFAR

Canada CIFAR AI Chair

Pan-Canadian AI Strategy

Connect

University of Toronto

Google Scholar

About

Appointed Canada CIFAR AI Chair – 2019

Nicolas Papernot is a Canada CIFAR AI Chair at the Vector Institute, an assistant professor in the Department of Electrical and Computer Engineering, Department of Computer Science, and Faculty of Law at the University of Toronto, and a faculty affiliate at the Schwartz Reisman Institute. 

Papernot’s research interests span computer security and privacy in machine learning. Together with his collaborators, he demonstrated the first practical black-box attacks against deep neural networks. His work on differential privacy for machine learning, which developed a family of algorithms called Private Aggregation of Teacher Ensembles (PATE), has made it easier for machine learning researchers to contribute to differential privacy research. To learn more about his group’s research, the following blog posts on cleverhans.io are a good reference: proof-of-learning, collaborative learning beyond federation, dataset inference, machine unlearning, differentially private ML, and adversarial examples.
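The core idea behind PATE can be illustrated with a minimal sketch: an ensemble of "teacher" models, each trained on a disjoint data partition, votes on a label, and Laplace noise is added to the vote counts before the winning label is released. The function and parameter names below are illustrative, not from the PATE papers; the noise scale `1/epsilon` follows the standard Laplace mechanism for a count query of sensitivity 1.

```python
import math
import random
from collections import Counter

def laplace_noise(scale, rng):
    """Sample from a Laplace(0, scale) distribution via inverse CDF."""
    u = rng.random() - 0.5  # uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_max_aggregate(teacher_votes, num_classes, epsilon, rng=None):
    """PATE-style noisy-max aggregation.

    Each teacher (trained on a disjoint shard of sensitive data) casts one
    vote; Laplace noise with scale 1/epsilon is added to each class count
    so that no single teacher -- and hence no single training point --
    determines the released label.
    """
    rng = rng or random.Random()
    counts = Counter(teacher_votes)
    return max(
        range(num_classes),
        key=lambda c: counts.get(c, 0) + laplace_noise(1.0 / epsilon, rng),
    )

# With a large vote margin and moderate epsilon, the noisy winner almost
# always matches the plurality vote.
votes = [1] * 100 + [0] * 3
label = noisy_max_aggregate(votes, num_classes=2, epsilon=10.0,
                            rng=random.Random(0))
```

In the full PATE pipeline this aggregator is only queried on unlabeled public data to train a "student" model, so the sensitive training sets are never exposed directly.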

Awards

  • McCharles Prize for Early Career Research Distinction, 2024
  • AI2050 Early Career Fellow, Schmidt Sciences, 2024
  • Spotlight Paper Award, ICLR, 2024
  • Oral Paper Award, ICLR, 2023
  • College of New Scholars, Artists and Scientists, Royal Society of Canada, 2023
  • Alfred P. Sloan Research Fellow, 2022
  • Early Career Research Award, Ministry of Colleges and Universities, 2022
  • Outstanding Paper Award, ICLR, 2022
  • Best Paper Award, ICLR, 2017
  • Google PhD Fellowship in Security, 2016

Relevant Publications

  • Thudi, A., Jia, H., Meehan, C., Shumailov, I., & Papernot, N. (2023). Gradients look alike: Sensitivity is often overestimated in DP-SGD. In Proceedings of the 33rd USENIX Security Symposium.
  • Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., & Anderson, R. (2023). The curse of recursion: Training on generated data makes models forget.
  • Boenisch, F., Dziedzic, A., Schuster, R., Shahin Shamsabadi, A., Shumailov, I., & Papernot, N. (2021). When the curious abandon honesty: Federated learning is not private. In Proceedings of the 8th IEEE European Symposium on Security and Privacy, Delft, Netherlands.
  • Maini, P., Yaghini, M., & Papernot, N. (2021). Dataset inference: Ownership resolution in machine learning. In Proceedings of the 9th International Conference on Learning Representations.
  • Bourtoule, L., Chandrasekaran, V., Choquette-Choo, C. A., Jia, H., Travers, A., Zhang, B., Lie, D., & Papernot, N. (2020). Machine unlearning. In Proceedings of the 42nd IEEE Symposium on Security and Privacy, San Francisco, CA.
  • Papernot, N., Abadi, M., Erlingsson, U., Goodfellow, I., & Talwar, K. (2017). Semi-supervised knowledge transfer for deep learning from private training data. In Proceedings of the 5th International Conference on Learning Representations, Toulon, France.

Institution

Schwartz Reisman Institute

University of Toronto

Vector Institute

Department

Electrical and Computer Engineering, Computer Science, Law

Education

  • PhD (Computer Science and Engineering), Pennsylvania State University
  • MSc (Engineering Sciences), École Centrale de Lyon
  • BSc (Engineering Sciences), École Centrale de Lyon

Country

Canada
