
Nicolas Papernot

Appointment

  • Canada CIFAR AI Chair
  • National Program Committee member, Pan-Canadian AI Strategy

Connect

  • University of Toronto
  • Google Scholar

About

Nicolas Papernot is a Canada CIFAR AI Chair at the Vector Institute, an assistant professor in the Department of Electrical and Computer Engineering at the University of Toronto, and a research scientist at Google Brain.

Papernot’s research spans computer security and privacy in machine learning. With his collaborators, he demonstrated the first practical black-box attacks against deep neural networks. His work on differential privacy for machine learning produced a family of algorithms called Private Aggregation of Teacher Ensembles (PATE), which has made it easier for machine learning researchers to contribute to differential privacy research. With Ian Goodfellow, he also co-authored CleverHans, an open-source library now widely adopted in the technical community for benchmarking machine learning in adversarial settings.
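
The idea behind PATE is simple to state: many "teacher" models are trained on disjoint partitions of the sensitive data, each teacher votes on a label for a new input, and Laplace noise added to the vote counts makes the aggregated label differentially private; a "student" model is then trained on these noisy labels. The sketch below illustrates only the noisy-max aggregation step, assuming the per-query Laplace scale of 1/epsilon used in the original PATE paper; the function and parameter names are illustrative, not taken from any released code, and a real deployment must also track cumulative privacy loss across queries.

    import numpy as np

    def noisy_max_aggregate(teacher_votes, num_classes, epsilon, rng):
        """Return a differentially private label for one input.

        teacher_votes: 1-D array of predicted class indices, one per teacher.
        epsilon: per-query privacy parameter; Laplace noise with scale
                 1/epsilon is added to each class's vote count.
        """
        counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
        counts += rng.laplace(loc=0.0, scale=1.0 / epsilon, size=num_classes)
        return int(np.argmax(counts))

    # Illustrative usage: 250 teachers voting among 10 classes for one input.
    rng = np.random.default_rng(0)
    votes = rng.integers(0, 10, size=250)  # stand-in for real teacher predictions
    print(noisy_max_aggregate(votes, num_classes=10, epsilon=0.1, rng=rng))

One appeal of this design is that the privacy analysis reduces to counting queries: each aggregated label consumes a known amount of the privacy budget, regardless of how the teachers themselves were trained.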

Awards

  • Best Paper Award, ICLR, 2017
  • Google PhD Fellowship in Security, 2016

Relevant Publications

  • Papernot, N., McDaniel, P., Goodfellow, I., Jha, S., Celik, Z. B., & Swami, A. (2017). Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM Asia Conference on Computer and Communications Security (pp. 506-519).

  • Tramèr, F., Kurakin, A., Papernot, N., Goodfellow, I., Boneh, D., & McDaniel, P. (2017). Ensemble adversarial training: Attacks and defenses. arXiv preprint arXiv:1705.07204.

  • Papernot, N., McDaniel, P., & Goodfellow, I. (2016). Transferability in machine learning: From phenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1605.07277.

  • Papernot, N., McDaniel, P., Jha, S., Fredrikson, M., Celik, Z. B., & Swami, A. (2016). The limitations of deep learning in adversarial settings. In 2016 IEEE European Symposium on Security and Privacy (EuroS&P) (pp. 372-387). IEEE.

  • Papernot, N., McDaniel, P., Wu, X., Jha, S., & Swami, A. (2016). Distillation as a defense to adversarial perturbations against deep neural networks. In 2016 IEEE Symposium on Security and Privacy (SP) (pp. 582-597). IEEE.

Institution

  • Google Brain
  • University of Toronto
  • Vector Institute

Department

Electrical and Computer Engineering

Education

  • PhD (Computer Science and Engineering), Pennsylvania State University
  • MSc (Engineering Sciences), École Centrale de Lyon
  • BSc (Engineering Sciences), École Centrale de Lyon

Country

Canada
