
Andrew Saxe

Appointment

CIFAR Azrieli Global Scholar 2020–2022

Learning in Machines & Brains


About

The interactions of billions of neurons ultimately give rise to our thoughts and actions.

Remarkably, much of our behaviour is learned, starting in infancy and continuing throughout our lifespan. Andrew Saxe aims to develop a mathematical toolkit for analyzing and describing learning in the brain and mind. His current focus is the theory of deep learning, a class of artificial neural network models that take inspiration from the brain. Alongside this theoretical work, he collaborates closely with experimentalists to empirically test principles of learning in biological organisms.
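
To give a flavour of this theoretical program: Saxe, McClelland & Ganguli (2014, listed below) derive exact solutions for gradient-descent learning in deep linear networks. The following is a minimal illustrative sketch, not code from his work; the network sizes, learning rate, and teacher setup are arbitrary assumptions chosen only to show the model class. A two-layer linear network is trained on data from a linear teacher; with a low-rank hidden bottleneck, the strongest input–output modes are fitted first, the stage-like dynamics that paper characterizes analytically.

    import numpy as np

    # Two-layer deep linear network, y_hat = W2 @ W1 @ x, trained by
    # full-batch gradient descent on a linear teacher. Purely illustrative;
    # the dimensions and learning rate below are arbitrary assumptions.
    rng = np.random.default_rng(0)
    n_in, n_hidden, n_out, n_samples = 8, 4, 8, 200

    X = rng.standard_normal((n_in, n_samples))
    W_teacher = rng.standard_normal((n_out, n_in))
    Y = W_teacher @ X  # targets generated by a full-rank linear teacher

    # Small random initialization, the regime analyzed in Saxe et al. (2014)
    W1 = 0.01 * rng.standard_normal((n_hidden, n_in))
    W2 = 0.01 * rng.standard_normal((n_out, n_hidden))

    lr = 0.02
    for step in range(3001):
        err = W2 @ W1 @ X - Y                 # residual on the whole batch
        loss = 0.5 * np.mean(np.sum(err**2, axis=0))
        dW2 = (err @ (W1 @ X).T) / n_samples  # d(loss)/dW2
        dW1 = (W2.T @ err @ X.T) / n_samples  # d(loss)/dW1
        W2 -= lr * dW2
        W1 -= lr * dW1
        if step % 500 == 0:
            print(f"step {step:4d}  loss {loss:8.4f}")

    # The rank-4 bottleneck cannot match the rank-8 teacher exactly: the loss
    # falls in stages as the network picks up the teacher's strongest
    # input-output modes first, then plateaus at the best rank-4 fit.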

Awards

  • Wellcome-Beit Prize, Wellcome Trust, 2019
  • Sir Henry Dale Fellowship, Wellcome Trust & Royal Society, 2019
  • Robert J. Glushko Outstanding Doctoral Dissertations Prize, Cognitive Science Society, 2016
  • NDSEG Fellowship, 2010

Relevant Publications

  • Saxe, A. M., McClelland, J. L., & Ganguli, S. (2019). A mathematical theory of semantic development in deep neural networks. Proceedings of the National Academy of Sciences, 116(23), 11537–11546. https://doi.org/10.1073/pnas.1820226116

  • Earle, A. C., Saxe, A. M., & Rosman, B. (2018). Hierarchical Subtask Discovery with Non-Negative Matrix Factorization. In Y. Bengio & Y. LeCun (Eds.), International Conference on Learning Representations.

  • Advani, M.*, & Saxe, A. M.* (2017). High-dimensional dynamics of generalization error in neural networks. arXiv preprint.

  • Musslick, S., Saxe, A. M., Ozcimder, K., Dey, B., Henselman, G., & Cohen, J. D. (2017). Multitasking Capability Versus Learning Efficiency in Neural Network Architectures. Annual Meeting of the Cognitive Science Society, 829–834.

  • Saxe, A. M., McClelland, J. L., & Ganguli, S. (2014). Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. In Y. Bengio & Y. LeCun (Eds.), International Conference on Learning Representations.

Institution

University College London

Department

Gatsby Computational Neuroscience Unit and Sainsbury Wellcome Centre

Education

  • PhD (Electrical Engineering), Stanford University
  • MS (Electrical Engineering), Stanford University
  • BSE (summa cum laude, Electrical Engineering), Princeton University

Country

United Kingdom
