Adam Oberman is a Canada CIFAR AI Chair at Mila, a professor in the Department of Mathematics and Statistics at McGill University, and director of the Applied Mathematics Laboratory at the Centre de Recherches Mathématiques.
Oberman’s current research focuses on mathematical approaches to machine learning: stochastic optimization, regularization, and robust models. He has also worked on numerical methods for partial differential equations and for optimal transportation. He teaches a machine learning theory course covering generalization theory, and a scientific computing course focused on high-dimensional methods.
- Simons Fellowship, Simons Foundation, 2017
- Monroe H. Martin Prize, Institute for Physical Science and Technology, 2010
- Early Career Award, CAIMS-PIMS, 2010
Finlay, C., Jacobsen, J. H., Nurbekyan, L., & Oberman, A. M. (2020). How to train your neural ODE: the world of Jacobian and kinetic regularization. arXiv preprint.
Laborde, M., & Oberman, A. (2020). Nesterov’s method with decreasing learning rate leads to accelerated stochastic gradient descent. In International Conference on Artificial Intelligence and Statistics (pp. 602-612). PMLR.
Chaudhari, P., Oberman, A., Osher, S., Soatto, S., & Carlier, G. (2018). Deep relaxation: partial differential equations for optimizing deep neural networks. Research in the Mathematical Sciences, 5(3), 30.
Benamou, J. D., Froese, B. D., & Oberman, A. M. (2014). Numerical solution of the optimal transportation problem using the Monge–Ampère equation. Journal of Computational Physics, 260, 107-126.
Oberman, A. M. (2006). Convergent difference schemes for degenerate elliptic and parabolic equations: Hamilton–Jacobi equations and free boundary problems. SIAM Journal on Numerical Analysis, 44(2), 879-895.