About
Adam Oberman’s current research focuses on mathematical approaches to machine learning: stochastic optimization, regularization, and robust models. He has also worked on numerical methods for partial differential equations and for optimal transportation.
He teaches a course on machine learning theory, covering generalization theory, and a scientific computing course with a focus on high-dimensional methods.
Awards
- Simons Fellowship, Simons Foundation, 2017
- Monroe H. Martin Prize, Institute for Physical Science and Technology, 2010
- Early Career Award, CAIMS-PIMS, 2010
Relevant Publications
- Finlay, C., Jacobsen, J.-H., Nurbekyan, L., & Oberman, A. M. (2020). How to train your neural ODE: the world of Jacobian and kinetic regularization. arXiv preprint arXiv:2002.02798.
- Chaudhari, P., Oberman, A., Osher, S., Soatto, S., & Carlier, G. (2018). Deep relaxation: partial differential equations for optimizing deep neural networks. Research in the Mathematical Sciences, 5(3), 30.
- Laborde, M., & Oberman, A. (2020, June). Nesterov's method with decreasing learning rate leads to accelerated stochastic gradient descent. In International Conference on Artificial Intelligence and Statistics (pp. 602-612). PMLR.
- Benamou, J. D., Froese, B. D., & Oberman, A. M. (2014). Numerical solution of the optimal transportation problem using the Monge–Ampère equation. Journal of Computational Physics, 260, 107-126.
- Oberman, A. M. (2006). Convergent difference schemes for degenerate elliptic and parabolic equations: Hamilton–Jacobi equations and free boundary problems. SIAM Journal on Numerical Analysis, 44(2), 879-895.