
Nicolas Le Roux
About
Nicolas Le Roux is a Canada CIFAR AI Chair at Mila and an adjunct professor at McGill University's School of Computer Science. He is also a research scientist at Google Brain Montreal, where he leads the optimization effort.
Le Roux is interested in understanding how the interplay between noise and curvature affects convergence speed and generalization in stochastic optimization, and in the dynamics of optimization in reinforcement learning, especially for policy-gradient methods.
Awards
- Co-Recipient of the Lagrange Prize in Continuous Optimization, Society for Industrial and Applied Mathematics (SIAM), 2018
- Microsoft Research Fellowship, Darwin College, 2008-2010
Relevant Publications
Schmidt, M., Le Roux, N., & Bach, F. (2017). Minimizing finite sums with the stochastic average gradient. Mathematical Programming, 162(1-2), 83-112.
Le Roux, N., Schmidt, M., & Bach, F. (2012). A stochastic gradient method with an exponential convergence rate for finite training sets. Advances in Neural Information Processing Systems, 25.
Schmidt, M., Le Roux, N., & Bach, F. (2011). Convergence rates of inexact proximal-gradient methods for convex optimization. Advances in Neural Information Processing Systems, 24.
Le Roux, N., & Bengio, Y. (2008). Representational power of restricted Boltzmann machines and deep belief networks. Neural Computation, 20(6), 1631-1649.
Bengio, Y., Paiement, J.-F., Vincent, P., Delalleau, O., Le Roux, N., & Ouimet, M. (2003). Out-of-sample extensions for LLE, Isomap, MDS, Eigenmaps, and spectral clustering. Advances in Neural Information Processing Systems, 16, 177-184.