About
Appointed Canada CIFAR AI Chair – 2021
Much of the recent success of machine learning stems from advances in representation learning, in which an algorithm learns to identify structure in data. Danica Sutherland's research focuses on improving the process of representation learning, especially using ideas from kernel methods. Integrating kernels into currently popular approaches can, ideally, help learn effective representations from smaller training datasets, representations that generalize well even to populations different from those seen in training. A major line of her research focuses on representations that identify differences between datasets, such as whether medical images differ between treatment and control groups, or whether a generative model has succeeded in matching its target distribution. She aims to work both on practical problems informed by theoretical viewpoints and on theoretical problems informed by practice.
Relevant Publications
Kamath, P., Tangella, A., Sutherland, D.J., & Srebro, N. (2021). Does Invariant Risk Minimization Characterize Invariance? Artificial Intelligence and Statistics.
Zhou, L., Sutherland, D.J., & Srebro, N. (2020). On Uniform Convergence and Low-Norm Interpolation Learning. Advances in Neural Information Processing Systems.
Liu, F., Xu, W., Lu, J., Zhang, G., Gretton, A., & Sutherland, D.J. (2020). Learning Deep Kernels for Non-Parametric Two-Sample Tests. International Conference on Machine Learning.
Bińkowski, M., Sutherland, D.J., Arbel, M., & Gretton, A. (2018). Demystifying MMD GANs. International Conference on Learning Representations.
Sutherland, D.J. & Schneider, J. (2015). On the Error of Random Fourier Features. Uncertainty in Artificial Intelligence.