Mathias Lécuyer
Appointment
Solution Network Member
Canadian AI Safety Institute Research Program
Safeguarding Courts from Synthetic AI Content
About
Mathias Lécuyer is an assistant professor at the University of British Columbia. He works on trustworthy AI, covering topics such as privacy, robustness, explainability, and causality, with a particular focus on applications that provide rigorous guarantees. Recent impactful contributions include: the first scalable defence with provable guarantees against adversarial examples (small input changes that can control an AI model's predictions and be used for AI jailbreaks); a technique to efficiently measure the influence of training data on AI model behaviour; a method to audit privacy leakage from AI models given only API access; and a system that enables federated, privacy-preserving measurement of advertising performance, now serving as the blueprint for a future standard aimed at reducing third-party tracking on the web.
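The certified defence mentioned above predicts by adding calibrated random noise to the input and taking a majority vote, so that small adversarial perturbations provably cannot change the outcome. The sketch below is a minimal illustration of that noise-and-vote idea, not the authors' implementation; the function and classifier names are hypothetical.

```python
import random
from collections import Counter

def smoothed_predict(classify, x, sigma=0.25, n_samples=1000, seed=0):
    """Predict by majority vote over Gaussian perturbations of the input.

    classify: any function mapping a feature list to a class label.
    sigma: noise standard deviation; larger values give stronger smoothing
           (and, in the certified-defence setting, a larger robustness radius)
           at the cost of base accuracy.
    """
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(n_samples):
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        votes[classify(noisy)] += 1
    # The winning label; in the real defence, the vote margin is also used
    # to certify a radius within which the prediction cannot flip.
    return votes.most_common(1)[0][0]

# Toy base classifier: the sign of the first coordinate.
base = lambda v: int(v[0] > 0)
print(smoothed_predict(base, [0.8, -0.1]))  # → 1
```

Because a small perturbation of `x` barely shifts the noise distribution, a large vote margin translates into a provable guarantee that the smoothed prediction is stable.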
Awards
- Research Scholar Award, Google (2021)
Relevant Publications
- Kazmi, M., Lautraite, H., Akbari, A., Tang, Q., Soroco, M., Wang, T., ... & Lécuyer, M. (2024). PANORAMIA: Privacy Auditing of Machine Learning Models without Retraining. Advances in Neural Information Processing Systems.
- Lyu, S., Shaikh, S., Shpilevskiy, F., Shelhamer, E., & Lécuyer, M. (2024). Adaptive Randomized Smoothing: Certified Adversarial Robustness for Multi-Step Defences. Advances in Neural Information Processing Systems.
- Tholoniat, P., Kostopoulou, K., McNeely, P., Sodhi, P. S., Varanasi, A., Case, B., ... & Lécuyer, M. (2024). Cookie Monster: Efficient On-Device Budgeting for Differentially-Private Ad-Measurement Systems. In Proceedings of the ACM SIGOPS Symposium on Operating Systems Principles.
- Lin, J., Zhang, A., Lécuyer, M., Li, J., Panda, A., & Sen, S. (2022). Measuring the Effect of Training Data on Deep Learning Predictions via Randomized Experiments. In International Conference on Machine Learning.
- Lécuyer, M., Atlidakis, V., Geambasu, R., Hsu, D., & Jana, S. (2019). Certified Robustness to Adversarial Examples with Differential Privacy. In IEEE Symposium on Security and Privacy (SP).