By: Justine Brooks, Filippo Sposini
4 Mar, 2025
Seven new projects have been funded as part of the 2024-2025 round of CIFAR AI Catalyst Grants, a program designed to catalyze new research areas and collaborations in machine learning. Each grant provides up to $50,000 per year for up to two years to support collaborative research and exchange between Canada CIFAR AI Chairs and other researchers. Four smaller grants were also awarded in support of conference and workshop activities, particularly those that support next-generation researchers.
Adapting lighting to improve plant growth via reinforcement learning
Adam White (Amii, University of Alberta), Glen Uhrig (University of Alberta), Mo Chen (Amii, Simon Fraser University)
Canada CIFAR AI Chairs Adam White and Mo Chen, along with Associate Professor at the University of Alberta Glen Uhrig, are harnessing reinforcement learning (RL) to optimize precision lighting in controlled growth environments. This project aims to give RL systems real-world applications in horticulture by dynamically adjusting light conditions to maximize plant growth. By exploring novel RL algorithms, uncovering optimal growth conditions for various crops and creating commercialization-ready solutions, this work could revolutionize vertical farming, improve food security and advance AI’s role in scientific discovery.
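To give a flavour of the idea (this is an illustrative toy, not the researchers' actual system), an RL agent for lighting can be sketched as a simple epsilon-greedy bandit that learns, from noisy growth measurements alone, which discrete light intensity maximizes a simulated growth reward. The light levels and the reward function below are hypothetical.

```python
import random

random.seed(0)

LIGHT_LEVELS = [0.2, 0.4, 0.6, 0.8, 1.0]  # hypothetical intensity settings

def growth_reward(intensity):
    """Simulated noisy growth signal, peaking at intensity 0.6."""
    return 1.0 - (intensity - 0.6) ** 2 + random.gauss(0, 0.05)

def run_bandit(steps=5000, epsilon=0.1):
    """Epsilon-greedy bandit: explore occasionally, otherwise pick the
    light level with the best running-mean reward so far."""
    values = [0.0] * len(LIGHT_LEVELS)  # running mean reward per setting
    counts = [0] * len(LIGHT_LEVELS)
    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(len(LIGHT_LEVELS))
        else:
            arm = max(range(len(LIGHT_LEVELS)), key=lambda a: values[a])
        reward = growth_reward(LIGHT_LEVELS[arm])
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
    return LIGHT_LEVELS[max(range(len(LIGHT_LEVELS)), key=lambda a: values[a])]

best = run_bandit()
```

The real research problem is far harder: plant growth unfolds over long horizons, states are only partially observable, and light recipes are multidimensional, which is why novel RL algorithms are needed rather than a simple bandit.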
Loss of plasticity in biological and neural networks: explanations and solutions
Alona Fyshe (Amii, University of Alberta), Marlos C. Machado (Amii, University of Alberta), Eilif Muller (Mila, Université de Montréal)
Loss of plasticity, the diminishing ability to adapt and retain performance, is a fundamental challenge in both biological and artificial neural networks. Canada CIFAR AI Chairs Alona Fyshe, Eilif Muller and Marlos C. Machado are leading a collaborative effort to address this issue. Drawing inspiration from biological systems, their project investigates synaptic plasticity models, bistable neural connections, and transferable skill-building training regimes. By integrating insights from neuroscience, cognitive science and machine learning, the team aims to develop adaptive learning systems that retain flexibility while improving task performance across evolving environments.
A benchmark dataset for evaluating privacy risks in synthetic health data
Linglong Kong (Amii, University of Alberta), Khaled El Emam (University of Ottawa)
As the adoption of synthetic data generation (SDG) grows, understanding its privacy risks becomes critical. Canada CIFAR AI Chair Linglong Kong and Professor at the University of Ottawa Khaled El Emam are developing a benchmark dataset to evaluate privacy in SDG models, optimized to detect weaknesses and improve privacy preservation. This project will enable standardized comparisons of SDG models, facilitating robust assessments of their utility and privacy performance in healthcare and beyond.
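One widely used heuristic in this space (shown here as a hedged sketch, not the benchmark itself) is the distance-to-closest-record (DCR) check: for every synthetic record, measure the distance to its nearest real record, and flag near-zero distances as potential memorization of real individuals. The records below are made up for illustration.

```python
def dcr(synthetic, real):
    """For each synthetic record, Euclidean distance to the nearest real record."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return [min(dist(s, r) for r in real) for s in synthetic]

# Toy numeric records (e.g., normalized health features)
real = [(0.1, 0.2), (0.5, 0.9), (0.8, 0.3)]
synthetic = [(0.1, 0.2),   # exact copy of a real record: DCR = 0
             (0.45, 0.7)]  # genuinely novel record

distances = dcr(synthetic, real)
leaky = [d for d in distances if d < 1e-6]  # flag near-duplicates of real data
```

A benchmark dataset like the one proposed would stress-test SDG models against attacks far more sophisticated than DCR, such as membership inference, but the intuition is the same: synthetic records that sit too close to real ones leak information.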
Democratizing human-AI engagement: aligning interactions with values, preferences, cultures and contexts
Golnoosh Farnadi (Mila, McGill University), Amir-Hossein Karimi (University of Waterloo), Igor Grossmann (University of Waterloo)
As AI systems become more embedded in daily life, ensuring they align with human values, preferences and ethical principles is critical for trust and societal impact. Canada CIFAR AI Chair Golnoosh Farnadi, alongside Amir-Hossein Karimi and Igor Grossmann, is developing AI alignment mechanisms that enhance human-AI interactions by adapting to personal values, managing AI-to-AI cooperation and incorporating cultural and contextual nuances. This interdisciplinary research integrates machine learning with insights from psychology and ethics to create AI that is not only technically robust but also socially responsible. By advancing personalized AI alignment, fostering fair multi-agent interactions and improving AI’s contextual awareness, this project aims to make AI engagement more democratic, equitable and responsive to diverse societal needs.
Neural scaling laws and compute-optimal frontiers
Courtney Paquette (Mila, McGill University), Murat Erdogdu (Vector Institute, University of Toronto), Elliot Paquette (McGill University)
The rise of large language models has revolutionized optimization, introducing critical questions about scaling laws and compute efficiency. Canada CIFAR AI Chairs Courtney Paquette and Murat Erdogdu, along with Associate Professor at McGill University Elliot Paquette, are exploring how to optimize model size, architecture and hyperparameters within fixed compute budgets. Their work aims to establish theoretical foundations for compute-optimal curves, minimizing training costs while maximizing performance. This research could redefine compute efficiency, particularly for non-industry machine learning teams constrained by limited resources.
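For context, empirical compute-optimal analyses in the literature (for example, the Chinchilla study of Hoffmann et al., 2022) often start from a parametric loss as a function of model size $N$ and training tokens $D$, then trace the frontier by minimizing loss under a fixed compute budget:

```latex
% Parametric loss in parameters N and training tokens D
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}

% Compute-optimal frontier: minimize loss under a fixed budget C,
% using the common approximation C \approx 6ND training FLOPs
\min_{N,\,D} \; L(N, D) \quad \text{s.t.} \quad 6ND = C
```

Establishing rigorous theoretical foundations for when and why such empirical curves hold is one of the open questions this kind of project targets.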
Simplicial flow matching for retrosynthesis
Guy Wolf (Mila, Université de Montréal), Renjie Liao (Vector Institute, University of British Columbia)
Retrosynthesis, the process of breaking down target molecules into simpler reactants, is essential in drug discovery but remains a complex challenge. Canada CIFAR AI Chairs Guy Wolf and Renjie Liao are developing a novel, end-to-end approach using simplicial flow matching. This method leverages graph-based representations of molecules to address one-to-many mappings and improve sampling efficiency. Their innovative framework aims to simplify retrosynthesis pipelines, enabling more accurate and efficient predictions while expanding the capabilities of flow matching models for broader applications in molecular generation.
Solving adversarial examples with DP-guided diffusion models
Geoff Pleiss (Vector Institute, University of British Columbia), Mathias Lécuyer (University of British Columbia), Nidhi Hegde (Amii, University of Alberta)
Deep learning models have transformed numerous applications, but their vulnerability to adversarial attacks remains a critical challenge. Canada CIFAR AI Chairs Geoff Pleiss and Nidhi Hegde will work with Mathias Lécuyer, Assistant Professor at the University of British Columbia, to leverage denoising diffusion models guided by differential privacy to enhance provable robustness in AI systems. Their work explores innovative methods to adaptively refine inputs, design efficient robustness guides and tailor robustness guarantees based on input complexity. This approach has broad implications, from improving fairness and data privacy to safeguarding foundation models against adversarial threats.
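A classic route to provable robustness, related in spirit to (but simpler than) the DP-guided diffusion approach described above, is randomized smoothing: classify many noise-perturbed copies of an input and take the majority vote, so that small brittle regions of the base classifier are averaged away. The toy one-dimensional classifier below is purely illustrative.

```python
import random
import statistics

def base_classifier(x):
    """Toy brittle classifier: class 1 for positive inputs, except for a
    narrow spurious band around 0.3 where it flips to class 0."""
    if 0.28 < x < 0.32:
        return 0
    return 1 if x > 0 else 0

def smoothed_classifier(x, sigma=0.5, n=1001):
    """Randomized smoothing: majority vote over Gaussian-noised copies."""
    rng = random.Random(42)
    votes = [base_classifier(x + rng.gauss(0, sigma)) for _ in range(n)]
    return statistics.mode(votes)
```

Here `base_classifier(0.3)` is fooled by the spurious band, while the smoothed classifier still answers class 1, because under Gaussian noise the band has negligible probability mass. The project's diffusion-based denoising plays an analogous input-refining role, with differential privacy guiding the robustness guarantees.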
In addition, four proposals were accepted in support of workshop and conference activities.