CIFAR AI Catalyst Grants

Accelerating interdisciplinary AI research

AI is transforming the world across a wide range of domains, from healthcare to fundamental physics. But none of these applications would be possible without the breakthroughs that come from fundamental, curiosity-driven, long-term research.

CIFAR AI Catalyst Grants are designed to catalyze new research areas and collaborations in machine learning and its application to different areas of science and society.

If you have any questions about these and upcoming catalyst grant calls, please contact us at [email protected].

Catalyst Grants awarded to date

2025 awards

Combating Misinformation

CIPHER: Countering influence through pattern highlighting and evolving responses

Matthew E. Taylor (Amii, University of Alberta), Brian McQuinn (University of Regina) 

The rise in misinformation spurred by foreign interference in domestic politics has deleterious effects on the information ecosystem. Canada CIFAR AI Chair Matthew E. Taylor and University of Regina Associate Professor Brian McQuinn are developing human-in-the-loop techniques that combine multi-modal AI with expert human input to build an innovative tool for detecting foreign interference. Once trained, the CIPHER model will be deployed through a global network of civil society organizations, empowering them to combat misinformation.

Adversarial robustness in knowledge graphs

Ebrahim Bagheri (University of Toronto), Jian Tang (Mila, HEC Montréal & McGill University) and Benjamin Fung (Mila, McGill University) 

The introduction of false or misleading information into knowledge graphs (the models that power search and conversational agents) has serious implications for AI safety, as it allows misinformation to become embedded into models and spread widely. University of Toronto Professor Ebrahim Bagheri, Canada CIFAR AI Chair Jian Tang and McGill University Professor Benjamin Fung will design machine learning defenses to detect and mitigate adversarial modifications in knowledge graphs. By designing scalable adversarial training and robustness evaluation methodologies, their research will allow for the practical deployment of safer knowledge graphs in the real world.

Trustworthy & Interpretable Large Language Models (LLMs)

Sampling latent explanations from LLMs for safe and interpretable reasoning

Yoshua Bengio (Mila, Université de Montréal) 

Ensuring that LLMs produce trustworthy and interpretable results is a major goal of AI safety researchers. Canada CIFAR AI Chair Yoshua Bengio will develop more trustworthy explanations of LLM behaviour by deploying generative flow networks in a novel way. His focus is on training AI to produce explanations humans can assess, by examining the hidden reasons behind AI decisions and evaluating their accuracy to disentangle the underlying causes of what AI generates. Ultimately, this project aims to develop a monitoring guardrail for AI agents that can lead to safer AI deployment across many applications.

On the safe use of diffusion-based foundation models

Mijung Park (Amii, University of British Columbia)

As generative foundation models are used in an increasing number of realms, concerns about privacy have accompanied their spread. Canada CIFAR AI Chair Mijung Park will address safety concerns related to diffusion models using computationally efficient, utility-preserving techniques. The project focuses on two important areas: not-safe-for-work (NSFW) content generation, and data privacy and memorization, reducing the risk of models memorizing private information (such as social security numbers) from training datasets. By developing techniques for removing problematic data points, the project will aid in building safer, privacy-preserving foundation models.

Advancing AI alignment through debate and shared normative reasoning

Gillian Hadfield (Vector Institute, Johns Hopkins University, University of Toronto [on leave])

Aligning AI systems with human values is one of the key challenges of AI safety. Canada CIFAR AI Chair Gillian Hadfield will draw on insights from economics, cultural evolution, cognitive science and political science to take a novel approach to the challenge of alignment. Using a debate framework, this project will assess and improve the normative reasoning skills of AI agents in a multi-agent reinforcement learning setting. The approach accounts for the pluralistic, heterogeneous nature of human values, and recognizes that normative institutions developed to reconcile competing interests and preferences in ways that can address the challenge of alignment and allow AI agents to be integrated into human normative systems.

Adversarial robustness of LLM safety

Gauthier Gidel (Mila, Université de Montréal)

Assessing the vulnerabilities of LLMs has become a key area of AI safety research. Canada CIFAR AI Chair Gauthier Gidel proposes a novel, more efficient and automated way of finding vulnerabilities in LLMs. By using optimization and borrowing methods from image-based adversarial attacks, the project aims to provide an efficient automatic attack model. This will allow model developers to improve the evaluations and training of LLMs, assessing their vulnerability and making them safer and more robust.

Ensuring Real-World Safety in AI Systems

Safe autonomous chemistry labs

Alán Aspuru-Guzik (Vector Institute, University of Toronto)

Self-driving laboratories have the potential to revolutionize science, yet without proper guardrails, there are safety risks. Canada CIFAR AI Chair Alán Aspuru-Guzik is developing a safety architecture for self-driving chemistry laboratories that draws inspiration from the aerospace industry. The safety framework will have three pillars: a physical black box device (similar to an airplane black box); multi-agent safety oversight systems; and the development of a digital twin to monitor environmental and laboratory conditions. Through these three pillars, Aspuru-Guzik aims to establish widespread safety benchmarks.

Safety assurance and engineering for multimodal foundation model-enabled AI systems

Foutse Khomh (Mila, Polytechnique Montréal), Lei Ma & Randy Goebel (Amii, University of Alberta)

Multi-modal foundation models are increasingly being deployed in the real world in a range of domains. Yet despite their importance, existing safety assurance approaches are not adequate for the complexity of multi-modal models. Canada CIFAR AI Chairs Foutse Khomh and Lei Ma, along with University of Alberta Professor Randy Goebel, are developing end-to-end safety assurance techniques for multi-modal foundation models in several key areas of application: robotics, software coding and autonomous driving. They will develop benchmarks, testing and evaluation frameworks with the potential to improve the safety of foundation models in the real world.

Maintaining meaningful control: Navigating agency and oversight in AI-assisted coding

Jackie Chi Kit Cheung (Mila, McGill University), Jin Guo (McGill University) 

AI is increasingly being adopted by software engineers to generate, edit and debug code. Canada CIFAR AI Chair Jackie Chi Kit Cheung and McGill University's Jin Guo will develop a safety framework for software engineers to understand and control AI-supported coding systems. Their methodology entails gathering practitioner insights, co-designing interfaces and empirical testing. By incorporating human-computer interaction considerations, they aim to provide engineers with more control and insight into the operations of AI-supported coding systems.

Formalizing constraints for assessing and mitigating agentic risk

Sheila McIlraith (Vector Institute, University of Toronto) 

As AI agents are increasingly deployed in organizations in semi-autonomous fashions, concerns about the risks have accompanied their use. Canada CIFAR AI Chair Sheila McIlraith will develop concrete tools for a technical safety solution, combining approaches like context-specific evaluation, reward modeling and alignment. This project focuses on the use of Desired Behavior Specifications, encoded in representations from which human-interpretable rules can be derived, such as a system that extracts a set of formal rules from a training manual. Ultimately, by developing a distributed governance model to mitigate the risks of agentic AI, the project aims to further responsible AI deployment in industry.

2024 awards

Generating Images with Multimodal Instruction

Advancing novel machine learning methods and successful applications toward a more generalized image generation model trained with multimodal instruction.

Collaborators: Wenhu Chen (Canada CIFAR AI Chair, Vector Institute, University of Waterloo), Aishwarya Agrawal (Canada CIFAR AI Chair, Mila, Université de Montréal)

Fundamentals of Transformers: From Optimization to Generalization

Creating faster algorithms and more efficient architectures for concrete optimization and statistical guarantees of transformers, applicable to natural language processing, computer vision and time-series forecasting.

Collaborators: Murat Erdogdu (Canada CIFAR AI Chair, Vector Institute, University of Toronto), Christos Thrampoulidis (University of British Columbia)

Solving Simulators with Reinforcement Learning for Material and Process Design

Applying reinforcement learning optimization for less costly, more efficient material and process design simulators for clean fuel production.

Collaborators: Martha White (Canada CIFAR AI Chair, Amii, University of Alberta), Mouloud Amazouz (Natural Resources Canada; University of Waterloo) & Ahmed Ragab (Natural Resources Canada; Polytechnique Montréal)

AI Auditing through Exploration of Model Multiplicity

Generating new insights about how user inputs (“prompts”) personalize the behavior of foundation models, and suggesting ways forward for auditing all of the behaviors exhibited by a single foundation model.

Collaborators: Golnoosh Farnadi (Canada CIFAR AI Chair, Mila, McGill University), Elliot Creager (University of Waterloo)

Robust Strategic Classification and Causal Modelling for Long-Term Fairness

Addressing errors in decision-making algorithms through an alternative approach to fairness, with the goal of allowing individuals labelled with ‘undesirable outcomes’ to achieve ‘desired’ outcomes in the long term.

Collaborators: Nidhi Hegde (Canada CIFAR AI Chair, Amii, University of Alberta) & Dhanya Sridhar (Canada CIFAR AI Chair, Mila, Université de Montréal)

Natural Language Processing for Users from Diverse Cultures

Evaluating the cultural awareness of current Western-centric LLMs with the goal of enhancing their cultural competence.

Collaborators: Vered Shwartz (Canada CIFAR AI Chair, Vector Institute, University of British Columbia), Siva Reddy (Canada CIFAR AI Chair, Mila, McGill University)

2022/2023 awards

Survival analysis with informative censoring

Improving statistical methods to make time-to-event predictions in the presence of right-censored data, a field known as survival analysis, with myriad applications across industries.

Collaborators: Rahul G. Krishnan (Canada CIFAR AI Chair, Vector Institute, University of Toronto), Russ Greiner (Canada CIFAR AI Chair, Amii, University of Alberta)
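
To make "right-censored data" concrete: the classic Kaplan-Meier estimator (a standard textbook method, shown here purely as an illustration, not as this project's approach) builds a survival curve in which censored subjects shrink the at-risk count without counting as events.

```python
# Minimal Kaplan-Meier estimator, illustrating right-censored survival data.
# Each observation is (time, event): event=1 means the event occurred at that
# time; event=0 means the subject was censored (lost to follow-up) then.

def kaplan_meier(observations):
    """Return [(time, survival_probability)] at each distinct event time."""
    observations = sorted(observations)
    n_at_risk = len(observations)
    survival = 1.0
    curve = []
    i = 0
    while i < len(observations):
        t = observations[i][0]
        deaths = 0   # events at time t
        removed = 0  # events + censorings at time t, all leave the risk set
        while i < len(observations) and observations[i][0] == t:
            if observations[i][1] == 1:
                deaths += 1
            removed += 1
            i += 1
        if deaths > 0:
            survival *= 1.0 - deaths / n_at_risk
            curve.append((t, survival))
        n_at_risk -= removed
    return curve

# Example: events at t=1, 3, 4; censored observations at t=2 and t=5.
data = [(1, 1), (2, 0), (3, 1), (4, 1), (5, 0)]
print(kaplan_meier(data))
```

The censored points (2, 0) and (5, 0) never lower the survival estimate directly; they only reduce the number at risk for later event times, which is exactly the bookkeeping that naive regression on event times gets wrong.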

Developing a framework for the evaluation of disclosure risks from tabular synthetic health data

Collaborators: Linglong Kong (Canada CIFAR AI Chair, Amii and University of Alberta), Khaled El Emam (University of Ottawa)

Hiccups on the road to Explainable Reinforcement Learning (XRL)

Advancing the emerging field of trustworthy machine learning by ensuring that DRL models are deployed in a way that reduces risks to Canadians and Canadian industries.

Collaborators: Samira Ebrahimi Kahou (Canada CIFAR AI Chair, Mila, McGill University), Marlos Machado (Canada CIFAR AI Chair, Amii, University of Alberta), Ulrich Aïvodji (ÉTS Montréal)

Human-machine co-adaptation in music improvisation via multi-agent RL

Designing agents that can improvise with humans as collaborative partners, capable of adapting to a musician's skill level and style, whether the musician is a novice or professional.

Collaborators: Cheng-Zhi Anna Huang (Canada CIFAR AI Chair, Mila, McGill University), Patrick M. Pilarski (Canada CIFAR AI Chair, Amii, University of Alberta)

An artificial intelligence-based MR imaging reconstruction framework

Collaborators: Mojgan Hodaie, Frank Rudzicz (Canada CIFAR AI Chair, Vector Institute, Dalhousie University), Timur Latypov, Marina Tawfik

Funded in partnership with Temerty Centre for AI Research and Education in Medicine

Explaining Explainability for Machine Learning Applications in STEM

Collaborators: Audrey Durand (Canada CIFAR AI Chair, Mila, Université Laval), Flavie Lavoie-Cardinal (Université Laval), Jess McIver (UBC), Renee Hlozek (CIFAR Azrieli Global Scholar, GEU), Ashish Mahabal (Caltech), Daryl Haggard (CIFAR Azrieli Global Scholar, GEU)

Culturally-Inclusive AI In Actua's Indigenous Youth in STEM Program

INDIGENOUS AI TRAINING GRANT

Collaborators: Valeria Ianniti (Actua)

Connecting Indigenous Youth and AI

INDIGENOUS AI TRAINING GRANT

Collaborators: Kate Arthur (Digital Moments)

Privacy-preserving generative models for retina image synthesis used for diagnosis purposes

SYNTHETIC HEALTH DATA CATALYST GRANT

Lead: Xiaoxiao Li, University of British Columbia in partnership with Roche.

Privacy-preserving data synthesis of a cohort to study and stimulate research on the opioid crisis in Canada

SYNTHETIC HEALTH DATA CATALYST GRANT

Lead: Sébastien Gambs, Université du Québec à Montréal in partnership with Statistics Canada.

Generation of confidentiality-preserving synthetic data from prescription drug consumption administrative databases for the analysis of drug use in the Quebec population

SYNTHETIC HEALTH DATA CATALYST GRANT

Co-leads: Christian Gagné, Université Laval in partnership with The Régie de l’assurance maladie du Québec.

A generator capable of creating images and associated labels for different types of images such as retina images, skin lesions and histopathology

SYNTHETIC HEALTH DATA CATALYST GRANT

Co-leads: Raymond Ng & Mathias Lecuyer, University of British Columbia Data Science Institute in partnership with Microsoft Research.

2020 awards

DeepCell: Analyze and integrate spatial single-cell RNA-seq data

Developing deep learning-based tools to analyze and integrate spatial single-cell RNA-seq data for brain tumours.

Collaborators: Bo Wang (Canada CIFAR AI Chair, Vector Institute, UHN, University of Toronto), Michael Taylor (University of Toronto, Sick Kids Hospital)

Rethinking generalization and model diagnostics in modern machine learning

Exploring generalization behavior and model diagnostics in modern machine learning algorithms.

Collaborators: Murat Erdogdu (Canada CIFAR AI Chair, Vector Institute, University of Toronto), Ioannis Mitliagkas (Canada CIFAR AI Chair, Mila, Université de Montréal), Manuela Girotti (Mila, Concordia University)

Learning to solve mixed-integer linear programs

Utilizing machine learning for mixed-integer linear programming.

Collaborators: Laurent Charlin (Canada CIFAR AI Chair, Mila, HEC, Université de Montréal), Chris Maddison (Vector Institute, University of Toronto)
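
For readers unfamiliar with the problem class, here is a toy sketch (illustrative only; it does not reflect the collaborators' learned approach): at tiny scale, a mixed-integer program over binary variables can be solved by exhaustive enumeration, and it is precisely this combinatorial blow-up that learned solver heuristics aim to tame.

```python
from itertools import product

# Toy mixed-integer linear program, solved by brute force over binaries:
#   maximize  3*x0 + 4*x1 + 2*x2
#   subject to 2*x0 + 3*x1 + 1*x2 <= 4,  x binary.
# Coefficients are made up for illustration.
values = [3, 4, 2]
weights = [2, 3, 1]
capacity = 4

best_value, best_x = -1, None
for x in product([0, 1], repeat=3):
    # Check the single linear constraint.
    if sum(w * xi for w, xi in zip(weights, x)) <= capacity:
        v = sum(c * xi for c, xi in zip(values, x))
        if v > best_value:
            best_value, best_x = v, x

print(best_value, best_x)  # optimal objective and assignment
```

Enumeration costs 2^n candidate assignments, which is why practical MILP solvers rely on branch-and-bound with carefully chosen branching and cutting heuristics, the decisions that machine learning can help make.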

Language grounded in vision for embodied agent navigation and interaction

Enabling an intelligent agent to understand natural language in the context of navigational tasks.

Collaborators: Chris Pal (Canada CIFAR AI Chair, Mila, Polytechnique Montréal, Université de Montréal), Sanja Fidler (Canada CIFAR AI Chair, Vector Institute, University of Toronto), David Meger (Mila, McGill University)

Privacy and ethics in AI: Understanding the synergies and tensions

Exploring the tensions and synergies that can emerge in the deployment of Machine Learning algorithms, with a focus on accountability, transparency and bias.

Collaborators: Nicolas Papernot (Canada CIFAR AI Chair, Vector Institute, University of Toronto, Google), Sébastien Gambs (Université du Québec à Montréal)

Being politic smart in the age of misinformation

Using graph mining to detect and combat misinformation in mass information systems.

Collaborators: Reihaneh Rabbany (Canada CIFAR AI Chair, Mila, McGill University), André Blais (Université de Montréal, Royal Society of Canada), Jean-François Gagné (Université de Montréal), Jean-Francois Godbout (Université de Montréal)

Adaptive generative rhythmic models for neurorehabilitation

Exploring the benefits of sound and music, specifically rhythmic auditory stimulation (RAS), for Parkinson’s patients.

Collaborators: Sageev Oore (Canada CIFAR AI Chair, Vector Institute, Dalhousie University), Michael Thaut (Canada Research Chair, University of Toronto)

A reinforcement learning based system for automation level adaptation in automated vehicles for people with dementia

Advancing the field of human compatibility of AI as applied to individuals with dementia by using novel algorithms to facilitate compatibility.

Collaborators: Sarath Chandar (Canada CIFAR AI Chair, Mila, Polytechnique Montréal), Alex Mihailidis (University of Toronto, UHN)

Modeling embodied agents with Koopman Embeddings

Using dynamical systems to predict the future state of a system and then control it.

Collaborators: Liam Paull (Canada CIFAR AI Chair, Mila, Université de Montréal), James Forbes (McGill University)

