By: Justine Brooks
12 Dec 2024
Last month, the Government of Canada announced the creation of the Canadian AI Safety Institute (CAISI) and Canada’s membership in the International Network of AI Safety Institutes, joining several other national institutes working together on this critical issue. The government also announced the CAISI Research Program at CIFAR, which will leverage CIFAR’s community of experts to advance our understanding of the risks associated with advanced AI systems, as well as our capacity to detect and mitigate those risks.
The CAISI Research Council will advise and guide CIFAR in delivering the CAISI Research Program and will be composed of research experts and representatives from CIFAR, the National AI Institutes and the National Research Council. CIFAR has named Canada CIFAR AI Chairs Nicolas Papernot and Catherine Régis as the Co-Directors of the CAISI Research Program; they will chair the CAISI Research Council and provide scientific leadership for the program.
Professor Papernot is a Canada CIFAR AI Chair at the Vector Institute, an assistant professor in the Department of Electrical and Computer Engineering, the Department of Computer Science and the Faculty of Law at the University of Toronto, and a faculty affiliate at the Schwartz Reisman Institute for Technology and Society. His research focuses on the privacy and security of machine learning systems.
Professor Régis is a Canada CIFAR AI Chair at Mila — Québec’s AI Institute, a Canada Research Chair, a full professor in the Faculty of Law at the University of Montréal and the director of Social Innovation and International Policy at IVADO. Her research focuses on global AI governance, AI regulation in health care and human rights approaches to AI.
“In many ways, unsafe AI would be unsustainable at local and international levels, so AI safety work is not only necessary to protect our social advances, but also essential for AI itself to remain a meaningful technology.”
— Nicolas Papernot and Catherine Régis
In a Q&A with CIFAR, Professors Papernot and Régis discuss the importance of CAISI and Canada’s opportunity to be a world leader in AI safety.
CIFAR: What does AI safety mean to you, and why is it important for the field of AI overall?
Both: AI is increasingly present in many sectors of society, yet it is still being actively developed. This raises a number of questions and challenges over different time horizons, which we need to tackle if we want to ensure that applications of AI are truly beneficial to our societies. In many ways, unsafe AI would be unsustainable at local and international levels, so AI safety work is not only necessary to protect our social advances, but also essential for AI itself to remain a meaningful technology.
CIFAR: Tell me about your own work on AI safety.
Papernot: My group’s founding motivation is to study the limitations of AI algorithms, which inform the design of approaches to AI that are more robust, privacy-preserving, fair and transparent. In particular, we place significant emphasis on evaluating the performance of AI algorithms in the presence of malicious entities. This gives us a worst-case perspective on AI performance, which helps us understand how AI systems can be deployed more responsibly. For example, our research on privacy-preserving AI led us to study machine unlearning (how to remove data from AI systems), which is now an essential consideration for deploying large-scale AI systems like chatbots.
Régis: As a legal scholar, I find it quite instinctive to anticipate risks and find ways to minimize them, which is why most of my research and policy work pursues this objective. For instance, I regularly collaborate with interdisciplinary and multi-stakeholder teams to identify, analyze and mitigate the possible human rights infringements that AI systems may entail. I am also interested in conceptualizing and implementing monitoring and remedy mechanisms spanning the entire AI lifecycle. This ensures that if a legal or social risk materializes (for our democratic institutions, for instance), we can react adequately to prevent further harm. I am especially interested in the health care field, where AI can play an important role in improving patients’ health, lives and the quality of services, but where risks can be significant if not properly governed. I also work on global AI governance approaches (norms, institutions, interstate collaboration mechanisms, etc.), as the capacity to reach coordinated action at that level is essential to minimize market concentration, define common red lines across countries, ensure respect for human rights and reduce the AI divide.
CIFAR: What do you hope to bring to your new role as Co-Directors? What excites you about this new opportunity?
Both: CIFAR has a history of developing research programs with significant impact; AI is, in fact, one of the fields CIFAR pioneered. As Co-Directors of the CAISI Research Program, we look forward to working together to design a program that enables Canadian research on AI safety. We hope to create a community of scholars that ensures all perspectives are represented as we develop technological and socio-technical answers to the big societal challenges that AI raises.
CIFAR: Why is CAISI important?
Papernot: AI innovation is quickly being adopted and deployed in real-world settings. This calls for a central institute, CAISI, to coordinate and quickly disseminate the work of both researchers and practitioners, so that each can inform the other’s work. CAISI will also increase the visibility of Canadian innovation in safe AI on the international scene and facilitate cooperation with like-minded countries.
Régis: Through active collaboration with the Canadian research ecosystem and other national AI safety institutes, CAISI aims to equip the Canadian government, and Canadians more broadly, with a deeper understanding of AI risks and practical tools to guide safe AI development and deployment. This pivotal initiative will allow Canada to speed up and amplify its interdisciplinary research efforts on safe AI and to contribute to global initiatives in this area.
“With its global reputation for responsible AI leadership, Canada is well positioned to play an important role in shaping our capacity to understand and mitigate AI risks, ultimately allowing us to reap the significant benefits this technology offers.”
— Catherine Régis
CIFAR: What do you see as Canada’s opportunity for leadership in AI safety on the global stage?
Papernot: Canadian society has a strong set of democratic values. Furthermore, Canada’s AI ecosystem is tightly knit, spanning academia, industry, health care, and governmental and legislative stakeholders. This enables Canadian researchers to take a truly interdisciplinary approach to their research, which will be key to safely advancing the science of AI and deploying the technology responsibly.
Régis: The rapid advancement of AI calls for major international collaboration to ensure this transformative technology serves the best interests of people and communities worldwide, with human rights serving as a guiding framework. Canada has been at the forefront of these discussions for several years, exemplified by its leadership in the creation of the Global Partnership on AI in 2020. With its global reputation for responsible AI leadership, Canada is well positioned to play an important role in shaping our capacity to understand and mitigate AI risks, ultimately allowing us to reap the significant benefits this technology offers.
The CAISI Research Council is currently recruiting three additional members-at-large, with applications closing December 16. The Council will meet in January and monthly thereafter to set priorities and make funding decisions for upcoming activities such as Catalyst Grants and Solution Networks. These activities will convene and fund a range of disciplinary perspectives to address matters of AI safety. Additionally, CIFAR and the University of Waterloo Cybersecurity and Privacy Institute will partner on a special issue of Canadian Public Policy on “AI Safety and Public Policy” in March 2025.