By: Justine Brooks
30 Apr, 2025
As AI continues to rapidly evolve, organizations and government bodies are increasingly seeking reliable guidance to shape policies and practices that can keep pace with technological change.
The AI Insights for Policymakers program, a joint initiative between Mila and CIFAR, aims to fulfill this need by connecting public organizations with leading AI researchers. Through quarterly office hours and policy feasibility testing, the program enables informed decision-making grounded in expert insight.
Nidhi Hegde, co-chair of the AI Insights for Policymakers program alongside Blake Richards, plays a central role in shaping the program’s approach to engaging with diverse public stakeholders.
“Generally speaking, these groups are trying to understand how to integrate AI into their work, either through using AI to optimize their work, create solutions that they didn’t have before, or to understand how to prepare for the impacts of AI on their job,” explained Hegde, a Canada CIFAR AI Chair at Amii and associate professor at the University of Alberta.
The program provides a much-needed source of information to help policymakers understand the implications of AI adoption before they develop guidelines and policies. “We can’t cover everything because AI is moving so quickly, so this is exactly what people need – some sort of expertise to guide them on where to go,” Hegde says.
The user groups range in size and industry, as well as in how far along they are in their AI journey. “There are some groups that are at the very beginning, and they really just want some sort of reassurance that these are general directions you can go in. And then we also have groups that have progressed further along in their own AI education.”
Hegde recalls one group focused on children’s online safety that came prepared with plenty of its own research and specific questions. “It’s nice to see that when there are areas that have such significant impacts, they have already done a lot of the work to try to understand the effects of AI.”
Hegde brings a unique perspective to the expert group through her research on privacy and robustness in machine learning. Her expertise in privacy informs guidance on state-of-the-art tools, their accuracy, and which mechanisms work in which settings. “You may have two different groups that have very different ways of functioning. They may be working with a lot of different kinds of data and may have the same issue of privacy, but it manifests very differently in the two groups because of the domain that they’re operating in and the kind of data that they have,” she explained.
Her research on robustness – a concept in machine learning that refers to an algorithm’s ability to perform reliably on data that differs from its training data – has yielded surprising results. “Just by focusing on robustness itself, we end up finding that all of these issues like privacy and fairness can also be addressed.” While her goal is to develop these algorithms on a fundamental level, they have a wide range of applications, including de-biasing a large language model.
Hegde credits CIFAR support with providing her the freedom to explore high-risk ideas, such as her work on robustness. “A lot of other types of grants, you specify a certain project or a program. So, if you want to try something new, then you need to develop it to an extent before you use it as an area that you want to develop more. CIFAR allowed me to try these new ideas without having to go through the standard funding model for them.”
Through her leadership in the AI Insights for Policymakers program and her boundary-pushing research, Hegde is helping organizations better understand and prepare for an AI future.