By: Krista Davidson
19 Jan, 2021
Nidhi Hegde builds machine learning methods that allow AI technologies to thrive without compromising personal privacy.
Her research applies carefully calibrated techniques to machine learning to understand how algorithms react and adapt to different privacy constraints. The right approach lets algorithms operate optimally while still preserving privacy.
One of the biggest challenges we face today is defining privacy. Many companies rely on differential privacy, achieved by adding calibrated noise during model training. It works well for many applications but is less effective for others, such as health care, where personal data may provide important insights. Hegde’s research also explores new definitions of privacy for such scenarios. Without innovative solutions to the problem, we could face more privacy breaches.
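To give a concrete feel for the idea, here is a minimal sketch of differential privacy using the classic Laplace mechanism (a standard textbook technique, not code from Hegde's research; the query, data, and parameter names are invented for illustration). A counting query changes by at most 1 when one person's record is added or removed, so adding Laplace noise with scale 1/ε makes the released count ε-differentially private.

```python
import math
import random

def dp_count(values, predicate, epsilon):
    """Differentially private count: the true count plus Laplace noise.

    A counting query has sensitivity 1 (one person's record changes the
    count by at most 1), so noise drawn from Laplace(0, 1/epsilon)
    yields epsilon-differential privacy for the released count.
    """
    true_count = sum(1 for v in values if predicate(v))
    # Sample Laplace(0, 1/epsilon) via inverse-CDF of a uniform draw.
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

# Hypothetical usage: release a noisy count of patients aged 65+.
ages = [34, 71, 52, 68, 45]
noisy = dp_count(ages, lambda a: a >= 65, epsilon=1.0)
```

Smaller values of epsilon add more noise and give stronger privacy; the health-care tension Hegde describes arises exactly because that noise can wash out the rare, individual-level signals clinicians care about.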
“There are many adverse impacts of privacy breaches that we need to take into account when building algorithms,” she explains. “Drawing a distinction between which information should be kept private and what can live in the public domain is important for keeping people safe and protecting them from the dangers of AI, such as unexpected privacy breaches.”
Hegde is also interested in how various ethics metrics interact with each other: for instance, whether privacy is a hindrance or an advantage for classifiers that guarantee fairness. She aims to build frameworks that allow for the calibrated inclusion of such ethics components.
A faculty member of Amii, Hegde joined the University of Alberta’s Department of Computing Science in February 2020, right at the cusp of the COVID-19 pandemic. She spent most of her research career in France working on algorithms for network science and machine learning, at the Technicolor Paris Research Lab and Nokia Bell Labs, before joining Borealis AI, RBC’s research institute, in 2018.
Hegde says she is honoured to be named a Canada CIFAR AI Chair, a role in which she hopes to be able to advance better understanding of privacy and fairness in AI among researchers and the public.
Mo Chen is teaching autonomous mobile robots how to respond to dangerous situations. He’s investigating how to operate these robots safely and, more importantly, how they can co-exist with humans and with each other even when no human is at the controls. It’s a challenge that could transform the transportation and delivery of products and goods, as well as dozens of other industries.
Using an approach called reachability analysis, he can quantify the set of configurations that signal imminent danger to a robot. He combines classical AI methods, such as optimization, with reinforcement learning (in which an autonomous agent learns by interacting with a controlled environment) to teach robots to avoid collisions with each other and with their environment.
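In one of its simplest forms, the reachability idea can be sketched for a robot moving in one dimension toward an obstacle: a state belongs to the "unavoidable collision" set if, even under maximum braking, the robot cannot stop in time. This toy example is my own illustration of the general concept, not Chen's method; the dynamics and names are invented.

```python
def is_unsafe(distance, speed, max_brake):
    """1-D toy reachability check.

    A robot `distance` metres from an obstacle, moving toward it at
    `speed` m/s, can decelerate at most `max_brake` m/s^2. Its minimum
    stopping distance is v^2 / (2a). Every state whose stopping
    distance exceeds the gap lies in the backward reachable set of the
    collision state: from there, collision is inevitable.
    """
    stopping_distance = speed ** 2 / (2 * max_brake)
    return stopping_distance > distance

# Hypothetical usage: a safety controller overrides the robot's plan
# (e.g. brakes hard) as soon as the state nears this set's boundary.
print(is_unsafe(1.0, 10.0, 5.0))   # needs 10 m to stop, has only 1 m
print(is_unsafe(20.0, 10.0, 5.0))  # needs 10 m to stop, has 20 m
```

Real reachability analysis handles far richer dynamics and multiple agents, but the principle is the same: characterize the doomed states exactly, then keep the system out of them.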
Chen also wants to make AI systems more human-centric and natural.
“I want to see the capability of robots acting naturally around people, and that involves safety, collision avoidance, intent prediction and decision-making,” he explains.
With human-intent prediction capabilities, robots know when and how to interact appropriately with humans. For example, in shopping malls or airports, a robot might be able to detect when a person is lost, confused or requires assistance.
An Amii Fellow, Chen leads the Multi-Agent Robotic Systems Lab at Simon Fraser University, where he is an assistant professor in the School of Computing Science. His lab focuses on principled robotic decision-making, combining traditional analytical methods with modern data-driven techniques. He received his PhD in 2017 from the University of California, Berkeley.
Canada CIFAR AI Chair Rahul G. Krishnan is using AI to provide insight into how diseases behave, not in a single patient, but across entire populations.
Looking at the electronic records of patients, Krishnan develops probabilistic models that examine the progression of a disease over time and how it responds to different treatments. Understanding the manifestation of disease is useful for clinicians to determine whether certain patients should receive more targeted therapies. His approach could significantly decrease health care costs and provide insight into therapeutic treatments for chronic diseases such as cancer and diabetes.
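A common building block for models of disease progression over time is the Markov chain, in which a patient moves between stages with fixed transition probabilities. The sketch below is a generic illustration of that idea, not Krishnan's models; the stage names and probabilities are entirely invented.

```python
import random

# Hypothetical transition probabilities between illustrative stages.
# Each row sums to 1: from a given stage, where might a patient be
# at the next check-up?
TRANSITIONS = {
    "mild":     {"mild": 0.7, "moderate": 0.25, "severe": 0.05},
    "moderate": {"mild": 0.1, "moderate": 0.6,  "severe": 0.3},
    "severe":   {"moderate": 0.15, "severe": 0.85},
}

def simulate_trajectory(start, steps):
    """Sample one patient trajectory from the Markov chain."""
    state, path = start, [start]
    for _ in range(steps):
        r, cumulative = random.random(), 0.0
        for next_state, prob in TRANSITIONS[state].items():
            cumulative += prob
            if r < cumulative:
                state = next_state
                break
        path.append(state)
    return path

# Hypothetical usage: simulate many patients to estimate how often
# a condition that starts mild reaches the severe stage.
trajectory = simulate_trajectory("mild", steps=10)
```

Models used on real electronic health records are far more elaborate (they condition on treatments and handle irregular, noisy observations), but simulating trajectories forward from learned transition structure is the same basic move.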
For Krishnan, AI opens up opportunities to examine the bigger picture of a multi-faceted, complex health care system. In particular, Ontario’s single-payer system and diverse population provide ample opportunity to build a comprehensive understanding of disease that could transform future treatments.
“The fact that we have health care data that is representative of a diverse population, and not just of those who can afford health insurance, is critical to understanding the science behind disease and treatment,” Krishnan explains.
Krishnan completed his undergraduate studies at the University of Toronto and his PhD in 2020 at MIT, before working as a senior researcher at Microsoft Research New England. He is returning to Canada to take up his first faculty position with the University of Toronto as an assistant professor of Computer Science and Medicine.
David Rolnick is mobilizing researchers to use machine learning in the fight against climate change.
Rolnick is the co-founder of Climate Change AI, an initiative that brings together experts from industry, academia, and policy to use machine learning to help mitigate climate change and adapt to its consequences.
His work examines machine learning applications for electric grid management, climate and weather modeling, and biodiversity monitoring. Rolnick and his collaborators recently published a research paper that calls on the machine learning community to join the fight, and to collaborate with other disciplines and policymakers for change.
“It’s not a matter of whether or not climate change will happen, but how bad we allow it to become,” he says, explaining that it’s essential for experts and scientists with applicable skills to contribute to what is ultimately a global effort.
In addition to AI for climate change, Rolnick’s research focuses on the mathematical foundations of deep learning and neural networks. Rolnick is a faculty member of Mila and an assistant professor at McGill University’s School of Computer Science. He graduated with a PhD in Applied Math from MIT.