CIFAR Pan-Canadian AI Strategy

Reducing discrimination and bias in AI: Q&A with Golnoosh Farnadi

By: Krista Davidson
November 17, 2021

Canada CIFAR AI Chair Golnoosh Farnadi on what fairness in AI means, the importance of keeping humans in the loop of AI, and her goal for a more equitable world. Farnadi is a core faculty member at Mila, an assistant professor at HEC Montréal, and an adjunct professor at Université de Montréal.

What is your field of AI research and how did you become interested?

I work in trustworthy AI. I did my PhD in user profiling for social media. At the end of my PhD I was contacted by companies that wanted to use my software for hiring purposes. I was disappointed that people were getting access to research to do things that were not ethical from my perspective. At the end of my PhD, I wrote a chapter about the negative consequences of AI, such as discrimination and bias. It made me think that if we’re not careful, we will be facing a future that isn’t ideal for many people. That experience inspired me to change my topic to fairness in AI and algorithmic discrimination.

We hear terms such as fairness in AI, algorithmic bias and discrimination in decision-making models, but what do they mean?

Fairness in AI is a relatively new topic, while fairness in decision-making systems has a long history. One of the main reasons we care about fairness in automated decision-making systems is laws against discrimination. However, it is very challenging to convert legal definitions of fairness into mathematical definitions that we can use in AI, and for this reason we have many definitions of fairness depending on the context.

What I’m interested in is measuring discrimination with respect to the context, and reducing discrimination at different points of the pipeline. There are several components in the AI pipeline: the data, the model and the outcome. Bias and discrimination can happen anywhere along this pipeline. In fairness-aware learning, discrimination prevention aims to remove discrimination by modifying the biased data and/or the predictive algorithms built on the data. On the data side, for instance, bias could result from an unbalanced dataset where the majority of the data describes only one group.
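To make the "measuring discrimination" step concrete, here is a minimal sketch of one common group-fairness check (demographic parity) applied to a set of decisions. The column names, data, and choice of metric are illustrative assumptions, not the specific methods Farnadi describes.

```python
# Minimal sketch: measuring one group-fairness metric (demographic parity)
# on model outcomes. Column names and data are hypothetical.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Difference between the highest and lowest favourable-outcome rates across groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical scored applications: 1 = favourable decision, 0 = unfavourable.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "decision": [1,   1,   0,   1,   0,   0,   0],
})

gap = demographic_parity_gap(decisions, "group", "decision")
print(f"Demographic parity gap: {gap:.2f}")  # 0.00 would mean equal rates across groups
```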

Machine learning models can also be discriminatory. As researchers, we design models to be performance-oriented, and there are metrics that we use to evaluate how good the model is. A model will mimic the patterns found in the data. So with historical discrimination in datasets, you’ll see an algorithm give more opportunities and favoured outcomes to an advantaged group while performing poorly for minority groups such as women or people of color.
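The gap between aggregate performance metrics and per-group performance can be shown with a short sketch. The labels and group assignments below are hypothetical; the point is only that a single overall accuracy number can hide poor performance on a minority group.

```python
# Minimal sketch: overall accuracy can mask poor performance on a minority group,
# so report metrics per group as well. Data and group labels are hypothetical.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
groups = np.array(["majority"] * 5 + ["minority"] * 3)

print("overall accuracy:", (y_true == y_pred).mean())
for g in np.unique(groups):
    mask = groups == g
    print(f"accuracy for {g} group:", (y_true[mask] == y_pred[mask]).mean())
```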

What are some of the examples you’re seeing of algorithmic bias and unfairness in AI?

There was a famous study of facial recognition technologies conducted by Joy Buolamwini, a researcher in the MIT Media Lab’s Civic Media Group. She tested different commercial facial recognition apps and realized that they couldn’t recognize her face. The problem wasn’t that the software couldn’t detect faces at all: it detected the faces of white men quite well, but performed poorly in recognizing the faces of Black women. This is because the data used to train the models consisted largely of the faces of white men.

Another example of bias is an AI-based recruitment tool built by Amazon that rated women poorly on its hiring scale and tended to favor applicants with more technical experience. The tool used a model trained to vet applications by learning from resume patterns observed over a ten-year period. Unfortunately, men dominated technical positions during that time, which led the model to rate women at the lower end of the scale.

All of these examples could have been detected and maybe even prevented if we had a human in the loop.

Are there any potential solutions for ensuring AI systems are trustworthy and fair?

If there’s an application that uses AI to make decisions, then we need a human in the loop. The human needs to be aware of what biases might exist in the system and must not trust the system implicitly. If you look at areas such as the legal system, AI shouldn’t be the sole decision-maker in deciding whether someone should be imprisoned or how long a jail sentence should be. We need humans overseeing those decisions, and overturning ones that are unfair.

Fairness in AI is an interdisciplinary field, and it’s context dependent. Computer scientists have to work with legal experts and with specialists in the relevant fields: health, business, education, law, and so on.

Many companies don’t want to share their data or algorithms, and they don’t want to take on the risk that a researcher may discover bias or discrimination in their algorithms. My hope is that we will have more regulations around AI. One solution, for example, involves AI auditors who work with a company to detect and reduce discrimination and bias.

Who is at risk if we fail to seriously examine the role of fairness and trustworthiness in AI?

Everyone, because you don’t know when, how, or who an algorithm will discriminate against. Obviously minority groups such as women are at risk in hiring applications, and Black people are at risk in policing and law enforcement tools.

Why is this work important to you?

I’m fighting for an ideology of an equitable society. Maybe I will not see that in my lifetime but I’m fighting for it, and that’s what’s giving me the energy to work on this topic. Hopefully the situation will be better for our children.

How is your role as a Canada CIFAR AI Chair and a core faculty member of Mila helping you move this research forward?

I’m grateful to be in Canada and to have this opportunity to work on something I believe in, to have this support from CIFAR and the government, to hire people to work on this project, and to have access to a network of talented researchers. I had dedicated my life to this area even before I began working on it. I chose this field of research because this is who I am. I believe that we can have a future where we have access to all of this technology and it can be fair. In the next five years, I believe I can actually solve some of these issues.
