By: Krista Davidson
17 Nov, 2021
What is your field of AI research and how did you become interested?
I work in trustworthy AI. I did my PhD in user profiling for social media. At the end of my PhD, I was contacted by companies that wanted to use my software for hiring purposes. I was disappointed that people were getting access to research and using it to do things that were not ethical from my perspective. Around that time, I also wrote a chapter about the negative consequences of AI, such as discrimination and bias. It made me think that if we're not careful, we will be facing a future that isn't ideal for many people. That experience inspired me to change my topic to fairness in AI and algorithmic discrimination.
We hear terms such as fairness in AI, algorithmic bias and discrimination in decision-making models, but what do they mean?
Fairness in AI is a relatively new topic, while fairness in decision-making systems has a long history. One of the main reasons we care about fairness in automated decision-making systems is anti-discrimination law. However, it is very challenging to translate legal definitions of fairness into mathematical notions that we can use in AI, which is why we have many definitions of fairness that depend on the context.
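As a concrete illustration, two of the most commonly used formal criteria are demographic parity and equalized odds, sketched below in standard notation, where Y-hat denotes the model's decision, Y the true outcome, and A a protected attribute such as gender or race.

```latex
% Two common formal fairness criteria (illustrative sketch).
% \hat{Y}: model decision, Y: true outcome, A: protected attribute.

% Demographic parity: positive decisions occur at the same rate for every group.
P(\hat{Y} = 1 \mid A = a) = P(\hat{Y} = 1 \mid A = b) \quad \text{for all groups } a, b

% Equalized odds: true- and false-positive rates match across groups.
P(\hat{Y} = 1 \mid A = a, Y = y) = P(\hat{Y} = 1 \mid A = b, Y = y) \quad \text{for } y \in \{0, 1\}
```

Neither criterion is universally right; which one applies depends on the context and, often, on the relevant law.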
What I’m interested in is measuring discrimination with respect to the context, and reducing discrimination at different points of the pipeline. There are several components in the AI pipeline: the data, the model and the outcome. Bias and discrimination can happen anywhere along this pipeline. In fairness-aware learning, discrimination prevention aims to remove discrimination by modifying the biased data and/or the predictive algorithms built on the data. On the data side, for instance, bias could result from an unbalanced dataset where the majority of the data describes only one group.
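As a toy illustration of the data side, the sketch below checks two simple things on a tabular dataset: how heavily each group is represented and how often each group receives a positive label. The column names and records are made up for the example.

```python
# Minimal sketch of a data-side bias check; the column names
# ("gender", "hired") and the records themselves are illustrative assumptions.
from collections import Counter

def group_representation(rows, group_key):
    """Share of the dataset that each group accounts for."""
    counts = Counter(row[group_key] for row in rows)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def group_positive_rates(rows, group_key, label_key):
    """Rate of positive labels (e.g. hired = 1) within each group."""
    totals, positives = Counter(), Counter()
    for row in rows:
        totals[row[group_key]] += 1
        positives[row[group_key]] += row[label_key]
    return {g: positives[g] / totals[g] for g in totals}

data = [
    {"gender": "male", "hired": 1},
    {"gender": "male", "hired": 1},
    {"gender": "male", "hired": 0},
    {"gender": "female", "hired": 0},
]

print(group_representation(data, "gender"))           # {'male': 0.75, 'female': 0.25}
print(group_positive_rates(data, "gender", "hired"))  # {'male': 0.66..., 'female': 0.0}
```

A heavily skewed representation or a large gap in positive rates does not prove discrimination on its own, but it is exactly the kind of signal that tells you to look more closely.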
Machine learning models can also be discriminatory. As researchers, we design models to be performance-oriented, and there are metrics we use to evaluate how good a model is. A model will mimic the patterns found in the data. So with historical discrimination in a dataset, you'll see an algorithm give more opportunities and favourable outcomes to an advantaged group while performing poorly for minority groups such as women or people of colour.
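A single aggregate metric can hide exactly this kind of gap. The sketch below, using made-up predictions and group labels, breaks accuracy and false-negative rate down by group so that the disparity becomes visible.

```python
# Minimal sketch of a per-group model evaluation.
# y_true, y_pred and the group labels are made-up illustrative data.

def per_group_rates(y_true, y_pred, groups):
    """Accuracy and false-negative rate broken down by group."""
    stats = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        correct = sum(y_true[i] == y_pred[i] for i in idx)
        positives = [i for i in idx if y_true[i] == 1]
        missed = sum(y_pred[i] == 0 for i in positives)
        stats[g] = {
            "accuracy": correct / len(idx),
            "false_negative_rate": missed / len(positives) if positives else None,
        }
    return stats

y_true = [1, 1, 0, 1, 1, 0]
y_pred = [1, 1, 0, 0, 0, 0]
groups = ["A", "A", "A", "B", "B", "B"]

# Overall accuracy is 4/6, yet every positive example in group B is missed.
print(per_group_rates(y_true, y_pred, groups))
```

Reporting these per-group numbers alongside the usual aggregate metrics is one simple way to surface the kind of disparity described above.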
What are some of the examples you’re seeing of algorithmic bias and unfairness in AI?
There was a famous study of facial recognition technologies conducted by Joy Buolamwini, a researcher in the MIT Media Lab's Civic Media Group. She tested different commercial facial recognition applications and realized that they couldn't recognize her face. The problem wasn't unique to her: the software detected the faces of white men quite well but performed poorly at recognizing the faces of Black women, because the data used to train the models was dominated by the faces of white men.
Another example of bias is an AI-based recruitment tool built by Amazon that rated women poorly on the hiring scale. The tool used a model trained to vet applications by observing resume patterns over a ten-year period, and it tended to favour applicants with more technical experience. Because men dominated technical positions during that time, the model learned to rate women's applications at the lower end of the scale.
All of these examples could have been detected and maybe even prevented if we had a human in the loop.
Are there any potential solutions for ensuring AI systems are trustworthy and fair?
If there's an application that uses AI to make decisions, then we need a human in the loop. The human needs to be aware of what biases might exist in the system and not rely too heavily on trusting it implicitly. In areas such as the legal system, AI shouldn't be the sole decision-maker in deciding whether someone should be imprisoned or how long the sentence should be. We need humans overseeing those decisions, and overturning the ones that are unfair.
Fairness in AI is an interdisciplinary and context-dependent field. Computer scientists have to work with legal experts in the relevant domains, whether that's health, business, education or law.
Many companies don't want to share their data or algorithms, and they don't want to take on the risk that a researcher may discover bias or discrimination in their algorithms. My hope is that we will have more regulation around AI. One solution, for example, involves AI auditors who work with a company to detect and reduce discrimination and bias.
Who is at risk if we fail to seriously examine the role of fairness and trustworthiness in AI?
Everyone, because you don't know when, how, or against whom an algorithm will discriminate. Obviously minority groups such as women are at risk in hiring applications, and Black people are at risk in policing and law enforcement tools.
Why is this work important to you?
I’m fighting for an ideology of an equitable society. Maybe I will not see that in my lifetime but I’m fighting for it, and that’s what’s giving me the energy to work on this topic. Hopefully the situation will be better for our children.
How is your role as a Canada CIFAR AI Chair and a core faculty member of Mila helping you move this research forward?
I'm grateful to be in Canada and to have this opportunity to work on something I believe in, to have support from CIFAR and the government, to hire people to work on this project, and to have access to a network of talented researchers. I was dedicated to this area even before I began working on it. I chose this field of research because this is who I am. I believe that we can have a future where we have access to all of this technology and it can be fair. In the next five years I believe I can actually solve some of these issues.