AI and Society

CIFAR launches new AI safety Solution Networks to address synthetic evidence in the legal system and linguistic inequality

By: Justine Brooks
November 19, 2025

Each Solution Network will receive $700,000 over two years to design, develop and implement AI safety solutions to pressing challenges

CIFAR has launched its first two AI safety Solution Networks under the Canadian AI Safety Institute (CAISI) Research Program at CIFAR. The two research teams – Safeguarding Courts from Synthetic AI Content and Mitigating Dialect Bias (the latter co-funded by the IDRC) – will spend the next two years developing and implementing open-source AI solutions to make AI safer and more inclusive for Canadians and the Global South. Each network is awarded $700,000 to support their groundbreaking research and development.

The Solution Networks are funded through the CAISI Research Program at CIFAR, an independent, multidisciplinary research arm led by CIFAR. The dedicated research program is a core component of the Government of Canada’s Canadian AI Safety Institute, launched in November 2024 with a $50 million investment to address the evolving risks of AI to Canadians.

“AI safety is crucial as the technology becomes more deeply embedded in how we live and work. At its core, it’s about two things — building trust and developing the tools to uphold it,” says the Honourable Evan Solomon, Minister of Artificial Intelligence and Digital Innovation and Minister responsible for the Federal Economic Development Agency for Southern Ontario. “Trust that AI will be used responsibly, and tools that make it safer, fairer, and more transparent. These new Solution Networks show how Canadian researchers are advancing the science of safety itself — turning ideas into real solutions that make AI work for people.”

“CIFAR’s Solution Networks provide a unique approach to trustworthy AI research and development, bringing together exceptional teams of interdisciplinary researchers – who might not otherwise cross paths – to address issues of global importance, but more importantly, to design, develop and implement solutions,” says Elissa Strome, Executive Director, Pan-Canadian AI Strategy at CIFAR. “Core to the work of both of these Solution Networks is exploring ways to mitigate the potential harms of AI to people in Canada and around the world.”

Safeguarding Courts from Synthetic AI Content

Solution Network Members

  • Ebrahim Bagheri, Solution Network Co-director (University of Toronto)
  • Maura R. Grossman, Solution Network Co-director (University of Waterloo, Osgoode Hall Law School (York University), Vector Institute)
  • Karen Eltis, Solution Network Member (University of Ottawa)
  • Jacquelyn Burkell, Solution Network Member (Western University) 
  • Vered Shwartz, Solution Network Member (University of British Columbia, Canada CIFAR AI Chair, Vector Institute) 
  • Yuntian Deng, Solution Network Member (University of Waterloo) 

Co-directed by Ebrahim Bagheri and Maura R. Grossman, this Solution Network aims to address the rising prevalence of synthetic AI-generated content in the justice system. This includes fake image or video evidence generated by people using AI tools, but also court documents that are created using large language models (LLMs) such as ChatGPT that may produce hallucinations.

“The issue now is that you can do this at scale and at convenience,” Bagheri says. Previously, one would have to spend large amounts of time and money to forge evidence. Now, evidence can be doctored quickly and easily, and even fabricated entirely from scratch.

The stakes are incredibly high, says Grossman. “Somebody can go to jail or not go to jail depending on whether something is a real or fake video.”

It’s not always financially feasible to bring in an expert who can evaluate the provenance of AI-generated content or evidence. The team proposes to develop a free, open-source framework that anyone within the court system can use to identify potentially problematic content. 

“We need a [transparent] tool that knows when it’s not sure about its output. One that is user friendly for this very unique group of users including both self-represented litigants and officers in the court system,” adds Grossman. 

Their solution could have a major impact on the efficiency and trustworthiness of a justice system that is facing a great amount of change in a short period of time. “Even if our solution isn’t perfect, even if it gets 50, 60 or 70 percent of the way toward being able to rule out [synthetic content], then we’ve really come a long way for the court system.”

Mitigating Dialect Bias

Solution Network Members

  • Laleh Seyyed-Kalantari, Solution Network Co-director (York University, Vector Institute)
  • Blessing Ogbuokiri, Solution Network Co-director (Brock University)
  • Wenhu Chen, Solution Network Member (University of Waterloo, Canada CIFAR AI Chair, Vector Institute) 
  • Collins Nnalue Udanor, Solution Network Member (University of Nigeria)
  • Thomas-Michael Emeka Chukwumezie, Solution Network Member (University of Nigeria)
  • Deborah Damilola Adeyemo, Solution Network Member (University of Ibadan)

The use of LLMs like ChatGPT has exploded in recent years, but for speakers of non-standard English, these tools are not as safe or effective as they are for others. This is the problem Laleh Seyyed-Kalantari and Blessing Ogbuokiri are working to address. 

Their Solution Network focuses on Nigerian Pidgin English, a language spoken by over 140 million people, primarily in West Africa. LLMs trained on standard English often misinterpret marginalized dialects like Pidgin as toxic or offensive and penalize the user. This can lead to very real harms like censorship on social media and discrimination in service-delivery systems. 

The team will work to create the first-ever bias and safety benchmarks for Pidgin English as part of an open-source audit and mitigation toolkit. These resources will be available to developers and policymakers to help ensure AI systems are fair and safe for all users. “We are trying to create an AI system where marginalized voices can feel comfortable using these tools because it will accommodate them,” adds Ogbuokiri.

The team will also work with a citizen network in Nigeria, whose members will help evaluate the data sets and LLMs used in the project. “I think what makes our solution unique is that it is locally rooted and culturally representative of citizens of African countries,” explains Seyyed-Kalantari.

The team also has a policymaking objective, adds Seyyed-Kalantari. “We want to ensure that the research that we are developing […] brings actual positive changes for people who are using these LLMs in Africa.”

Ogbuokiri notes the impact this project could have beyond West Africa for immigrant and Indigenous communities in Canada who also use non-standard English varieties. “This will serve as a vital public resource for researchers, developers and policymakers,” he states. “This project will contribute to locally-grounded and culturally-relevant AI systems that reflect the realities of the Global South.”


About the CAISI Research Program at CIFAR

The CAISI Research Program at CIFAR is a component of the Canadian AI Safety Institute, launched by Innovation, Science and Economic Development Canada. It is the scientific engine of a broad national effort to promote the safe and responsible development and deployment of AI, independently leading Canadian, multidisciplinary research to find solutions to complex AI safety challenges and to develop practical tools for responsible AI so that AI is safe for all Canadians.


