CIFAR Pan-Canadian AI Strategy

Creating smart computers using the human brain

By: Amii
3 Oct, 2019

Alona Fyshe is an Amii Fellow, an assistant professor at the University of Alberta, and a Canada CIFAR AI Chair, part of CIFAR’s prestigious program to retain, recruit and support top AI research talent in Canada.

How do computers learn about our world? Turns out, it may not be that different from how humans do.

Alona Fyshe is an Amii Fellow at the University of Alberta, cross-appointed between computing science and psychology. Her research focuses on using computers to understand how the brain processes language, and using that information to improve computer models.

As an undergrad, Fyshe was inspired by Digital Biology, a book by Peter J. Bentley exploring computational biology. The field was a natural fit for Fyshe, who was taking genetics classes while working on her computing science degree. When she found out that a research group at UAlberta was actually doing computational biology, she began an 18-month internship with the group, starting her down the path to both AI and research.

Recently, her research has focused on how computer models of the world relate to the human brain. Using computer models of both language and vision, Fyshe explores how machines perceive the world around them, in contrast to how humans do the same.

“It’s surprising and intriguing that computer models of vision and those of language have anything to do with each other. They’re both completely different data sources, they’re different models, and they’re trained in different ways. It’s just such a different paradigm, but they learn the same thing,” explains Fyshe.

“It’s surprising and intriguing that computer models of vision and those of language have anything to do with each other.”

For example, both models can observe that cats and dogs are more similar than cats and cars, or that horses and cows are both found on farms. Both models are able to learn the patterns of things that are similar — through vision by looking at images, and also through language by reading texts from the Internet.
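The similarity structure described above can be sketched with a toy example. The vectors below are invented illustrative "embeddings" (not from any real vision or language model), but they show the kind of geometry both model families converge on: concepts that behave alike in the world end up close together, however the representation was learned.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional embeddings (hypothetical values chosen for
# illustration; the dimensions loosely track animacy, domesticity, size).
cat = [0.9, 0.8, 0.2]
dog = [0.9, 0.7, 0.3]
car = [0.0, 0.1, 0.6]

# Cats and dogs come out closer to each other than either is to cars,
# mirroring the similarity pattern both kinds of model learn.
print(cosine(cat, dog) > cosine(cat, car))  # True
print(cosine(cat, dog) > cosine(dog, car))  # True
```

Whether the vectors come from pixels or from text, comparing them with the same similarity measure is what makes it possible to ask how closely the two learned spaces, and the brain's own representation, line up.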

“The world is just all about relationships — patterns of what things are and how they relate to each other. You can learn about that by watching the world, and also by reading about the world. And people do that, of course, which is why it’s also related to how the brain represents meaning. So all three of these things are all interconnected and they all show patterns that are similar.”

She uses her research to then improve computer models of language and vision, allowing models to behave in a more human-like fashion.

“One of the things I’m working on now is improving computer vision models by including brain imaging data. You can imagine that if a computer model of vision knows how people represent meaning, it might be a better model for building — for example — a self-driving car. It might make mistakes that are more human than just a computer model trained by itself.”

Alona Fyshe completed her PhD at Carnegie Mellon University, where she was advised by Tom Mitchell. She completed her BSc and MSc at the University of Alberta.

This story was featured in the AICan Bulletin. Subscribe to the bi-monthly email publication to keep up to date on AI in Canada.
