
Sanja Fidler

Appointment

  • Canada CIFAR AI Chair
  • Pan-Canadian AI Strategy

Connect

  • University of Toronto
  • Google Scholar

About

Sanja Fidler is a Canada CIFAR AI Chair at the Vector Institute, an associate professor in the Department of Mathematical and Computational Sciences at the University of Toronto, and the Director of AI at NVIDIA.

Fidler’s work is in computer vision. Her main research interests are 2D and 3D object detection (particularly scalable multi-class detection), object segmentation and image labeling, and 3D scene understanding. She is also interested in the interplay between language and vision: generating sentence-level descriptions of complex scenes, and using textual descriptions to improve scene parsing (e.g., in human-robot interaction).

Awards

  • Best Paper Honorable Mention, CVPR, 2017
  • Amazon Academic Research Award, 2017
  • NVIDIA Pioneer of AI Award, 2016
  • Facebook Faculty Award, 2016
  • Outstanding Reviewer Award, ECCV (2008, 2012) and CVPR (2012, 2015)

Relevant Publications

  • Zhou, B., Zhao, H., Puig, X., Xiao, T., Fidler, S., Barriuso, A., & Torralba, A. (2019). Semantic understanding of scenes through the ADE20K dataset. International Journal of Computer Vision, 127(3), 302-321.

  • Damen, D., Doughty, H., Farinella, G. M., Fidler, S., Furnari, A., Kazakos, E., … & Wray, M. (2018). Scaling egocentric vision: The EPIC-KITCHENS dataset. In Proceedings of the European Conference on Computer Vision (ECCV) (pp. 720-736).

  • Zhou, B., Zhao, H., Puig, X., Fidler, S., Barriuso, A., & Torralba, A. (2017). Scene parsing through ADE20K dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 5122-5130).

  • Kiros, R., Zhu, Y., Salakhutdinov, R. R., Zemel, R., Urtasun, R., Torralba, A., & Fidler, S. (2015). Skip-thought vectors. In Advances in Neural Information Processing Systems (pp. 3294-3302).

  • Zhu, Y., Kiros, R., Zemel, R., Salakhutdinov, R., Urtasun, R., Torralba, A., & Fidler, S. (2015). Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE International Conference on Computer Vision (ICCV) (pp. 19-27).

Institution

  • NVIDIA
  • University of Toronto
  • Vector Institute

Department

Mathematical and Computational Sciences

Education

  • PhD (Computer Science), University of Ljubljana

Country

  • Canada

  • © Copyright 2022 CIFAR. All Rights Reserved.
  • Charitable Registration Number: 11921 9251 RR0001