Aishwarya Agrawal

Appointment

Canada CIFAR AI Chair

Pan-Canadian AI Strategy

Connect

Université de Montréal

Google Scholar

About

Aishwarya Agrawal is a Canada CIFAR AI Chair and an assistant professor in the Department of Computer Science and Operations Research (DIRO) at Université de Montréal. She is also a research scientist at DeepMind’s Montréal office.

Her research interests lie at the intersection of computer vision, deep learning and natural language processing, with a focus on developing artificial intelligence (AI) systems that can ‘see’ (i.e., understand the contents of an image: who, what, where, doing what?) and ‘talk’ (i.e., communicate that understanding to humans in free-form natural language).

Awards

  • NVIDIA Graduate Fellowship, 2018
  • Rising Star in EECS, 2018
  • Best Poster Award, Object Understanding for Interaction Workshop, International Conference on Computer Vision, 2015

Relevant Publications

  • Agrawal, A., Batra, D., Parikh, D., & Kembhavi, A. (2018). Don’t just assume; look and answer: Overcoming priors for visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 4971-4980).

  • Ramakrishnan, S., Agrawal, A., & Lee, S. (2018). Overcoming language priors in visual question answering with adversarial regularization.

  • Agrawal, A., Kembhavi, A., Batra, D., & Parikh, D. (2017). C-VQA: A compositional split of the Visual Question Answering (VQA) v1.0 dataset.

  • Agrawal, A., Batra, D., & Parikh, D. (2016). Analyzing the behavior of visual question answering models.

  • Antol, S., Agrawal, A., Lu, J., Mitchell, M., Batra, D., Zitnick, C. L., & Parikh, D. (2015). VQA: Visual question answering. In Proceedings of the IEEE International Conference on Computer Vision (pp. 2425-2433).

Institution

DeepMind

Mila

Université de Montréal

Department

Computer Science and Operations Research (DIRO)

Education

  • PhD, Georgia Tech

Country

Canada
