Serena Booth

Appointment

CIFAR Azrieli Global Scholar 2025–2027

Innovation, Equity, & The Future of Prosperity

About

I study the design of safe and trustworthy AI (and sometimes robot) systems, focusing on how humans specify what AI should do and how humans assess the decisions AI makes. For example, I study how best to interpret the different types of specifications people might use: mathematical instructions, preferences over AI system outputs, corrections to AI system behaviors, or other forms. I then help people understand what the AI has learned from their specifications. Because AI is a powerful technology, I have also worked to legislate and regulate it through my role as an AI Policy Advisor in the U.S. Senate.

Awards

  • AI Policy Fellow, American Association for the Advancement of Science, 2023
  • Rising Star in EECS, University of Texas at Austin, 2022
  • Graduate Research Fellowship, U.S. National Science Foundation, 2018
  • MIT Presidential Fellowship, 2018

Relevant Publications

  • Booth, S., Knox, W. B., Shah, J., Niekum, S., Stone, P., & Allievi, A. (2023, June). The perils of trial-and-error reward design: misdesign through overfitting and invalid task specifications. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 37, No. 5, pp. 5920-5929).
  • Knox, W. B., Hatgis-Kessell, S., Booth, S., Niekum, S., Stone, P., & Allievi, A. (2022). Models of human preference for learning reward functions. Transactions on Machine Learning Research.
  • Booth, S., Sharma, S., Chung, S., Shah, J., & Glassman, E. L. (2022, March). Revisiting human-robot teaching and learning through the lens of human concept learning. In 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 147-156). IEEE.

Institution

Brown University

Department

Department of Computer Science

Education

  • PhD (Computer Science), Massachusetts Institute of Technology
  • MS (Computer Science), Massachusetts Institute of Technology
  • BA (Computer Science), Harvard University

Country

United States
