Serena Booth
Appointment
CIFAR Azrieli Global Scholar 2025-2027
Innovation, Equity, & The Future of Prosperity
About
I study the design of safe and trustworthy AI (and sometimes robot) systems, focusing on how humans specify what AI should do and on how humans assess how AI makes decisions. For example, I study how best to interpret the different types of specifications people might use: mathematical instructions, preferences over AI system outputs, corrections to AI system behaviors, or other forms. I then support people in understanding what the AI has learned from the specifications they provided. Because AI is a powerful technology, I have also worked to legislate and regulate it through my role as an AI Policy Advisor in the U.S. Senate.
Awards
- AI Policy Fellow, American Association for the Advancement of Science, 2023
- Rising Star in EECS, University of Texas at Austin, 2022
- Graduate Research Fellowship, U.S. National Science Foundation, 2018
- MIT Presidential Fellowship, 2018
Relevant Publications
- Booth, S., Knox, W. B., Shah, J., Niekum, S., Stone, P., & Allievi, A. (2023, June). The perils of trial-and-error reward design: misdesign through overfitting and invalid task specifications. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 37, No. 5, pp. 5920-5929).
- Knox, W. B., Hatgis-Kessell, S., Booth, S., Niekum, S., Stone, P., & Allievi, A. (2022). Models of human preference for learning reward functions. Transactions on Machine Learning Research.
- Booth, S., Sharma, S., Chung, S., Shah, J., & Glassman, E. L. (2022, March). Revisiting human-robot teaching and learning through the lens of human concept learning. In 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 147-156). IEEE.