About
Pieter Abbeel works in machine learning and robotics.
In particular, his research focuses on apprenticeship learning (robots learning from people), reinforcement learning (robots learning through their own trial and error), and speeding up skill acquisition through learning-to-learn. Abbeel's robots have learned advanced helicopter aerobatics, knot-tying, and basic assembly, and can organize laundry. His research group has pioneered deep reinforcement learning for robotics, including the learning of visuomotor skills and simulated locomotion.
Awards
- Best Paper Award Winner or Finalist: NIPS, ICRA (four times), and ICML
- Presidential Early Career Award for Scientists and Engineers (PECASE)
- IEEE Robotics and Automation Society Early Career Award
- Young Investigator Program Awards: ONR, AFOSR, DARPA, NSF
- Dick Volz Best U.S. PhD Thesis in Robotics and Automation Award
Relevant Publications
Duan, Y. et al. “One-shot Imitation Learning.” Paper presented at Neural Information Processing Systems (NIPS), 2017.
Tobin, J. et al. “Domain Randomization for Transferring Deep Neural Networks from Simulation to the Real World.” In the Proceedings of the 30th IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, September 2017.
Finn, C., P. Abbeel, and S. Levine. “Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks.” In the Proceedings of the International Conference on Machine Learning (ICML), Sydney, August 2017.
Chen, X. et al. “InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets.” Paper presented at Neural Information Processing Systems (NIPS), 2016.
Schulman, J. et al. “Trust Region Policy Optimization.” In the Proceedings of the 32nd International Conference on Machine Learning (ICML), Lille, July 2015.