
Marc G. Bellemare
About
Marc G. Bellemare’s research lies at the intersection of reinforcement learning and statistical prediction.
His work spans both theoretical and practical contributions, including a novel distributional treatment of reinforcement learning, a theory of exploration in high-dimensional state spaces, the development of the highly successful Arcade Learning Environment for evaluating artificial agents, and work in deep reinforcement learning. His long-term goal is the design of generally competent agents: agents that can operate successfully in a wide range of environments and eventually exhibit the full range of behaviour we attribute to humans, including curiosity, boredom, competence, and emergent communication.
Relevant Publications
- Bellemare, M., Y. Naddaf, J. Veness and M. Bowling. "The Arcade Learning Environment: An Evaluation Platform for General Agents." Journal of Artificial Intelligence Research (2013).
- Mnih, V. et al. "Human-level control through deep reinforcement learning." Nature (2015).
- Bellemare, M.*, W. Dabney* and R. Munos. "A distributional perspective on reinforcement learning." International Conference on Machine Learning (2017).
- Bellemare, M., S. Srinivasan, G. Ostrovski, T. Schaul, D. Saxton and R. Munos. "Unifying count-based exploration and intrinsic motivation." Advances in Neural Information Processing Systems (2016).