About
Colin Raffel is an associate professor in computer science at the University of Toronto, an associate research director at the Vector Institute, and a faculty researcher at Hugging Face. His lab does research on decentralizing, democratizing, and de-risking large-scale AI.
Awards
- NeurIPS Outstanding Paper (Runner-Up), 2023
- Caspar Bowden Award for Outstanding Research in Privacy Enhancing Technologies (Runner-Up), 2023
- Best Paper Honorable Mention, NeurIPS Workshop on Broadening Research Collaborations in ML, 2022
- NSF CAREER Award, 2022
- Google Research Award, 2021
Relevant Publications
- Muqeeth, M., Liu, H., Liu, Y., & Raffel, C. (2024). Learning to route among specialized experts for zero-shot generalization. Proceedings of the 41st International Conference on Machine Learning (ICML).
- Yadav, P., Tam, D., Choshen, L., Raffel, C., & Bansal, M. (2023). TIES-Merging: Resolving interference when merging models. Proceedings of the 37th Conference on Neural Information Processing Systems (NeurIPS).
- Kandpal, N., Lester, B., Muqeeth, M., Mascarenhas, A., Evans, M., Baskaran, V., Huang, T., Liu, H., & Raffel, C. (2023). Git-Theta: A Git extension for collaborative development of machine learning models. Proceedings of the 40th International Conference on Machine Learning (ICML).
- Kandpal, N., Deng, H., Roberts, A., Wallace, E., & Raffel, C. (2023). Large language models struggle to learn long-tail knowledge. Proceedings of the 40th International Conference on Machine Learning (ICML).
- Raffel, C. (2023). Building machine learning models like open-source software. Communications of the ACM (CACM).
- Liu, H., Tam, D., Muqeeth, M., Mohta, J., Huang, T., Bansal, M., & Raffel, C. (2022). Few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning. Proceedings of the 36th Conference on Neural Information Processing Systems (NeurIPS).