Yuntian Deng
Appointment
Solution Network Member
Canadian AI Safety Institute Research Program
Safeguarding Courts from Synthetic AI Content
About
Yuntian Deng’s research focuses on natural language processing. He develops learning-based methods to better understand and improve human-AI interaction. His early work on residual energy-based models analyzed the distinction between human- and machine-written language and used this understanding to improve the quality of AI-generated text. He leads WildChat, a project that has collected millions of real human-AI conversations, been used by organizations such as OpenAI and Anthropic, and been featured in The Washington Post. His work on Implicit Chain-of-Thought, featured in TechCrunch, investigates the reasoning limitations of current language models and introduces a new research direction, implicit reasoning, which enables models to reason internally without relying on explicit, think-aloud-style reasoning. He also explores new human-AI interaction paradigms, such as Interactive Training, where humans guide model optimization through live feedback, much like teachers adapting to students in real time, and NeuralOS, which reimagines user-AI interfaces as dynamically generated rather than rigidly preprogrammed.
Awards
- ACM Gordon Bell Prize, Association for Computing Machinery (2022)
- Rising Stars in Data Science, University of Chicago (2022)
- NVIDIA Fellowship, NVIDIA (2021)
- Baidu Fellowship, Baidu (2019)
- ACL Best Demo Paper Award Runner-Up, Association for Computational Linguistics (2017)
Relevant Publications
- Zhang, W., Lu, Y. Y., & Deng, Y. (2025). “Interactive Training: Feedback-Driven Neural Network Optimization.” EMNLP 2025 Demo.
- Rivard, L., Sun, S., Guo, H., Chen, W., & Deng, Y. (2025). “NeuralOS: Towards Simulating Operating Systems via Neural Generative Models.” arXiv preprint arXiv:2507.08800.
- Zhao, W., Ren, X., Hessel, J., Cardie, C., Choi, Y., & Deng, Y. (2024). “WildChat: 1M ChatGPT Interaction Logs in the Wild.” ICLR 2024.
- Deng, Y., Choi, Y., & Shieber, S. (2024). “From Explicit CoT to Implicit CoT: Learning to Internalize CoT Step by Step.” arXiv preprint arXiv:2405.14838.
- Deng, Y., Bakhtin, A., Ott, M., Szlam, A., & Ranzato, M. (2020). “Residual Energy-Based Models for Text Generation.” ICLR 2020.