Drawing Connections in Decentralized Deep Learning
- đ¤ Speaker: Max Ryabinin
- đ Date & Time: Wednesday 18 June 2025, 15:00 - 16:00
- đ Venue: Computer Lab, LT1
Abstract
Recently, the field of Machine Learning has seen renewed interest in communication-efficient training over slow, unreliable, and heterogeneous networks. While the latest results and their applications to LLMs are highly promising, their underlying ideas have surprisingly many connections to well-established approaches to distributed ML. In this talk, I will provide an overview of recent developments in decentralized training within the broader context of areas such as volunteer computing, communication-efficient optimization, and federated learning. In addition, I will present our research in this field, ranging from Learning@home/DMoE to Petals, and share some lessons learned about ML research in general during the development of these methods.
Bio
Max Ryabinin is VP of Research & Development at Together AI, working on large-scale deep learning. Previously, he was a Senior Research Scientist at Yandex, studying a wide range of topics in natural language processing and efficient machine learning. During his PhD, he developed methods for distributed training and inference over slow and unstable networks, such as DeDLOC, SWARM Parallelism, and Petals. He is also the creator and maintainer of Hivemind, a highly popular open-source framework for decentralized training in PyTorch.
Series
This talk is part of the Cambridge ML Systems Seminar Series.
Included in Lists
- All Talks (aka the CURE list)
- bld31
- Cambridge talks
- Computer Lab, LT1
- Department of Computer Science and Technology talks and seminars
- Interested Talks
- School of Technology
- Trust & Technology Initiative - interesting events
- yk449