
Federated Learning


If you have a question about this talk, please contact jg801.

The goal of federated learning is to perform distributed training without centralising data. The training dataset is split across multiple devices or clients, potentially in a non-IID way. The key motivating factors are the security and privacy of personal data. Use cases include training on data from customers’ mobile phones, and collaboratively training a healthcare model on patient data held by different hospitals. Communication is usually the limiting factor, and the field has grown rapidly in recent years, with many communication-efficient algorithms proposed. We will start by considering federated learning’s cryptographic origins. We will then explicitly consider its core challenges. We will build up from the simplest proposed SGD algorithms to more recent Bayesian algorithms. We will also briefly consider techniques that enhance security (like Secure Multi-Party Computation) and privacy (like Differential Privacy), which are of crucial importance in federated learning.
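To make the setup concrete, below is a minimal sketch of Federated Averaging (FedAvg), the canonical simple algorithm in this family: each client runs local gradient steps on its own (non-IID) data shard, and a server averages the resulting models, weighted by shard size. All names, shard sizes, and hyperparameters here are illustrative assumptions, not a reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])  # shared ground-truth model (illustrative)

def make_shard(n, shift):
    # Non-IID split: each client's inputs come from a shifted distribution.
    X = rng.normal(loc=shift, size=(n, 2))
    y = X @ true_w + 0.01 * rng.normal(size=n)
    return X, y

# Three clients with different data sizes and input distributions.
shards = [make_shard(n, shift) for n, shift in [(50, 0.0), (80, 1.5), (30, -1.0)]]

def local_update(w, X, y, lr=0.05, epochs=5):
    # Each client trains locally (full-batch gradient steps on MSE loss);
    # the raw data never leaves the client.
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

w_global = np.zeros(2)
for _ in range(20):  # communication rounds
    client_models = [local_update(w_global, X, y) for X, y in shards]
    sizes = np.array([len(y) for _, y in shards])
    # Server aggregates: average of client models, weighted by shard size.
    w_global = np.average(client_models, axis=0, weights=sizes)

print(w_global)  # approaches true_w
```

Only model parameters cross the network, which is why communication cost per round (and the number of rounds) dominates; the privacy and security techniques mentioned above (Secure Multi-Party Computation, Differential Privacy) would be applied to these exchanged updates.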

This talk is part of the Machine Learning Reading Group @ CUED series.



© 2006-2020 Talks.cam, University of Cambridge.