BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Multi-agent learning: Implicit regularization and order-optimal go
 ssip - Patrick Rebeschini (University of Oxford)
DTSTART:20180614T090000Z
DTEND:20180614T100000Z
UID:TALK107269@talks.cam.ac.uk
CONTACT:INI IT
DESCRIPTION:In distributed machine learning\, data are stored and processe
 d in multiple locations by different agents. Each agent is represented by 
 a node in a graph\, and communication is allowed between neighbours. In th
 e decentralised setting typical of peer-to-peer networks\, there is no cen
 tral authority that can aggregate information from all the nodes. A typica
 l setting involves agents cooperating with their peers to learn models tha
 t can perform better on new\, unseen data. In this talk\, we present the f
 irst results on the generalisation capabilities of distributed stochastic 
 gradient descent methods. Using algorithmic stability\, we derive upper bo
 unds for the test error and provide a principled approach for implicit reg
 ularization\, tuning the learning rate and the stopping time as a function
  of the graph topology. We also present a new gossip protocol for the aggr
 egation step in distributed methods that can yield order-optimal communica
 tion complexity. Based on non-reversible Markov chains\, our protocol is l
 ocal and does not require global routing\, hence improving on existing met
 hods. (Joint work with Dominic Richards)
LOCATION:Seminar Room 2\, Newton Institute
END:VEVENT
END:VCALENDAR
