BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Quantum Expectation Maximization and algorithms for learning repre
 sentations - Alessandro Luongo (IRIF\, Paris)
DTSTART:20191121T120000Z
DTEND:20191121T130000Z
UID:TALK135352@talks.cam.ac.uk
CONTACT:Sathyawageeswar Subramanian
DESCRIPTION:The Expectation-Maximization (EM) algorithm is a fundamental t
 ool in unsupervised machine learning. It is often used as an efficient wa
 y to solve Maximum Likelihood (ML) and Maximum A Posteriori (MAP) estima
 tion problems\, especially for models with latent variables. It is also t
 he algorithm of choice to fit mixture models. In this talk we define and u
 se a quantum version of EM to fit a Gaussian Mixture Model (GMM). We sta
 rt by introducing in great detail all the tools used in quantum machine l
 earning (QRAM\, distance estimation procedures\, quantum linear algebra
 \, etc.). Then we present q-means: a quantum algorithm for k-means. We g
 eneralize the q-means algorithm to fit a GMM. Our algorithms are only po
 lylogarithmic in the number of elements in the training set\, but polyno
 mial in other parameters\, such as the dimension of the feature space an
 d the number of components in the mixture. We'll discuss some experiment
 s concerning the runtime of these algorithms on real datasets. We conclu
 de by analyzing prospective relations between quantum iterative algorith
 ms and the Information Bottleneck Method.
LOCATION:MR15\, Centre for Mathematical Sciences\, Wilberforce Road\, Camb
 ridge
END:VEVENT
END:VCALENDAR
