BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Deep learning for time series - Christof Angermueller (University
 of Cambridge)\; David Zoltowski (University of Cambridge)
DTSTART:20160121T143000Z
DTEND:20160121T160000Z
UID:TALK62021@talks.cam.ac.uk
CONTACT:Yingzhen Li
DESCRIPTION:Abstract:\nNeural networks have achieved state-of-the-art resu
 lts in pattern recognition tasks. Additionally\, recurrent neural networks
  (RNNs) have achieved remarkable results in tasks such as text translation
 \, speech recognition\, and image caption generation. In this talk\, we wi
 ll motivate using RNNs for temporal and sequential data\, introduce RNNs a
 nd the problem of vanishing and exploding gradients\, and discuss ways to 
 address these problems\, specifically by using long short-term memory (LST
 M) cells and gated recurrent units (GRUs). In the second half of the talk\
 , we will describe generative RNNs for sequence modeling and outline laten
 t-variable RNNs for generating high-dimensional and temporally correlated 
 music data.\n\n* Lipton\, "A Critical Review of Recurrent Neural Networks 
 for Sequence Learning."\n\n* Greff et al.\, "LSTM: A Search Space Odyssey.
 "\n\n* Graves\, "Generating Sequences With Recurrent Neural Networks."\n\n
 * Boulanger-Lewandowski\, Bengio\, and Vincent\, "Modeling Temporal Depend
 encies in High-Dimensional Sequences."
LOCATION:Engineering Department\, CBL Room 438
END:VEVENT
END:VCALENDAR
