Deep learning for time series

If you have a question about this talk, please contact Yingzhen Li.

Abstract: Neural networks have achieved state-of-the-art results in pattern recognition tasks, and recurrent neural networks (RNNs) in particular have achieved remarkable results in tasks such as machine translation, speech recognition, and image caption generation. In this talk, we will motivate the use of RNNs for temporal and sequential data, introduce RNNs and the problem of vanishing and exploding gradients, and discuss ways to address these problems, specifically long short-term memory (LSTM) cells and gated recurrent units (GRUs). In the second half of the talk, we will describe generative RNNs for sequence modeling and outline latent-variable RNNs for generating high-dimensional, temporally correlated music data.
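
As background for the gradient discussion above, here is the standard argument for why gradients in a vanilla RNN vanish or explode; the recurrence and notation below are assumptions made for this note, not taken from the talk:

    % Vanilla RNN recurrence (notation assumed): h_t = \tanh(W h_{t-1} + U x_t + b)
    \frac{\partial h_T}{\partial h_t}
      = \prod_{k=t+1}^{T} \frac{\partial h_k}{\partial h_{k-1}}
      = \prod_{k=t+1}^{T} \operatorname{diag}\!\left(1 - h_k \odot h_k\right) W

Since the derivative of tanh is at most 1, the norm of each factor is bounded by that of W, so over a long gap T - t the product shrinks geometrically when the largest singular value of W is below 1 (vanishing gradients) and can grow without bound when it is above 1 (exploding gradients).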
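
To make the LSTM gating concrete, a minimal NumPy sketch of a single LSTM step follows; the parameter shapes, gate ordering, and function names are illustrative assumptions, not code from the talk or from any particular library:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def lstm_step(x, h_prev, c_prev, W, U, b):
        """One LSTM step. Shapes (assumed): W is (4H, D), U is (4H, H),
        b is (4H,). Stacked gate order: input, forget, output, candidate."""
        H = h_prev.shape[0]
        z = W @ x + U @ h_prev + b
        i = sigmoid(z[0:H])        # input gate: how much new content to write
        f = sigmoid(z[H:2*H])      # forget gate: how much old cell state to keep
        o = sigmoid(z[2*H:3*H])    # output gate: how much cell state to expose
        g = np.tanh(z[3*H:4*H])    # candidate cell content
        c = f * c_prev + i * g     # additive cell update
        h = o * np.tanh(c)         # hidden state passed to the next time step
        return h, c

    # Toy usage on a random sequence (dimensions are arbitrary).
    rng = np.random.default_rng(0)
    D, H, T = 8, 16, 20
    W = rng.normal(size=(4 * H, D))
    U = rng.normal(size=(4 * H, H))
    b = np.zeros(4 * H)
    h, c = np.zeros(H), np.zeros(H)
    for t in range(T):
        h, c = lstm_step(rng.normal(size=D), h, c, W, U, b)

The additive cell update c = f * c_prev + i * g is the key design choice: the forget gate multiplies the previous cell state directly, so gradients can flow across many steps without repeatedly passing through a squashing nonlinearity, which is how the LSTM mitigates the vanishing-gradient problem sketched above.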

  • Lipton, “A Critical Review of Recurrent Neural Networks for Sequence Learning.”
  • Greff et al., “LSTM: A Search Space Odyssey.”
  • Graves, “Generating Sequences With Recurrent Neural Networks.”
  • Boulanger-Lewandowski, Bengio, and Vincent, “Modeling Temporal Dependencies in High-Dimensional Sequences: Application to Polyphonic Music Generation and Transcription.”

This talk is part of the Machine Learning Reading Group @ CUED series.
