Will recurrent neural network language models scale?

If you have a question about this talk, please contact Rogier van Dalen.

In “Up from trigrams! The struggle for improved language models” Fred Jelinek described the first use of trigrams in 1976 and then lamented, “The surprising fact is that now, a full 15 years later, after all the solid progress in speech recognition, the trigram model remains fundamental”. Almost two decades after that paper the situation was largely unchanged, but in 2010 Tomas Mikolov presented the “Recurrent neural network based language model” (RNN LM). After many decades we now have a new means of language modelling that is clearly much better than the n-gram. Having actively pioneered the use of RNNs in the ’80s and ’90s, the speaker is concerned as to whether RNNs will continue to outperform or whether there will be another “neural net winter”. This talk addresses the question of whether RNN LMs will scale by examining the scaling properties of n-grams and then doing the same for RNN LMs. Scaling is considered in terms of the number of LM training words, the number of parameters, processing power and memory. Preliminary results will be presented showing the largest reductions in perplexity reported so far, an analysis of performance on frequent and rare words, results on the newly released one-billion-word language modelling benchmark, and the impact on word error rates in a commercial LVCSR system. The talk concludes with an assessment of whether RNN LMs will scale relative to the previously incumbent n-grams.
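
For readers unfamiliar with the model class being compared against n-grams, the following is a minimal, illustrative Python sketch of an Elman-style RNN language model and of perplexity, the evaluation measure mentioned in the abstract. It is not the speaker’s system; the vocabulary size, layer sizes, random weights and toy word sequence are all invented for illustration.

    # Minimal sketch (not the speaker's system) of a recurrent neural network
    # language model in the style of Mikolov (2010): an Elman recurrence over
    # word embeddings, a softmax over the vocabulary, and perplexity as the
    # evaluation measure. All sizes and weights are arbitrary toy values.
    import numpy as np

    rng = np.random.default_rng(0)
    vocab_size, embed_dim, hidden_dim = 50, 16, 32

    # Randomly initialised parameters; a real model would train these.
    E = rng.normal(0, 0.1, (vocab_size, embed_dim))      # word embeddings
    W_xh = rng.normal(0, 0.1, (embed_dim, hidden_dim))   # input -> hidden
    W_hh = rng.normal(0, 0.1, (hidden_dim, hidden_dim))  # hidden -> hidden (recurrence)
    W_hy = rng.normal(0, 0.1, (hidden_dim, vocab_size))  # hidden -> output logits

    def softmax(z):
        z = z - z.max()
        e = np.exp(z)
        return e / e.sum()

    def rnn_lm_log_probs(word_ids):
        """Return log P(w_t | history) for each predicted position."""
        h = np.zeros(hidden_dim)
        log_probs = []
        for prev, nxt in zip(word_ids[:-1], word_ids[1:]):
            h = np.tanh(E[prev] @ W_xh + h @ W_hh)  # hidden state carries the history
            p = softmax(h @ W_hy)                   # distribution over the next word
            log_probs.append(np.log(p[nxt]))
        return np.array(log_probs)

    # Perplexity = exp(-average log-likelihood per predicted word).
    sentence = rng.integers(0, vocab_size, size=12)  # toy "sentence" of word ids
    lp = rnn_lm_log_probs(sentence)
    print("perplexity:", np.exp(-lp.mean()))         # near vocab_size for an untrained model

Unlike an n-gram model, which conditions on a fixed window of preceding words, the hidden state here is updated at every step, which is why the scaling behaviour with data, parameters, compute and memory can differ so markedly between the two model families.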

This talk is part of the CUED Speech Group Seminars series.
