Bayesian Learning Approaches for Speech Recognition

If you have a question about this talk, please contact Dr Marcus Tomalin.

In this talk, I will present my previous and ongoing studies on Bayesian learning for speech recognition. In speech recognition, Bayesian adaptation has been widely applied to speaker adaptation, where the likelihood of the adaptation data and the prior density of the existing model are combined to obtain an adapted model for a new speaker. Such Bayesian learning is useful not only for model adaptation but also for model regularization, where regularized hidden Markov models (HMMs) predict unseen test data well. Regularized HMMs can be applied to decision-tree state tying in a generative model, and can even be integrated with a large-margin classifier to improve the generalization of a discriminative model based on large-margin HMMs. Furthermore, Bayesian learning is beneficial for topic-based language modelling under the paradigm of latent Dirichlet allocation (Blei et al., 2003). A Bayesian topic-based language model will be presented for speech recognition. This regularized language model is established from the marginal likelihood over the uncertainties of the latent topics and topic mixtures. The topic information is extracted from n-gram events and applied directly to speech recognition. Finally, I will summarize my views on Bayesian learning and address other challenging machine learning topics for speech recognition.
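The speaker-adaptation step described above, merging the likelihood of the adaptation data with a prior over the existing model, corresponds for Gaussian mean parameters to the classic MAP (relevance) update. A minimal sketch, assuming a relevance-factor formulation; the function name and the parameter `tau` are illustrative, not taken from the talk:

```python
import numpy as np

def map_adapt_means(prior_means, frames, posteriors, tau=10.0):
    """MAP (relevance) adaptation of Gaussian component means.

    prior_means: (K, D) speaker-independent means (the prior)
    frames:      (T, D) adaptation data from the new speaker
    posteriors:  (T, K) component occupation probabilities gamma_t(k)
    tau:         relevance factor weighting the prior (illustrative choice)
    """
    occ = posteriors.sum(axis=0)        # (K,) soft occupation counts
    obs = posteriors.T @ frames         # (K, D) posterior-weighted data sums
    # MAP estimate: interpolate between the prior mean and the data mean,
    # with the soft count deciding how far to move toward the data.
    return (tau * prior_means + obs) / (tau + occ)[:, None]
```

With little adaptation data (small soft counts) the estimate stays close to the prior means; as the counts grow, it approaches the maximum-likelihood data means.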

This talk is part of the Machine Intelligence Laboratory Speech Seminars series.


© 2006-2021 Talks.cam, University of Cambridge.