
Monaural Acoustical Scene Analysis through Harmonic-Temporal Clustering of the Power Spectrum


If you have a question about this talk, please contact Taylan Cemgil.

The design of effective algorithms for single-channel analysis of complex and varied acoustical scenes is an important and challenging problem. We present a framework called Harmonic-Temporal Clustering (HTC), which describes the power spectrum as a combination of constrained Gaussian mixture models. The model parameters are estimated jointly by globally fitting the observed power spectrum in the time-frequency domain. The resulting estimates can be used for F0 estimation in noisy and multiple-speaker conditions, as well as for single-channel speech enhancement, background retrieval, and speaker separation.
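The core idea of constraining Gaussian component means to a harmonic structure can be illustrated with a toy sketch (this is an assumption-laden simplification, not the actual HTC algorithm presented in the talk): model one frame of the power spectrum as non-negative weights on Gaussian bumps centred at integer multiples of a candidate F0, and grid-search the F0 that fits best.

```python
# Toy harmonic-constrained spectral fit (illustrative only, not HTC itself).
# All function names and parameter values here are hypothetical choices.
import numpy as np

def harmonic_basis(freqs, f0, n_harmonics=8, width=10.0):
    """Gaussian bumps centred at k * f0 for k = 1..n_harmonics."""
    return np.stack([
        np.exp(-0.5 * ((freqs - k * f0) / width) ** 2)
        for k in range(1, n_harmonics + 1)
    ], axis=1)

def fit_f0(freqs, power, candidates):
    """Grid-search F0: fit harmonic weights by least squares and keep
    the candidate with the smallest residual energy."""
    best_f0, best_err = None, np.inf
    for f0 in candidates:
        basis = harmonic_basis(freqs, f0)
        weights, *_ = np.linalg.lstsq(basis, power, rcond=None)
        weights = np.clip(weights, 0.0, None)  # power weights must be non-negative
        err = np.sum((power - basis @ weights) ** 2)
        if err < best_err:
            best_f0, best_err = f0, err
    return best_f0

# Synthetic frame: harmonics of 100 Hz in mild noise.
rng = np.random.default_rng(0)
freqs = np.linspace(0.0, 1000.0, 2000)
clean = harmonic_basis(freqs, 100.0) @ np.array(
    [1.0, 0.7, 0.5, 0.35, 0.25, 0.18, 0.12, 0.08])
power = clean + 0.01 * rng.random(freqs.size)

estimate = fit_f0(freqs, power, candidates=np.arange(80.0, 130.0, 1.0))
print(estimate)  # a value near 100 Hz
```

HTC proper goes further: it fits all sources and all frames simultaneously in the time-frequency plane, with smooth temporal envelopes attached to each harmonic cluster, rather than fitting one frame at a time as above.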

Jonathan will also introduce the research topics of the Sagayama/Ono Lab at the University of Tokyo.

Many problems in musical and acoustical signal processing, such as source separation/localization, noise cancellation, multi-pitch analysis, harmonic analysis, and rhythm/tempo analysis, are inherently ambiguous: the solution cannot be uniquely determined from the observation alone. In this presentation, I will describe work done at the Sagayama/Ono Lab on such problems, mainly using a probabilistic approach based on stochastic models.

This talk is part of the Audio and Music Processing (AMP) Reading Group series.


