
Modeling and Evaluating Information Retrieval Results



In this talk I will discuss two different pieces of work I have done in the field of Information Retrieval. The first part of my talk focuses on my work on modeling the score distributions produced by information retrieval systems. Such systems assign scores to documents according to some definition of relevance to a user's request and return the documents in descending order of score. Given this ranked list of documents and their corresponding scores, inferring the score distributions of relevant and non-relevant documents is an essential task for numerous information retrieval applications, such as information filtering, topic detection, meta-search, and distributed IR. Accurate modeling of score distributions is often the basis of such inferences. In this part of my talk I will revisit the choice of distributions used to model document scores. First, I will discuss some assumptions and intuitions behind modeling score distributions. Then I will present a better model for score distributions, directly dictated by the data, using a richer class of density functions than the ones dominating the literature and applying Variational Bayes to automatically trade off goodness-of-fit against model complexity.
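The idea of letting Variational Bayes balance goodness-of-fit against model complexity can be sketched as follows. This is a hedged illustration, not the speaker's actual model: it uses scikit-learn's `BayesianGaussianMixture` (a variational Gaussian mixture with a Dirichlet prior on the weights) as a stand-in for the richer density class discussed in the talk, and the synthetic "scores" are invented for the example. A sparse weight prior drives the weights of unneeded components toward zero, so the effective model complexity is chosen automatically rather than fixed in advance.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)

# Hypothetical retrieval scores: many non-relevant documents with low scores,
# a few relevant documents with high scores (purely illustrative data).
nonrel = rng.normal(loc=0.2, scale=0.08, size=900)
rel = rng.normal(loc=0.7, scale=0.10, size=100)
scores = np.concatenate([nonrel, rel]).reshape(-1, 1)

# Give the model a generous budget of components; the sparse Dirichlet
# prior (small weight_concentration_prior) lets Variational Bayes prune
# the ones the data does not support -- the fit/complexity trade-off.
vb = BayesianGaussianMixture(
    n_components=8,
    weight_concentration_prior=1e-2,
    max_iter=500,
    random_state=0,
).fit(scores)

# Components whose mixing weight stays above a small threshold are the
# ones Variational Bayes actually kept.
effective = int(np.sum(vb.weights_ > 0.05))
print(f"effective components: {effective} of 8")
```

With clearly bimodal data like this, the fitted mixture typically concentrates its weight on a small number of components, mirroring the talk's point that the model complexity can be read off from the data rather than imposed.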

This talk is part of the Microsoft Research Cambridge, public talks series.



