Multiple Instance Learning for Natural Language Tasks

Many state-of-the-art methods for text and natural-language processing employ supervised learning algorithms. A key obstacle to applying supervised learning, however, is that labeled training instances are usually expensive to acquire. One way around this obstacle, I argue, is to exploit data that can be readily and inexpensively labeled at a coarse level of granularity. Such situations are well suited to multiple-instance learning. In multiple-instance learning, labels are not attached to individual instances; instead, bags of instances are labeled. Whereas a negative bag is assumed to contain only negative instances, a positive bag need only contain at least one positive instance.
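
For readers unfamiliar with the setting, the short Python sketch below illustrates the bag-labeling assumption described above: a bag is negative only if all of its instances are negative, and positive if it contains at least one positive instance. The toy instance scorer and the max-over-instances bag score are illustrative assumptions only, not the method presented in the talk.

    # Minimal illustration of the multiple-instance labeling assumption.
    # A bag is positive iff at least one of its instances is positive;
    # a negative bag contains only negative instances.
    from typing import Callable, List, Sequence

    Instance = Sequence[float]
    Bag = List[Instance]

    def bag_label(instance_labels: Sequence[bool]) -> bool:
        # Standard MIL assumption: the bag label is the OR of its instance labels.
        return any(instance_labels)

    def bag_score(bag: Bag, instance_scorer: Callable[[Instance], float]) -> float:
        # A common MIL heuristic (assumed here, not the speaker's method):
        # score a bag by its highest-scoring instance.
        return max(instance_scorer(x) for x in bag)

    if __name__ == "__main__":
        # Toy 1-D instances; an instance is "positive" if its feature exceeds 0.5.
        toy_scorer = lambda x: x[0]

        positive_bag = [[0.1], [0.9], [0.2]]   # contains at least one positive instance
        negative_bag = [[0.1], [0.3], [0.2]]   # contains only negative instances

        for name, bag in [("positive", positive_bag), ("negative", negative_bag)]:
            score = bag_score(bag, toy_scorer)
            print(f"{name} bag -> max instance score {score:.2f} -> predicted "
                  f"{'positive' if score > 0.5 else 'negative'}")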

I will describe the multiple-instance setting, discuss its applicability to natural language tasks, and present several recent results on (i) an empirical comparison of multiple-instance learning to ordinary supervised learning, (ii) a method for learning to combine predictions in a multiple-instance setting, and (iii) active multiple-instance learning.

This talk is part of the NLIP Seminar Series.
