Efficient priors for self-supervised learning: application and theories

If you have a question about this talk, please contact Yuan Huang.

Remarkable progress in self-supervised learning (SSL) has taken place over the past two years across various domains. The goal of SSL methods is to learn useful semantic features without human annotations. In the absence of human-defined labels, we expect the deep network to learn a richer feature structure explained by the data itself rather than one constrained by human knowledge. Nevertheless, self-supervised learning still hinges on strong prior knowledge or human-defined pretext tasks to effectively pretrain the network. These priors can impose a certain form of consistency between different views of an image, or be based on a pre-defined pretext task such as rotation prediction. This talk will cover our recent progress and new findings on constructing useful priors for self-supervised learning (published in T-PAMI and NeurIPS 2021, respectively), from the perspectives of both theory and practical application. We will also introduce the state-of-the-art mainstream self-supervised learning frameworks and the pretext tasks widely used in this field.
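
For readers unfamiliar with the two kinds of priors mentioned above, the following is a minimal illustrative sketch in PyTorch of (1) a view-consistency loss and (2) a rotation-prediction pretext task. The toy encoder, heads, and the way the two losses are combined are assumptions made purely for illustration; this is not the speaker's published method.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy encoder and rotation head; real SSL methods use a deep backbone (illustrative assumption).
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
rotation_head = nn.Linear(128, 4)  # classes: rotation by 0, 90, 180, or 270 degrees

def view_consistency_loss(x_view1, x_view2):
    # Prior (1): representations of two augmented views of the same image should agree.
    z1 = F.normalize(encoder(x_view1), dim=1)
    z2 = F.normalize(encoder(x_view2), dim=1)
    return -(z1 * z2).sum(dim=1).mean()  # negative cosine similarity

def rotation_prediction_loss(x_rotated, rot_labels):
    # Prior (2): predict which multiple of 90 degrees the input was rotated by.
    logits = rotation_head(encoder(x_rotated))
    return F.cross_entropy(logits, rot_labels)

# Usage with random tensors standing in for batches of augmented images.
x1, x2 = torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32)
rot_labels = torch.randint(0, 4, (8,))
loss = view_consistency_loss(x1, x2) + rotation_prediction_loss(x1, rot_labels)
loss.backward()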

Join Zoom Link:

https://maths-cam-ac-uk.zoom.us/j/93331132587?pwd=MlpReFY3MVpyVThlSi85TmUzdTJxdz09

Meeting ID: 933 3113 2587

Passcode: 144696

This talk is part of the CMIH Hub seminar series.
