Efficient priors for self-supervised learning: application and theories
- Speaker: Yu Wang, JD AI Research
- Date & Time: Wednesday 05 October 2022, 13:00 - 14:00
- Venue: Virtual (see abstract for Zoom link)
Abstract
Remarkable progress in self-supervised learning (SSL) has taken place over the past two years across various domains. The goal of SSL methods is to learn useful semantic features without human annotations. In the absence of human-defined labels, we expect the deep network to learn a richer feature structure explained by the data itself, rather than one constrained by human knowledge. Nevertheless, self-supervised learning still hinges on strong prior knowledge or human-defined pretext tasks to pretrain the network effectively. These priors can impose a certain form of consistency between different views of an image, or be based on a pre-defined pretext task such as rotation prediction. This talk will cover our recent progress and new findings on constructing useful priors for self-supervised learning (published in T-PAMI and at NeurIPS 2021, respectively), from the perspective of both theory and practical applications. We will also introduce the mainstream state-of-the-art self-supervised learning frameworks and the pretext tasks widely used in this field.
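To illustrate the rotation-prediction pretext mentioned in the abstract, here is a minimal, hypothetical sketch (not the speaker's method): each image yields four rotated views, and the rotation index serves as a free supervision label for the network to predict.

```python
def rotate90(img):
    """Rotate a 2-D grid (list of lists) 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def make_rotation_pretext(img):
    """Produce (rotated_view, label) pairs for the four rotations.

    The label k in {0, 1, 2, 3} encodes a rotation of k * 90 degrees;
    predicting it from the view is the self-supervised pretext task.
    """
    views, labels = [], []
    rotated = img
    for k in range(4):
        views.append(rotated)
        labels.append(k)
        rotated = rotate90(rotated)
    return views, labels

# Toy 2x2 "image": the labels come for free, with no human annotation.
views, labels = make_rotation_pretext([[1, 2], [3, 4]])
```

In a real SSL pipeline the views would be fed to a deep network trained with cross-entropy on the rotation label; the sketch above only shows how the pretext labels are generated from the data itself.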
Join Zoom Link:
https://maths-cam-ac-uk.zoom.us/j/93331132587?pwd=MlpReFY3MVpyVThlSi85TmUzdTJxdz09 Meeting ID: 933 3113 2587 Passcode: 144696
Series: This talk is part of the CMIH Hub seminar series.
Included in Lists
- All CMS events
- bld31
- Cambridge Centre for Data-Driven Discovery (C2D3)
- Cambridge talks
- Chris Davis' list
- CMIH Hub seminar series
- Interested Talks
- ndk22's list
- ob366-ai4er
- rp587
- Trust & Technology Initiative - interesting events
- Virtual (see abstract for Zoom link)
- yk449