
Part A: Non-parametric image optimization & Part B: Crowdsourcing gaze data collection


If you have a question about this talk, please contact Microsoft Research Cambridge Talks Admins.

This event may be recorded and made available internally or externally via http://research.microsoft.com. Microsoft will own the copyright of any recordings made. If you do not wish to have your image or voice recorded, please consider this before attending.

Part A: Non-parametric image optimization

Abstract: Patch-based methods have shown tremendous promise for solving a range of problems in image analysis, editing, and synthesis, but until recently these methods were too costly for practical use. In this talk I’ll describe the PatchMatch nearest-neighbor search algorithm, which accelerates these methods while using less memory than previous implementations. I will demonstrate commercial implementations of this technology: hole filling in images and new image-editing approaches in Adobe Photoshop CS6, made possible by our PatchMatch engine. I will also discuss ongoing research in non-rigid correspondence and multi-source synthesis techniques that employ the same underlying algorithms.
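The core idea behind PatchMatch, as described in the published algorithm, is to compute an approximate nearest-neighbor field between patches of two images by combining random initialization, propagation of good offsets from adjacent pixels, and random search at exponentially shrinking radii. The sketch below is an illustrative, unoptimized rendering of that idea for grayscale images (function name, parameters, and patch distance are this sketch's own choices, not the speaker's implementation):

```python
import numpy as np

def patchmatch(a, b, patch=3, iters=3, seed=0):
    """Approximate nearest-neighbor field from patches of `a` to patches
    of `b` via the PatchMatch scheme: random init, then alternating
    propagation and random search. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    (h, w), (bh, bw) = a.shape[:2], b.shape[:2]
    H, W = h - patch, w - patch  # valid patch top-left coords in `a`

    # nnf[y, x] = (by, bx): patch in `b` matched to patch (y, x) in `a`
    nnf = np.stack([rng.integers(0, bh - patch + 1, (H, W)),
                    rng.integers(0, bw - patch + 1, (H, W))], axis=-1)

    def dist(y, x, by, bx):
        d = a[y:y+patch, x:x+patch].astype(float) - b[by:by+patch, bx:bx+patch]
        return float(np.sum(d * d))

    best = np.array([[dist(y, x, *nnf[y, x]) for x in range(W)]
                     for y in range(H)])

    for it in range(iters):
        # alternate scan order so offsets propagate in both directions
        step = 1 if it % 2 == 0 else -1
        ys = range(H) if step == 1 else range(H - 1, -1, -1)
        xs = range(W) if step == 1 else range(W - 1, -1, -1)
        for y in ys:
            for x in xs:
                # propagation: try the already-visited neighbors' offsets
                for dy, dx in ((step, 0), (0, step)):
                    ny, nx = y - dy, x - dx
                    if 0 <= ny < H and 0 <= nx < W:
                        by, bx = nnf[ny, nx] + (dy, dx)
                        if 0 <= by <= bh - patch and 0 <= bx <= bw - patch:
                            d = dist(y, x, by, bx)
                            if d < best[y, x]:
                                nnf[y, x], best[y, x] = (by, bx), d
                # random search around the current best at shrinking radii
                r = max(bh, bw)
                while r >= 1:
                    by = int(np.clip(nnf[y, x][0] + rng.integers(-r, r + 1),
                                     0, bh - patch))
                    bx = int(np.clip(nnf[y, x][1] + rng.integers(-r, r + 1),
                                     0, bw - patch))
                    d = dist(y, x, by, bx)
                    if d < best[y, x]:
                        nnf[y, x], best[y, x] = (by, bx), d
                    r //= 2
    return nnf, best
```

Because propagation and random search only ever accept a candidate when it lowers the patch distance, each iteration monotonically improves the field; the production engine discussed in the talk adds further optimizations (and lower memory use) beyond this sketch.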

Part B: Crowdsourcing gaze data collection

Abstract: Knowing where people look is useful in many image and video applications. However, traditional gaze-tracking hardware is expensive and requires local study participants, so acquiring gaze-location data from a large number of participants is difficult. In this work we propose a crowdsourced method for acquiring gaze-direction data from a virtually unlimited number of participants, using a robust self-reporting mechanism. Our system collects temporally sparse but spatially dense points of attention for arbitrary visual content. We apply our approach to an existing video data set and demonstrate that we obtain results similar to traditional gaze tracking. We also explore the parameter ranges of our method and collect gaze-tracking data for a large set of YouTube videos.

This talk is part of the Microsoft Research Machine Learning and Perception Seminars series.


© 2006-2019 Talks.cam, University of Cambridge.