
When Vision Transformers Meet Cooperative Perception


If you have a question about this talk, please contact Mateja Jamnik.

Join us on Zoom

Autonomous driving perception systems face significant challenges such as occlusion and sparse sensor observations at long range. Cooperative perception, which uses V2X communication so that autonomous vehicles can share visual information with one another, is a promising way to address these challenges. In this seminar, we will explore the use of vision transformers in cooperative perception. Runsheng Xu will present his recent research on the topic, covering two papers: “V2X-ViT: Vehicle-to-Everything Cooperative Perception with Vision Transformer” (ECCV 2022) and “CoBEVT: Cooperative Bird’s Eye View Semantic Segmentation with Sparse Transformers” (CoRL 2022). These papers demonstrate the potential of vision transformers for solving domain-specific challenges in cooperative perception. Join us to gain a deeper understanding of the current state of the art and future directions in autonomous driving perception.
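
As informal background for the seminar, the sketch below illustrates the general idea of transformer-based cooperative fusion: each vehicle shares a bird's-eye-view (BEV) feature map over V2X, and attention mixes the per-agent features at every BEV cell. This is a minimal, assumed example in PyTorch for intuition only; the class name, tensor shapes, and single-attention-layer design are our own simplifications and are not the speaker's V2X-ViT or CoBEVT implementations.

# Illustrative sketch only: toy transformer fusion of BEV features shared
# between vehicles over V2X. Not the authors' V2X-ViT or CoBEVT code; names
# and shapes are assumptions chosen for clarity.
import torch
import torch.nn as nn


class NaiveCooperativeFusion(nn.Module):
    """Fuse per-agent BEV feature maps by attending across agents at each cell."""

    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, agents, height, width, dim); assumed to be already
        # warped into the ego vehicle's coordinate frame before sharing.
        b, a, h, w, d = feats.shape
        # Treat each BEV cell as a batch element and the agents as the
        # sequence, so attention mixes information across vehicles per cell.
        tokens = feats.permute(0, 2, 3, 1, 4).reshape(b * h * w, a, d)
        fused, _ = self.attn(tokens, tokens, tokens)
        fused = self.norm(fused + tokens)
        # Keep the ego vehicle's (index 0) fused representation for each cell.
        return fused[:, 0, :].reshape(b, h, w, d)


if __name__ == "__main__":
    # Two cooperating vehicles sharing 32x32 BEV features with 64 channels.
    x = torch.randn(1, 2, 32, 32, 64)
    print(NaiveCooperativeFusion()(x).shape)  # torch.Size([1, 32, 32, 64])

The real systems discussed in the talk go well beyond this: for example, they must handle heterogeneous agents, communication delay, and pose error, which is where the specialised transformer designs in the two papers come in.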

This talk is part of the Artificial Intelligence Research Group Talks (Computer Laboratory) series.
