
Wasserstein Embeddings in the Deep Learning Era



MDL - Mathematics of deep learning

Computational optimal transport has found many applications in machine learning and, more specifically, deep learning as a fundamental tool for manipulating and comparing probability distributions. The Wasserstein distances arising from the optimal transport problem have been of particular interest in recent years. However, a consistent roadblock to the wider use of transport-based methods has been their computational cost. Beyond the better-known approaches to faster computation, such as entropy regularization, several fundamental concepts have emerged that enable transport-based methods to be integrated into the computational graph of a deep neural network. Sliced-Wasserstein distances and the Linear Optimal Transport (LOT) framework are among the concepts best suited for integration into today’s deep neural networks. In this talk, we will present the idea of Linear Optimal Transport (otherwise known as the Wasserstein Embedding) and its extension to Sliced-Wasserstein Embeddings, and demonstrate their various applications in deep learning, with a particular interest in learning from graphs and set-structured data. The talk will be an overview of our recent ICLR 2021 and NeurIPS 2021 publications.
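To make the sliced-Wasserstein idea mentioned in the abstract concrete, here is a minimal NumPy sketch of a Monte-Carlo estimator of the sliced Wasserstein-p distance between two empirical distributions. The function name, parameters, and the equal-sample-size assumption are illustrative choices, not part of the talk's material; the key point is that each random one-dimensional projection reduces optimal transport to a sort, which is what makes the distance cheap enough to sit inside a training loop.

```python
import numpy as np

def sliced_wasserstein(X, Y, n_projections=100, p=2, rng=None):
    """Monte-Carlo sliced Wasserstein-p distance between two empirical
    distributions given as sample arrays X, Y of shape (n, d).

    Assumes equal sample sizes so that 1D optimal transport between the
    projected samples reduces to matching sorted values.
    """
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    # Draw random directions uniformly on the unit sphere in R^d.
    theta = rng.normal(size=(n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both sample sets onto every direction: shape (n, n_projections).
    X_proj = X @ theta.T
    Y_proj = Y @ theta.T
    # In 1D, the optimal coupling pairs the sorted samples.
    X_sorted = np.sort(X_proj, axis=0)
    Y_sorted = np.sort(Y_proj, axis=0)
    # Average the 1D Wasserstein-p costs over samples and projections.
    return float(np.mean(np.abs(X_sorted - Y_sorted) ** p)) ** (1.0 / p)
```

Because the estimator is built from sorts and matrix products, it is differentiable almost everywhere and fits naturally into a deep network's computational graph, which is the property the talk exploits; a fixed reference distribution and a fixed set of projections turn the same computation into a (sliced-)Wasserstein embedding, where distances between embedded vectors approximate transport distances.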

This talk is part of the Isaac Newton Institute Seminar Series.



© 2006-2023, University of Cambridge.