
Subspace Codes for Adversarial Error-Correction in Network Coding


If you have a question about this talk, please contact Zoubin Ghahramani.

short talk

In the context of error control in random linear network coding, it is useful to construct codes that comprise well-separated collections of subspaces of a vector space over a finite field.

This work concerns the construction of non-constant-dimension projective space codes for adversarial error-correction in random linear network coding. The metric used is the so-called injection distance introduced by Silva and Kschischang, which perfectly reflects the adversarial nature of the channel.
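To make the metric concrete, here is a minimal sketch of computing the injection distance d_I(U, V) = max(dim U, dim V) − dim(U ∩ V) between two subspaces of GF(2)^n. The function names and the bitmask encoding of vectors are illustrative assumptions, not part of the talk; the intersection dimension is obtained via dim(U) + dim(V) − dim(U + V).

```python
def gf2_rank(rows):
    """Rank of a list of GF(2) row vectors, each encoded as an int bitmask."""
    pivots = {}  # leading-bit position -> pivot row
    for r in rows:
        while r:
            h = r.bit_length() - 1
            if h in pivots:
                r ^= pivots[h]   # eliminate the current leading bit
            else:
                pivots[h] = r    # new pivot with leading bit h
                break
    return len(pivots)

def injection_distance(U, V):
    """Injection distance d_I(U, V) = max(dim U, dim V) - dim(U ∩ V).

    U and V are lists of spanning vectors (int bitmasks); concatenating
    them spans U + V, so dim(U ∩ V) = dim U + dim V - dim(U + V).
    """
    du, dv = gf2_rank(U), gf2_rank(V)
    d_int = du + dv - gf2_rank(U + V)
    return max(du, dv) - d_int
```

For example, U = span{100, 010} and V = span{100, 001} intersect in a line, so d_I(U, V) = 2 − 1 = 1, whereas the subspace distance dim U + dim V − 2 dim(U ∩ V) would be 2.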

A Gilbert-Varshamov-type bound for such codes is derived and its asymptotic behavior is analyzed. It is shown that in the limit as the ambient space dimension approaches infinity, the Gilbert-Varshamov bound on the size of non-constant-dimension codes behaves similarly to the Gilbert-Varshamov bound on the size of constant-dimension codes contained within the largest Grassmannians in the projective space.
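Bounds of this type follow the classical Gilbert-Varshamov greedy argument: keep adding codewords until every remaining subspace lies within injection distance d − 1 of the code. As a hedged sketch (the notation here is assumed, not taken from the abstract), writing $\mathcal{P}_q(n)$ for the set of all subspaces of $\mathbb{F}_q^n$ and $B_I(V, r)$ for the ball of injection radius $r$ centered at $V$, the argument yields

$$|C| \;\ge\; \frac{|\mathcal{P}_q(n)|}{\max_{V \in \mathcal{P}_q(n)} |B_I(V,\, d-1)|},$$

so the asymptotic analysis reduces to estimating the maximum ball size in the injection metric.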

Using a multi-level scheme, new non-constant-dimension codes are constructed; these codes contain more codewords than comparable codes designed for the subspace metric. To our knowledge, this work is the first to address the construction of non-constant-dimension codes designed for the injection metric.

This talk is part of the Machine Learning @ CUED series.
