Double Feature: Optimal Precoding for MIMO and Divergence Estimation for Continuous Distributions
- Speaker: Dr Fernando Perez-Cruz (Princeton)
- Date & Time: Wednesday 16 July 2008, 14:00-15:00
- Venue: Engineering Department, CBL Room 438
Abstract
Optimal Linear Precoding for Multiple-Input Multiple-Output Gaussian Channels with Arbitrary Inputs
We investigate the linear precoding policy that maximizes the mutual information for general multiple-input multiple-output (MIMO) Gaussian channels with arbitrary input distributions, by capitalizing on the relationship between mutual information and minimum mean-square error (MMSE). The optimal linear precoder can be computed by means of a fixed-point equation as a function of the channel and the input constellation. We show that diagonalizing the channel matrix does not maximize the information transmission rate for non-Gaussian inputs: a non-diagonal precoding matrix in general increases the rate, even for parallel non-interacting channels. Finally, we also investigate the use of correlated input distributions, which further increase the transmission rate in the low and medium SNR ranges.
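The mutual information/MMSE relationship the abstract builds on can be checked numerically in the scalar case: for Y = sqrt(snr) X + N with N standard Gaussian, dI/dsnr = MMSE(snr)/2 (in nats). The sketch below is illustrative only, assuming a BPSK input; it is not the paper's MIMO precoder computation.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
snrs = np.linspace(0.1, 4.0, 40)

# BPSK input X in {-1, +1}; channel Y = sqrt(snr) * X + N, N ~ N(0, 1).
x = rng.choice([-1.0, 1.0], size=N)
n = rng.standard_normal(N)

def mmse(snr):
    y = np.sqrt(snr) * x + n
    # Conditional-mean estimator for BPSK: E[X|Y=y] = tanh(sqrt(snr) * y).
    return np.mean((x - np.tanh(np.sqrt(snr) * y)) ** 2)

def mi(snr):
    y = np.sqrt(snr) * x + n
    # I(X;Y) in nats for equiprobable BPSK:
    # log 2 - E[log(1 + exp(-2 sqrt(snr) X Y))].
    return np.log(2) - np.mean(np.log1p(np.exp(-2 * np.sqrt(snr) * x * y)))

m = np.array([mmse(s) for s in snrs])
i = np.array([mi(s) for s in snrs])

# I-MMSE relation: dI/dsnr should match MMSE/2 (finite-difference check).
di = np.gradient(i, snrs)
print(np.max(np.abs(di - m / 2)))
```

Because the same noise samples are reused at every SNR point, the finite-difference derivative of the Monte Carlo mutual information is smooth and tracks MMSE/2 closely.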
Estimation of Information Theoretic Measures for Continuous Random Variables
We analyze the estimation of information-theoretic measures of continuous random variables, such as the differential entropy, mutual information, and Kullback-Leibler divergence. The objective of this paper is two-fold. First, we prove that information-theoretic measure estimates based on k-nearest-neighbor density estimation with fixed k converge almost surely, even though the k-nearest-neighbor density estimate itself does not converge to the true density for fixed k. Second, we show that these estimates do not converge when k grows linearly with the number of samples. Nevertheless, the nonconvergent estimates can still be used to solve the two-sample problem and to assess whether two random variables are independent. We show that two-sample and independence tests based on these nonconvergent estimates compare favorably with the maximum mean discrepancy test and the Hilbert-Schmidt independence criterion, respectively.
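A fixed-k nearest-neighbor entropy estimate of the kind the abstract describes can be sketched in one dimension with the classical Kozachenko-Leonenko form; this is an illustrative toy, not the paper's estimator, and it uses the identity psi(N) - psi(k) = sum_{i=k}^{N-1} 1/i to avoid a digamma dependency.

```python
import numpy as np

def knn_entropy_1d(samples, k=3):
    """Kozachenko-Leonenko differential entropy estimate (nats), d = 1."""
    n = len(samples)
    # Pairwise distances; O(n^2) is fine for a small sketch.
    dists = np.abs(samples[:, None] - samples[None, :])
    # Column 0 of the sorted rows is each point's zero distance to itself,
    # so column k is the distance to the k-th nearest neighbor.
    eps = np.sort(dists, axis=1)[:, k]
    # psi(n) - psi(k) for integer arguments.
    psi_diff = np.sum(1.0 / np.arange(k, n))
    # Unit-ball volume in 1D is 2.
    return psi_diff + np.log(2.0) + np.mean(np.log(eps))

rng = np.random.default_rng(1)
x = rng.standard_normal(2000)
h_hat = knn_entropy_1d(x, k=3)
h_true = 0.5 * np.log(2 * np.pi * np.e)  # exact entropy of N(0,1), ~1.4189 nats
print(h_hat, h_true)
```

With a few thousand Gaussian samples the estimate lands close to the closed-form value even though the underlying fixed-k density estimate is not consistent, which is the phenomenon the first result formalizes.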
Series: This talk is part of the Machine Learning @ CUED series.