BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Double Feature: Optimal Precoding for MIMO and Divergence Estimati
 on for Continuous Distributions - Dr Fernando Perez-Cruz (Princeton)
DTSTART:20080716T130000Z
DTEND:20080716T140000Z
UID:TALK12799@talks.cam.ac.uk
CONTACT:Zoubin Ghahramani
DESCRIPTION:Optimal Linear Precoding for Multiple-Input Multiple-Output G
 aussian Channels with Arbitrary Inputs\n\nWe investigate the linear preco
 ding policy that maximizes the mutual information for general multiple-i
 nput multiple-output (MIMO) Gaussian channels with arbitrary input distr
 ibutions\, by capitalizing on the relationship between mutual information
  and minimum mean-square error. The optimal linear precoder can be compu
 ted by means of a fixed-point equation as a function of the channel and 
 the input constellation. We show that diagonalizing the channel matrix d
 oes not maximize the information transmission rate for non-Gaussian inpu
 ts. A non-diagonal precoding matrix in general increases the information
  transmission rate\, even for parallel non-interacting channels. Finally\,
  we also investigate the use of correlated input distributions\, which fu
 rther increase the transmission rate for low and medium SNR ranges.\n\n\nE
 stimation of Information Theoretic Measures for Continuous Random Variab
 les\n\nWe analyze the estimation of information theoretic measures of con
 tinuous random variables such as differential entropy\, mutual informatio
 n\, or Kullback-Leibler divergence. The objective of this paper is two-fo
 ld. First\, we prove that the information theoretic measure estimates usi
 ng the k-nearest-neighbor density estimation with fixed k converge almos
 t surely\, even though the k-nearest-neighbor density estimation with fix
 ed k does not converge to its true measure. Second\, we show that the inf
 ormation theoretic measure estimates do not converge for k growing linea
 rly with the number of samples. Nevertheless\, these nonconvergent estima
 tes can be used for solving the two-sample problem and for assessing whe
 ther two random variables are independent. We show that the two-sample a
 nd independence tests based on these nonconvergent estimates compare fav
 orably with the maximum mean discrepancy test and the Hilbert-Schmidt in
 dependence criterion\, respectively.\n
LOCATION:Engineering Department\, CBL Room 438
END:VEVENT
END:VCALENDAR
