BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Information-based methods in dynamic learning - Wynn\, HP (London 
 School of Economics)
DTSTART:20110722T103000Z
DTEND:20110722T113000Z
UID:TALK32123@talks.cam.ac.uk
CONTACT:Mustapha Amrani
DESCRIPTION:The history of information/entropy in learning due to Blackwel
 l\, Renyi\, Lindley and others is sketched. Using results of de Groot\, wi
 th new proofs\, we arrive at a general class of information functions whic
 h gives "expected" learning in the Bayes sense. It is shown how this is in
 timately connected with the theory of majorization: learning means a more 
 peaked distribution in a majorization sense. Counter-examples show that in
  some real situations it is possible to un-learn in the sense of having a 
 less peaked posterior than prior. This does not happen in the standard 
 Gaussian case\, but does in cases such as the Beta-mixed binomial. Appli
 cations are made to experimental design. For designs for non-linear and 
 dynamic systems\, an idea of "local learning" is defined\, in which the a
 bove theory is applied locally. Some connections with ideas of "active l
 earning" in the machine learning area are attempted.\n
LOCATION:Seminar Room 1\, Newton Institute
END:VEVENT
END:VCALENDAR
