BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Soft Constraints and Uncertainty Representation as a Principle for
  Intelligent Systems - Andrew Wilson (Courant Institute of Mathematical Sc
 iences)
DTSTART:20250619T093000Z
DTEND:20250619T103000Z
UID:TALK233524@talks.cam.ac.uk
DESCRIPTION:Deep neural networks are often seen as different from other mo
 del classes by defying conventional notions of generalization. Popular exa
 mples of anomalous generalization behaviour include benign overfitting\, d
 ouble descent\, and the success of overparametrization. We argue that thes
 e phenomena are not unique to neural networks\, nor particularly mysteriou
 s. Moreover\, this generalization behaviour can be intuitively understood
 \, and rigorously characterized\, using long-standing generalization frame
 works such as PAC-Bayes and countable hypothesis bounds. We present soft i
 nductive biases\, and uncertainty representation\, as a key unifying princ
 iple in explaining these phenomena: rather than restricting the hypothesis
  space to avoid overfitting\, embrace a flexible hypothesis space\, with a
  soft preference for simpler solutions that are consistent with the data. 
 This principle can be encoded in many model classes\, and thus deep learni
 ng is not as mysterious or different from other model classes as it might 
 seem. However\, we also highlight how deep learning is relatively distinct
  in other ways\, such as its capacity for representation learning\, phenom
 ena such as mode connectivity\, and its relative universality.
LOCATION:Seminar Room 2\, Newton Institute
END:VEVENT
END:VCALENDAR
