
Bayesian Neural Networks

If you have a question about this talk, please contact James Allingham.

Zoom link available upon request (it is sent out on our mailing list, eng-mlg-rcc [at] lists.cam.ac.uk). Sign up to our mailing list for easier reminders.

Bayesian Neural Networks (BNNs) take a probabilistic approach to learning in neural networks by placing distributions over the weights and performing (approximate) Bayesian inference. In this talk, we will introduce the basics of BNNs, the challenges involved in training them, and some of their properties. We will then discuss the Laplace approximation as a highly performant approximate inference scheme for BNNs, with connections to linear models and neural tangent kernels. Finally, we will provide a tour of prior choices in BNNs, looking at both weight-space and function-space approaches.
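
As a toy illustration of the Laplace approximation mentioned above, the minimal sketch below (NumPy only) fits a logistic-regression "last layer" by MAP estimation and then approximates the posterior over its weights with a Gaussian centred at the MAP, whose covariance is the inverse Hessian of the negative log posterior. The data, prior scale, and learning rate are illustrative assumptions and are not taken from the talk; a full BNN applies the same idea to (a subset of) the network's weights, as in the linearized-Laplace approach of Immer et al. (2021) below.

    # Minimal Laplace-approximation sketch for Bayesian logistic regression.
    # Toy data, prior variance, and step size are hypothetical choices.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy binary-classification data.
    X = rng.normal(size=(100, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

    prior_var = 1.0  # Gaussian prior N(0, prior_var * I) over the weights

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # 1. Find the MAP weights by gradient descent on the negative log posterior.
    w = np.zeros(X.shape[1])
    for _ in range(500):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) + w / prior_var  # gradient of the negative log posterior
        w -= 0.1 * grad / len(y)

    # 2. Laplace approximation: Gaussian centred at the MAP, with covariance
    #    given by the inverse Hessian of the negative log posterior.
    p = sigmoid(X @ w)
    H = (X * (p * (1 - p))[:, None]).T @ X + np.eye(X.shape[1]) / prior_var
    cov = np.linalg.inv(H)

    # 3. Approximate the predictive distribution by Monte Carlo: sample weights
    #    from the Gaussian posterior and average the class probabilities.
    samples = rng.multivariate_normal(w, cov, size=1000)
    x_new = np.array([0.5, -1.0])
    pred = sigmoid(samples @ x_new).mean()
    print("predictive p(y=1 | x_new) ~", round(pred, 3))

Averaging over posterior samples, rather than predicting with the MAP weights alone, is what gives the calibrated uncertainty that motivates BNNs in the first place.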

Recommended reading:

MacKay, D. J. (1992). Bayesian interpolation. Neural Computation, 4(3), 415-447. https://authors.library.caltech.edu/13792/1/MACnc92a.pdf

Blundell, C., Cornebise, J., Kavukcuoglu, K., & Wierstra, D. (2015, June). Weight uncertainty in neural networks. In International Conference on Machine Learning (pp. 1613-1622). PMLR. http://proceedings.mlr.press/v37/blundell15.html

Immer, A., Korzepa, M., & Bauer, M. (2021, March). Improving predictions of Bayesian neural nets via local linearization. In International Conference on Artificial Intelligence and Statistics (pp. 703-711). PMLR. http://proceedings.mlr.press/v130/immer21a.html

Fortuin, V. (2022). Priors in Bayesian deep learning: A review. International Statistical Review, 90(3), 563-591. https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/insr.12502

This talk is part of the Machine Learning Reading Group @ CUED series.
