The Bayesian Learning Rule for Adaptive AI

If you have a question about this talk, please contact Elre Oldewage.

Humans and animals have a natural ability to autonomously learn and quickly adapt to their surroundings. How can we design AI systems that do the same? In this talk, I will present Bayesian principles to bridge this gap between humans and AI. I will show that a wide variety of machine-learning algorithms are instances of a single learning rule called the Bayesian learning rule. The rule unravels a dual perspective yielding new adaptive mechanisms for machine-learning-based AI systems. My hope is to convince the audience that Bayesian principles are indispensable for an AI that learns as efficiently as we do.
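As an illustration of the "instances of a single rule" claim (a sketch by this editor, not material from the talk page): one special case discussed in the Khan & Rue paper is that when the posterior approximation is a Gaussian q = N(m, σ²I) with fixed variance, the Bayesian learning rule's update on the mean reduces to gradient descent on the expected loss. A minimal Monte Carlo version:

```python
import numpy as np

def blr_fixed_variance(grad_loss, m0, rho=0.1, sigma=0.1, n_mc=100, steps=200, seed=0):
    """Update the mean of q = N(m, sigma^2 I) with a Monte Carlo estimate
    of E_q[grad ell(theta)]; with fixed variance this is an SGD-like rule.
    All names and parameter values here are illustrative choices."""
    rng = np.random.default_rng(seed)
    m = np.asarray(m0, dtype=float)
    for _ in range(steps):
        # Sample parameters from the current posterior approximation q
        thetas = m + sigma * rng.standard_normal((n_mc, m.size))
        g = np.mean([grad_loss(t) for t in thetas], axis=0)  # estimate E_q[grad ell]
        m = m - rho * g  # mean update: plain gradient descent on the expected loss
    return m

# Quadratic loss ell(theta) = 0.5 * ||theta - target||^2, so grad ell = theta - target
target = np.array([1.0, -2.0])
m_final = blr_fixed_variance(lambda t: t - target, m0=[0.0, 0.0])
```

Here `m_final` converges to the loss minimizer; richer choices of q (e.g. learned covariance) recover Newton-like and adaptive methods, which is the dual perspective the abstract refers to.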

Main reference
  • The Bayesian Learning Rule (preprint), M.E. Khan, H. Rue [ arXiv ] [ Tweet ]

Additional references
  • Knowledge-Adaptation Priors (NeurIPS 2021), M.E. Khan, S. Swaroop [ arXiv ] [ OpenReview ] [ Slides ] [ Tweet ] [ SlidesLive Video ]
  • Dual Parameterization of Sparse Variational Gaussian Processes (NeurIPS 2021), P. Chang, V. Adam, M.E. Khan, A. Solin [ arXiv ]
  • Continual Deep Learning by Functional Regularisation of Memorable Past (NeurIPS 2020), P. Pan, S. Swaroop, A. Immer, R. Eschenhagen, R.E. Turner, M.E. Khan [ arXiv ] [ Code ] [ Poster ]
  • Approximate Inference Turns Deep Networks into Gaussian Processes (NeurIPS 2019), M.E. Khan, A. Immer, E. Abedi, M. Korzepa [ arXiv ] [ Code ]

Bio: Emtiyaz Khan (also known as Emti) is a team leader at the RIKEN Center for Advanced Intelligence Project (AIP) in Tokyo, where he leads the Approximate Bayesian Inference Team. He is also an external professor at the Okinawa Institute of Science and Technology (OIST). Previously, he was a postdoc and then a scientist at École Polytechnique Fédérale de Lausanne (EPFL), where he also taught two large machine learning courses and received a teaching award. He finished his PhD in machine learning at the University of British Columbia in 2012. The main goal of Emti's research is to understand the principles of learning from data and to use them to develop algorithms that can learn like living beings. For more than a decade, his work has focused on developing Bayesian methods that could lead to such fundamental principles. The Approximate Bayesian Inference Team now continues to use these principles, as well as derive new ones, to solve real-world problems.

This talk is part of the Machine Learning Reading Group @ CUED series.

