Learning and Extrapolation in Graph Neural Networks
- Speaker: Stefanie Jegelka (Massachusetts Institute of Technology)
- Date & Time: Tuesday 23 November 2021, 12:00 - 13:00
- Venue: Seminar Room 1, Newton Institute
Abstract
Graph Neural Networks (GNNs) have become a popular tool for learning representations of graph-structured inputs, with applications in computational chemistry, recommendation, pharmacy, reasoning, and many other areas. In this talk, I will show some recent results on learning with message-passing GNNs. In particular, GNNs possess important invariances and inductive biases that affect learning and generalization. Studying the effect of these inductive biases can be challenging, as they depend on the architecture (structure and aggregation functions) and the training algorithm, and interact with the data and the learning task. In particular, we study these biases for learning structured tasks, e.g., simulations or algorithms, and show how, for such tasks, architecture choices affect generalization within and outside the training distribution. This talk is based on joint work with Keyulu Xu, Jingling Li, Mozhi Zhang, Simon S. Du and Ken-ichi Kawarabayashi.
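The abstract refers to message-passing GNNs and their aggregation functions. As a minimal illustration (a sketch, not the speaker's code), one round of sum-aggregation message passing on a small graph can be written as:

```python
import numpy as np

def message_passing_layer(adj, h, w):
    """One round of message passing: each node sum-aggregates its
    neighbours' features, then applies a linear map and ReLU.
    adj: (n, n) adjacency matrix; h: (n, d) node features; w: (d, d) weights."""
    agg = adj @ h                     # sum-aggregate neighbour features
    return np.maximum(agg @ w, 0.0)   # linear transform + ReLU nonlinearity

# Toy graph: a path 0 - 1 - 2 (symmetric adjacency, no self-loops)
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
h = np.eye(3)   # one-hot initial node features
w = np.eye(3)   # identity weights, purely for illustration
out = message_passing_layer(adj, h, w)
# The centre node (node 1) has aggregated the features of nodes 0 and 2.
```

The choice of aggregator (sum, mean, max) is one of the architecture choices the talk examines; swapping `adj @ h` for a mean or max over neighbours changes the layer's inductive bias.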
Series: This talk is part of the Isaac Newton Institute Seminar Series.
Included in Lists
- All CMS events
- bld31
- dh539
- Featured lists
- INI info aggregator
- Isaac Newton Institute Seminar Series
- School of Physical Sciences
- Seminar Room 1, Newton Institute