Graph Neural Networks Use Graphs When They Shouldn’t
- 👤 Speaker: Maya Bechler-Speicher, Tel Aviv University
- 📅 Date & Time: Friday 01 November 2024, 14:00 - 15:00
- 📍 Venue: MR2 Centre for Mathematical Sciences
Abstract
Predictions over graphs play a crucial role in various domains, including social networks and medicine. Graph Neural Networks (GNNs) have emerged as the dominant approach for learning on graph data. Although a graph structure is provided as input to the GNN, in some cases the best solution can be obtained by ignoring it. While GNNs can ignore the graph structure in such cases, it is not clear that they will. In this talk, I will show that GNNs actually tend to overfit the given graph structure: they use it even when a better solution can be obtained by ignoring it. By analysing the implicit bias of gradient-descent learning of GNNs, I will show that when the ground-truth function does not use the graph, GNNs are not guaranteed to learn a solution that ignores it, even with infinite data. I will then prove that within the family of regular graphs, GNNs are guaranteed to extrapolate when learning with gradient descent. Based on these empirical and theoretical findings, I will demonstrate on real data how regular graphs can be leveraged to reduce graph overfitting and enhance performance. Finally, I will present a recent novel approach, Cayley Graph Propagation, which propagates information over a special type of regular graph, the Cayley graphs of the special linear group SL(2, Z_n), to mitigate overfitting and information bottlenecks.
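The Cayley graphs referenced at the end of the abstract can be constructed directly: the vertices are the elements of SL(2, Z_n) (2x2 matrices over Z_n with determinant 1), and each vertex is connected to its products with a fixed generating set. The sketch below is illustrative only, not the speaker's implementation; it assumes the standard generators [[1,1],[0,1]] and [[1,0],[1,1]] together with their inverses, which yields a 4-regular graph.

```python
from itertools import product

def sl2_cayley_graph(n):
    """Build the Cayley graph of SL(2, Z_n) with the (assumed) standard
    generators [[1,1],[0,1]] and [[1,0],[1,1]] plus their inverses.
    Vertices are matrices flattened to tuples (a, b, c, d)."""
    # Enumerate all 2x2 matrices over Z_n with determinant 1 (mod n).
    vertices = [
        (a, b, c, d)
        for a, b, c, d in product(range(n), repeat=4)
        if (a * d - b * c) % n == 1
    ]

    def matmul(x, y):
        # 2x2 matrix multiplication mod n on flattened tuples.
        a, b, c, d = x
        e, f, g, h = y
        return ((a * e + b * g) % n, (a * f + b * h) % n,
                (c * e + d * g) % n, (c * f + d * h) % n)

    # Generators and their inverses; the inverse of [[1,1],[0,1]] is
    # [[1,-1],[0,1]], i.e. (1, n-1, 0, 1) over Z_n.
    gens = [(1, 1, 0, 1), (1, (n - 1) % n, 0, 1),
            (1, 0, 1, 1), (1, 0, (n - 1) % n, 1)]

    # Connect each group element g to g*s for every generator s.
    adjacency = {v: {matmul(v, s) for s in gens} for v in vertices}
    return vertices, adjacency

verts, adj = sl2_cayley_graph(3)
print(len(verts))                             # |SL(2, Z_3)| = 24
print({len(nbrs) for nbrs in adj.values()})   # 4-regular: prints {4}
```

For a prime p, |SL(2, Z_p)| = p(p - 1)(p + 1), so these graphs grow quickly with n; their appeal for message passing is that they are sparse, regular expanders, which is what makes them candidates for relieving information bottlenecks.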
Series: This talk is part of the Cambridge Image Analysis Seminars series.