BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Graph Neural Networks Use Graphs When They Shouldn’t - Maya Bech
 ler-Speicher\, Tel Aviv University
DTSTART:20241101T140000Z
DTEND:20241101T150000Z
UID:TALK222001@talks.cam.ac.uk
CONTACT:Ferdia Sherry
DESCRIPTION:Predictions over graphs play a crucial role in various domai
 ns\, including social networks and medicine.\nGraph Neural Networks (GNN
 s) have emerged as the dominant approach for learning on graph data.\nAl
 though a graph structure is provided as input to the GNN\, in some case
 s the best solution can be obtained by ignoring it.\nWhile GNNs have th
 e ability to ignore the graph structure in such cases\, it is not clea
 r that they will.\nIn this talk\, I will show that GNNs actually tend t
 o overfit the given graph structure: they use it even when a better sol
 ution can be obtained by ignoring it.\nBy analyzing the implicit bias o
 f gradient-descent learning in GNNs\, I will show that when the ground-
 truth function does not use the graph\, GNNs are not guaranteed to lear
 n a solution that ignores it\, even with infinite data.\nI will prove t
 hat within the family of regular graphs\, GNNs are guaranteed to extrap
 olate when learning with gradient descent.\nThen\, based on our empiric
 al and theoretical findings\, I will demonstrate on real data how regul
 ar graphs can be leveraged to reduce graph overfitting and enhance perf
 ormance.\nFinally\, I will present a recent approach\, Cayley Graph Prop
 agation\, which propagates information over special types of regular grap
 hs\, namely the Cayley graphs of the special linear group SL(2\, Zn)\, t
 o mitigate overfitting and information bottlenecks.
LOCATION:MR2\, Centre for Mathematical Sciences
END:VEVENT
END:VCALENDAR
