BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Sign and Basis Invariant Networks for Spectral Graph Representatio
 n Learning - Joshua Robinson\, MIT
DTSTART:20221012T153000Z
DTEND:20221012T163000Z
UID:TALK180422@talks.cam.ac.uk
CONTACT:Iulia Duta
DESCRIPTION:Eigenvectors computed from data arise in various scenarios inc
 luding principal component analysis and matrix factorizations. Another k
 ey example is the eigenvectors of the graph Laplacian\, which encode infor
 mation about the structure of a graph or manifold. An important recent app
 lication of Laplacian eigenvectors is to graph positional encodings\, which
  have been used to develop more powerful graph architectures. However\, ei
 genvectors have symmetries that should be respected by models taking eigen
 vector inputs: (i) sign flips\, since if v is an eigenvector then so is -v
 \; and (ii) more general basis symmetries\, which occur in higher dimensio
 nal eigenspaces with infinitely many choices of basis eigenvectors. We int
 roduce SignNet and BasisNet---new neural network architectures that are si
 gn and basis invariant. We prove that our networks are universal\, i.e.\, 
 they can approximate any continuous function of eigenvectors with the desi
 red invariances. Moreover\, when used with Laplacian eigenvectors\, our ar
 chitectures are provably expressive for graph representation learning: the
 y can approximate—and go beyond—any spectral graph convolution\, and 
 can compute spectral invariants that go beyond message passing neural netw
 orks. Experiments show the strength of our networks for molecular graph re
 gression\, learning expressive graph representations\, and more.
LOCATION:Lecture Theater 1
END:VEVENT
END:VCALENDAR
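
A minimal PyTorch sketch of the sign-invariance property described in the abstract (a function f of an eigenvector v should satisfy f(v) = f(-v)), written in the spirit of SignNet's form f(v_1, ..., v_k) = rho([phi(v_i) + phi(-v_i)]_i). The class name SignInvariantNet, the MLP sizes, and the random-graph check are illustrative assumptions, not the speaker's implementation.

import torch
import torch.nn as nn

class SignInvariantNet(nn.Module):
    def __init__(self, n_nodes, k, hidden=64, out_dim=16):
        super().__init__()
        # phi: shared MLP applied to each eigenvector and to its negation
        self.phi = nn.Sequential(nn.Linear(n_nodes, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))
        # rho: MLP combining the k sign-invariant features into one output
        self.rho = nn.Sequential(nn.Linear(k * hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, out_dim))

    def forward(self, eigvecs):
        # eigvecs: (k, n_nodes), one Laplacian eigenvector per row
        z = self.phi(eigvecs) + self.phi(-eigvecs)  # unchanged if any row flips sign
        return self.rho(z.flatten())

if __name__ == "__main__":
    torch.manual_seed(0)
    n, k = 10, 4
    A = (torch.rand(n, n) > 0.5).float()
    A = torch.triu(A, diagonal=1)
    A = A + A.T                                     # random symmetric adjacency, no self-loops
    L = torch.diag(A.sum(dim=1)) - A                # combinatorial graph Laplacian
    _, evecs = torch.linalg.eigh(L)
    v = evecs[:, :k].T                              # first k eigenvectors as rows
    net = SignInvariantNet(n_nodes=n, k=k)
    flips = torch.tensor([1.0, -1.0, 1.0, -1.0]).unsqueeze(1)
    print(torch.allclose(net(v), net(flips * v)))   # True: output ignores sign flips

Summing phi(v_i) and phi(-v_i) makes each per-eigenvector feature unchanged under a sign flip, which is why the final check prints True regardless of which eigenvectors are negated.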
