
Neural Sequence Models for Mathematical Reasoning


  • Speaker: Yuhuai (Tony) Wu, Stanford University & Google
  • Time: Tuesday 14 June 2022, 16:00–17:00
  • Venue: Zoom

If you have a question about this talk, please contact Mateja Jamnik.

RESCHEDULED, NOTE THE UNUSUAL TIME

Join us on Zoom

Advanced mathematical reasoning is a distinctive feature of human intelligence, and it is a fundamental building block for many intellectual pursuits and scientific developments. In this talk, I will describe generic approaches to mathematical reasoning using neural sequence models. Unlike previous approaches with highly constrained action spaces, sequence models allow the prover to freely synthesize arbitrary proof steps, including conjectures, lemmas, and definitions. We created a tool that helps formal mathematicians prove theorems in Lean, and our model made contributions accepted by professional mathematicians, adding new proofs to Lean's Mathlib library.

This work highlights the power of modern sequence models for automated theorem proving. However, major reasoning capabilities are still missing. First, for reasoning tasks such as theorem proving or program synthesis, one must be able to work with large and continuously changing theorem databases and code repositories. I introduced a new transformer architecture that memorizes the internal representations of past inputs, allowing the model to use newly added facts or code without retraining. Second, reasoning is often the result of extended chains of thought, while current models mostly perform fast, intuitive pattern matching. I introduced a learning algorithm that enables models to generate explicit rationales when tackling challenging reasoning problems, by bootstrapping from a pre-trained large language model. I will conclude with future research directions towards the goal of building a strong mathematical reasoning model.
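The rationale-bootstrapping idea mentioned above can be sketched as a simple loop: a model generates an explicit rationale and answer for each problem, and only rationales whose final answer matches the ground truth are kept as fine-tuning data. The sketch below is purely illustrative; all names (`generate_rationale`, `bootstrap`, the toy "model") are hypothetical placeholders, not the speaker's actual code.

```python
# Illustrative sketch of one round of rationale bootstrapping:
# keep only rationales that lead to a correct final answer.

def generate_rationale(model, problem):
    """Stub standing in for a language model: return (rationale, answer).
    Here a toy rule answers addition problems."""
    a, b = problem
    answer = model["bias"] + a + b
    return f"{a} plus {b} equals {answer}", answer

def bootstrap(model, dataset):
    """One bootstrapping round: filter generated rationales by whether
    their final answer matches the ground-truth label."""
    kept = []
    for problem, truth in dataset:
        rationale, answer = generate_rationale(model, problem)
        if answer == truth:  # correctness filter
            kept.append((problem, rationale))
    return kept

# Toy dataset of ((a, b), correct answer); the last label is deliberately wrong,
# so its generated rationale is filtered out.
data = [((1, 2), 3), ((2, 2), 4), ((3, 5), 9)]
model = {"bias": 0}
finetune_set = bootstrap(model, data)
print(len(finetune_set))  # → 2
```

In the full method the kept (problem, rationale) pairs would be used to fine-tune the model, and the loop would repeat with the improved model.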

Bio:

Yuhuai (Tony) Wu is a Postdoctoral Scholar at Stanford University working with Percy Liang and Jay McClelland, and a Research Scientist at Google. His long-term research interest is to build a machine that can automate mathematics. Towards this goal, his current research focuses on building fundamental reasoning architectures as well as improving the reasoning capabilities of large language models. During his Ph.D. at the University of Toronto, he was awarded a Google PhD Fellowship.

This talk is part of the Artificial Intelligence Research Group Talks (Computer Laboratory) series.


© 2006-2022 Talks.cam, University of Cambridge.