Postponed till next week: Work in progress: Making efficient use of language models for theorem proving

If you have a question about this talk, please contact Mateja Jamnik.

Join us on Zoom

Language models are neural networks trained on broad data that can be adapted to a wide range of downstream tasks. They have shown impressive capabilities in domains such as NLP, vision, robotics, and reasoning. In mathematical reasoning, language models have achieved several state-of-the-art results, but their huge computational cost prevents wider adoption in the formal mathematics community. We introduce two methods to make efficient use of language models for theorem proving: one augments the language model's prompt with previous proof steps, and the other samples a larger and more diverse set of output candidates. Our model improved the state-of-the-art theorem proving success rate on the Archive of Formal Proofs from 33.2% to 39.6% without incurring any additional computational cost.
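To make the two ideas concrete, here is a minimal Python sketch of what they might look like in a proof-search loop. It is not the speakers' actual implementation: `generate` is a hypothetical stand-in for whatever language-model API is used, and `build_prompt`, `sample_candidates`, and `toy_generate` are illustrative names. The sketch assumes that (1) recent proof steps are prepended to the prompt as extra context, and (2) candidates are drawn at several sampling temperatures and deduplicated so the proof search only checks distinct steps.

import random
from typing import Callable, List, Sequence

def build_prompt(goal: str, previous_steps: List[str], max_steps: int = 3) -> str:
    """Idea 1 (assumed form): augment the prompt with recent proof steps."""
    context = "\n".join(previous_steps[-max_steps:])
    return f"{context}\n{goal}" if context else goal

def sample_candidates(
    generate: Callable[[str, float], str],
    prompt: str,
    n: int = 8,
    temperatures: Sequence[float] = (0.2, 0.6, 1.0, 1.4),
) -> List[str]:
    """Idea 2 (assumed form): sample many candidates at mixed temperatures
    for diversity, then deduplicate before handing them to proof search."""
    seen, candidates = set(), []
    for i in range(n):
        step = generate(prompt, temperatures[i % len(temperatures)])
        if step not in seen:
            seen.add(step)
            candidates.append(step)
    return candidates

# Toy stand-in model so the sketch runs on its own; a real system would
# query an actual language model here.
def toy_generate(prompt: str, temperature: float) -> str:
    tactics = ["simp", "auto", "blast", "metis", "induction n"]
    k = min(len(tactics) - 1, int(temperature * random.random() * len(tactics)))
    return f"apply ({tactics[k]})"

if __name__ == "__main__":
    prompt = build_prompt('lemma add_comm: "a + b = b + (a::nat)"',
                          ['lemma add_zero: "a + 0 = (a::nat)"  by simp'])
    print(sample_candidates(toy_generate, prompt))

Under these assumptions, the extra context and the wider, deduplicated candidate pool both come from reusing model calls the search would make anyway, which is consistent with the abstract's claim of no additional computational cost.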

This talk is part of the Artificial Intelligence Research Group Talks (Computer Laboratory) series.
