
Program Synthesis and Understanding with Pretrained Language Models


If you have a question about this talk, please contact Panagiotis Fytas.

In the last few years there has been tremendous growth in program understanding and generation using NLP-grounded deep learning models. While earlier approaches could handle only the simplest tasks, the recent application of Pretrained Language Models (PLMs) trained specifically on code snippets has brought new capabilities, especially for text-to-code generation, or program synthesis. This talk will discuss the reasons for the recent surge of interest in this topic and the main differences between working with natural language and programming languages. We will provide an overview of the latest approaches, their intended uses and limitations. Among other models, we will introduce PanguCoder, our brand-new Huawei in-house model for code synthesis, which is the building block of Huawei's AI-assisted code generation tool. We will cover the existing datasets and benchmarks that enable fair comparison among different approaches, including CodeXGLUE, a benchmark suite covering the most common code-oriented tasks, and HumanEval, which is, at this time, the de facto benchmark for text-to-code generation. Finally, we will show some real-world applications and future perspectives in the area.
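Since HumanEval is described as the de facto benchmark for text-to-code generation, a short sketch of its standard pass@k metric may help: given n generated samples per problem, of which c pass the unit tests, pass@k estimates the probability that at least one of k drawn samples is correct. The product form below is the numerically stable version of 1 - C(n-c, k)/C(n, k); the function name and example numbers are ours, not part of the talk.

```python
def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n - c, k) / C(n, k),
    computed as a product to avoid overflowing binomial coefficients."""
    if n - c < k:
        # Too few failing samples to fill a size-k draw: some sample
        # in every draw must be correct.
        return 1.0
    result = 1.0
    for i in range(n - c + 1, n + 1):
        result *= 1.0 - k / i
    return 1.0 - result

# Example: 10 samples, 5 correct -> pass@1 = 0.5
print(pass_at_k(n=10, c=5, k=1))
```

In benchmark reports this per-problem estimate is averaged over all problems in the suite; pass@1 with a single greedy sample (n = 1) is the most commonly quoted number.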

This talk is part of the Language Technology Lab Seminars series.


© 2006-2024 Talks.cam, University of Cambridge. Contact Us | Help and Documentation | Privacy and Publicity