Natural Language Understanding and Generation with Abstract Meaning Representation

If you have a question about this talk, please contact Qianchu Liu.

In this talk, I will discuss my recent work on parsing and generation with Abstract Meaning Representation (AMR). AMR is a semantic representation for natural language that encodes sentences as graphs, where nodes denote concepts and edges denote the semantic relations between them. Sentences are represented as graphs rather than trees because a node can have multiple incoming edges, called reentrancies, which arise from linguistic phenomena such as control, coreference, and coordination.
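
To make reentrancies concrete, consider the classic AMR for "The boy wants to go": the control verb makes the boy the ARG0 of both want-01 and go-02, so the boy node receives two incoming edges. The following is a minimal sketch of how such a node can be detected; it uses the penman Python library as an assumption of convenience, since the talk does not prescribe any particular toolkit.

    # Minimal sketch using the `penman` library (pip install penman).
    import penman
    from collections import Counter

    # "The boy wants to go": control makes b (the boy) the ARG0 of both
    # want-01 and go-02, so node b has two incoming edges: a reentrancy.
    graph = penman.decode('(w / want-01 :ARG0 (b / boy) :ARG1 (g / go-02 :ARG0 b))')

    # Count incoming edges per node; a count above 1 marks a reentrancy.
    incoming = Counter(target for _, _, target in graph.edges())
    print([v for v, n in incoming.items() if n > 1])  # -> ['b']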

I will present my work on AMR parsing (from text to AMR) and AMR-to-text generation (from AMR to text). For the parsing task, we showed that techniques from tree parsing can be adapted to parse AMR graphs. To better analyze the quality of AMR parsers, we developed a set of fine-grained metrics, including a metric for reentrancy prediction, and we then studied the main causes of reentrancies in AMR and their impact on parsing performance. For the generation task, we showed that neural encoders with access to reentrancies outperform those without, demonstrating that reentrancies also matter for generation.
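
To illustrate the idea behind such a fine-grained metric, here is a hedged sketch that scores only the reentrant edges of a predicted graph against a gold graph. It is an illustration of the concept, not the evaluation suite used in this work: a real metric, like Smatch, must also search for the best alignment between the variables of the two graphs, whereas the sketch assumes that alignment is already given.

    # Sketch of a reentrancy-only F1: compare just the edges whose target
    # node is reentrant. Assumes the two graphs already share variable
    # names (actual metrics search over variable alignments).
    import penman
    from collections import Counter

    def reentrant_edges(graph):
        # Edges pointing at a node with more than one incoming edge.
        incoming = Counter(t for _, _, t in graph.edges())
        return {e for e in graph.edges() if incoming[e[2]] > 1}

    def reentrancy_f1(predicted, gold):
        pred, ref = reentrant_edges(predicted), reentrant_edges(gold)
        overlap = len(pred & ref)
        if overlap == 0:
            return 0.0
        precision, recall = overlap / len(pred), overlap / len(ref)
        return 2 * precision * recall / (precision + recall)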

I will also discuss the problem of using AMR for languages other than English. Annotating new AMR datasets for other languages is expensive and requires defining ad hoc annotation guidelines for each new language. It is therefore reasonable to ask whether AMR annotations can be shared across languages.

This talk is part of the Language Technology Lab Seminars series.
