Learning Dependency-Based Compositional Semantics

Percy Liang (UC Berkeley, pliang@), Michael I. Jordan (UC Berkeley, jordan@), Dan Klein (UC Berkeley, klein@)

Abstract

Compositional question answering begins by mapping questions to logical forms, but training a semantic parser to perform this mapping typically requires the costly annotation of the target logical forms. In this paper, we learn to map questions to answers via latent logical forms, which are induced automatically from question-answer pairs. In tackling this challenging learning problem, we introduce a new semantic representation which highlights a parallel between dependency syntax and efficient evaluation of logical forms. On two standard semantic parsing benchmarks, GEO and JOBS, our system obtains the highest published accuracies despite requiring no annotated logical forms.

1 Introduction

What is the total population of the ten largest capitals in the US? Answering these types of complex questions compositionally involves first mapping the questions into logical forms (semantic parsing). Supervised semantic parsers (Zelle and Mooney, 1996; Tang and Mooney, 2001; Ge and Mooney, 2005; Zettlemoyer and Collins, 2005; Kate and Mooney, 2007; Zettlemoyer and Collins, 2007; Wong and Mooney, 2007; Kwiatkowski et al., 2010) rely on manual annotation of logical forms, which is expensive.

On the other hand, existing unsupervised semantic parsers (Poon and Domingos, 2009) do not handle deeper linguistic phenomena such as quantification, negation, and superlatives. As in Clarke et al. (2010), we obviate the need for annotated logical forms by considering the end-to-end problem of mapping questions to answers. However, we still model the logical form, now as a latent variable, to capture the complexities of language. Figure 1 shows our probabilistic model.

[Figure 1: Our probabilistic model. Semantic parsing maps a question x (e.g., "state with the largest area") to a latent logical form z via p_θ(z | x); evaluating z against a world w yields the answer y (e.g., Alaska).]
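The question → logical form → answer pipeline of Figure 1 can be sketched in a few lines. This is a deliberately minimal toy, not the paper's DCS formalism or its probabilistic model: the world, the tuple encoding of logical forms, and the operator names (`states`, `argmax`) are all invented for illustration.

```python
# Illustrative sketch of evaluating a logical form against a "world"
# (a toy database), as in the pipeline of Figure 1. All names and data
# here are hypothetical, chosen only to make the example runnable.

# Toy world: each state mapped to its area (square miles).
WORLD = {
    "alaska": 665384,
    "texas": 268596,
    "california": 163695,
}

def evaluate(logical_form, world):
    """Recursively compute the denotation of a tuple-encoded logical form."""
    op = logical_form[0]
    if op == "states":
        # Denotes the set of all states in the world.
        return set(world)
    if op == "argmax":
        # Pick the element of a subset maximizing the given attribute.
        _, subset_lf, attr = logical_form
        subset = evaluate(subset_lf, world)
        assert attr == "area"  # only attribute in this toy world
        return max(subset, key=lambda entity: world[entity])
    raise ValueError(f"unknown operator: {op}")

# Question "state with the largest area" -> latent logical form z -> answer y
z = ("argmax", ("states",), "area")
print(evaluate(z, WORLD))  # -> alaska
```

In the paper's setting the logical form z is never observed; only (question, answer) pairs supervise learning, so training must search over candidate z whose denotation matches the annotated answer.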
