Constituent Parsing with Incremental Sigmoid Belief Networks

Ivan Titov
Department of Computer Science, University of Geneva
24 rue General Dufour, CH-1211 Geneve 4, Switzerland

James Henderson
School of Informatics, University of Edinburgh
2 Buccleuch Place, Edinburgh EH8 9LW, United Kingdom

Abstract

We introduce a framework for syntactic parsing with latent variables based on a form of dynamic Sigmoid Belief Networks called Incremental Sigmoid Belief Networks. We demonstrate that a previous feed-forward neural network parsing model can be viewed as a coarse approximation to inference with this class of graphical model. By constructing a more accurate but still tractable approximation, we significantly improve parsing accuracy, suggesting that ISBNs provide a good idealization for parsing. This generative model of parsing achieves state-of-the-art results on WSJ text and an 8% error reduction over the baseline neural network parser.

1 Introduction

Latent variable models have recently been of increasing interest in Natural Language Processing, and in parsing in particular (e.g. Koo and Collins, 2005; Matsuzaki et al., 2005; Riezler et al., 2002). Latent variables provide a principled way to include features in a probability model without needing to have data labeled with those features in advance. Instead, a labeling with these features can be induced as part of the training process. The difficulty with latent variable models is that even small numbers of latent variables can lead to computationally intractable inference (e.g. decoding, parsing). In this paper we propose a solution to this problem based on dynamic Sigmoid Belief Networks (SBNs) (Neal, 1992). The dynamic SBNs which we propose, called Incremental Sigmoid Belief Networks (ISBNs), have large numbers of latent variables, which makes exact inference intractable. However, they can be approximated sufficiently well to build fast and accurate statistical parsers which induce features
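For readers unfamiliar with Sigmoid Belief Networks, the sketch below illustrates the conditional probability that defines them (Neal, 1992) and the kind of mean-based feed-forward computation that the paper characterizes as a coarse approximation to inference with such a network. This is a minimal illustration under stated assumptions, not the authors' implementation; the function names, weight matrix, and example values are invented for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sbn_unit_probability(parent_states, weights, bias=0.0):
    """Sigmoid Belief Network conditional (Neal, 1992):
    P(s_i = 1 | parents) = sigmoid(bias + sum_j W_ij * s_j)."""
    return sigmoid(bias + np.dot(weights, parent_states))

def approximate_layer_means(parent_means, weight_matrix, biases):
    """A rough feed-forward approximation: propagate the *means* of the
    latent variables through the same sigmoid instead of sampling binary
    values, in the spirit of the neural-network approximation discussed
    in the paper (details here are illustrative only)."""
    return sigmoid(biases + weight_matrix @ parent_means)

# Example: three parent variables feeding two latent variables.
parents = np.array([1.0, 0.0, 1.0])           # binary parent states
W = np.array([[0.5, -0.3, 0.8],
              [-0.2, 0.4, 0.1]])               # illustrative weights
b = np.array([0.0, -0.1])                      # illustrative biases

print(sbn_unit_probability(parents, W[0], b[0]))  # single-unit conditional
print(approximate_layer_means(parents, W, b))     # mean-based approximation
```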
