Simple Semi-supervised Dependency Parsing
Terry Koo, Xavier Carreras, and Michael Collins
MIT CSAIL, Cambridge, MA 02139, USA
{maestro, carreras, mcollins}@csail.mit.edu

Abstract

We present a simple and effective semi-supervised method for training dependency parsers. We focus on the problem of lexical representation, introducing features that incorporate word clusters derived from a large unannotated corpus. We demonstrate the effectiveness of the approach in a series of dependency parsing experiments on the Penn Treebank and Prague Dependency Treebank, and we show that the cluster-based features yield substantial gains in performance across a wide range of conditions. For example, in the case of English unlabeled second-order parsing, we improve from a baseline accuracy of 92.02% to 93.16%, and in the case of Czech unlabeled second-order parsing, we improve from a baseline accuracy of 86.13% to 87.13%. In addition, we demonstrate that our method also improves performance when small amounts of training data are available, and can roughly halve the amount of supervised data required to reach a desired level of performance.

1 Introduction

In natural language parsing, lexical information is seen as crucial to resolving ambiguous relationships, yet lexicalized statistics are sparse and difficult to estimate directly.
It is therefore attractive to consider intermediate entities which exist at a coarser level than the words themselves, yet capture the information necessary to resolve the relevant ambiguities. In this paper, we introduce lexical intermediaries via a simple two-stage semi-supervised approach. First, we use a large unannotated corpus to define word clusters, and then we use that clustering to construct a new cluster-based feature mapping for a discriminative learner. We are thus relying on the ability of discriminative learning methods to identify and exploit informative features while remaining agnostic as to the origin of such features. To demonstrate the effectiveness of our approach, we conduct experiments in dependency parsing.
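The two-stage recipe above (cluster first, then feed clusters to a discriminative learner) can be sketched concretely. The sketch below assumes Brown-style hierarchical clusters encoded as bit strings, where prefixes of a bit string give coarser cluster granularities; the function name, feature templates, and prefix lengths are illustrative assumptions, not the paper's exact feature set.

```python
def cluster_features(head, mod, clusters, prefix_lengths=(4, 6)):
    """Build head-modifier features from word identities and cluster prefixes.

    `clusters` maps a word to its full bit-string cluster id,
    e.g. clusters["bank"] == "0101100".
    """
    feats = [f"word:{head}_{mod}"]  # baseline lexicalized feature
    h_bits = clusters.get(head)
    m_bits = clusters.get(mod)
    if h_bits and m_bits:
        # Bit-string prefixes act as coarser word classes, interpolating
        # between POS-like granularity and full lexical identity.
        for k in prefix_lengths:
            feats.append(f"cluster{k}:{h_bits[:k]}_{m_bits[:k]}")
    return feats

clusters = {"the": "0010", "bank": "0101100", "river": "0101101"}
print(cluster_features("bank", "the", clusters, prefix_lengths=(2, 4)))
```

Because the cluster-based features fire even for word pairs unseen in the treebank (as long as both words were clustered on the unannotated corpus), a discriminative parser can weight them alongside ordinary lexical features, which is the sense in which the learner remains agnostic about where the features came from.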