Scientific paper: "Creating Robust Supervised Classifiers via Web-Scale N-gram Data"

Shane Bergsma, University of Alberta, sbergsma@
Emily Pitler, University of Pennsylvania, epitler@
Dekang Lin, Google Inc., lindek@

Abstract

In this paper, we systematically assess the value of using web-scale N-gram data in state-of-the-art supervised NLP classifiers. We compare classifiers that include or exclude features for the counts of various N-grams, where the counts are obtained from a web-scale auxiliary corpus. We show that including N-gram count features can advance the state-of-the-art accuracy on standard data sets for adjective ordering, spelling correction, noun compound bracketing, and verb part-of-speech disambiguation. More importantly, when operating on new domains, or when labeled training data is not plentiful, we show that using web-scale N-gram features is essential for achieving robust performance.

1 Introduction

Many NLP systems use web-scale N-gram counts (Keller and Lapata, 2003; Nakov and Hearst, 2005; Brants et al., 2007). Lapata and Keller (2005) demonstrate good performance on eight tasks using unsupervised web-based models. They show that web counts are superior to counts from a large corpus. Bergsma et al. (2009) propose unsupervised and supervised systems that use counts from Google's N-gram corpus (Brants and Franz, 2006).
Web-based models perform particularly well on generation tasks, where systems choose between competing sequences of output text (such as different spellings), as opposed to analysis tasks, where systems choose between abstract labels (such as part-of-speech tags or parse trees). In this work, we address two natural and related questions which these previous studies leave open:

1. Is there a benefit in combining web-scale counts with the features used in state-of-the-art supervised approaches?
2. How well do web-based models perform on new domains, or when labeled data is scarce?

We address these questions on two generation and two analysis tasks, using .
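As a minimal sketch of the idea, the snippet below shows how web-scale counts can serve as features for a generation task such as adjective ordering: each candidate output sequence is scored by the log of its N-gram count, and an unsupervised web-based model simply prefers the more frequent sequence. The `NGRAM_COUNTS` table and its numbers are invented for illustration (a real system would query a corpus such as the Google N-gram data); a supervised classifier would combine these log-count values with its standard lexical features rather than use them alone.

```python
import math

# Hypothetical web-scale N-gram counts. In a real system these would be
# looked up in a web-scale auxiliary corpus; the numbers here are invented.
NGRAM_COUNTS = {
    ("big", "red", "ball"): 120000,
    ("red", "big", "ball"): 900,
}

def log_count(ngram):
    """Smoothed log-count feature for an N-gram (add-one, so unseen N-grams get 0)."""
    return math.log(NGRAM_COUNTS.get(ngram, 0) + 1)

def count_features(candidates):
    """Map each candidate output sequence to its log-count feature value.
    A supervised classifier would include these alongside its other features."""
    return {c: log_count(c) for c in candidates}

def prefer(candidates):
    """Unsupervised web-based decision rule: choose the candidate
    with the larger web count."""
    feats = count_features(candidates)
    return max(feats, key=feats.get)

# Adjective ordering: the more frequent order wins.
print(prefer([("big", "red", "ball"), ("red", "big", "ball")]))
# → ('big', 'red', 'ball')
```

The same feature template applies to the other tasks the paper lists (spelling correction, noun compound bracketing, verb part-of-speech disambiguation): only the set of candidate N-grams changes.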
