A Comparative Study of Parameter Estimation Methods for Statistical Natural Language Processing
Jianfeng Gao, Galen Andrew, Kristina Toutanova
Microsoft Research, Redmond, WA 98052

Mark Johnson
Brown University, Providence, RI 02912

Abstract

This paper presents a comparative study of five parameter estimation algorithms on four NLP tasks. Three of the five algorithms are well known in the computational linguistics community: Maximum Entropy (ME) estimation with L2 regularization, the Averaged Perceptron (AP), and Boosting. We also investigate ME estimation with L1 regularization using a novel optimization algorithm, and BLasso, which is a version of Boosting with Lasso (L1) regularization. We first investigate all of our estimators on two re-ranking tasks: a parse selection task and a language model (LM) adaptation task. Then we apply the best of these estimators to two additional tasks involving conditional sequence models: a Conditional Markov Model (CMM) for part-of-speech tagging and a Conditional Random Field (CRF) for Chinese word segmentation. Our experiments show that, across tasks, three of the estimators (ME estimation with L1 or L2 regularization, and AP) are in a near statistical tie for first place.

1 Introduction

Parameter estimation is fundamental to many statistical approaches to NLP. Because of the high-dimensional nature of natural language, it is often easy to generate an extremely large number of features. The challenge of parameter estimation is to find a combination of the typically noisy, redundant features that accurately predicts the target output variable and avoids overfitting. Intuitively, this can be achieved either by selecting a small number of highly effective features and ignoring the others, or by averaging over a large number of weakly informative features. The first intuition motivates feature selection methods such as Boosting and BLasso (e.g., Collins 2000; Zhao and Yu 2004), which usually work best when many …
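For concreteness, the two regularized ME objectives compared in this study take the standard log-linear form sketched below; this is the conventional formulation, and the paper's exact notation (and the names of the regularization constants sigma and alpha here) may differ. Given training pairs (x_i, y_i) and a conditional model p(y | x; w), L2-regularized ME estimation minimizes

    J_{L2}(w) = -\sum_{i} \log p(y_i \mid x_i; w) + \frac{1}{2\sigma^2} \lVert w \rVert_2^2,

while the L1-regularized (Lasso-style) variant minimizes

    J_{L1}(w) = -\sum_{i} \log p(y_i \mid x_i; w) + \alpha \lVert w \rVert_1.

The two penalties mirror the two intuitions above: the L1 term drives many weights exactly to zero, performing implicit feature selection, whereas the L2 term shrinks all weights toward zero without eliminating any, averaging over many weakly informative features.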