Improvement of a Whole Sentence Maximum Entropy Language Model Using Grammatical Features

Fredy Amaya and José Miguel Benedí
Departamento de Sistemas Informáticos y Computación, Universidad Politécnica de Valencia, Camino de Vera s/n, 46022 Valencia, Spain
{famaya, jbenedi}@

Abstract

In this paper, we propose adding long-term grammatical information to a Whole Sentence Maximum Entropy language model (WSME) in order to improve the performance of the model. The grammatical information was added to the WSME model as features, which were obtained from a Stochastic Context-Free Grammar. Finally, experiments using a part of the Penn Treebank corpus were carried out, and significant improvements were achieved.

1 Introduction

Language modeling is an important component in computational applications such as speech recognition, automatic translation, optical character recognition, information retrieval, etc. (Jelinek, 1997; Borthwick, 1997). Statistical language models have gained considerable acceptance due to the efficiency they have demonstrated in the fields in which they have been applied (Bahl et al., 1983; Jelinek et al., 1991; Ratnaparkhi, 1998; Borthwick, 1999).

Traditional statistical language models calculate the probability of a sentence s using the chain rule:

    p(s) = ∏_{i=1}^{n} p(w_i | h_i),    (1)

where h_i = w_1 ... w_{i-1}, which is usually known as the history of w_i.

(This work has been partially supported by the Spanish CYCIT under contract TIC98/0423-C06. Granted by Universidad del Cauca, Popayán, Colombia.)
The effort in language modeling techniques is usually directed to the estimation of p(w_i | h_i). The language model defined by the expression p(w_i | h_i) is called a conditional language model. In principle, the determination of the conditional probability in (1) is expensive, because the number of possible word sequences is very large. Traditional conditional language models assume that the probability of a word does not depend on the entire history, and the history is limited by an …
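The chain-rule factorization in (1) and its usual n-gram truncation can be illustrated with a minimal sketch. This is not the paper's WSME model; it is a plain bigram conditional model with add-alpha smoothing, and all function names and the toy corpus are illustrative assumptions.

```python
import math
from collections import defaultdict

def train_bigram(corpus):
    """Count unigrams and bigrams over sentences padded with <s> and </s>."""
    uni = defaultdict(int)
    bi = defaultdict(int)
    for sent in corpus:
        tokens = ["<s>"] + sent + ["</s>"]
        for w in tokens:
            uni[w] += 1
        for a, b in zip(tokens, tokens[1:]):
            bi[(a, b)] += 1
    return uni, bi

def sentence_logprob(sent, uni, bi, vocab_size, alpha=1.0):
    """Chain rule with the bigram Markov truncation:
    log p(s) = sum_i log p(w_i | w_{i-1}), using add-alpha smoothing."""
    tokens = ["<s>"] + sent + ["</s>"]
    logp = 0.0
    for a, b in zip(tokens, tokens[1:]):
        p = (bi[(a, b)] + alpha) / (uni[a] + alpha * vocab_size)
        logp += math.log(p)
    return logp

# Toy usage: a sentence seen in training scores higher than a scrambled one.
corpus = [["the", "cat", "sat"], ["the", "dog", "sat"]]
uni, bi = train_bigram(corpus)
V = len(uni)
print(sentence_logprob(["the", "cat", "sat"], uni, bi, V))
print(sentence_logprob(["cat", "the", "sat"], uni, bi, V))
```

Truncating h_i to the previous word keeps estimation tractable, which is exactly the limitation that motivates adding long-term (whole-sentence) information in the WSME approach.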
