Automatic Acquisition of Language Model based on Head-Dependent Relation between Words

Seungmi Lee and Key-Sun Choi
Department of Computer Science
Center for Artificial Intelligence Research
Korea Advanced Institute of Science and Technology
e-mail: leesm, kschoi

Abstract

Language modeling associates an a priori probability with a sequence of words; it is a key part of many natural language applications such as speech recognition and statistical machine translation. In this paper, we present a language model based on a kind of simple dependency grammar. The grammar consists of head-dependent relations between words and can be learned automatically from a raw corpus using the reestimation algorithm, which is also introduced in this paper. Our experiments show that the proposed model performs better than n-gram models, at 11% to … reductions in test corpus entropy.

1 Introduction

Language modeling assigns an a priori probability to a sentence. It is a key part of many natural language applications such as speech recognition and statistical machine translation. Previous work on language modeling can be broadly divided into two approaches: one is n-gram-based and the other is grammar-based.

An n-gram model estimates the probability of a sentence as the product of the probabilities of the words in the sentence, assuming that the probability of the n-th word depends on the previous n-1 words. The n-gram probabilities are estimated by simply counting the n-gram frequencies in a training corpus. In some cases, class or part-of-speech n-grams are used instead of word n-grams (Brown et al., 1992; Chang and Chen, 1996). The n-gram model has been widely used, but it has always been clear that it cannot represent long-distance dependencies.

In contrast with the n-gram model, the grammar-based approach assigns syntactic structures to a sentence and computes the probability of the sentence using the probabilities of those structures. Long-distance dependencies can …
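To make the counting procedure above concrete, the standard n-gram factorization and its maximum-likelihood estimate are as follows; this is the textbook formulation the excerpt describes, not an equation reproduced from the paper:

\[
P(w_1 \dots w_m) \approx \prod_{i=1}^{m} P(w_i \mid w_{i-n+1} \dots w_{i-1}),
\qquad
\hat{P}(w_i \mid w_{i-n+1}^{\,i-1}) = \frac{C(w_{i-n+1}^{\,i})}{C(w_{i-n+1}^{\,i-1})},
\]

where \(w_a^b\) denotes the word subsequence \(w_a \dots w_b\) and \(C(\cdot)\) is a frequency count over the training corpus. In practice the raw counts are smoothed to assign nonzero probability to unseen n-grams.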
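The head-dependent idea mentioned in the abstract can be illustrated with a minimal sketch. What follows is a hypothetical illustration, not the paper's actual model: it scores a sentence under one given dependency structure as a product of head-to-dependent link probabilities. The names `links` and `link_prob` are invented for this example, and the paper's reestimation algorithm for learning such probabilities from a raw corpus is defined only in the full text.

```python
import math

def dependency_log_score(words, links, link_prob, floor=1e-9):
    """Log-probability of a sentence under ONE dependency structure,
    taken as the product of P(dependent | head) over all links.
    `links` is a list of (head_index, dependent_index) pairs and
    `link_prob` maps (head_word, dependent_word) to a probability.
    Illustrative only; the paper's model also accounts for all
    possible structures via its reestimation algorithm."""
    logp = 0.0
    for head, dep in links:
        logp += math.log(link_prob.get((words[head], words[dep]), floor))
    return logp

# Example: "she reads books", with "reads" heading the other two words.
words = ["she", "reads", "books"]
links = [(1, 0), (1, 2)]
probs = {("reads", "she"): 0.2, ("reads", "books"): 0.1}
print(dependency_log_score(words, links, probs))
```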
