Randomised Language Modelling for Statistical Machine Translation

David Talbot and Miles Osborne
School of Informatics, University of Edinburgh
2 Buccleuch Place, Edinburgh EH8 9LW, UK
miles@

Abstract

A Bloom filter (BF) is a randomised data structure for set membership queries. Its space requirements are significantly below lossless information-theoretic lower bounds, but it produces false positives with some quantifiable probability. Here we explore the use of BFs for language modelling in statistical machine translation. We show how a BF containing n-grams can enable us to use much larger corpora and higher-order models, complementing a conventional n-gram LM within an SMT system. We also consider (i) how to include approximate frequency information efficiently within a BF and (ii) how to reduce the error rate of these models by first checking for lower-order sub-sequences in candidate n-grams. Our solutions in both cases retain the one-sided error guarantees of the BF while taking advantage of the Zipf-like distribution of word frequencies to reduce the space requirements.

1 Introduction

Language modelling (LM) is a crucial component in statistical machine translation (SMT). Standard n-gram language models assign probabilities to translation hypotheses in the target language, typically as smoothed trigram models (Chiang, 2005). Although it is well known that higher-order LMs and models trained on additional monolingual corpora can yield better translation performance, the challenges in deploying large LMs are not trivial. Increasing the order of an n-gram model can result in an exponential increase in the number of parameters: for corpora such as the English Gigaword corpus, for instance, there are 300 million distinct trigrams and over a billion 5-grams. Since an LM may be queried millions of times per sentence, it should ideally reside locally in memory to avoid time-consuming remote or disk-based look-ups. Against this background we ...
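The memory pressure described above can be made concrete with the standard Bloom filter sizing result; the worked numbers below are a back-of-envelope illustration, not figures from the paper. For n stored items and a target false-positive rate p, the optimal bit-array size m and number of hash functions k are:

```latex
m = -\frac{n \ln p}{(\ln 2)^2}, \qquad k = \frac{m}{n}\,\ln 2
```

With n = 10^9 distinct 5-grams and p = 0.01, this gives m ≈ 9.6 × 10^9 bits ≈ 1.2 GB and k ≈ 7 hash functions, whereas an explicit table storing even just five 4-byte word IDs per entry would need at least 20 GB before any counts or hash-table overhead.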
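To make the data structure itself concrete, here is a minimal Bloom filter sketch in Python. The bit-array size m, the number of hash functions k, the SHA-256-based hashing, and the representation of n-grams as space-delimited strings are all illustrative assumptions rather than details from the paper.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: insertion and one-sided membership queries."""

    def __init__(self, m=1 << 24, k=4):
        self.m = m    # number of bits (illustrative default)
        self.k = k    # number of hash functions (k <= 8 in this scheme)
        self.bits = bytearray((m + 7) // 8)

    def _positions(self, item):
        # Derive k positions from one SHA-256 digest, a simple stand-in for
        # k independent hash functions; the 32-byte digest supports k <= 8.
        digest = hashlib.sha256(item.encode("utf-8")).digest()
        for i in range(self.k):
            yield int.from_bytes(digest[4 * i : 4 * i + 4], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        # Every inserted item set all k of its bits, so there are no false
        # negatives; a false positive occurs only when other insertions
        # happen to have set all k bits for an unseen item.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

bf = BloomFilter()
bf.add("much larger corpora")                 # store a trigram as one string
assert "much larger corpora" in bf             # no false negatives, ever
print("unseen:", "entirely fictional trigram" in bf)  # usually False
```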
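The abstract's point (i), approximate frequency information, can be encoded by pairing an n-gram with each value of a quantised count, as in the sketch below. It reuses the BloomFilter class above, and the base-2 logarithmic codebook is an assumption motivated by the Zipf-like distribution of counts that the abstract mentions, not necessarily the authors' exact scheme.

```python
import math

def quantise(count, base=2):
    # Logarithmic codebook: rare n-grams (the vast majority under a
    # Zipf-like distribution) cost one insertion, frequent ones only a few.
    return 1 + int(math.log(count, base))

def add_with_count(bf, ngram, count):
    # Insert the key (ngram, j) for every j up to the quantised count.
    for j in range(1, quantise(count) + 1):
        bf.add(f"{ngram}\t{j}")

def approx_count(bf, ngram, base=2):
    # Probe increasing j until the first miss. Because the BF has no false
    # negatives, every true (ngram, j) key answers present, so the estimate
    # is one-sided: a false positive can only inflate it, never lose a count.
    j = 0
    while f"{ngram}\t{j + 1}" in bf:
        j += 1
    return 0 if j == 0 else base ** (j - 1)
```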
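Point (ii), reducing the error rate by first checking lower-order sub-sequences, can be sketched as follows. A genuine n-gram guarantees that both of its (n-1)-gram sub-sequences were also seen, so rejecting candidates whose sub-sequences are absent filters out false positives without ever introducing a false negative. The helper below assumes all orders were stored in the same filter and is a simplified, non-recursive reading of the idea.

```python
def contains_checked(bf, tokens):
    # tokens: an n-gram as a word list, e.g. ["much", "larger", "corpora"].
    n = len(tokens)
    if n > 1:
        # Both (n-1)-gram sub-sequences must be present for a true n-gram.
        for sub in (tokens[: n - 1], tokens[1:]):
            if " ".join(sub) not in bf:
                return False
    return " ".join(tokens) in bf
```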