Adaptive Language Modeling for Word Prediction

Keith Trnka
University of Delaware
Newark, DE 19716
trnka@

Abstract

We present the development and tuning of a topic-adapted language model for word prediction, which improves keystroke savings over a comparable baseline. We outline our plans to develop and integrate style adaptations, building on our experience in topic modeling to dynamically tune the model to both topically and stylistically relevant texts.

1 Introduction

People who use Augmentative and Alternative Communication (AAC) devices communicate slowly, often below 10 words per minute (wpm), compared to 150 wpm or higher for speech (Newell et al., 1998). AAC devices are highly specialized keyboards with speech synthesis, typically providing single-button input for common words or phrases but requiring the user to type letter-by-letter for other words, called fringe vocabulary. Many commercial systems (e.g., PRC's ECO) and researchers (Li and Hirst, 2005; Trnka et al., 2006; Wandmacher and Antoine, 2007; Matiasek and Baroni, 2003) have leveraged word prediction to help speed AAC communication rates. While the user is typing an utterance letter-by-letter, the system continuously provides potential completions of the current word, which the user may select. The list of predicted words is generated using a language model.

At best, modern devices utilize a trigram model and very basic recency promotion. However, one of the lamented weaknesses of ngram models is their sensitivity to the training data: they require substantial training data to be accurate, and increasingly more data as more of the context is utilized. For example, Lesher et al. (1999) demonstrate that bigram and trigram models for word prediction are not saturated even when trained on 3 million words, in contrast to a unigram model. In addition to the problem of needing substantial amounts of training text to build a reasonable model, ngrams are sensitive to the difference between training and testing (user) text.
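To make the prediction loop concrete, the following is a minimal Python sketch of prefix completion with a count-based trigram model and simple backoff. The function names, the backoff scheme, and the toy training data are illustrative assumptions for this sketch, not the paper's implementation, which is described here only at the level of a trigram model with recency promotion.

```python
from collections import Counter, defaultdict

def train_ngrams(tokens):
    """Collect unigram, bigram, and trigram counts from a token stream."""
    unigrams = Counter(tokens)
    bigrams = defaultdict(Counter)
    trigrams = defaultdict(Counter)
    for i, w in enumerate(tokens):
        if i >= 1:
            bigrams[tokens[i - 1]][w] += 1
        if i >= 2:
            trigrams[(tokens[i - 2], tokens[i - 1])][w] += 1
    return unigrams, bigrams, trigrams

def predict(prefix, history, unigrams, bigrams, trigrams, n=5):
    """Return up to n completions of `prefix`, preferring trigram context,
    then bigram, then unigram counts (a simple backoff, assumed here)."""
    candidates = (
        trigrams.get(tuple(history[-2:])) if len(history) >= 2 else None,
        bigrams.get(history[-1]) if history else None,
        unigrams,
    )
    for counts in candidates:
        if not counts:
            continue
        matches = [(w, c) for w, c in counts.items() if w.startswith(prefix)]
        if matches:
            matches.sort(key=lambda wc: -wc[1])
            return [w for w, _ in matches[:n]]
    return []

# Toy usage: after "want to", no continuation starts with 't', so the
# model backs off to the bigram counts for "to" and proposes "the".
uni, bi, tri = train_ngrams("we want to go to the store and we want to eat".split())
print(predict("t", ["want", "to"], uni, bi, tri))  # ['the']
```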
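The abstract's evaluation measure, keystroke savings, is conventionally the percentage of keystrokes saved relative to typing every character letter-by-letter. The helper below is a sketch of that convention, assuming each selection from the prediction list costs one keystroke; that cost model is a common assumption in the word prediction literature, not a detail given in this excerpt.

```python
def keystroke_savings(keys_letter_by_letter, keys_with_prediction):
    """Percentage of keystrokes saved relative to letter-by-letter entry."""
    return 100.0 * (keys_letter_by_letter - keys_with_prediction) / keys_letter_by_letter

# Example: "hello there" is 11 keystrokes letter-by-letter (including the
# space). With prediction: 'h', 'e', select "hello", space, 't', select
# "there" -- 6 keystrokes in total.
print(round(keystroke_savings(11, 6), 1))  # 45.5
```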
