New Techniques for Context Modeling

Eric Sven Ristad and Robert G. Thomas
Department of Computer Science, Princeton University
{ristad, rgt}@…

Abstract

We introduce three new techniques for statistical language models: extension modeling, nonmonotonic contexts, and the divergence heuristic. Together these techniques result in language models that have few states, even fewer parameters, and low message entropies.

1 Introduction

Current approaches to automatic speech and handwriting transcription demand a strong language model with a small number of states and an even smaller number of parameters. If the model entropy is high, then transcription results are abysmal. If there are too many states, then transcription becomes computationally infeasible. And if there are too many parameters, then overfitting occurs and predictive performance degrades.

In this paper we introduce three new techniques for statistical language models: extension modeling, nonmonotonic contexts, and the divergence heuristic. Together these techniques result in language models that have few states, even fewer parameters, and low message entropies. For example, our techniques achieve a message entropy of […] bits/char on the Brown corpus using only 89,325 parameters. By modestly increasing the number of model parameters in a principled manner, our techniques are able to further reduce the message entropy of the Brown corpus to […] bits/char. In contrast, the character 4-gram model requires 250 times as many parameters in order to achieve a message entropy of only […] bits/char (see the measurement sketch below). Given the logarithmic nature of codelengths, a savings of […] bits/char is quite significant. The fact that our model performs significantly better using vastly fewer parameters argues […]

The only change to our model selection procedure is to replace the incremental cost formula with a constant cost of 2 bits per extension. This small change reduces the test message entropy from […] to […] bits/char, but it also […]
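The constant-cost variant just described can be read as a minimum description length trade-off: a candidate context extension is kept only when the bits it saves on the training data exceed the flat 2-bit charge for growing the model. The sketch below illustrates that accept/reject test in Python; the count-splitting setup and the `codelength` helper are illustrative assumptions, not the paper's actual selection procedure.

```python
from math import log2

EXTENSION_COST_BITS = 2.0  # flat model cost charged per accepted extension

def codelength(counts):
    """Empirical codelength in bits of the data summarized by `counts`:
    n * H(p) for the maximum-likelihood distribution p = count / n."""
    n = sum(counts.values())
    return -sum(c * log2(c / n) for c in counts.values() if c)

def accept_extension(parent, children):
    """Keep a candidate extension only when splitting the parent context's
    counts among the longer child contexts saves more data bits than the
    constant 2-bit extension cost."""
    saved = codelength(parent) - sum(codelength(c) for c in children)
    return saved > EXTENSION_COST_BITS

# Toy case: a context whose successor counts split cleanly between two
# longer candidate contexts sharpens prediction enough to pay the 2 bits.
parent = {"u": 90, "a": 10}
children = [{"u": 90}, {"a": 10}]
print(accept_extension(parent, children))  # True: the split pays for itself
```

Charging a constant per extension rather than an incremental formula trades selection precision for simplicity: every candidate faces the same hurdle, regardless of where it sits in the model.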

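For reference, the bits/char figures above are test-set message entropies: the average codelength a model assigns per character. Below is a minimal sketch of measuring one for a character 4-gram model; the add-one smoothing and the toy strings are assumptions for illustration, not the smoothing scheme or corpora used in the paper. The parameter count reported is simply the number of distinct 4-grams stored, which is what makes large n-gram models so parameter-hungry.

```python
from collections import Counter
from math import log2

def train_char_ngrams(text, n):
    """Count n-grams and their (n-1)-char contexts in the training text."""
    grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    contexts = Counter(text[i:i + n - 1] for i in range(len(text) - n + 1))
    return grams, contexts

def message_entropy(text, grams, contexts, n, alphabet=256):
    """Average codelength in bits/char on `text`, with add-one smoothing
    so that unseen events still receive finite codelengths."""
    bits = 0.0
    for i in range(n - 1, len(text)):
        gram, ctx = text[i - n + 1:i + 1], text[i - n + 1:i]
        p = (grams[gram] + 1) / (contexts[ctx] + alphabet)
        bits -= log2(p)
    return bits / (len(text) - n + 1)

train = "the cat sat on the mat. " * 400   # stand-in for a training corpus
test = "the rat sat on the mat. " * 10     # stand-in for a test message
grams, contexts = train_char_ngrams(train, 4)
print(f"{message_entropy(test, grams, contexts, 4):.2f} bits/char")
print(f"{len(grams)} parameters (distinct 4-grams stored)")
```

On real corpora the distinct-4-gram count grows into the millions, which is the parameter blow-up the paper's context models are designed to avoid.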