
Measures of Distributional Similarity

Lillian Lee
Department of Computer Science, Cornell University, Ithaca, NY 14853-7501

Abstract

We study distributional similarity measures for the purpose of improving probability estimation for unseen cooccurrences. Our contributions are three-fold: an empirical comparison of a broad range of measures; a classification of similarity functions based on the information that they incorporate; and the introduction of a novel function that is superior at evaluating potential proxy distributions.

1 Introduction

An inherent problem for statistical methods in natural language processing is that of sparse data: the inaccurate representation in any training corpus of the probability of low-frequency events. In particular, reasonable events that happen not to occur in the training set may mistakenly be assigned a probability of zero. These unseen events generally make up a substantial portion of novel data; for example, Essen and Steinbiss (1992) report that 12% of the test-set bigrams in a 75%-25% split of one million words did not occur in the training partition.

We consider here the question of how to estimate the conditional cooccurrence probability P(v|n) of an unseen word pair (n, v) drawn from some finite set N × V. Two state-of-the-art technologies are Katz's (1987) backoff method and Jelinek and Mercer's (1980) interpolation method. Both use P(v) to estimate P(v|n) when (n, v) is unseen, essentially ignoring the identity of n.
An alternative approach is distance-weighted averaging, which arrives at an estimate for unseen cooccurrences by combining estimates for cooccurrences involving similar words:

\hat{P}(v \mid n) = \frac{\sum_{m \in S(n)} \mathrm{sim}(n, m)\, P(v \mid m)}{\sum_{m \in S(n)} \mathrm{sim}(n, m)} \qquad (1)

where S(n) is a set of candidate similar words and sim(n, m) is a function of the similarity between n and m. We focus on distributional rather than semantic similarity (Resnik, 1995) because the goal of distance-weighted averaging is to smooth probability estimates.
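The similarity-weighted average in equation (1) can be sketched directly. The following is a minimal illustration, not the paper's implementation: the candidate set, the similarity values, and the conditional probabilities are all made-up placeholders standing in for quantities estimated from a corpus.

```python
def smoothed_prob(v, n, candidates, sim, cond_prob):
    """Estimate P(v|n) as a similarity-weighted average of P(v|m)
    over candidate similar words m in S(n), per equation (1)."""
    total_sim = sum(sim(n, m) for m in candidates)
    if total_sim == 0:
        return 0.0
    return sum(sim(n, m) * cond_prob(v, m) for m in candidates) / total_sim

# Toy usage with hand-picked, purely illustrative numbers:
# S(n) = {"apple", "pear"}, sim(n, apple) = 3, sim(n, pear) = 1,
# P(v|apple) = 0.2, P(v|pear) = 0.6.
candidates = ["apple", "pear"]
sim = lambda n, m: {"apple": 3.0, "pear": 1.0}[m]
cond_prob = lambda v, m: {"apple": 0.2, "pear": 0.6}[m]

print(smoothed_prob("eat", "banana", candidates, sim, cond_prob))
# (3.0 * 0.2 + 1.0 * 0.6) / (3.0 + 1.0) = 0.3
```

Because the weights are normalized by the total similarity, the estimate is a proper convex combination of the candidates' conditional probabilities.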
