Corpus representativeness for syntactic information acquisition

Núria Bel
IULA, Universitat Pompeu Fabra
La Rambla 30-32
08002 Barcelona, Spain

Abstract

This paper refers to part of our research in the area of automatic acquisition of computational lexicon information from corpora. It reports ongoing research on corpus representativeness. For the task of inducing information from text, we wanted to fix a certain degree of confidence in the size and composition of the collection of documents to be observed. The results show that it is possible to work with a relatively small corpus of texts if it is tuned to a particular domain. Moreover, it seems that a small tuned corpus will be more informative for real parsing than a general corpus.

1 Introduction

The coverage of the computational lexicon used in deep Natural Language Processing (NLP) is crucial for parsing success. But rather frequently, the absence of particular entries, or the fact that the information encoded for them does not cover very specific syntactic contexts --such as those found in technical texts--, makes highly informative grammars unsuitable for real applications. Moreover, this poses a real problem when porting a particular application from domain to domain, as the lexicon has to be re-encoded in the light of the new domain. In fact, in order to minimize ambiguities and possible over-generation, application-based lexicons tend to be tuned for every specific domain addressed by a particular application.

Tuning lexicons to different domains is a delaying factor in the deployment of NLP applications, as it raises their cost not only in terms of money but also, and crucially, in terms of time. A desirable solution would be a plug-and-play system that, given a collection of documents supplied by the customer, could induce a tuned lexicon. By tuned we mean full coverage both in terms of (1) entries, detecting new items and assigning them a syntactic behavior pattern, and (2) syntactic information.

The question we have addressed here is to define the size and composition of the corpus we would need in order to obtain necessary and sufficient information for Machine Learning techniques to induce that type of information. Representativeness of a corpus is a topic largely dealt with, especially in corpus linguistics. One of the standard references is Biber (1993), where the author offers guidelines for corpus design to characterize a language.
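One concrete way to reason about whether a corpus is "large enough" for acquisition is to track how much new lexical evidence each additional batch of documents contributes: when the growth curve flattens, further text adds little for the learner to observe. The following Python sketch is not the paper's procedure but an illustration of that idea; the corpus directory, the rough tokenizer, and the batch size are all hypothetical assumptions.

```python
# Illustrative sketch: estimate whether a domain corpus is saturating by
# counting how many previously unseen word types each new batch of documents
# contributes. The 'corpus/' directory, the tokenizer, and the batch size
# are assumptions for illustration, not the paper's actual method.

import re
from pathlib import Path

def tokens(text):
    """Very rough tokenizer: lowercase alphabetic word forms."""
    return re.findall(r"[a-záéíóúàèìòùñç]+", text.lower())

def saturation_curve(doc_paths, batch_size=10):
    """Yield (docs_seen, vocabulary_size, new_types_in_batch) per batch."""
    seen = set()
    for start in range(0, len(doc_paths), batch_size):
        batch = doc_paths[start:start + batch_size]
        new_types = set()
        for path in batch:
            text = Path(path).read_text(encoding="utf-8", errors="ignore")
            for tok in tokens(text):
                if tok not in seen:
                    new_types.add(tok)
        seen |= new_types
        yield start + len(batch), len(seen), len(new_types)

if __name__ == "__main__":
    # Hypothetical directory of plain-text domain documents.
    docs = sorted(str(p) for p in Path("corpus").glob("*.txt"))
    for n_docs, vocab, new in saturation_curve(docs):
        print(f"{n_docs:4d} docs  vocab={vocab:6d}  new types in last batch={new:5d}")
```

The same curve could just as well be computed over verb lemmas or over the syntactic contexts in which they occur, which is closer to what lexicon acquisition actually needs than raw word forms.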
