Sparse Information Extraction: Unsupervised Language Models to the Rescue

Doug Downey, Stefan Schoenmackers, and Oren Etzioni
Turing Center, Department of Computer Science and Engineering
University of Washington, Box 352350, Seattle, WA 98195, USA
{ddowney, stef, etzioni}@

Abstract

Even in a massive corpus such as the Web, a substantial fraction of extractions appear infrequently. This paper shows how to assess the correctness of sparse extractions by utilizing unsupervised language models. The REALM system, which combines HMM-based and n-gram-based language models, ranks candidate extractions by the likelihood that they are correct. Our experiments show that REALM reduces extraction error by 39%, on average, when compared with previous work. Because REALM pre-computes language models based on its corpus and does not require any hand-tagged seeds, it is far more scalable than approaches that learn models for each individual relation from hand-tagged data. Thus REALM is ideally suited for Open Information Extraction, where the relations of interest are not specified in advance and their number is potentially vast.

1 Introduction

Information Extraction (IE) from text is far from infallible. In response, researchers have begun to exploit the redundancy in massive corpora such as the Web in order to assess the veracity of extractions (e.g., Downey et al. 2005; Etzioni et al. 2005; Feldman et al. 2006). In essence, such methods utilize extraction patterns to generate candidate extractions (e.g., "Istanbul") and then assess each candidate by computing co-occurrence statistics between the extraction and words or phrases indicative of class membership (e.g., "cities such as"). However, Zipf's Law governs the distribution of extractions. Thus, even the Web has limited redundancy for less prominent instances of relations. Indeed, 50% of the extractions in the data sets employed by Downey et al. (2005) appeared only once. As a result, Downey et al.'s model and related methods had no way of ...
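The co-occurrence assessment described above can be sketched as follows. This is an illustrative toy, not REALM itself or any of the cited systems' implementations: the function names, the PMI-style ratio, and the hit counts are all assumptions standing in for real Web statistics.

```python
# Illustrative sketch of redundancy-based assessment: score a candidate
# extraction by how often it co-occurs with a class-indicative discriminator
# phrase (e.g., "cities such as <candidate>"), relative to how often the
# candidate appears at all. Counts below are hypothetical stand-ins for
# Web hit counts; this is NOT the REALM algorithm.

def cooccurrence_score(count_with_discriminator, count_total, smoothing=1e-9):
    """PMI-style ratio: fraction of a candidate's occurrences that appear
    alongside the discriminator phrase. Smoothing avoids division by zero."""
    return (count_with_discriminator + smoothing) / (count_total + smoothing)

# Hypothetical counts for candidates of the class "city":
# candidate -> (hits for '"cities such as" + candidate', hits for candidate alone)
corpus_counts = {
    "Istanbul": (120, 50000),  # frequent, well-attested extraction
    "Tokkyo": (0, 15),         # sparse (likely erroneous) extraction
}

def rank_candidates(counts):
    """Return candidates ordered from most to least likely correct."""
    scored = {c: cooccurrence_score(both, total)
              for c, (both, total) in counts.items()}
    return sorted(scored, key=scored.get, reverse=True)

print(rank_candidates(corpus_counts))
```

The sketch also shows the failure mode the paper targets: a sparse candidate with zero discriminator hits gets a near-zero score whether it is wrong ("Tokkyo") or merely rare, which is exactly why co-occurrence statistics alone cannot rank sparse extractions.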