A Generalized-Zero-Preserving Method for Compact Encoding of Concept Lattices

Matthew Skala
School of Computer Science, University of Waterloo
mskala@

Victoria Krakovna, Janos Kramar
Dept. of Mathematics, University of Toronto
vkrakovna jkramar @

Gerald Penn
Dept. of Computer Science, University of Toronto
gpenn@

Abstract

Constructing an encoding of a concept lattice using short bit vectors allows for efficient computation of join operations on the lattice. Join is the central operation any unification-based parser must support. We extend the traditional bit vector encoding, which represents join failure using the zero vector, to count any vector with fewer than a fixed number of one bits as failure. This allows non-joinable elements to share bits, resulting in a smaller vector size. A constraint solver is used to construct the encoding, and a variety of techniques are employed to find near-optimal solutions and handle timeouts. An evaluation is provided comparing the extended representation of failure with traditional bit vector techniques.

1 Introduction

The use of bit vectors is almost as old as HPSG parsing itself. Since they were first suggested in the programming languages literature (Aït-Kaci et al., 1989) as a method for computing the unification of two types without table lookup, bit vectors have been attractive because of three speed advantages:

- The classical bit vector encoding uses bitwise AND to calculate type unification. This is hard to beat.
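The classical encoding can be sketched as follows. This is a minimal illustration on a hypothetical five-type hierarchy (the type names and bit assignments are ours, not taken from the paper): each type's code ORs together one bit per type at or below it in the hierarchy, so the bitwise AND of two codes is the code of their join, and the all-zero vector signals unification failure.

```python
# Hypothetical tiny type hierarchy: "a" and "b" unify to "c";
# "d" is not unifiable with "a", "b", or "c".
CODES = {
    "top": 0b11111,  # most general type: bits of every type
    "a":   0b00011,  # bits of a and its subtype c
    "b":   0b00101,  # bits of b and its subtype c
    "c":   0b00001,  # join of a and b
    "d":   0b01000,  # incomparable with a, b, c
}
TYPE_OF = {code: name for name, code in CODES.items()}

def join(x: str, y: str):
    """Unify two types via bitwise AND; the zero vector means failure."""
    code = CODES[x] & CODES[y]
    return TYPE_OF[code] if code else None
```

For example, `join("a", "b")` yields `"c"`, while `join("a", "d")` returns `None`: a single AND instruction decides both the result type and failure, with no table lookup.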
- Hash tables, the most common alternative, involve computing the Dedekind-MacNeille completion (DMC) at compile time if the input type hierarchy is not a bounded-complete partial order. That takes exponential time in the worst case; most bit vector methods avoid explicitly computing it.

- With large type signatures, the table that indexes unifiable pairs of types may be so large that it pushes working parsing memory into swap. This loss of locality of reference costs time.

Why isn't everyone using bit vectors?
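The abstract's generalized zero can be sketched in the same style: any AND result with fewer than a fixed number of one bits counts as join failure, so codes of non-joinable types may overlap and the vectors can be shorter. The threshold and the hand-made encoding below are our own illustration, not an encoding produced by the paper's constraint solver.

```python
THRESHOLD = 2  # any result with fewer than 2 one bits is a failure

# Hypothetical hierarchy: "a" and "b" unify to "c"; "d" joins with
# none of them, yet shares its low bit with a, b, and c -- allowed
# because the shared-bit overlap stays below THRESHOLD.
CODES = {
    "top": 0b11111,
    "a":   0b01011,
    "b":   0b01101,
    "c":   0b01001,  # join of a and b (2 one bits)
    "d":   0b10001,  # overlaps a, b, c in one bit only
}
TYPE_OF = {code: name for name, code in CODES.items()}

def join(x: str, y: str):
    """Unify two types; too few one bits is a generalized zero."""
    code = CODES[x] & CODES[y]
    if bin(code).count("1") < THRESHOLD:
        return None  # generalized zero: counts as join failure
    return TYPE_OF.get(code)
```

Here `join("a", "b")` still yields `"c"`, and `join("a", "d")` fails even though the AND result is nonzero; under the traditional all-zero convention, `"d"` would have needed a bit disjoint from every non-joinable type.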
