Archivus: A multimodal system for multimedia meeting browsing and retrieval

Marita Ailomaa, Miroslav Melichar, Martin Rajman
Artificial Intelligence Laboratory
Ecole Polytechnique Federale de Lausanne
CH-1015 Lausanne, Switzerland

Agnes Lisowska, Susan Armstrong
ISSCO/TIM/ETI, University of Geneva
CH-1211 Geneva, Switzerland

Abstract

This paper presents Archivus, a multimodal language-enabled meeting browsing and retrieval system. The prototype is in an early stage of development, and we are currently exploring the role of natural language for interacting in this relatively unfamiliar and complex domain. We briefly describe the design and implementation status of the system, and then focus on how this system is used to elicit useful data for supporting hypotheses about multimodal interaction in the domain of meeting retrieval and for developing NLP modules for this specific domain.

1 Introduction

In the past few years there has been an increasing interest in research on developing systems for efficient recording of and access to multimedia meeting data[1]. This work often results in videos of meetings, transcripts, electronic copies of the documents referenced, as well as annotations of various kinds on this data. In order to exploit this work, a user needs an interface that allows them to retrieve and browse the multimedia meeting data easily and efficiently. In our work we have developed a multimodal (voice, keyboard, mouse, pen) meeting browser, Archivus, whose purpose is to allow users to access multimedia meeting data in a way that is most natural to them. We believe that since this is a new domain of interaction, users can be encouraged to try out and ...

[1] The IM2 project, the AMI project, the Meeting Room Project at Carnegie Mellon University, and rich transcription of natural and impromptu meetings at ICSI Berkeley (EARS).
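As an illustration of the kind of data the introduction describes, the following is a minimal sketch, not taken from the paper, of how one meeting's artifacts (video, transcript, referenced documents, annotations) might be grouped into a single retrievable record; all class and field names here are hypothetical.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Annotation:
    layer: str        # e.g. "dialogue-act", "topic", "named-entity"
    start_sec: float  # start position in the meeting recording
    end_sec: float    # end position in the meeting recording
    label: str        # the annotated value for this span


@dataclass
class MeetingRecord:
    meeting_id: str
    video_uri: str    # location of the recorded meeting video
    transcript: str   # (possibly automatic) transcript text
    documents: List[str] = field(default_factory=list)          # referenced documents
    annotations: List[Annotation] = field(default_factory=list)

    def annotations_in(self, layer: str) -> List[Annotation]:
        # Return all annotations belonging to one annotation layer.
        return [a for a in self.annotations if a.layer == layer]

A retrieval interface can then filter such records by transcript content, by referenced document, or by annotation layer; the actual data model used in Archivus is not specified in this excerpt.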
