Generating image descriptions using dependency relational patterns

Ahmet Aker and Robert Gaizauskas
University of Sheffield

Abstract

This paper presents a novel approach to automatic captioning of geo-tagged images by summarizing multiple web documents that contain information related to an image's location. The summarizer is biased by dependency pattern models towards sentences which contain features typically provided for different scene types, such as those of churches, bridges, etc. Our results show that summaries biased by dependency pattern models lead to significantly higher ROUGE scores than both n-gram language models reported in previous work and Wikipedia baseline summaries. Summaries generated using dependency patterns are also more readable than those generated without them.

1 Introduction

The number of images tagged with location information on the web is growing rapidly, facilitated by the availability of GPS (Global Positioning System) equipped cameras and phones as well as by the widespread use of online social sites. The majority of these images are indexed with GPS coordinates (latitude and longitude) only and/or have minimal captions. This typically small amount of textual information associated with an image is of limited usefulness for image indexing, organization and search. Therefore, methods which could automatically supplement the information available for image indexing and lead to improved image retrieval would be extremely useful. Following the general approach proposed by Aker and Gaizauskas (2009), in this paper we describe a method for automatic image captioning, or caption enhancement, starting with only a scene or subject type and a set of place names pertaining to an image - for example, "church" and "St. Paul's, London". Scene type and place names can be obtained automatically, given GPS coordinates and compass information, using techniques such as those
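To make the pattern-biasing idea concrete, the sketch below ranks candidate sentences by how well their dependency relations match a per-scene-type pattern model. It is a minimal illustration, not the paper's implementation: it assumes spaCy as the dependency parser and uses a tiny hand-built pattern model for the scene type "church", whereas the original system derives pattern models from a corpus of descriptions for each scene type.

```python
# Minimal sketch: bias sentence selection with dependency relation
# patterns. Assumptions (not from the paper): spaCy as the parser and
# a toy hand-built pattern model for the scene type "church".
from collections import Counter

import spacy

nlp = spacy.load("en_core_web_sm")


def dependency_patterns(sentence):
    """Extract (head_lemma, relation, dependent_lemma) triples."""
    doc = nlp(sentence)
    return [(tok.head.lemma_, tok.dep_, tok.lemma_)
            for tok in doc if tok.dep_ != "ROOT"]


# Hypothetical pattern model: relative weights of dependency patterns
# that are typical in descriptions of churches.
church_model = Counter({
    ("build", "nsubjpass", "church"): 5,
    ("design", "agent", "by"): 3,
    ("locate", "prep", "in"): 2,
})


def pattern_score(sentence, model):
    """Average model weight of the sentence's dependency patterns."""
    patterns = dependency_patterns(sentence)
    if not patterns:
        return 0.0
    return sum(model[p] for p in patterns) / len(patterns)


# Rank candidate sentences from retrieved web documents so that the
# summarizer prefers scene-typical content over off-topic material.
candidates = [
    "The church was built in 1675 and designed by Christopher Wren.",
    "Tickets can be purchased online or at the entrance.",
]
ranked = sorted(candidates,
                key=lambda s: pattern_score(s, church_model),
                reverse=True)
print(ranked[0])
```

In this toy setup the first candidate scores higher because its dependency triples overlap with the church model, which is the effect the summarizer's bias is meant to achieve at scale.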
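The ROUGE comparison reported in the abstract measures n-gram and subsequence overlap between generated and human-written captions. As a rough illustration only, the following snippet computes the same family of metrics with the rouge-score Python package; the package, reference text, and generated text here are assumptions, since the original evaluation used the standard ROUGE toolkit against human model summaries.

```python
# Rough illustration of ROUGE scoring with the rouge-score package.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"],
                                  use_stemmer=True)

reference = "St Paul's Cathedral was designed by Christopher Wren."
generated = "The cathedral was designed by Sir Christopher Wren."

# score(target, prediction) returns precision/recall/F1 per metric.
scores = scorer.score(reference, generated)
for name, s in scores.items():
    print(f"{name}: recall={s.recall:.2f} f1={s.fmeasure:.2f}")
```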
