Classification of Feedback Expressions in Multimodal Data

Costanza Navarretta
University of Copenhagen, Centre for Language Technology (CST)
Njalsgade 140, 2300-DK Copenhagen
costanza@

Patrizia Paggio
University of Copenhagen, Centre for Language Technology (CST)
Njalsgade 140, 2300-DK Copenhagen
paggio@

Abstract

This paper addresses the issue of how linguistic feedback expressions, prosody and head gestures, i.e. head movements and face expressions, relate to one another in a collection of eight video-recorded Danish map-task dialogues. The study shows that in these data, prosodic features and head gestures significantly improve automatic classification of dialogue act labels for linguistic expressions of feedback.

1 Introduction

Several authors in communication studies have pointed out that head movements are relevant to feedback phenomena (see McClave (2000) for an overview). Others have looked at the application of machine learning algorithms to annotated multimodal corpora. For example, Jokinen and Ragni (2007) and Jokinen et al. (2008) find that machine learning algorithms can be trained to recognise some of the functions of head movements, while Reidsma et al. (2009) show that there is a dependence between focus of attention and assignment of dialogue act labels. Related are also the studies by Rieks op den Akker and Schulz (2008) and Murray and Renals (2008): both achieve promising results in the automatic segmentation of dialogue acts using the annotations in a large multimodal corpus. Work has also been done on prosody and gestures in the specific domain of map-task dialogues, which is also targeted in this paper. Sridhar et al. (2009) obtain promising results in dialogue act tagging of the Switchboard-DAMSL corpus using lexical, syntactic and prosodic cues, while Gravano and Hirschberg (2009) examine the relation between particular acoustic and prosodic turn-yielding cues and turn taking in a large corpus of task-oriented dialogues. Louwerse et al. (2006) and Louwerse et al. (2007) …
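The abstract reports that adding prosodic and head-gesture features improves automatic classification of dialogue act labels, but this excerpt does not spell out the experimental pipeline. The sketch below is only a rough illustration of that kind of feature comparison, assuming scikit-learn, an invented feature set (word, f0_slope, head), toy data and an arbitrary classifier; it is not the authors' actual setup.

# Illustrative sketch only (not the authors' pipeline): one way to check whether
# adding prosodic and head-gesture features to lexical ones helps dialogue act
# classification of feedback expressions. Feature names and data are invented.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction import DictVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Hypothetical annotated feedback tokens: lexical form, a coarse prosodic
# contour, a head gesture, and a dialogue act label.
base = [
    {"word": "ja",   "f0_slope": "rise", "head": "nod",   "label": "Accept"},
    {"word": "ja",   "f0_slope": "fall", "head": "none",  "label": "Acknowledge"},
    {"word": "okay", "f0_slope": "flat", "head": "nod",   "label": "Accept"},
    {"word": "nej",  "f0_slope": "fall", "head": "shake", "label": "Reject"},
]
# Duplicate the toy rows so that 5-fold cross-validation has enough examples.
samples = [dict(row) for row in base for _ in range(10)]
labels = [s.pop("label") for s in samples]

def accuracy(feature_keys):
    """Mean cross-validated accuracy using only the given feature subset."""
    subset = [{k: s[k] for k in feature_keys} for s in samples]
    clf = make_pipeline(DictVectorizer(sparse=False),
                        RandomForestClassifier(random_state=0))
    return cross_val_score(clf, subset, labels, cv=5).mean()

print("lexical only:         %.2f" % accuracy(["word"]))
print("+ prosody:            %.2f" % accuracy(["word", "f0_slope"]))
print("+ prosody + gestures: %.2f" % accuracy(["word", "f0_slope", "head"]))

Comparing the three scores mirrors, in miniature, the kind of evidence the study appeals to when arguing that multimodal features improve classification of feedback expressions.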
