
Assessing Dialog System User Simulation Evaluation Measures Using Human Judges

Hua Ai, University of Pittsburgh, Pittsburgh, PA 15260, USA, hua@
Diane J. Litman, University of Pittsburgh, Pittsburgh, PA 15260, USA, litman@

Abstract

Previous studies evaluate simulated dialog corpora using evaluation measures that can be automatically extracted from the dialog systems' logs. However, the validity of these automatic measures has not been fully proven. In this study, we first recruit human judges to assess the quality of three simulated dialog corpora and then use the human judgments as the gold standard to validate the conclusions drawn from the automatic measures. We observe that it is hard for the human judges to reach good agreement when asked to rate the quality of the dialogs from given perspectives. However, the human ratings give consistent rankings of the quality of the simulated corpora generated by different simulation models. When building prediction models of human judgments from previously proposed automatic measures, we find that we cannot reliably predict human ratings with a regression model, but we can predict human rankings with a ranking model.

1 Introduction

User simulation has been widely used in different phases of spoken dialog system development. In the system development phase, user simulation is used to train different system components. For example, Levin et al. (2000) and Scheffler (2002) exploit user simulations to generate large corpora for developing dialog strategies with Reinforcement Learning, while Chung (2004) implements user simulation to train the speech recognition and understanding components. While user simulation is considered more low-cost and time-efficient than experiments with human subjects, one major concern is how well state-of-the-art user simulations can mimic human user behaviors and how well they can substitute for human users in a variety of tasks. Schatzmann et al. (2005) propose a set of evaluation measures to …
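As a rough illustration of the distinction drawn in the abstract between predicting human ratings and predicting human rankings, the sketch below contrasts a regression model with a simple pairwise ranking model built from the same automatic measures. The feature matrix, rating values, and learner choices are hypothetical placeholders, not the measures or models actually used in this work.

```python
# Hypothetical sketch (not the paper's implementation): contrast a regression
# model that predicts absolute human quality ratings with a pairwise ranking
# model that only predicts which of two dialogs a judge rates higher.
# All features and ratings below are placeholder values standing in for
# automatic measures extracted from dialog system logs.
import numpy as np
from itertools import combinations
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)

# X: one row of automatic measures per dialog (e.g. dialog length, confirmation rate);
# y: the human judge's quality rating for that dialog (placeholder 1-5 scores).
X = rng.normal(size=(40, 3))
y = rng.integers(1, 6, size=40).astype(float)

# Regression: try to predict the rating itself.
reg = LinearRegression().fit(X, y)
print("regression R^2:", reg.score(X, y))

# Pairwise ranking: for each pair of dialogs with different ratings, predict
# which one is rated higher, using the difference of their feature vectors
# (a simple SVMrank/RankNet-style reduction to binary classification).
pairs = [(i, j) for i, j in combinations(range(len(y)), 2) if y[i] != y[j]]
X_pair = np.array([X[i] - X[j] for i, j in pairs])
y_pair = np.array([1 if y[i] > y[j] else 0 for i, j in pairs])
rank = LogisticRegression().fit(X_pair, y_pair)
print("pairwise ranking accuracy:", rank.score(X_pair, y_pair))
```

The point of the contrast is that a ranking model only needs the automatic measures to order dialogs consistently with the judges, a weaker requirement than reproducing the judges' absolute scores.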