
IDENTIFYING RELEVANT PRIOR EXPLANATIONS

James A. Rosenblum
Department of Computer Science
University of Pittsburgh
Pittsburgh, PA 15260, USA
Internet: jr@

Abstract

When human tutors engage in dialogue, they freely exploit all aspects of the mutually known context, including the previous discourse. Utterances that do not draw on previous discourse seem awkward, unnatural, or even incoherent. Previous discourse must be taken into account in order to relate new information effectively to recently conveyed material, and to avoid repeating old material that would distract the student from what is new. Producing a system that displays such behavior involves finding an efficient way to identify which previous explanations (if any) are relevant to the current explanation task. Thus, we are implementing a system that uses a case-based reasoning approach to identify previous situations and explanations that could potentially affect the explanation being constructed. We have identified heuristics for constructing explanations that exploit this information in ways similar to what we have observed in human-human tutorial dialogues.

Introduction and Motivation

We are building an explanation component for an existing intelligent training system, Sherlock (Lesgold et al., 1992), which trains avionics technicians to troubleshoot electronic equipment. Using Sherlock, trainees solve problems with minimal tutor interaction and then review their troubleshooting in a post-problem reflective follow-up (RFU) session, where the tutor replays each student action and assesses it as "good" or "could be improved". After a step is replayed, the student can ask the tutor to justify its assessment.

As an example of the way in which human tutors exploit previous discourse, consider the dialogue in Figure 1, taken from our data. Even though the student has made the same mistake twice, the second explanation looks quite different from the first. Yet the two explanations are related to one another in an …
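To make the case-based retrieval idea in the abstract more concrete, the following Python fragment is a minimal sketch, not the system described in the paper. It assumes that each prior explanation is stored as a case pairing the situation features under which it was given with the explanation text, and that relevance is approximated by simple feature overlap (a Jaccard score). The class and method names (ExplanationCase, CaseLibrary, retrieve), the feature vocabulary, and the scoring scheme are all illustrative assumptions, not the paper's representation.

    # Sketch of case-based retrieval of prior explanations (illustrative only).
    from dataclasses import dataclass, field

    @dataclass
    class ExplanationCase:
        situation: set       # features of the tutoring situation when the explanation was given
        explanation: str     # the explanation text produced in that situation

    @dataclass
    class CaseLibrary:
        cases: list = field(default_factory=list)

        def add(self, situation, explanation):
            # Store a new case for later retrieval.
            self.cases.append(ExplanationCase(set(situation), explanation))

        def retrieve(self, current_situation, threshold=0.5):
            # Return prior cases whose situations overlap the current one,
            # ranked by Jaccard similarity, keeping only scores >= threshold.
            current = set(current_situation)
            scored = []
            for case in self.cases:
                overlap = len(case.situation & current)
                union = len(case.situation | current) or 1
                score = overlap / union
                if score >= threshold:
                    scored.append((score, case))
            return [case for score, case in sorted(scored, key=lambda p: p[0], reverse=True)]

    # Usage: the student repeats a mistake, so the earlier explanation is
    # retrieved and can be referred back to rather than repeated verbatim.
    library = CaseLibrary()
    library.add({"test-step", "main-data-signal", "could-be-improved"},
                "You tested a main data signal before eliminating the simpler checks ...")
    for case in library.retrieve({"test-step", "main-data-signal", "could-be-improved"}):
        print(case.explanation)

Under these assumptions, retrieving the earlier case gives the explanation component the material it needs to relate the new explanation to the old one, for example by pointing back to the prior assessment instead of restating it in full.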
