Affiliations: Department of Computer Science, 209 Dunn Hall,
University of Memphis, Memphis, TN, 38152, USA. E-mail: [email protected] | Learning Research and Development Center, 3939 O'Hara St,
Rm 523, University of Pittsburgh, Pittsburgh, PA, 15260, USA.
E-mail: [email protected] | Department of Psychology, 202 Psychology Bldg,
University of Memphis, Memphis, TN, 38152, USA. E-mail: [email protected]
Abstract: This paper investigates how frequent conversation patterns from
mixed-initiative dialogue with an intelligent tutoring system, AutoTutor, can
significantly predict users' affective states (e.g., confusion, eureka,
frustration). This study adopted an emote-aloud procedure in which participants
were recorded as they verbalized their affective states while interacting with
AutoTutor. The tutor-tutee interactions were coded on scales of conversational
directness (the amount of information the tutor provides to the learner,
with a theoretical ordering of assertion > prompt for particular information
> hint) and feedback (positive, neutral, negative), along with content coverage
scores for each student contribution obtained from the tutor's log files.
Correlation and regression analyses confirmed the hypothesis that dialogue
features could significantly predict the affective states of confusion, eureka,
and frustration. Standard classification techniques were used to assess the
reliability of the automatic detection of learners' affect from the conversation
features. We discuss the prospects of extending AutoTutor into an
affect-sensing intelligent tutoring system.