Multimodal interfaces: Challenges and perspectives
Abstract
The development of interfaces has been a technology-driven process. However, newly developed multimodal interfaces rely on recognition-based technologies that must interpret human speech, gesture, gaze, movement patterns, and other behavioral cues. As a result, interface design requires a human-centered approach. In this paper we review the major approaches to multimodal Human-Computer Interaction, giving an overview of user and task modeling and of multimodal fusion. We highlight the challenges, open issues, and future trends in multimodal interface research.