Abstract: Natural and intuitive interaction between users and complex systems
is a crucial research topic in human-computer interaction. A major direction is
the definition and implementation of systems with natural language
understanding capabilities. Natural language interaction is often carried out
by systems called chatbots. A chatbot is a conversational agent equipped with a
knowledge base that enables it to interact with users. A chatbot's appearance
can be quite sophisticated, with 3D avatars and speech processing modules.
However, the interaction between the system and the user still takes place only
through text areas for input and replies. An interaction that supplements
natural language with graphical widgets could be more effective. Conversely, a
graphical interaction that also involves natural language can be more
comfortable for the user than one relying on graphical widgets alone. In many
applications, multimodal communication is preferable when the user and the
system engage in a tight and complex interaction. Typical examples are cultural
heritage applications (intelligent museum guides, picture browsing) and systems
providing the user with integrated information drawn from heterogeneous
sources, as in the case of the iGoogle™
interface. We propose to combine the two modalities (verbal and graphical) to
build systems with a reconfigurable interface that adapts to the particular
application context. The result of this proposal is the Graphical Artificial
Intelligence Markup Language (GAIML), an extension of AIML that merges both
interaction modalities. In this context, a chatbot system called Graphbot is
presented to support the language. With this language it is possible to define
personalized interface patterns that best suit the data types exchanged between
the user and the system according to the context of the dialogue.
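As an illustrative sketch only (the widget element names below are hypothetical
and not taken from the GAIML specification), such a category might pair a
standard AIML pattern with a declarative description of the widgets to render,
so that the interpreter can reply with a form alongside the verbal answer:

    <category>
      <pattern>I WANT TO BOOK A VISIT</pattern>
      <template>
        Please fill in the details of your visit:
        <!-- hypothetical GAIML markup: a form rendered with the reply -->
        <form name="booking">
          <select name="site" label="Site">
            <option>Main gallery</option>
            <option>Sculpture garden</option>
          </select>
          <input type="date" name="visit_date" label="Date of visit"/>
          <button label="Submit"/>
        </form>
      </template>
    </category>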