Conversational systems play an important role in scenarios without a keyboard, e.g., when talking to a robot. Communication in human-robot interaction (HRI) ultimately involves a combination of verbal and non-verbal inputs and outputs. HRI systems must process verbal and non-verbal observations and execute verbal and non-verbal actions in parallel in order to interpret and produce synchronized behaviours. Developing such systems involves integrating potentially many components and ensuring complex interaction and synchronization between them. Most work on spoken dialogue system development uses pipeline architectures. Some exceptions are [1, 17], which execute system components in parallel (weakly-coupled or tightly-coupled architectures). Such parallel architectures are more promising for building adaptive systems, one of the goals of contemporary research systems. In this paper we present an event-based approach for integrating a conversational HRI system. The approach has been instantiated using the Urbi middleware on a Nao robot, used as a testbed for investigating child-robot interaction in the ALIZ-E project. We focus on the implementation of two scenarios: an imitation game of arm movements and a quiz game.
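The abstract's core idea, event-based integration of parallel components, can be illustrated with a minimal publish/subscribe sketch. This is not the paper's implementation (which uses the Urbi middleware on a Nao robot); the `EventBroker` class and the event name `speech.recognized` are hypothetical, chosen only to show how several components can react in parallel to the same event rather than being chained in a pipeline.

```python
import threading

class EventBroker:
    """Minimal event broker: components subscribe to named events
    and react to events published by other components."""

    def __init__(self):
        self._subscribers = {}       # event name -> list of callbacks
        self._lock = threading.Lock()

    def subscribe(self, event, callback):
        with self._lock:
            self._subscribers.setdefault(event, []).append(callback)

    def publish(self, event, payload=None):
        with self._lock:
            callbacks = list(self._subscribers.get(event, []))
        for cb in callbacks:
            cb(payload)

broker = EventBroker()
log = []

# A speech-recognition component publishes what it heard; a dialogue
# manager and a gesture controller both subscribe to the same event,
# so verbal and non-verbal behaviour can be produced side by side.
broker.subscribe("speech.recognized", lambda text: log.append(("dialogue", text)))
broker.subscribe("speech.recognized", lambda text: log.append(("gesture", text)))

broker.publish("speech.recognized", "what is the capital of Italy?")
```

In a pipeline architecture the recognizer's output would flow to exactly one downstream module; here, adding a new consumer is just another `subscribe` call, which is the loose coupling the abstract argues makes adaptive systems easier to build.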
An Event-Based Conversational System for the Nao Robot
Book chapter
Springer Science+Business Media, New York, USA
Proceedings of the Paralinguistic Information and its Integration in Spoken Dialogue Systems Workshop, edited by Ramón López-Cózar Delgado and Tetsunori Kobayashi, pp. 125–132. New York: Springer Science+Business Media, 2011