Action, Gesture & Sign language (AG&S)

Olga Capirci


The AG&S research unit is dedicated to studying the co-development of language and perceptuo-motor processes, conceiving of language acquisition as semantically driven and embodied.

We aim to explore the concept of “language” in its full complexity, i.e., by considering the interaction of vocal and manual modalities in spoken languages as well as manual and non-manual features in signed languages.

The core idea is that human communication transcends the spoken medium, often exploiting embodied forms such as signs and gestures.

Given the presence of gestures across cultures and the existence of languages that are strongly based on overt actions (sign languages), the embodied nature of human communication is hardly questionable. The recent wealth of studies on embodied language has spurred us to reconsider many aspects of our research. In fact, while we have always implicitly considered human communication as a multimodal endeavor comprising not only speech, but actions and gestures/signs, the recent debate questioning how language can be considered embodied has led us to explicitly investigate the nature of the relationship between actions, gestures/signs and words within this new framework.

These specific linguistic and semiotic features are explored within different typologies of communication in vocal and signed languages (e.g. conversations, narrative texts, poetry) and tasks (e.g. naming, picture description) to highlight the distinctive properties of face-to-face human language. Data are analyzed by means of various technologies: the ELAN coding and annotation software, body-worn sensors, optical motion capture and robotic applications.


From Action to Language through Gesture

Prompted by the importance of reframing our pioneering findings on the relation between actions, gestures and language within embodied cognition, an ongoing study at our lab is currently attempting to measure fine-grained motoric characteristics of actions and gestures in children with typical development and in children with impairments affecting the communication domain (i.e. Autism Spectrum Disorders).

In order to achieve this goal we are endeavoring to fully exploit the potential of novel and low-cost technologies (i.e. body-worn sensors) to measure subtle kinematic characteristics of actions and gestures performed by children in out-of-the-lab ecological scenarios (Ricci et al., 2013; Ricci et al., 2014; Sparaci et al., 2013).
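As an illustration of the kind of fine-grained kinematic measure such sensors make possible, the sketch below computes a simple jerk-based smoothness index from a three-axis accelerometer stream. It is a minimal sketch only: the sampling rate, sample values and function names are hypothetical and do not describe our actual recording pipeline.

```python
import math

def acc_magnitude(samples):
    """Euclidean norm of each three-axis accelerometer sample (x, y, z)."""
    return [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]

def mean_abs_jerk(samples, fs):
    """Mean absolute jerk (rate of change of acceleration magnitude),
    a simple smoothness index: lower values indicate smoother movement."""
    mag = acc_magnitude(samples)
    dt = 1.0 / fs
    jerks = [abs(b - a) / dt for a, b in zip(mag, mag[1:])]
    return sum(jerks) / len(jerks)

# Hypothetical 100 Hz accelerometer samples during a reach-to-grasp action
samples = [(0.0, 0.0, 9.8), (0.1, 0.0, 9.8), (0.3, 0.1, 9.9), (0.2, 0.1, 9.8)]
print(round(mean_abs_jerk(samples, fs=100), 2))
```

Indices of this kind can then be compared across typically developing children and children with ASD, or across actions and action-gestures performed with and without objects.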

Within this study we are also attempting a new type of analysis of actions and gestures based on a shared taxonomy (grounded in previous research conducted on SL and gestures and described above), which is shedding new light on the strong links between object affordances, the handshapes children (aged 6 to 7 years) use when performing actions with objects, and the handshapes used in action-gestures.

Concurrently, we are exploring applications using optical motion capture to conduct detailed quantitative analyses, as well as studies of interpersonal variation and consistency, within the domains of sign languages and co-speech gestures (in collaboration with Qualisys, Sweden).

The Gesture-Speech integrated system

The existence of a tight link between speech and gesture in both processes has led authors like Adam Kendon (2004) to speak of a speech-gesture ensemble and others like David McNeill (2005) to consider them as two aspects of the same underlying thought process: gesture is part of language, and language itself is considered a gesture-speech integrated system.

But how do children's gestures become organized into the adult speech-gesture system? Our studies aim to clarify this link relying on evidence from early language development: gesture and speech emerge at about the same time, refer to the same broad set of referents and serve similar communicative functions.

New studies, analyzing more advanced stages of children’s linguistic-communicative development, demonstrate that gestural production does not decrease with the emergence of speech, nor with its further development up to school age (and into adulthood); rather, gestures change in terms of type, function, and semantic and temporal relations with co-referential words. Furthermore, they are strongly dependent on the context of observation (spontaneous play, naming tasks, narration).

Concurrently, we are exploring applications in robot language learning (in collaboration with Plymouth University, UK).

Iconicity in Sign Languages

Within embodied approaches to semantics, another currently highly debated aspect is the role of highly iconic structures in SLs. In recent years, growing attention has been devoted to the approach developed in France by the research team led by Christian Cuxac, based on extensive analysis of Langue des Signes Française (LSF).

This approach has proposed that all SLs exploit a basic capacity that signers have in iconizing their perceptual/practical experience of the physical world. SLs, unlike verbal languages, are able to convey information not only by ‘telling’, but, most importantly, by ‘showing’, thereby leading to the production of Highly Iconic Structures (HIS) also termed Transfers. The latter are cognitive operations whereby signers transfer their conceptualization of the real world into the four-dimensional world of signed discourse (i.e. the three dimensions of space and the time dimension).

This perspective on SLs, alongside considerations by researchers on spoken languages, according to which depicting through hands, face, voice and the entire body is a method of communication (Clark and Gerrig, 1990; Clark, 2016), opens new possibilities for bridging studies analyzing signs and research on gestures (Kendon, 2014).

The study of LIS structures focuses on Highly Iconic Structures (HIS), marked by specific manual and non-manual articulators, most notably gaze patterns, in which linguistic information is arranged in space and time in a simultaneous, multi-linear fashion that has no parallel in verbal language. The aim is to analyze formal features that appear to be influenced by the visual-gestural modality, and hence to differentiate them from functionally comparable forms in verbal languages.

From Gesture to Sign language

In ongoing studies we are also comparing sign production in deaf toddlers and adults acquiring LIS from birth with co-verbal gestures produced by hearing children and adults acquiring spoken Italian. The comparison uses the same tasks (i.e. PiNG and narrative) and a common procedure, relying on the ELAN software, to transcribe, annotate and code spoken, signed and gestural data (analyzing both in relation to motoric execution parameters, function and representation techniques).
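ELAN stores annotations as XML (.eaf files), in which time-aligned values on named tiers point into a shared time order. The sketch below reads one tier from a minimal EAF-style excerpt; the tier name, annotation value and timestamps are hypothetical, and a real .eaf file contains further attributes omitted here.

```python
import xml.etree.ElementTree as ET

# Minimal excerpt in the style of an ELAN .eaf file (hypothetical content).
EAF = """<ANNOTATION_DOCUMENT>
  <TIME_ORDER>
    <TIME_SLOT TIME_SLOT_ID="ts1" TIME_VALUE="1200"/>
    <TIME_SLOT TIME_SLOT_ID="ts2" TIME_VALUE="1850"/>
  </TIME_ORDER>
  <TIER TIER_ID="Gesture">
    <ANNOTATION>
      <ALIGNABLE_ANNOTATION TIME_SLOT_REF1="ts1" TIME_SLOT_REF2="ts2">
        <ANNOTATION_VALUE>pointing</ANNOTATION_VALUE>
      </ALIGNABLE_ANNOTATION>
    </ANNOTATION>
  </TIER>
</ANNOTATION_DOCUMENT>"""

def read_tier(eaf_xml, tier_id):
    """Return (value, start_ms, end_ms) tuples for one named tier."""
    root = ET.fromstring(eaf_xml)
    times = {ts.get("TIME_SLOT_ID"): int(ts.get("TIME_VALUE"))
             for ts in root.iter("TIME_SLOT")}
    out = []
    for tier in root.iter("TIER"):
        if tier.get("TIER_ID") != tier_id:
            continue
        for ann in tier.iter("ALIGNABLE_ANNOTATION"):
            out.append((ann.findtext("ANNOTATION_VALUE"),
                        times[ann.get("TIME_SLOT_REF1")],
                        times[ann.get("TIME_SLOT_REF2")]))
    return out

print(read_tier(EAF, "Gesture"))  # [('pointing', 1200, 1850)]
```

Extracting tiers in this tabular form is what makes it possible to compare, with one procedure, the timing and distribution of spoken, signed and gestural units across the two groups.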

Preliminary results provide a unified methodology for analyzing gestures and signs and allow scholars to investigate the similarities between the iconic principles on which their representation of reality and the internal structure of their units are based. The representational techniques used in the two systems (gestural and signed) are related to some extent by virtue of the shared manual modality. In sign languages, as well as in co-verbal gestures, HIS are the most frequently used representational techniques.

Even though highly iconic, these elements cannot be set aside by linguistics merely because we are not used to including them in the ‘typical’ structure of language (Antinoro Pizzuto et al., 2010). According to Adam Kendon (2014), by studying the visible actions of speakers and signers we will be able to develop a complete approach to ‘language’ as a form of action.

We are also interested in studying these phenomena using motion capture, which affords precise analysis of the kinematics of gesture and sign.
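One basic kinematic measure such data afford is the peak tangential speed of a marker trajectory. The sketch below computes it by finite differences over successive 3-D positions; the sampling rate, marker placement and coordinate values are illustrative assumptions, not recorded data.

```python
import math

def peak_speed(positions, fs):
    """Peak tangential speed (units/s) of a 3-D marker trajectory
    sampled at fs Hz, via finite differences between successive frames."""
    speeds = []
    for p0, p1 in zip(positions, positions[1:]):
        speeds.append(math.dist(p0, p1) * fs)
    return max(speeds)

# Hypothetical 200 Hz wrist-marker trajectory (metres) during a sign's stroke
traj = [(0.10, 0.20, 1.00), (0.10, 0.21, 1.00),
        (0.10, 0.23, 1.01), (0.10, 0.24, 1.01)]
print(round(peak_speed(traj, fs=200), 3))
```

Measures of this kind also support the studies of interpersonal variation and consistency mentioned above, since peak speed and similar parameters can be compared across signers and across repetitions of the same sign or gesture.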


An appropriate written representation of LIS is crucial for conducting all the studies mentioned above. A writing system devised for sign languages (“SignWriting”; Sutton, 1995) is being adapted to LIS and is currently used by deaf signers and researchers both to transcribe LIS and to compose written LIS texts. Results have led to significant advances in the linguistic analysis of sign language structure and in the development of the underlying theoretical framework, and have also led to relevant improvements in deaf signers’ meta-linguistic abilities.