The secret multimodal life of IREs: Looking more closely at representational gestures in a familiar questioning sequence

2021, Vol 63, pp. 100913
Author(s): Virginia J. Flood
Gesture, 2020, Vol 19 (2-3), pp. 299-334
Author(s): Arianna Bello, Silvia Stefanini, Pasquale Rinaldi, Daniela Onofrio, Virginia Volterra

In early communicative development, children with Down syndrome (DS) make extensive use of gestures to compensate for articulatory difficulties. Here, we analyzed the symbolic strategies underlying this gesture production and compared them to those used by typically developing children. Using the same picture-naming task, we collected 79 representational gestures produced by 10 children with DS and 42 representational gestures produced by 10 typically developing children of comparable developmental age (3;1 vs. 2;9, respectively). The gestures were analyzed and classified according to four symbolic strategies. The two groups employed all of the strategies, with no significant differences in either the choice or the frequency of the strategies used. An item analysis showed that some photographs tended to elicit the same strategy in both groups. These results indicate that children with DS draw on the same symbolic strategies as typically developing children, suggesting interesting similarities in their symbolic development.


Gesture, 2007, Vol 7 (1), pp. 73-95
Author(s): Autumn B. Hostetter, Martha W. Alibali

Individuals differ greatly in how often they gesture when they speak. This study investigated relations between speakers’ verbal and spatial skills and their gesture rates. Two types of verbal skill were measured: semantic fluency, thought to index efficiency of lexical access, and phonemic fluency, thought to index efficiency in organizing the lexicon in novel ways. Spatial skill was measured with a visualization task. We hypothesized that individuals with low verbal skill but high spatial visualization skill would gesture most often, because their mental images are not closely linked to verbal forms. This hypothesis was supported for phonemic fluency, but not for semantic fluency. We also found that individuals with either low or high phonemic fluency produced representational gestures at higher rates than individuals with average phonemic fluency. The findings indicate that individual differences in gesture production are associated with individual differences in cognitive skills.


2019
Author(s): Gabriella Vigliocco, Yasamin Motamedi, Margherita Murgiano, Elizabeth Wonnacott, Chloë Marshall, ...

Most research on how children learn the mapping between words and the world has assumed that language is arbitrary, and has investigated language learning in contexts in which the objects referred to are present in the environment. Here, we report analyses of a semi-naturalistic corpus of caregivers talking to their 2- to 3-year-olds. We focus on caregivers’ use of non-arbitrary cues across different expressive channels, both iconic (onomatopoeia and representational gestures) and indexical (points and actions with objects). We ask whether these cues are used differently when talking about objects known or unknown to the child, and when the referents are present or absent. We hypothesized that caregivers would use these cues more often for objects novel to the child, and that they would use the iconic cues especially when objects are absent, because iconic cues bring properties of referents to the mind’s eye. We find that cue distribution differs: all cues except points are more common for unknown objects, indicating their potential role in learning; onomatopoeia and representational gestures are more common in displaced contexts, whereas indexical cues are more common when objects are present. Thus, caregivers provide multimodal non-arbitrary cues to support children’s vocabulary learning, and iconicity specifically can support linking mental representations of objects to their labels.


Author(s): Audrey Mazur-Palandre, Kristine Lund

In this study, we analyzed the verbal and gestural behavior of 6-year-old French children during free dialogic explanation. We expected that children would alter their gestures according to the content explained and the visibility of their interlocutor. Thirty children explained two games to a peer: fifteen could not see their interlocutor, whereas fifteen could. Results showed that the mean number of clauses per explanation did not change significantly according to the content explained or the visibility of the addressee. However, the mean number of gestures per clause was higher both when children explained the spatial game and when they interacted face-to-face. Finally, children explaining how to play the spatial game produced more representational gestures, and they made more interactive gestures when the addressee was visible. These results are discussed in relation to previous studies on children’s language production and acquisition.


Gesture, 2018, Vol 17 (1), pp. 65-97
Author(s): Prakaiwan Vajrabhaya, Eric Pederson

Using a repetition paradigm, in which speakers describe the same event to a sequence of listeners, we analyze the degree of reduction in representational gestures. We find that when listener feedback, both verbal and non-verbal, is minimal and unvarying, speakers steadily reduce their motoric commitment in repeated gestures across tellings, without regard to the novelty of the information to the listener. Within this specific condition, we interpret the result as consistent with the view that gestures primarily serve speech production rather than a communicative act. Importantly, we propose that gestural sensitivity to the listener derives from interaction between interlocutors, rather than from the speaker’s modeling of the listener’s state of knowledge alone.

