Representational gestures
Recently Published Documents

Total documents: 47 (last five years: 10)
H-index: 16 (last five years: 1)

Author(s): Bojana Romic

Abstract: The central interest of this paper is the anthropomorphic social robot Ai-Da (Aidan Meller Gallery/Oxford University), perceived as an actor in the interplay of cultural and representational gestures. These gestures determine how this robot is presented—that is, how its activities are articulated, interpreted and promoted. This paper criticises the use of a transhistorical discourse in the presentational strategies around this robot, since this discourse reinforces the so-called “myth of a machine”. The discussion focuses on the individuation and embodiment of this drawing robot. It is argued that the choice to provide Ai-Da with an evocative silicone face, coupled with an anthropomorphic body, is a socio-political decision that shapes public imaginaries about social robots in general.


2021
Author(s): Monika Molnar, Kai Ian Leung, Jodee Santos Herrera, Marcel Giezen

Aims and Objectives: This study was designed to assess whether bilingual caregivers, compared to monolingual caregivers, modify their nonverbal gestures to match the increased communicative and/or cognitive-linguistic demands of bilingual language contexts, as would be predicted by the Facilitative Strategy Hypothesis.
Methodology: We recorded the rate of representational and beat gestures in monolingual and bilingual caregivers as they retold a cartoon story to their child or to an adult, in a monolingual and a bilingual context (a ‘synonym’ context for monolingual caregivers).
Data and Analysis: We calculated the frequency of all gestures, representational gestures, and beat gestures for each addressee (adult-directed vs. toddler-directed) and linguistic context (monolingual vs. bilingual/synonym), separately for the monolingual and the bilingual caregivers. Using ANOVA, we contrasted monolingual and bilingual caregivers’ gesture frequency for each gesture type, by addressee and linguistic context.
Findings/Conclusions: Bilingual caregivers gestured more than monolingual caregivers, irrespective of addressee and language context. Furthermore, we found evidence supporting the Facilitative Strategy Hypothesis across both monolingual and bilingual caregivers, as all caregivers increased the rate of their representational gestures in the child-directed retelling. However, we found no clear pattern showing that bilingual caregivers, compared to monolingual caregivers, adjust their gestures when the communicative demands on their child are presumably high (i.e., when the child is listening to a story in two languages). In summary, both monolingual and bilingual caregivers similarly adjust their gestures to aid their child’s comprehension, and bilinguals generally gestured more than monolinguals.
Originality: To our knowledge, this is the first study of gesture use in child-directed communication in monolingual and bilingual caregivers.
Significance/Implications: Independent of their monolingual or bilingual status, caregivers adjust their child-directed multimodal communication strategies (specifically gestures) when interacting with their children.
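As a rough illustration of the analysis described above, the sketch below sets up a mixed between/within ANOVA on per-caregiver gesture rates. The file name and column names (caregiver_id, group, addressee, rate) are hypothetical stand-ins, and pingouin's mixed_anova is one convenient way to run such a design in Python; this is a sketch under those assumptions, not the authors' analysis code.

```python
# Sketch of a mixed ANOVA on gesture rates, as described in the abstract.
# Column names (caregiver_id, group, addressee, rate) are hypothetical;
# the study crossed addressee and linguistic context per gesture type.
import pandas as pd
import pingouin as pg

# Hypothetical tidy file: one row per caregiver x condition.
df = pd.read_csv("gesture_rates.csv")

# Between-subjects factor: monolingual vs. bilingual caregiver;
# within-subjects factor: adult-directed vs. toddler-directed retelling.
aov = pg.mixed_anova(
    data=df,
    dv="rate",              # gestures per minute (or per 100 words)
    within="addressee",
    subject="caregiver_id",
    between="group",
)
print(aov.round(3))
```

The same call would be repeated per gesture type (all gestures, representational, beat), mirroring the separate contrasts the abstract reports.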


Gesture, 2020, Vol. 19 (2-3), pp. 299-334
Author(s): Arianna Bello, Silvia Stefanini, Pasquale Rinaldi, Daniela Onofrio, Virginia Volterra

Abstract: In early communicative development, children with Down syndrome (DS) make extensive use of gestures to compensate for articulatory difficulties. Here, we analyzed the symbolic strategies that underlie this gesture production, compared to those used by typically developing children. Using the same picture-naming task, 79 representational gestures produced by 10 children with DS and 42 representational gestures produced by 10 typically developing children of comparable developmental age (3;1 vs. 2;9, respectively) were collected. The gestures were analyzed and classified according to four symbolic strategies. Both groups used all four strategies, with no significant differences in either the choice or the frequency of the strategies used. The item analysis highlighted that some photographs tended to elicit the same strategy in both groups. These results indicate that similar symbolic strategies are active in children with DS and in typically developing children, which suggests interesting similarities in their symbolic development.
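To illustrate the kind of group comparison reported here (choice of strategy by group), one could tabulate strategy counts per group and test for independence. The counts below are invented for illustration (only the totals, 79 and 42 gestures, come from the abstract), and the four strategy labels are placeholders.

```python
# Hypothetical 2x4 contingency table: rows = groups (DS, TD),
# columns = the four symbolic strategies; cell counts are invented.
import numpy as np
from scipy.stats import chi2_contingency

counts = np.array([
    [30, 25, 15, 9],   # children with DS (79 gestures total)
    [16, 12, 9, 5],    # typically developing children (42 gestures total)
])
chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")
```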


Infancy, 2020, Vol. 26 (1), pp. 104-122
Author(s): Eva Murillo, Marta Casla

2020, Vol. 74 (1), pp. 29-44
Author(s): Burcu Arslan, Tilbe Göksun

Ageing affects both language and gestural communication skills. Although overall gesture use is similar between younger and older adults, the use of representational gestures (e.g., drawing a line with the fingers in the air to indicate a road) decreases with age. This study investigates whether this change in the production of representational gestures is related to individuals’ working memory and/or mental imagery skills. We used three gesture tasks (daily activity description, story completion, and address description) to elicit spontaneous co-speech gestures from younger and older individuals (N = 60). Participants also completed the Corsi working memory task and a mental imagery task. Results showed that although the two age groups’ overall gesture frequencies were similar across the three tasks, the younger adults used relatively higher proportions of representational gestures than the older adults only in the address description task. Regardless of age, mental imagery scores, but not working memory scores, were associated with the use of representational gestures in this task. However, the use of spatial words in the address description task did not differ between the two age groups, and neither mental imagery nor working memory scores were associated with spatial word use. These findings suggest that mental imagery can play a role in gesture production, and that gesture and speech production might have separate timelines in terms of being affected by the ageing process, particularly for spatial content.
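A minimal sketch of the two measures linked here: the per-participant proportion of representational gestures in the address description task, and its correlation with mental imagery scores. All arrays are hypothetical stand-ins for the study's data.

```python
# Sketch: proportion of representational gestures per participant,
# then its correlation with mental imagery scores (address description task).
import numpy as np
from scipy.stats import pearsonr

rep_gestures = np.array([12, 8, 15, 5, 9])      # representational gestures per participant
all_gestures = np.array([20, 18, 22, 14, 16])   # all co-speech gestures per participant
imagery_score = np.array([34, 28, 40, 22, 30])  # mental imagery task scores

proportion_rep = rep_gestures / all_gestures
r, p = pearsonr(proportion_rep, imagery_score)
print(f"r={r:.2f}, p={p:.3f}")
```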


2020
Author(s): Marlijn ter Bekke, Linda Drijvers, Judith Holler

In face-to-face conversation, recipients might use the bodily movements of the speaker (e.g., gestures) to facilitate language processing. It has been suggested that one way this facilitation may happen is through prediction. However, for this to be possible, gestures would need to precede speech, and it is unclear whether this is true during natural conversation. In a corpus of Dutch conversations, we annotated hand gestures that represented semantic information and occurred during questions, as well as the word(s) that corresponded most closely to the gesturally depicted meaning. Thus, we tested whether representational gestures temporally precede their lexical affiliates. Further, to see whether preceding gestures may indeed facilitate language processing, we asked whether gesture-speech asynchrony predicts the response time to the question the gesture is part of. Gestures and their strokes (the most meaningful movement component) indeed preceded the corresponding lexical information, demonstrating their predictive potential. However, while questions with gestures received faster responses than questions without, there was no evidence that questions with larger gesture-speech asynchronies received faster responses. These results suggest that gestures have the potential to facilitate predictive language processing, but further analyses on larger datasets are needed to test for links between asynchrony and processing advantages.
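A minimal sketch of the timing measures described above, under the assumption that asynchrony is computed as the lexical affiliate's onset minus the gesture (or stroke) onset, so that positive values mean the gesture led the speech; all arrays are hypothetical.

```python
# Sketch: gesture-speech asynchrony and its relation to response time.
# Positive asynchrony means the gesture started before its lexical affiliate.
import numpy as np
from scipy.stats import linregress

gesture_onset = np.array([0.10, 0.45, 0.82, 1.30])    # seconds, hypothetical
affiliate_onset = np.array([0.48, 0.79, 1.00, 1.95])  # onset of corresponding word(s)
response_time = np.array([0.62, 0.70, 0.81, 0.55])    # addressee's response latency

asynchrony = affiliate_onset - gesture_onset  # how far the gesture precedes speech
result = linregress(asynchrony, response_time)
print(f"slope={result.slope:.3f}, p={result.pvalue:.3f}")
```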


2019
Author(s): Gabriella Vigliocco, Yasamin Motamedi, Margherita Murgiano, Elizabeth Wonnacott, Chloë Marshall, ...

Most research on how children learn the mapping between words and the world has assumed that language is arbitrary, and has investigated language learning in contexts in which the objects referred to are present in the environment. Here, we report analyses of a semi-naturalistic corpus of caregivers talking to their 2- to 3-year-old children. We focus on caregivers’ use of non-arbitrary cues across different expressive channels, both iconic (onomatopoeia and representational gestures) and indexical (points and actions with objects). We ask whether these cues are used differently when talking about objects known or unknown to the child, and when the referents are present or absent. We hypothesized that caregivers would use these cues more often for objects novel to the child, and that they would use the iconic cues especially when objects are absent, because iconic cues bring properties of referents to the mind’s eye. We find that cue distribution differs: all cues except points are more common for unknown objects, indicating their potential role in learning; onomatopoeia and representational gestures are more common in displaced contexts, whereas indexical cues are more common when objects are present. Thus, caregivers provide multimodal non-arbitrary cues to support children’s vocabulary learning, and iconicity specifically can support linking mental representations of objects and labels.
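One way to express the cue-distribution comparison described above is a cross-tabulation of cue type against object familiarity and presence. The file and column names below are hypothetical, not the corpus's actual annotation scheme.

```python
# Sketch: distribution of non-arbitrary cues across referent conditions.
# Columns (cue_type, object_known, object_present) are hypothetical labels
# for the four cue categories and two context factors in the abstract.
import pandas as pd

df = pd.read_csv("caregiver_cues.csv")  # one row per annotated cue

# Cross-tabulate cue types against familiarity and displacement.
table = pd.crosstab(
    index=df["cue_type"],  # onomatopoeia, representational gesture, point, action
    columns=[df["object_known"], df["object_present"]],
)
print(table)
```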


2019, Vol. 84 (7), pp. 1897-1911
Author(s): James P. Trujillo, Irina Simanova, Harold Bekkering, Asli Özyürek

Abstract: Humans are unique in their ability to communicate information through representational gestures that visually simulate an action (e.g., moving the hands as if opening a jar). Previous research indicates that the intention to communicate modulates the kinematics (e.g., velocity, size) of such gestures. Whether and how this modulation influences addressees’ comprehension of gestures has not been investigated. Here we ask whether communicative kinematic modulation enhances the semantic comprehension (i.e., identification) of gestures. We additionally investigate whether any comprehension advantage is due to enhanced early identification or late identification. Participants (n = 20) watched videos of representational gestures produced in a more-communicative (n = 60) or less-communicative (n = 60) context and performed a forced-choice recognition task. We tested the isolated role of kinematics by removing the visibility of the actors’ faces in Experiment I, and by reducing the stimuli to stick-light figures in Experiment II. Three video lengths were used to disentangle early from late identification. Accuracy and response time were used to quantify the main effects, and kinematic modulation was tested for correlations with task performance. We found higher gesture identification performance for more-communicative than for less-communicative gestures. However, early identification was enhanced only within a full visual context, while late identification occurred even when viewing isolated kinematics. Additionally, temporally segmented acts with more post-stroke holds were associated with higher accuracy. Our results demonstrate that communicative signaling, interacting with other visual cues, generally supports gesture identification, while kinematic modulation specifically enhances late identification in the absence of other cues. These results provide insights into mutual understanding processes, as well as into the creation of artificial communicative agents.
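As a sketch of the core contrast (identification accuracy for more- vs. less-communicative gestures, within participants), a paired t-test over per-participant accuracies would look like the following; the accuracy values are invented for illustration.

```python
# Sketch: identification accuracy for gestures produced in more- vs.
# less-communicative contexts, compared within participants.
import numpy as np
from scipy.stats import ttest_rel

acc_more = np.array([0.82, 0.75, 0.90, 0.78, 0.85])  # more-communicative condition
acc_less = np.array([0.70, 0.68, 0.81, 0.72, 0.77])  # less-communicative condition

t, p = ttest_rel(acc_more, acc_less)
print(f"t={t:.2f}, p={p:.3f}")
```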

