multimodal cues: Recently Published Documents

TOTAL DOCUMENTS: 65 (five years: 24)
H-INDEX: 12 (five years: 3)
2022
Author(s): Michael Joe Munyua Gachomba, Joan Esteve-Agraz, Kevin Caref, Aroa Sanz-Maroto, Helena Bortolozzo-Gleich, ...

Animals often display prosocial behaviours, performing actions that benefit others. Although prosociality is essential for social bonding and cooperation, we still know very little about how animals integrate behavioural cues from those in need to make decisions that increase their well-being. To address this question, we used a two-choice task in which rats can provide rewards to a conspecific in the absence of self-benefit, and interrogated which conditions promote prosociality by manipulating the social context of the interacting animals. While sex and degree of familiarity did not affect prosocial choices in rats, social hierarchy proved to be a potent modulator, with dominant decision-makers showing faster emergence and higher levels of prosocial choices towards their submissive cage-mates. Leveraging quantitative analysis of multimodal social dynamics prior to choice, we found that pairs with dominant decision-makers interacted at closer social distance. Interestingly, these more coordinated interactions were driven by the submissive animals, which modulated their position and movement towards their dominants and increased their rate of 50 kHz vocalisations when their partners were about to behave selfishly. This display of multimodal cues by submissive animals while signalling need promoted social saliency and a faster emergence of prosocial choices in dominant rats. Multivariate analysis highlighted non-canonical body language as the main information dominants use on a trial-by-trial basis to learn that their actions have effects on others. Our results provide a refined understanding of the behavioural dynamics that rats use to select actions upon perceiving socially relevant cues and to navigate social decision-making.
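The hierarchy effect reported above can be illustrated with a minimal two-proportion comparison. All counts below are hypothetical toy numbers for illustration, not the study's data:

```python
import math

# Toy sketch: compare prosocial-choice proportions between pairs with
# dominant vs. submissive decision-makers. Counts are hypothetical.

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for the difference between two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts: prosocial choices out of 100 trials per group
dominant_rate = 78 / 100    # dominant decision-makers
submissive_rate = 55 / 100  # submissive decision-makers
z = two_proportion_z(78, 100, 55, 100)

print(f"dominant: {dominant_rate:.2f}, submissive: {submissive_rate:.2f}, z = {z:.2f}")
```

With these toy counts the pooled z statistic exceeds the conventional 1.96 threshold, matching the qualitative pattern described (higher prosocial rates for dominant decision-makers).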


2021, pp. 1-26
Author(s): Eva Castillo, Mariia Pronina, Iris Hübscher, Pilar Prieto

Abstract: Over recent decades, much research has analyzed the relevance of 9- to 20-month-old infants' early imitation skills (object- and language-based imitation) for language development. Yet there have been few systematic comparisons of the joint relevance of these imitative behaviors later in development. This correlational study investigated whether multimodal imitation (gestural, prosodic, and lexical components) and object-based imitation are related to narrative and sociopragmatic abilities in preschoolers. Thirty-one typically developing 3- to 4-year-old children performed four tasks assessing multimodal imitation, object-based imitation, narrative abilities, and sociopragmatic abilities. Results revealed that both narrative and sociopragmatic skills were significantly related to multimodal imitation, but not to object-based imitation, indicating that preschoolers' ability to imitate socially relevant multimodal cues is strongly related to language and sociocommunicative skills. This evidence therefore supports a broader conceptualization of imitation behaviors in the field of language development, one that systematically integrates prosodic, gestural, and verbal linguistic patterns.
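The correlational analysis described here boils down to Pearson correlations between per-child scores. A minimal pure-Python sketch, with made-up scores for a handful of children (not the study's data or sample size):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-child scores (illustrative only)
multimodal_imitation = [12, 15, 9, 18, 14, 11, 16, 10]
narrative_ability    = [20, 24, 15, 30, 22, 18, 27, 16]

r = pearson_r(multimodal_imitation, narrative_ability)
print(f"r = {r:.2f}")
```

In the study proper, a significant positive r between multimodal-imitation and narrative (or sociopragmatic) scores, alongside a non-significant r for object-based imitation, is what supports the reported dissociation.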


2021
Author(s): Chenyu Cao, Chenghao Yan, Fangtao Li, Zihe Liu, Zheng Wang, ...
Keyword(s): System
2021, pp. 102691
Author(s): Vivien Lin, Hui-Chin Yeh, Huai-Hsuan Huang, Nian-Shing Chen

2021, Vol 288 (1955), pp. 20210500
Author(s): Ye Zhang, Diego Frassinelli, Jyrki Tuomainen, Jeremy I. Skipper, Gabriella Vigliocco

The ecology of human language is face-to-face interaction, comprising cues such as prosody, co-speech gestures, and mouth movements. Yet this multimodal context is usually stripped away in experiments, as dominant paradigms focus on linguistic processing only. In two studies, we presented participants with video clips of an actress producing naturalistic passages while recording their electroencephalogram. We quantified multimodal cues (prosody, gestures, mouth movements) and measured their effect on a well-established electroencephalographic marker of processing load in comprehension (the N400). We found that brain responses to words were affected by the informativeness of co-occurring multimodal cues, indicating that comprehension relies on both linguistic and non-linguistic cues. Moreover, responses were affected by interactions between the multimodal cues, indicating that the impact of each cue changes dynamically with the informativeness of the other cues. These results show that multimodal cues are integral to comprehension; our theories must therefore move beyond a limited focus on speech and linguistic processing.
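The reported cue-by-cue interaction can be expressed as a contrast on condition means. The sketch below uses hypothetical mean N400 amplitudes in a 2x2 design (gesture x prosody informativeness); the values and condition labels are illustrative, not the study's:

```python
# Toy 2x2 interaction contrast on mean N400 amplitudes (microvolts).
# Values are hypothetical; a more negative N400 indicates higher load.
n400 = {
    ("gesture_informative", "prosody_informative"):     -2.0,
    ("gesture_informative", "prosody_uninformative"):   -3.5,
    ("gesture_uninformative", "prosody_informative"):   -3.0,
    ("gesture_uninformative", "prosody_uninformative"): -6.0,
}

# Simple effect of prosody informativeness at each gesture level
effect_with_gesture = (n400[("gesture_informative", "prosody_informative")]
                       - n400[("gesture_informative", "prosody_uninformative")])
effect_without_gesture = (n400[("gesture_uninformative", "prosody_informative")]
                          - n400[("gesture_uninformative", "prosody_uninformative")])

# Interaction: the prosody effect shrinks when gesture is already informative
interaction = effect_with_gesture - effect_without_gesture
print(f"prosody effect with gesture: {effect_with_gesture} uV, "
      f"without: {effect_without_gesture} uV, interaction: {interaction} uV")
```

A non-zero interaction term of this kind is the arithmetic counterpart of the claim that each cue's impact depends on the informativeness of the others.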


Author(s): Lu Zhang, Jian Zhang, Jialie Shen, Jingsong Xu, Zhibin Li, ...

2021, pp. 104426
Author(s): Mahandran Valliyappan, Murugan Chinnaperamanoor Madhappan, Gang Wang, Jin Chen, Thiruchenthil Nathan Parthasarathy

2021, pp. 468-479
Author(s): Shimeng Peng, Shigeki Ohira, Katashi Nagao

2020, Vol 11
Author(s): Jennifer Hinnell, Fey Parrill

When faced with an ambiguous pronoun, comprehenders use both multimodal cues (e.g., gestures) and linguistic cues to identify the antecedent. While research has shown that gestures facilitate language comprehension, improve reference tracking, and influence the interpretation of ambiguous pronouns, literature on reference resolution suggests that a wide set of linguistic constraints influences the successful resolution of ambiguous pronouns and that linguistic cues are more powerful than some multimodal cues. To address the outstanding question of the importance of gesture as a cue in reference resolution relative to cues in the speech signal, we have previously investigated the comprehension of contrastive gestures that indexed abstract referents – in this case expressions of personal preference – and found that such gestures did facilitate the resolution of ambiguous statements of preference. In this study, we extend this work to investigate whether the effect of gesture on resolution is diminished when the gesture indexes a statement that is less likely to be interpreted as the correct referent. Participants watched videos in which a speaker contrasted two ideas that were either neutral (e.g., whether to take the train to a ballgame or drive) or moral (e.g., human cloning is (un)acceptable). A gesture to the left or right side co-occurred with speech expressing each position. In gesture-disambiguating trials, an ambiguous phrase (e.g., I agree with that, where that is ambiguous) was accompanied by a gesture to one side or the other. In gesture non-disambiguating trials, no third gesture occurred with the ambiguous phrase. Participants were more likely to choose the idea accompanied by gesture as the stimulus speaker's preference. We found no effect of scenario type. Regardless of whether the linguistic cue expressed a view that was morally charged or neutral, observers used gesture to understand the speaker's opinion. This finding contributes to our understanding of the strength and range of cues, both linguistic and multimodal, that listeners use to resolve ambiguous references.
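The key behavioural claim, that observers chose the gesture-congruent idea more often than chance, lends itself to a simple exact binomial test. The counts below are made up for illustration, not the study's data:

```python
from math import comb

def binom_sf(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): one-sided exact test against chance."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical counts: on 40 gesture-disambiguating trials, a participant
# chose the gesture-congruent referent 28 times (illustrative only).
p_value = binom_sf(28, 40)
print(f"one-sided p = {p_value:.4f}")
```

A p-value below 0.05 here would indicate gesture-congruent choices above the 50% chance level, the pattern the study reports at the group level.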

