Recognizing Characters and Relationships from Videos via Spatial-Temporal and Multimodal Cues

Author(s): Chenyu Cao, Chenghao Yan, Fangtao Li, Zihe Liu, Zheng Wang, ...
2014
Author(s): Meghan Armstrong, Núria Esteve-Gibert, Pilar Prieto

Author(s): Matthias Kraus, Marvin Schiller, Gregor Behnke, Pascal Bercher, Michael Dorna, ...

2019
Author(s): Gabriella Vigliocco, Yasamin Motamedi, Margherita Murgiano, Elizabeth Wonnacott, Chloë Marshall, ...

Most research on how children learn the mapping between words and the world has assumed that language is arbitrary and has investigated language learning in contexts in which the objects referred to are present in the environment. Here, we report analyses of a semi-naturalistic corpus of caregivers talking to their 2-to-3-year-old children. We focus on caregivers' use of non-arbitrary cues across different expressive channels, both iconic (onomatopoeia and representational gestures) and indexical (points and actions with objects). We ask whether these cues are used differently when talking about objects known or unknown to the child, and when the referred-to objects are present or absent. We hypothesize that caregivers use these cues more often with objects novel to the child and, moreover, that they use iconic cues especially when objects are absent, because iconic cues bring properties of referents to the mind's eye. We find that cue distribution differs: all cues except points are more common for unknown objects, indicating their potential role in learning; onomatopoeia and representational gestures are more common in displaced contexts, whereas indexical cues are more common when objects are present. Thus, caregivers provide multimodal non-arbitrary cues to support children's vocabulary learning, and iconicity specifically can support linking mental representations of objects and labels.


2018
Vol 75, pp. 1-10
Author(s): Filip Malawski, Bogdan Kwolek
