The INTERSPEECH 2013 computational paralinguistics challenge: social signals, conflict, emotion, autism

Author(s):  
Björn Schuller ◽  
Stefan Steidl ◽  
Anton Batliner ◽  
Alessandro Vinciarelli ◽  
Klaus Scherer ◽  
...

2020 ◽  
Author(s):  
Abdulaziz Abubshait ◽  
Patrick P. Weis ◽  
Eva Wiese

Social signals, such as changes in gaze direction, are essential cues to predict others’ mental states and behaviors (i.e., mentalizing). Studies show that humans can mentalize with non-human agents when they perceive a mind in them (i.e., mind perception). Robots that physically and/or behaviorally resemble humans likely trigger mind perception, which enhances the relevance of social cues and improves social-cognitive performance. The current experiments examine whether the effect of physical and behavioral influencers of mind perception on social-cognitive processing is modulated by the lifelikeness of a social interaction. Participants interacted with robots of varying degrees of physical (humanlike vs. robot-like) and behavioral (reliable vs. random) human-likeness while the lifelikeness of a social attention task was manipulated across five experiments. The first four experiments manipulated lifelikeness via the physical realism of the robot images (Study 1 and 2), the biological plausibility of the social signals (Study 3), and the plausibility of the social context (Study 4). They showed that humanlike behavior affected social attention whereas appearance affected mind perception ratings. However, when the lifelikeness of the interaction was increased by using videos of a human and a robot sending the social cues in a realistic environment (Study 5), social attention mechanisms were affected both by physical appearance and behavioral features, while mind perception ratings were mainly affected by physical appearance. This indicates that in order to understand the effect of physical and behavioral features on social cognition, paradigms should be used that adequately simulate the lifelikeness of social interactions.


Mathematics ◽  
2021 ◽  
Vol 9 (2) ◽  
pp. 195
Author(s):  
Adrian Sergiu Darabant ◽  
Diana Borza ◽  
Radu Danescu

The human face holds a privileged position in multi-disciplinary research as it conveys much information—demographical attributes (age, race, gender, ethnicity), social signals, emotion expression, and so forth. Studies have shown that due to the distribution of ethnicity/race in training datasets, biometric algorithms suffer from the “cross race effect”—their performance is better on subjects closer to the “country of origin” of the algorithm. The contributions of this paper are two-fold: (a) first, we gathered, annotated and made public a large-scale database of (over 175,000) facial images by automatically crawling the Internet for celebrities’ images belonging to various ethnicities/races, and (b) we trained and compared four state-of-the-art convolutional neural networks on the problem of race and ethnicity classification. To the best of our knowledge, this is the largest, data-balanced, publicly-available face database annotated with race and ethnicity information. We also studied the impact of various face traits and image characteristics on the race/ethnicity deep learning classification methods and compared the obtained results with the ones extracted from psychological and anthropomorphic studies. Extensive tests were performed in order to determine the facial features to which the networks are sensitive. These tests and a recognition rate of 96.64% on the problem of human race classification demonstrate the effectiveness of the proposed solution.


2017 ◽  
Vol 21 (11) ◽  
pp. 864-877 ◽  
Author(s):  
Jared Martin ◽  
Magdalena Rychlowska ◽  
Adrienne Wood ◽  
Paula Niedenthal
Keyword(s):  

2016 ◽  
Vol 17 (9) ◽  
pp. 2613-2626 ◽  
Author(s):  
Kun He ◽  
Zhongzhi Xu ◽  
Pu Wang ◽  
Lianbo Deng ◽  
Lai Tu

2013 ◽  
Vol 25 (4) ◽  
pp. 547-557 ◽  
Author(s):  
Maital Neta ◽  
William M. Kelley ◽  
Paul J. Whalen

Extant research has examined the process of decision making under uncertainty, specifically in situations of ambiguity. However, much of this work has been conducted in the context of semantic and low-level visual processing. An open question is whether ambiguity in social signals (e.g., emotional facial expressions) is processed similarly or whether a unique set of processors comes online to resolve ambiguity in a social context. Our work has examined ambiguity using surprised facial expressions, as they have predicted both positive and negative outcomes in the past. Specifically, whereas some people tended to interpret surprise as negatively valenced, others tended toward a more positive interpretation. Here, we examined neural responses to social ambiguity using faces (surprise) and nonface emotional scenes (International Affective Picture System). Moreover, we examined whether these effects are specific to ambiguity resolution (i.e., judgments about the ambiguity) or whether similar effects would be demonstrated for incidental judgments (e.g., nonvalence judgments about ambiguously valenced stimuli). We found that a distinct task control (i.e., cingulo-opercular) network was more active when resolving ambiguity. We also found that activity in the ventral amygdala was greater for faces and scenes that were rated explicitly along the dimension of valence, consistent with findings that the ventral amygdala tracks valence. Taken together, there is a complex neural architecture that supports decision making in the presence of ambiguity: (a) a core set of cortical structures engaged for explicit ambiguity processing across stimulus boundaries and (b) other dedicated circuits for biologically relevant learning situations involving faces.

