Intrinsic and extrinsic influences on self-recognition of actions

2021 ◽  
Author(s):  
Akila Kadambi ◽  
Qi Xie ◽  
Hongjing Lu

Despite minimal visual experience and unfamiliar third-person viewpoints, humans are able to recognize their own body movements even when actions are reduced to point-light displays. What factors influence visual self-recognition of one’s own actions? To address this question, we recorded whole-body movements of a large sample of participants (N = 101) performing a range of actions. After a delay period, participants were tested in a self-recognition task: identifying their own actions, depicted in point-light displays, among those of three other point-light actors performing identical actions. While participants showed above-chance accuracy on average for self-recognition, we found substantial differences in performance across actions and individuals. Self-recognition performance was modulated by interactions between extrinsic factors (associated with the degree of motor planning in performed actions) and intrinsic traits linked to individuals’ motor imagery ability and sensorimotor self-processing ability (autism and schizotypal traits). These interactions shed light on mechanistic possibilities for how the motor system may augment vision to construct the core of self-awareness.
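
To make the “above-chance” claim concrete: with one’s own action shown alongside three other actors performing the same action, chance accuracy is 25%. The Python sketch below is purely illustrative (simulated data and a hypothetical trial count, not the authors’ analysis) and shows one simple way group accuracy might be tested against that chance level.

```python
# Illustrative only: testing self-recognition accuracy against the 25% chance
# level implied by a four-alternative task (own action vs. three other actors).
# Data and trial counts are simulated, not from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_participants, n_trials = 101, 20                      # hypothetical design
responses = rng.binomial(1, 0.35, size=(n_participants, n_trials))

accuracy = responses.mean(axis=1)                       # per-participant accuracy
chance = 0.25                                           # 1 correct option out of 4

t_stat, p_value = stats.ttest_1samp(accuracy, chance, alternative="greater")
print(f"mean accuracy = {accuracy.mean():.3f}, t = {t_stat:.2f}, p = {p_value:.4g}")
```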

2011 ◽  
Vol 279 (1729) ◽  
pp. 669-674 ◽  
Author(s):  
Richard Cook ◽  
Alan Johnston ◽  
Cecilia Heyes

When motion is isolated from form cues and viewed from third-person perspectives, individuals are able to recognize their own whole body movements better than those of friends. Because we rarely see our own bodies in motion from third-person viewpoints, this self-recognition advantage may indicate a contribution to perception from the motor system. Our first experiment provides evidence that recognition of self-produced and friends' motion dissociate, with only the latter showing sensitivity to orientation. Through the use of selectively disrupted avatar motion, our second experiment shows that self-recognition of facial motion is mediated by knowledge of the local temporal characteristics of one's own actions. Specifically, inverted self-recognition was unaffected by disruption of feature configurations and trajectories, but eliminated by temporal distortion. While actors lack third-person visual experience of their actions, they have a lifetime of proprioceptive, somatosensory, vestibular and first-person-visual experience. These sources of contingent feedback may provide actors with knowledge about the temporal properties of their actions, potentially supporting recognition of characteristic rhythmic variation when viewing self-produced motion. In contrast, the ability to recognize the motion signatures of familiar others may be dependent on configural topographic cues.
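
To make the key manipulation concrete, the sketch below (hypothetical Python, not the stimuli used in the experiment) illustrates what a “temporal distortion” of point-light data might look like: the spatial path of every point is preserved, but timing is warped, altering the motion’s characteristic rhythm.

```python
# Illustrative sketch of a temporal distortion: a monotonic time warp that
# changes the rhythm of a point-light trajectory while leaving its spatial
# configuration and paths intact. Not the experiment's actual manipulation.
import numpy as np

def time_warp(trajectory, strength=0.5):
    """trajectory: (n_frames, n_points, 2) array of point-light coordinates."""
    n_frames, n_points, n_dims = trajectory.shape
    t = np.linspace(0.0, 1.0, n_frames)
    warped_t = t ** (1.0 + strength)        # monotonic warp, endpoints preserved
    out = np.empty_like(trajectory)
    for p in range(n_points):
        for d in range(n_dims):
            out[:, p, d] = np.interp(warped_t, t, trajectory[:, p, d])
    return out

# Example: 120 frames of 13 points moving along arbitrary smooth paths.
frames = np.linspace(0, 1, 120)[:, None, None]
phase = np.arange(13)[None, :, None]
trajectory = np.concatenate([np.sin(2 * np.pi * frames + phase),
                             np.cos(2 * np.pi * frames + phase)], axis=2)
distorted = time_warp(trajectory, strength=0.5)
```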


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Emmanuelle Bellot ◽  
Antoine Garnier-Crussard ◽  
Elodie Pongan ◽  
Floriane Delphin-Combe ◽  
Marie-Hélène Coste ◽  
...  

Some of the behavioral disorders observed in Parkinson’s disease (PD) may be related to altered processing of social messages, including emotional expressions. Emotions conveyed by whole-body movements may be difficult for PD patients both to generate and to detect. The aim of the present study was to compare valence judgments of emotional whole-body expressions in individuals with PD and in healthy controls matched for age, gender and education. Twenty-eight participants (13 PD patients and 15 healthy matched control participants) were asked to rate the emotional valence of short movies depicting emotional interactions between two human characters presented with the “Point Light Displays” technique. To ensure understanding of the perceived scene, participants were asked to briefly describe each of the evaluated movies. Patients’ emotional valence evaluations were less intense than those of controls for both positive (p < 0.001) and negative (p < 0.001) emotional expressions, even though patients were able to correctly describe the depicted scene. Our results extend the previously observed impaired processing of emotional facial expressions to impaired processing of emotions expressed by body language. This study may support the hypothesis that PD affects the embodied simulation of emotional expression and the mirror neuron system potentially involved in it.


2020 ◽  
Vol 20 (11) ◽  
pp. 1719
Author(s):  
Akila Kadambi ◽  
Gennady Erlikhman ◽  
Martin Monti ◽  
Hongjing Lu

Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 1007
Author(s):  
Chi Xu ◽  
Yunkai Jiang ◽  
Jun Zhou ◽  
Yi Liu

Hand gesture recognition and hand pose estimation are two closely correlated tasks. In this paper, we propose a deep-learning based approach which jointly learns an intermediate-level shared feature for these two tasks, so that the hand gesture recognition task can benefit from the hand pose estimation task. In the training process, a semi-supervised training scheme is designed to address the lack of proper annotations. Our approach detects the foreground hand, recognizes the hand gesture, and estimates the corresponding 3D hand pose simultaneously. To evaluate the hand gesture recognition performance of state-of-the-art methods, we propose a challenging hand gesture recognition dataset collected in unconstrained environments. Experimental results show that our gesture recognition accuracy is significantly boosted by leveraging the knowledge learned from the hand pose estimation task.
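
The abstract does not spell out the architecture, but the shared-feature idea can be sketched roughly as follows (PyTorch, with entirely hypothetical layer sizes and gesture/joint counts): one backbone produces an intermediate feature consumed by both a gesture-classification head and a 3D-pose-regression head, so pose supervision shapes the representation that gesture recognition uses.

```python
# Rough sketch of joint gesture recognition + 3D hand pose estimation with a
# shared intermediate feature (hypothetical sizes; not the paper's architecture).
import torch
import torch.nn as nn

class SharedHandNet(nn.Module):
    def __init__(self, n_gestures=10, n_joints=21):
        super().__init__()
        # Shared backbone: produces the intermediate feature used by both tasks.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 128), nn.ReLU(),
        )
        self.gesture_head = nn.Linear(128, n_gestures)      # classification
        self.pose_head = nn.Linear(128, n_joints * 3)       # 3D joint regression

    def forward(self, x):
        feat = self.backbone(x)
        return self.gesture_head(feat), self.pose_head(feat)

model = SharedHandNet()
images = torch.randn(8, 3, 128, 128)                        # dummy batch
gesture_labels = torch.randint(0, 10, (8,))
pose_targets = torch.randn(8, 21 * 3)

gesture_logits, pose_pred = model(images)
# Joint loss: pose supervision regularizes the shared feature, which is how
# gesture recognition can benefit from the pose estimation task.
loss = nn.functional.cross_entropy(gesture_logits, gesture_labels) \
     + nn.functional.mse_loss(pose_pred, pose_targets)
loss.backward()
```

Under the abstract’s semi-supervised setting, one natural variant is to drop the pose loss term for samples that lack pose annotations; this is only a guess at the scheme, which the abstract does not detail.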


2005 ◽  
Vol 36 (3) ◽  
pp. 219-229 ◽  
Author(s):  
Peggy Nelson ◽  
Kathryn Kohnert ◽  
Sabina Sabur ◽  
Daniel Shaw

Purpose: Two studies were conducted to investigate the effects of classroom noise on attention and speech perception in native Spanish-speaking second graders learning English as their second language (L2) as compared to English-only-speaking (EO) peers. Method: Study 1 measured children’s on-task behavior during instructional activities with and without soundfield amplification. Study 2 measured the effects of noise (+10 dB signal-to-noise ratio) using an experimental English word recognition task. Results: Findings from Study 1 revealed no significant condition (pre/post-amplification) or group differences in observed on-task performance. Main findings from Study 2 were that word recognition performance declined significantly for both L2 and EO groups in the noise condition; however, the impact was disproportionately greater for the L2 group. Clinical Implications: Children learning in their L2 appear to be at a distinct disadvantage when listening in rooms with typical noise and reverberation. Speech-language pathologists and audiologists should collaborate to inform teachers, help reduce classroom noise, increase signal levels, and improve access to spoken language for L2 learners.
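
For readers unfamiliar with the +10 dB signal-to-noise ratio condition, the sketch below (hypothetical Python; not the study’s stimulus preparation) shows how a noise track can be scaled so that a speech-plus-noise mixture reaches a target SNR.

```python
# Minimal sketch: scaling noise so that speech + noise has a target SNR in dB,
# using SNR_dB = 20 * log10(speech_rms / noise_rms). Signals here are synthetic
# placeholders, not the study's recorded speech or classroom noise.
import numpy as np

def mix_at_snr(speech, noise, snr_db=10.0):
    """Return speech + noise, with noise rescaled to the requested SNR."""
    speech_rms = np.sqrt(np.mean(speech ** 2))
    noise_rms = np.sqrt(np.mean(noise ** 2))
    target_noise_rms = speech_rms / (10 ** (snr_db / 20))
    return speech + noise * (target_noise_rms / noise_rms)

t = np.linspace(0, 1, 16000)
speech = 0.5 * np.sin(2 * np.pi * 220 * t)                  # stand-in "speech"
noise = np.random.default_rng(0).normal(0, 0.1, size=t.shape)
mixed = mix_at_snr(speech, noise, snr_db=10.0)              # +10 dB SNR mixture
```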


2018 ◽  
Vol 2018 ◽  
pp. 1-10 ◽  
Author(s):  
Muhammad Sajid ◽  
Nouman Ali ◽  
Saadat Hanif Dar ◽  
Naeem Iqbal Ratyal ◽  
Asif Raza Butt ◽  
...  

Recently, face datasets containing celebrity photos with facial makeup have been growing at exponential rates, making their recognition very challenging. Existing face recognition methods rely on feature extraction and reference reranking to improve performance. However, face images with facial makeup carry inherent ambiguity due to artificial colors, shading, contouring, and varying skin tones, making the recognition task more difficult. The problem is further confounded because makeup alters the bilateral size and symmetry of certain face components, such as the eyes and lips, affecting the distinctiveness of faces. The ambiguity becomes even worse when celebrities wear different makeup on different days, owing to interpersonal context and current societal makeup trends. To cope with these artificial effects, we propose to use a deep convolutional neural network (dCNN) trained on an augmented face dataset to extract discriminative features from face images containing synthetic makeup variations. The augmented dataset, containing original face images and images with synthetic makeup variations, allows the dCNN to learn face features across a variety of facial makeup conditions. We also evaluate the role of partial and full makeup in face images in improving recognition performance. The experimental results on two challenging face datasets show that the proposed approach can compete with the state of the art.
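
The paper’s synthetic makeup augmentation is not reproduced here; as a loose sketch of the general idea (augment face images, then extract dCNN features), the snippet below uses off-the-shelf torchvision transforms, with color jitter standing in crudely for makeup-like color and shading changes, and a generic ResNet backbone as the feature extractor. All specifics are placeholders, not the authors’ pipeline.

```python
# Loose sketch only: color jitter is a crude stand-in for synthetic makeup
# variation, and ResNet-18 is a generic dCNN feature extractor. None of this
# reproduces the paper's augmentation or network.
import torch
import torch.nn as nn
from torchvision import models, transforms

augment = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.4, hue=0.1),
    transforms.ToTensor(),
])

backbone = models.resnet18(weights=None)   # generic dCNN backbone
backbone.fc = nn.Identity()                # strip classifier -> 512-d embeddings
backbone.eval()

@torch.no_grad()
def extract_features(pil_face_images):
    """Return one 512-d feature vector per (PIL) face image."""
    batch = torch.stack([augment(img) for img in pil_face_images])
    return backbone(batch)
```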


2021 ◽  
Vol 12 ◽  
Author(s):  
Jorge Oliveira ◽  
Marta Fernandes ◽  
Pedro J. Rosa ◽  
Pedro Gamito

Research on pupillometry provides increasing evidence for associations between pupil activity and memory processing. The most consistent finding is an increase in pupil size for old items compared with novel items, suggesting that pupil activity is associated with the strength of the memory signal. However, the time course of these changes is not completely known, specifically when items are presented in a running recognition task that maximizes interference by requiring recognition of the most recent items from a sequence of old/new items. The sample comprised 42 healthy participants who performed a visual word recognition task under varying retention intervals. Recognition responses were evaluated using behavioral variables for discrimination accuracy, reaction time, and confidence in recognition decisions. Pupil activity was recorded continuously during the entire experiment. The results suggest a decrease in recognition performance with increasing study-test retention interval. Pupil size decreased across retention intervals, while pupil old/new effects were found only for words recognized at the shortest retention interval. Pupillary responses consisted of a pronounced early pupil constriction at retrieval under longer study-test lags, corresponding to weaker memory signals. However, pupil size was also sensitive to the subjective feeling of familiarity, as shown by pupil dilation to false alarms (new items judged as old). These results suggest that pupil size is related not only to the strength of the memory signal but also to subjective familiarity decisions in a continuous recognition memory paradigm.
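
As a toy illustration of the pupil old/new effect discussed above (simulated numbers and a hypothetical trial structure; not the authors’ processing pipeline), the sketch below baseline-corrects pupil traces and contrasts mean pupil size for old versus new items.

```python
# Minimal sketch: a baseline-corrected pupil "old/new effect" as the difference
# in mean pupil size between old and new trials. All data are simulated.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: trials x samples of pupil diameter, baseline window first.
n_trials, n_samples, baseline_len = 120, 300, 30
pupil = rng.normal(3.0, 0.2, size=(n_trials, n_samples))    # mm, simulated
is_old = rng.integers(0, 2, size=n_trials).astype(bool)     # old vs. new items

baseline = pupil[:, :baseline_len].mean(axis=1, keepdims=True)
corrected = pupil - baseline                                 # subtractive baseline

old_mean = corrected[is_old].mean()
new_mean = corrected[~is_old].mean()
print(f"pupil old/new effect: {old_mean - new_mean:.4f} mm")
```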

