Recognition of briefly presented familiar and unfamiliar faces

Psihologija ◽  
2009 ◽  
Vol 42 (1) ◽  
pp. 47-66 ◽  
Author(s):  
Bozana Veres-Injac ◽  
Malte Persike

Early processing stages in the perception of familiar and unfamiliar faces were studied in four experiments by varying the type of available facial information in a four-alternative forced-choice recognition task. Both reaction time and recognition accuracy served as dependent measures. The observed data revealed an asymmetry in the processing of familiar and unfamiliar faces. A markedly weak inversion effect and a strong blurring effect suggest limited use of spatial relations in the early processing stages of unfamiliar faces. Recognition performance for whole familiar faces did not deteriorate due to blurring or the presentation of isolated internal features, suggesting a low level of representation for the featural properties of familiar faces. Based on these data, we propose that recognition of familiar faces relies much more on spatial relations among features, particularly internal features, than on featural characteristics. In contrast, recognition of unfamiliar faces resorts mainly to featural information.

2021 ◽  
Author(s):  
James Daniel Dunn ◽  
Victor Perrone de Lima Varela ◽  
Victoria Ida Nicholls ◽  
Michael Papinutto ◽  
David White ◽  
...  

People’s ability to recognize faces varies to a surprisingly large extent, and these differences are hereditary. But the cognitive and perceptual processes giving rise to these differences remain poorly understood. Here we compared the visual sampling of 10 super-recognizers – individuals who achieve the highest levels of accuracy in face recognition tasks – to that of typical viewers. Participants were asked to learn, and later recognize, a set of unfamiliar faces while their gaze position was recorded. They viewed faces through ‘spotlight’ apertures of varying size, with the face on the screen modified in real time to restrict the visual information displayed to a region around the participant’s gaze position. Higher recognition accuracy in super-recognizers was observed only when at least 36% of the face was visible. We also identified qualitative differences in their visual sampling that can explain their superior recognition accuracy: (1) less systematic focus on the eye region; (2) more fixations to the central region of faces; (3) greater visual exploration of faces in general. These differences were observed in both natural and spotlight viewing conditions but were most apparent when learning faces rather than during recognition. Critically, this suggests that superior recognition performance is founded on enhanced encoding of faces into memory rather than on better memory retention. Together, our results point to a process whereby super-recognizers construct a more robust memory trace by accumulating samples of complex visual information across successive eye movements.
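A gaze-contingent spotlight of this kind is typically implemented by re-masking the stimulus around the latest gaze sample on every display frame. Below is a minimal Python/NumPy sketch of such a mask, not the authors' implementation; the circular aperture shape, the `gaze_xy` coordinate, and the grayscale image format are illustrative assumptions:

```python
import numpy as np

def spotlight_mask(image, gaze_xy, radius, background=0.5):
    """Reveal a circular aperture of `image` centred on the current gaze
    sample; everything outside the aperture is replaced by `background`.

    image   : H x W grayscale array with values in [0, 1]
    gaze_xy : (x, y) gaze position in pixel coordinates
    radius  : aperture radius in pixels
    """
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    inside = (xs - gaze_xy[0]) ** 2 + (ys - gaze_xy[1]) ** 2 <= radius ** 2
    out = np.full_like(image, background)
    out[inside] = image[inside]
    return out

# For a face region of area A, an aperture covering 36% of the face
# (the accuracy threshold reported above) has radius r = sqrt(0.36 * A / pi).
```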


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 1007
Author(s):  
Chi Xu ◽  
Yunkai Jiang ◽  
Jun Zhou ◽  
Yi Liu

Hand gesture recognition and hand pose estimation are two closely correlated tasks. In this paper, we propose a deep-learning-based approach that jointly learns an intermediate-level shared feature for the two tasks, so that hand gesture recognition can benefit from hand pose estimation. In the training process, a semi-supervised training scheme is designed to address the lack of proper annotation. Our approach detects the foreground hand, recognizes the hand gesture, and estimates the corresponding 3D hand pose simultaneously. To evaluate the hand gesture recognition performance of state-of-the-art methods, we propose a challenging hand gesture recognition dataset collected in unconstrained environments. Experimental results show that our gesture recognition accuracy is significantly boosted by leveraging the knowledge learned from the hand pose estimation task.
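As a rough illustration of the joint-learning idea, the sketch below (PyTorch, with hypothetical layer sizes and joint counts, not the paper's architecture) shares one convolutional trunk between a gesture-classification head and a 3D-pose-regression head, and masks the pose loss for samples without pose annotation, mirroring the semi-supervised scheme described above:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GesturePoseNet(nn.Module):
    """One shared trunk, two task heads: gesture class and 3D joint positions."""
    def __init__(self, n_gestures=10, n_joints=21):
        super().__init__()
        self.trunk = nn.Sequential(           # learns the shared intermediate feature
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.gesture_head = nn.Linear(64, n_gestures)
        self.pose_head = nn.Linear(64, n_joints * 3)

    def forward(self, depth):                 # depth: N x 1 x H x W depth images
        feat = self.trunk(depth)
        return self.gesture_head(feat), self.pose_head(feat)

def multitask_loss(gesture_logits, pose_pred, gesture_y, pose_y, has_pose):
    """Semi-supervised loss: the pose term is applied only to samples
    that actually carry pose annotations (boolean mask `has_pose`)."""
    loss = F.cross_entropy(gesture_logits, gesture_y)
    if has_pose.any():
        loss = loss + F.mse_loss(pose_pred[has_pose], pose_y[has_pose])
    return loss
```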


2005 ◽  
Vol 36 (3) ◽  
pp. 219-229 ◽  
Author(s):  
Peggy Nelson ◽  
Kathryn Kohnert ◽  
Sabina Sabur ◽  
Daniel Shaw

Purpose: Two studies were conducted to investigate the effects of classroom noise on attention and speech perception in native Spanish-speaking second graders learning English as their second language (L2) as compared to English-only-speaking (EO) peers. Method: Study 1 measured children’s on-task behavior during instructional activities with and without sound-field amplification. Study 2 measured the effects of noise (+10 dB signal-to-noise ratio) using an experimental English word recognition task. Results: Findings from Study 1 revealed no significant condition (pre-/postamplification) or group differences in observed on-task behavior. The main findings from Study 2 were that word recognition performance declined significantly for both the L2 and EO groups in the noise condition; however, the impact was disproportionately greater for the L2 group. Clinical Implications: Children learning in their L2 appear to be at a distinct disadvantage when listening in rooms with typical noise and reverberation. Speech-language pathologists and audiologists should collaborate to inform teachers, help reduce classroom noise, increase signal levels, and improve access to spoken language for L2 learners.
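A +10 dB signal-to-noise condition like the one in Study 2 can be reproduced by scaling a noise track against the speech signal's power. A minimal NumPy sketch, assuming single-channel float arrays at a common sampling rate:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db=10.0):
    """Add `noise` to `speech`, scaled so the speech-to-noise power
    ratio equals `snr_db` (assumes noise is at least as long as speech)."""
    noise = noise[: len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Target noise power is p_speech / 10^(snr_db / 10).
    scale = np.sqrt(p_speech / (p_noise * 10.0 ** (snr_db / 10.0)))
    return speech + scale * noise
```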


2018 ◽  
Vol 2018 ◽  
pp. 1-10 ◽  
Author(s):  
Muhammad Sajid ◽  
Nouman Ali ◽  
Saadat Hanif Dar ◽  
Naeem Iqbal Ratyal ◽  
Asif Raza Butt ◽  
...  

Recently, face datasets containing photos of celebrities with facial makeup have been growing at exponential rates, making their recognition very challenging. Existing face recognition methods rely on feature extraction and reference reranking to improve performance. However, face images with facial makeup carry inherent ambiguity due to artificial colors, shading, contouring, and varying skin tones, making the recognition task more difficult. The problem becomes more confounded as makeup alters the bilateral size and symmetry of certain face components, such as the eyes and lips, affecting the distinctiveness of faces. The ambiguity becomes even worse when celebrities wear different facial makeup on different days, owing to interpersonal context and current societal makeup trends. To cope with these artificial effects, we propose to use a deep convolutional neural network (dCNN) trained on an augmented face dataset to extract discriminative features from face images containing synthetic makeup variations. The augmented dataset, containing original face images and versions with synthetic makeup variations, allows the dCNN to learn face features under a variety of facial makeup conditions. We also evaluate the role of partial and full makeup in face images in improving recognition performance. Experimental results on two challenging face datasets show that the proposed approach can compete with the state of the art.
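A hedged sketch of the training setup follows (PyTorch/torchvision). The paper's synthetic makeup variations would be generated offline by a dedicated synthesis step; here a simple ColorJitter transform merely stands in for that augmentation, and the ResNet-18 backbone is an illustrative choice, not the paper's architecture:

```python
import torch.nn as nn
from torchvision import models, transforms

# Stand-in augmentation: colour jitter approximates makeup-like colour and
# shading changes; the actual synthetic makeup variations would come from
# a dedicated synthesis step instead.
augment = transforms.Compose([
    transforms.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.4, hue=0.1),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

def build_finetune_model(n_identities):
    """ImageNet-pretrained ResNet-18 (an illustrative backbone) with a
    fresh identity-classification head for the augmented face dataset."""
    net = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    net.fc = nn.Linear(net.fc.in_features, n_identities)
    return net
```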


2021 ◽  
Vol 13 (10) ◽  
pp. 265
Author(s):  
Jie Chen ◽  
Bing Han ◽  
Xufeng Ma ◽  
Jian Zhang

Underwater target recognition is an important supporting technology for the development of marine resources, and it is mainly limited by the purity of feature extraction and the generality of recognition schemes. The low-frequency analysis and recording (LOFAR) spectrum is one of the key features of an underwater target and can be used for feature extraction. However, complex underwater environmental noise and the extremely low signal-to-noise ratio of the target signal lead to breakpoints in the LOFAR spectrum, which seriously hinder underwater target recognition. To overcome this issue and further improve recognition performance, we adopted a deep-learning approach for underwater target recognition, and a novel LOFAR spectrum enhancement (LSE)-based underwater target recognition scheme is proposed, consisting of preprocessing, offline training, and online testing. In preprocessing, we specifically design a LOFAR spectrum enhancement algorithm based on multi-step decision-making to recover the breakpoints in the LOFAR spectrum. In offline training, the enhanced LOFAR spectrum is adopted as the input of a convolutional neural network (CNN), and a LOFAR-based CNN (LOFAR-CNN) for online recognition is developed. Taking advantage of the powerful capability of CNNs in feature extraction, recognition accuracy can be further improved by the proposed LOFAR-CNN. Finally, extensive simulation results demonstrate that the LOFAR-CNN network can achieve a recognition accuracy of 95.22%, outperforming state-of-the-art methods.
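As background, a LOFAR spectrum is essentially a normalized short-time Fourier magnitude display. The sketch below (Python/SciPy) computes one with a per-bin median normalization; it illustrates only the CNN's input representation and omits the paper's multi-step breakpoint-recovery stage (window lengths are illustrative):

```python
import numpy as np
from scipy.signal import stft

def lofar_spectrum(signal, fs, nperseg=1024, noverlap=512):
    """Compute a LOFAR-style spectrogram: short-time Fourier power with a
    per-frequency-bin normalization that emphasises narrowband tonal
    lines against broadband ocean noise. Returned in dB."""
    f, t, z = stft(signal, fs=fs, nperseg=nperseg, noverlap=noverlap)
    power = np.abs(z) ** 2
    # Normalize each frequency bin by its median over time (a simple
    # whitening step; breakpoint recovery is a separate stage).
    power /= np.median(power, axis=1, keepdims=True) + 1e-12
    return f, t, 10.0 * np.log10(power + 1e-12)
```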


2021 ◽  
Vol 12 ◽  
Author(s):  
Jorge Oliveira ◽  
Marta Fernandes ◽  
Pedro J. Rosa ◽  
Pedro Gamito

Research on pupillometry provides increasing evidence for associations between pupil activity and memory processing. The most consistent finding is an increase in pupil size for old items compared with novel items, suggesting that pupil activity is associated with the strength of the memory signal. However, the time course of these changes is not completely known, specifically when items are presented in a running recognition task that maximizes interference by requiring recognition of the most recent items from a sequence of old/new items. The sample comprised 42 healthy participants who performed a visual word recognition task under varying conditions of retention interval. Recognition responses were evaluated using behavioral variables for discrimination accuracy, reaction time, and confidence in recognition decisions. Pupil activity was recorded continuously during the entire experiment. The results suggest a decrease in recognition performance with increasing study-test retention interval. Pupil size decreased across retention intervals, while pupil old/new effects were found only for words recognized at the shortest retention interval. Pupillary responses consisted of a pronounced early pupil constriction at retrieval under longer study-test lags, corresponding to weaker memory signals. However, pupil size was also sensitive to the subjective feeling of familiarity, as shown by pupil dilation to false alarms (new items judged as old). These results suggest that pupil size is related not only to the strength of the memory signal but also to subjective familiarity decisions in a continuous recognition memory paradigm.
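Analyses of this kind usually start from baseline-corrected pupil epochs around stimulus onset. A minimal NumPy sketch under that assumption (window lengths and variable names are illustrative, not the authors' pipeline):

```python
import numpy as np

def baseline_corrected_epochs(pupil, onsets, fs, pre=0.5, post=3.0):
    """Cut pupil-size epochs around stimulus onsets (sample indices) and
    subtract each epoch's pre-stimulus mean, the usual first step before
    comparing old/new pupil responses across retention intervals."""
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = []
    for onset in onsets:
        if onset - n_pre < 0 or onset + n_post > len(pupil):
            continue                    # skip epochs that fall off the recording
        seg = pupil[onset - n_pre : onset + n_post]
        epochs.append(seg - seg[:n_pre].mean())
    return np.stack(epochs)             # trials x samples
```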


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Li Liu ◽  
Yunfeng Ji ◽  
Yun Gao ◽  
Zhenyu Ping ◽  
Liang Kuang ◽  
...  

Traffic accidents are easily caused by fatigued driving. If the fatigue state of the driver can be identified in time and a corresponding early warning provided, the occurrence of traffic accidents could be avoided to a large extent. At present, methods for recognizing fatigue driving states focus mostly on recognition accuracy. The fatigue state is typically recognized by combining different features, such as facial expressions, electroencephalogram (EEG) signals, yawning, and the percentage of eyelid closure over the pupil over time (PERCLOS). Combining these features increases the recognition time and lacks real-time performance. In addition, some features introduce error into the recognition result, such as frequent yawning at the onset of a cold or frequent blinking with dry eyes. On the premise of ensuring recognition accuracy while improving the practical feasibility and real-time performance of fatigue-driving-state recognition, a fast support vector machine (FSVM) algorithm based on EEGs and electrooculograms (EOGs) is proposed to recognize fatigue driving states. First, the collected EEG and EOG modal data are preprocessed. Second, multiple features are extracted from the preprocessed EEGs and EOGs. Finally, the FSVM is used to classify the data features to obtain the recognition result for the fatigue state. Based on the recognition results, this paper designs a fatigue driving early-warning system based on Internet of Things (IoT) technology. When the driver shows symptoms of fatigue, the system not only sends a warning signal to the driver but also informs other nearby vehicles using the system through IoT technology and notifies the operations management back end.
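A hedged sketch of the feature-extraction and classification stages (Python/scikit-learn): band-power features from EEG/EOG epochs feed a linear SVM, which here stands in for the paper's FSVM variant; the band definitions and window lengths are illustrative:

```python
import numpy as np
from scipy.signal import welch
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power_features(epoch, fs):
    """Mean spectral power per channel in each classical band, computed
    from a channels x samples EEG/EOG epoch via Welch's method."""
    f, psd = welch(epoch, fs=fs, nperseg=2 * fs, axis=-1)
    feats = [psd[..., (f >= lo) & (f < hi)].mean(axis=-1)
             for lo, hi in BANDS.values()]
    return np.concatenate(feats, axis=-1).ravel()

# A standard linear SVM stands in for the paper's FSVM variant.
clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0))
# clf.fit(X_train, y_train); predictions = clf.predict(X_test)
```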


2020 ◽  
Author(s):  
Volkan Nurdal ◽  
Graeme Fairchild ◽  
George Stothart

Introduction: The development of rapid and reliable neural measures of memory is an important goal of cognitive neuroscience research and clinical practice. Fast Periodic Visual Stimulation (FPVS) is a recently developed electroencephalography (EEG) method that involves presenting a mix of novel and previously learnt stimuli at a fast rate. Recent work has shown that implicit recognition memory can be measured using FPVS; however, the role of repetition priming remains unclear. Here, we attempted to separate the effects of recognition memory and repetition priming by manipulating the degree of repetition of the stimuli to be remembered. Method: Twenty-two participants with a mean age of 20.8 (±4.3) years completed an FPVS-oddball paradigm with a varying number of repetitions of the oddball stimuli, ranging from repetition only (pure repetition) to no repetition (pure recognition). In addition to the EEG task, participants completed a behavioural recognition task and visual memory subtests from the Wechsler Memory Scale – 4th edition (WMS-IV). Results: An oddball memory response was observed in all four experimental conditions (pure repetition to pure recognition) compared to the control condition (no oddball stimuli). The oddball memory response was largest in the pure repetition condition and smaller, but still significant, in conditions with less or no oddball repetition (e.g. pure recognition). Behavioural recognition performance was at ceiling, suggesting that all images were encoded successfully. There was no correlation with either behavioural memory performance or WMS-IV scores, suggesting that the FPVS-oddball paradigm captures different memory processes than behavioural measures. Conclusion: Repetition priming significantly modulates the FPVS recognition memory response; however, recognition is still detectable even in the total absence of repetition priming. The FPVS-oddball paradigm could potentially be developed into an objective and easy-to-administer memory assessment tool.
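In FPVS-oddball analyses, the memory response is typically quantified as the spectral amplitude at the oddball frequency and its harmonics, excluding harmonics of the base presentation rate. A minimal NumPy sketch, with a 6 Hz base and 1.2 Hz oddball rate as illustrative values (the study's actual rates are not given above):

```python
import numpy as np

def oddball_amplitude(eeg, fs, base_hz=6.0, oddball_hz=1.2, n_harmonics=4):
    """Sum the EEG amplitude spectrum at the oddball frequency and its
    harmonics, skipping any harmonic that coincides with the base
    presentation rate, as in standard FPVS frequency-tagging analyses."""
    n = len(eeg)
    amp = np.abs(np.fft.rfft(eeg)) * 2.0 / n          # single-sided amplitude
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    total = 0.0
    for k in range(1, n_harmonics + 1):
        f = k * oddball_hz
        if np.isclose(f % base_hz, 0.0):              # e.g. 5 * 1.2 Hz = 6 Hz
            continue
        total += amp[np.argmin(np.abs(freqs - f))]
    return total
```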


Author(s):  
Mohammad Farhad Bulbul ◽  
Yunsheng Jiang ◽  
Jinwen Ma

Emerging cost-effective depth sensors have facilitated the action recognition task significantly. In this paper, the authors address the action recognition problem using depth video sequences, combining three discriminative features. More specifically, the authors generate three Depth Motion Maps (DMMs) over the entire video sequence, corresponding to the front, side, and top projection views. Contourlet-based Histograms of Oriented Gradients (CT-HOG), Local Binary Patterns (LBP), and Edge Oriented Histograms (EOH) are then computed from the DMMs. To merge these features, the authors use decision-level fusion, where a soft decision-fusion rule, the Logarithmic Opinion Pool (LOGP), combines the classification outcomes from multiple classifiers, each with an individual set of features. Experimental results on two datasets reveal that the fusion scheme achieves superior action recognition performance compared to using each feature individually.
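The LOGP rule itself is compact: the fused posterior is a weighted geometric mean of the per-classifier posteriors, renormalized. A minimal NumPy sketch, with uniform weights as a default assumption:

```python
import numpy as np

def logp_fusion(posteriors, weights=None):
    """Logarithmic Opinion Pool: a weighted geometric mean of the
    per-classifier posterior distributions, renormalized to sum to 1.

    posteriors : list of length-C arrays, one per feature-specific
                 classifier (e.g. CT-HOG, LBP, EOH), each summing to 1
    """
    p = np.stack(posteriors)                          # classifiers x classes
    w = np.full(len(p), 1.0 / len(p)) if weights is None else np.asarray(weights)
    log_pool = (w[:, None] * np.log(p + 1e-12)).sum(axis=0)
    fused = np.exp(log_pool)
    return fused / fused.sum()

# Predicted action: int(np.argmax(logp_fusion([p_cthog, p_lbp, p_eoh])))
```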

