Estimating Posture-Recognition Performance in Sensing Garments Using Geometric Wrinkle Modeling

2010 ◽  
Vol 14 (6) ◽  
pp. 1436-1445 ◽  
Author(s):  
Holger Harms ◽  
Oliver Amft ◽  
Gerhard Tröster
Author(s):  
Nayra A. Martin-Key ◽  
Erich W. Graf ◽  
Wendy J. Adams ◽  
Graeme Fairchild

Adolescents with Conduct Disorder (CD) show deficits in recognizing facial expressions of emotion, but it is not known whether these difficulties extend to other social cues, such as emotional body postures. Moreover, in the absence of eye-tracking data, it is not known whether such deficits, if present, are due to a failure to attend to emotionally informative regions of the body. Male and female adolescents with CD and varying levels of callous-unemotional (CU) traits (n = 45) and age- and sex-matched typically-developing controls (n = 51) categorized static and dynamic emotional body postures. The emotion categorization task was paired with eye-tracking methods to investigate relationships between fixation behavior and recognition performance. Having CD was associated with impaired recognition of static and dynamic body postures and atypical fixation behavior. Furthermore, males were less likely to fixate emotionally-informative regions of the body than females. While we found no effects of CU traits on body posture recognition, the effects of CU traits on fixation behavior varied according to CD status and sex, with CD males with lower levels of CU traits showing the most atypical fixation behavior. Critically, atypical fixation behavior did not explain the body posture recognition deficits observed in CD. Our findings suggest that CD-related impairments in recognition of body postures of emotion are not due to attentional issues. Training programmes designed to ameliorate the emotion recognition difficulties associated with CD may need to incorporate a body posture component.
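
As a hedged illustration of how "fixation behavior" toward emotionally informative regions might be quantified, the sketch below computes the proportion of fixation time falling inside an area of interest (AOI); the data layout, column names, and AOI coordinates are assumptions for illustration, not the authors' procedure.

```python
# Proportion of fixation time inside an emotionally informative body region (AOI).
# Data layout, column names, and AOI coordinates are illustrative assumptions.
import pandas as pd

def aoi_fixation_proportion(fixations: pd.DataFrame, aoi: tuple) -> float:
    """fixations: columns x, y (pixels) and duration (ms); aoi: (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = aoi
    inside = fixations["x"].between(x0, x1) & fixations["y"].between(y0, y1)
    total = fixations["duration"].sum()
    return float(fixations.loc[inside, "duration"].sum() / total) if total else 0.0

# Example with made-up fixations and an AOI roughly covering the torso/arms region.
fix = pd.DataFrame({"x": [320, 500, 410], "y": [240, 600, 420], "duration": [180, 220, 260]})
print(aoi_fixation_proportion(fix, aoi=(300, 350, 520, 700)))
```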


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Xian Yu ◽  
Bo Xiao ◽  
Ye Tian ◽  
Zihao Wu ◽  
Qi Liu ◽  
...  

At present, research on upper-limb posture recognition is still at an early stage; because of the diversity of real-world environments and the complexity of human body posture, there is no public dataset for upper-limb postures. In this paper, an upper-extremity data acquisition system with a three-channel acquisition mode is designed, collecting acceleration and gyroscope signals as sample data. The datasets were preprocessed with de-weighting, interpolation, and feature extraction. With the goal of recognizing human posture, experiments were conducted with the KNN, logistic regression, and stochastic gradient descent algorithms. To compare the algorithms, the data window was adjusted and the recognition speed, computation time, and accuracy of each classifier were measured. To improve the accuracy of human posture recognition, a fully connected neural network model is developed. In addition, this paper proposes a finite state machine- (FSM-) based FES control model for controlling the upper limb to perform a range of functional tasks. In constructing the network model, the effects of different hidden layers, activation functions, and optimizers on the recognition rate were compared experimentally; the softplus activation function and the adagrad optimizer, which gave better recognition performance, were selected. Finally, by comparing overall recognition accuracy and time efficiency with other classification models, the superiority of the fully connected neural network for human posture recognition is verified.
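
A minimal sketch of the kind of fully connected classifier the abstract describes, using softplus activations and the Adagrad optimizer; the feature dimension, layer sizes, and class count are illustrative assumptions rather than the paper's actual configuration.

```python
# Fully connected posture classifier with softplus activations and Adagrad.
# Feature dimension, layer sizes, and class count are assumed for illustration.
import torch
import torch.nn as nn

N_FEATURES = 36   # e.g., statistical features from windowed accel + gyro channels (assumed)
N_CLASSES = 6     # number of upper-limb postures (assumed)

model = nn.Sequential(
    nn.Linear(N_FEATURES, 64),
    nn.Softplus(),
    nn.Linear(64, 32),
    nn.Softplus(),
    nn.Linear(32, N_CLASSES),   # raw logits; CrossEntropyLoss applies softmax internally
)

optimizer = torch.optim.Adagrad(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

def train_step(x_batch, y_batch):
    """One gradient step on a batch of windowed IMU features."""
    optimizer.zero_grad()
    loss = criterion(model(x_batch), y_batch)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example with random stand-in data (replace with real windowed features and labels).
x = torch.randn(16, N_FEATURES)
y = torch.randint(0, N_CLASSES, (16,))
print(train_step(x, y))
```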


1991 ◽  
Vol 34 (2) ◽  
pp. 415-426 ◽  
Author(s):  
Richard L. Freyman ◽  
G. Patrick Nerbonne ◽  
Heather A. Cote

This investigation examined the degree to which modification of the consonant-vowel (C-V) intensity ratio affected consonant recognition under conditions in which listeners were forced to rely more heavily on waveform envelope cues than on spectral cues. The stimuli were 22 vowel-consonant-vowel utterances, which had been mixed at six different signal-to-noise ratios with white noise that had been modulated by the speech waveform envelope. The resulting waveforms preserved the gross speech envelope shape, but spectral cues were limited by the white-noise masking. In a second stimulus set, the consonant portion of each utterance was amplified by 10 dB. Sixteen subjects with normal hearing listened to the unmodified stimuli, and 16 listened to the amplified-consonant stimuli. Recognition performance was reduced in the amplified-consonant condition for some consonants, presumably because waveform envelope cues had been distorted. However, for other consonants, especially the voiced stops, consonant amplification improved recognition. Patterns of errors were altered for several consonant groups, including some that showed only small changes in recognition scores. The results indicate that when spectral cues are compromised, nonlinear amplification can alter waveform envelope cues for consonant recognition.
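
For illustration, the two stimulus manipulations described above can be sketched as follows; the envelope extraction, SNR scaling, and consonant segment boundaries here are simplifying assumptions, not the authors' exact procedure.

```python
# (1) White noise modulated by the speech waveform envelope, mixed at a chosen SNR.
# (2) A 10 dB amplification of the consonant segment.
# Envelope method, SNR scaling, and segment boundaries are simplified assumptions.
import numpy as np
from scipy.signal import hilbert

def envelope_modulated_noise(speech, snr_db, rng=np.random.default_rng(0)):
    """Mix speech with white noise shaped by the speech envelope at snr_db."""
    envelope = np.abs(hilbert(speech))            # gross waveform envelope
    noise = rng.standard_normal(len(speech)) * envelope
    # Scale the noise to the requested signal-to-noise ratio.
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    noise *= np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + noise

def amplify_consonant(signal, c_start, c_end, gain_db=10.0):
    """Raise the consonant portion [c_start:c_end] by gain_db (here 10 dB)."""
    out = signal.copy()
    out[c_start:c_end] *= 10 ** (gain_db / 20)
    return out

# Usage with a synthetic utterance (replace with a real VCV waveform and boundaries):
utterance = np.sin(2 * np.pi * 200 * np.arange(16000) / 16000)
masked = envelope_modulated_noise(amplify_consonant(utterance, 6000, 10000), snr_db=0)
```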


2018 ◽  
Vol 1 (2) ◽  
pp. 34-44
Author(s):  
Faris E Mohammed ◽  
Dr. Eman M ALdaidamony ◽  
Prof. A. M Raid

The process of identifying individuals is a significant one that underlies a large portion of day-to-day activities. Identification is required in workplaces, private zones, banks, etc. Individuals have many characteristics that can be used for recognition, such as the finger veins, iris, and face. Finger vein and iris key-points are considered among the most promising biometric authentication techniques for their security and convenience. SIFT is a new and promising technique for pattern recognition. However, many related techniques suffer from shortcomings such as feature loss, difficult key-point extraction, and the introduction of noise points. In this manuscript, a new technique, SIFT-based iris and SIFT-based finger vein identification with normalization and enhancement, is proposed to achieve better performance. In comparison with other SIFT-based iris or SIFT-based finger vein recognition algorithms, the suggested technique can overcome the difficulty of excessive key-point extraction and exclude noise points without feature loss. Experimental results demonstrate that the normalization and enhancement steps are critical for SIFT-based iris and finger vein recognition, and that the proposed technique can achieve satisfactory recognition performance. Keywords: SIFT, Iris Recognition, Finger Vein Identification, Biometric Systems.
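
A rough sketch of SIFT key-point extraction and matching on a pre-processed iris or finger-vein image pair, in the spirit of the technique above; the histogram-equalization enhancement, file paths, and ratio threshold are illustrative assumptions, not the authors' pipeline.

```python
# SIFT key-point matching between two grayscale biometric images (e.g., iris or
# finger vein). Enhancement step, paths, and ratio threshold are illustrative.
import cv2

def sift_match_score(img_path_a, img_path_b, ratio=0.75):
    """Count Lowe-ratio-filtered SIFT matches between two grayscale images."""
    img_a = cv2.imread(img_path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(img_path_b, cv2.IMREAD_GRAYSCALE)
    # Simple contrast enhancement stands in for the paper's enhancement step.
    img_a = cv2.equalizeHist(img_a)
    img_b = cv2.equalizeHist(img_b)

    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_a, des_b, k=2)
    # Lowe's ratio test discards ambiguous (likely noisy) correspondences.
    good = [pair[0] for pair in matches
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance]
    return len(good)

# Usage (paths are placeholders):
# score = sift_match_score("probe_finger_vein.png", "enrolled_finger_vein.png")
```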


2011 ◽  
Vol 6 (4) ◽  
pp. 1-6
Author(s):  
Ayesha Butalia ◽  
◽  
A.K. Ramani ◽  
Parag Kulkarni ◽  
Swapnil Patil ◽  
...  

2019 ◽  
Author(s):  
Alex Bertrams ◽  
Katja Schlegel

People high in autistic-like traits have been found to have difficulties with recognizing emotions from nonverbal expressions. However, findings on the relationship between autistic-like traits and emotion recognition are inconsistent. In the present study, we investigated whether speeded reasoning ability (reasoning performance under time pressure) moderates the inverse relationship between autistic-like traits and emotion recognition performance. We expected the negative correlation between autistic-like traits and emotion recognition to be weaker when speeded reasoning ability was high. MTurkers (N = 217) completed the ten-item version of the Autism Spectrum Quotient (AQ-10), two emotion recognition tests using videos with sound (Geneva Emotion Recognition Test, GERT-S) and pictures (Reading the Mind in the Eyes Test, RMET), and Baddeley's Grammatical Reasoning test to measure speeded reasoning. As expected, the higher the speeded reasoning ability, the weaker the relationship between higher autistic-like traits and lower emotion recognition performance. These results suggest that a high ability to make quick mental inferences may (partly) compensate for difficulties with intuitive emotion recognition related to autistic-like traits.
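
A minimal sketch of the kind of moderation analysis implied above, regressing emotion recognition performance on autistic-like traits, speeded reasoning, and their interaction; the simulated data and variable names are illustrative, and the authors' exact analysis may differ.

```python
# Moderated regression: does speeded reasoning moderate the relationship between
# autistic-like traits (AQ-10) and emotion recognition? Simulated, illustrative data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 217  # sample size reported in the abstract
df = pd.DataFrame({
    "aq10": rng.normal(size=n),       # autistic-like traits (standardized, simulated)
    "reasoning": rng.normal(size=n),  # speeded reasoning score (standardized, simulated)
})
# Simulated outcome in which the negative AQ effect weakens as reasoning increases.
df["emotion_recognition"] = (-0.3 * df["aq10"]
                             + 0.2 * df["reasoning"]
                             + 0.2 * df["aq10"] * df["reasoning"]
                             + rng.normal(scale=1.0, size=n))

# The aq10:reasoning coefficient tests the moderation (interaction) effect.
model = smf.ols("emotion_recognition ~ aq10 * reasoning", data=df).fit()
print(model.summary())
```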


Author(s):  
Khamis A. Al-Karawi

Background & Objective: Speaker Recognition (SR) techniques have been developed into a relatively mature status over the past few decades through development work. Existing methods typically use robust features extracted from clean speech signals, and therefore in idealized conditions can achieve very high recognition accuracy. For critical applications, such as security and forensics, robustness and reliability of the system are crucial. Methods: The background noise and reverberation as often occur in many real-world applications are known to compromise recognition performance. To improve the performance of speaker verification systems, an effective and robust technique is proposed to extract features for speech processing, capable of operating in the clean and noisy condition. Mel Frequency Cepstrum Coefficients (MFCCs) and Gammatone Frequency Cepstral Coefficients (GFCC) are the mature techniques and the most common features, which are used for speaker recognition. MFCCs are calculated from the log energies in frequency bands distributed over a mel scale. While GFCC has been acquired from a bank of Gammatone filters, which was originally suggested to model human cochlear filtering. This paper investigates the performance of GFCC and the conventional MFCC feature in clean and noisy conditions. The effects of the Signal-to-Noise Ratio (SNR) and language mismatch on the system performance have been taken into account in this work. Conclusion: Experimental results have shown significant improvement in system performance in terms of reduced equal error rate and detection error trade-off. Performance in terms of recognition rates under various types of noise, various Signal-to-Noise Ratios (SNRs) was quantified via simulation. Results of the study are also presented and discussed.

