auditory recognition
Recently Published Documents


TOTAL DOCUMENTS: 121 (five years: 12)
H-INDEX: 26 (five years: 1)

2022 ◽  
Vol 33 ◽  
pp. 102942
Author(s):  
Jan-Ole Radecke ◽  
Irina Schierholz ◽  
Andrej Kral ◽  
Thomas Lenarz ◽  
Micah M. Murray ◽  
...  

Author(s):  
Xu Chen ◽ 
Shibo Wang ◽  
Houguang Liu ◽  
Jianhua Yang ◽  
Songyong Liu ◽  
...  

Abstract Many data-driven coal gangue recognition (CGR) methods based on the vibration or sound of collapsed coal and gangue have been proposed to achieve automatic CGR, which is important for realizing intelligent top-coal caving. However, the strong background noise and complex environment in underground coal mines render this task challenging in practical applications. Inspired by the fact that workers distinguish coal and gangue from underground noise by listening to the hydraulic support sound, we propose an auditory-model-based CGR method that simulates human auditory recognition by combining an auditory spectrogram with a convolutional neural network (CNN). First, we adjust the characteristic frequency (CF) distribution of the auditory peripheral model (APM) based on the spectral characteristics of collapsed sound signals from coal and gangue, and then process the sound signals using the adjusted APM to obtain inferior colliculus auditory signals with multiple CFs. Subsequently, the auditory signals of all CFs are converted into gray images separately and then concatenated into a multichannel auditory spectrum along the channel dimension. Finally, we input the multichannel auditory spectrum as a feature map to a two-dimensional CNN, whose convolutional layers automatically extract features, while the fully connected layer and softmax layer flatten the features and predict the recognition result, respectively. The CNN is optimized for CGR based on a comparative study of four typical CNN structures with different network training hyperparameters. The experimental results show that the method achieves accurate CGR, with a recognition accuracy of 99.5%. Moreover, it offers excellent noise immunity compared with typically used CGR methods under various noisy conditions.
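The pipeline described in this abstract (per-CF auditory signals rendered as gray images, stacked along the channel dimension, then classified by a two-dimensional CNN) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the auditory peripheral model is approximated by a simple band-pass filter bank, and the CFs, image size, and layer sizes are invented.

```python
# Minimal sketch of the multichannel auditory spectrum + 2-D CNN idea.
# The paper's auditory peripheral model is approximated by a band-pass
# filter bank (an assumption); CFs, image size, and layer sizes are
# illustrative, not the authors' values.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import butter, sosfilt, spectrogram

def multichannel_auditory_spectrum(x, fs, cfs, n_freq=64, n_time=64):
    """Band-pass the sound at each characteristic frequency (CF), turn each
    band signal into a gray image (log spectrogram), and stack the images
    along the channel dimension."""
    channels = []
    for cf in cfs:
        sos = butter(4, [cf / 1.4, cf * 1.4], btype="bandpass", fs=fs,
                     output="sos")
        band = sosfilt(sos, x)
        _, _, sxx = spectrogram(band, fs=fs, nperseg=256, noverlap=128)
        img = np.log1p(sxx)
        img = (img - img.min()) / (np.ptp(img) + 1e-9)  # gray values in [0, 1]
        img = img[:n_freq, :n_time]                     # crude fixed-size crop
        pad = ((0, n_freq - img.shape[0]), (0, n_time - img.shape[1]))
        channels.append(np.pad(img, pad))
    return np.stack(channels)                           # (n_cf, n_freq, n_time)

class CgrCnn(nn.Module):
    """2-D CNN: convolutional layers extract features; the fully connected
    layer and softmax layer flatten features and predict coal vs. gangue."""
    def __init__(self, n_cf, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_cf, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 16 * 16, n_classes), nn.Softmax(dim=1),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

fs, cfs = 16000, [500, 1000, 2000, 4000]                # assumed CFs
x = np.random.randn(fs)                                 # stand-in 1 s recording
spec = multichannel_auditory_spectrum(x, fs, cfs)
probs = CgrCnn(n_cf=len(cfs))(torch.tensor(spec[None], dtype=torch.float32))
print(probs)                                            # class probabilities
```

In actual training the softmax would typically be folded into the loss (e.g. nn.CrossEntropyLoss); it is kept as an explicit layer here only to mirror the abstract's description.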


2021 ◽  
pp. 1-6
Author(s):  
Selma Yilar ◽  
Ilknur Tasdemir ◽  
Betul Koska ◽  
Esra Belen ◽  
Buse Cetinkaya ◽  
...  

Objective: Emotions in social interaction are often conveyed through the visual and auditory modes together. We aimed to investigate the ability to recognize facial and/or auditory emotions in school-aged children with cochlear implants and in healthy controls. Methods: All participants were asked to respond to the facial emotions of Ekman and Friesen's pictures, then to auditory emotions, and finally to video-based, dynamic, synchronous facial and auditory emotions. Results: The mean accuracy rates in recognizing anger (p = 0.025), surprise (p = 0.029), and neutral (p = 0.029) faces were significantly worse in children with cochlear implants (CIs) than in healthy controls. Children with CIs were also significantly worse than healthy controls in recognizing all auditory emotions except fear (p = 0.067). The mean accuracy rates in recognizing the video-based auditory/facial emotions of surprise (p = 0.031) and neutral (p = 0.029) were significantly worse in children with CIs. Conclusion: Children with hearing loss were poorer than healthy children at recognizing surprise, anger, and neutral facial emotions; they performed similarly in recognizing anger when both stimuli were given synchronously, which may have a positive effect on social behaviors. It seems beneficial to include emotion recognition training in rehabilitation programs.
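For readers wanting to reproduce this kind of group comparison of accuracy rates, a minimal sketch follows. The abstract does not name the statistical test used, so a Mann-Whitney U test is assumed here as one common choice for accuracy data; all scores are invented.

```python
# Hypothetical per-child accuracy scores for one emotion category.
# The test choice (Mann-Whitney U) is an assumption, not the study's method.
from scipy.stats import mannwhitneyu

ci_group = [0.55, 0.60, 0.45, 0.70, 0.50, 0.65]   # children with CIs (made up)
controls = [0.80, 0.85, 0.75, 0.90, 0.70, 0.95]   # healthy controls (made up)

stat, p = mannwhitneyu(ci_group, controls, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.3f}")             # p < 0.05 -> group difference
```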


2021 ◽  
Vol 64 (3) ◽  
pp. 965-978
Author(s):  
Beatriz de Diego-Lázaro ◽  
Andrea Pittman ◽  
María Adelaida Restrepo

Purpose: The purpose of this study was to examine whether oral bilingualism could be an advantage for children with hearing loss when learning new words. Method: Twenty monolingual and 13 bilingual children with hearing loss were compared with each other and with 20 monolingual and 20 bilingual children with normal hearing on receptive vocabulary and on three word-learning tasks containing nonsense words in familiar (English and Spanish) and unfamiliar (Arabic) languages. We measured word learning on the day of training and retention the next day using an auditory recognition task. Analyses of covariance were used to compare performance on the word-learning tasks by language group (monolingual vs. bilingual) and hearing status (normal hearing vs. hearing loss), controlling for age and maternal education. Results: No significant differences were observed between monolingual and bilingual children, with or without hearing loss, in any of the word-learning tasks. Children with hearing loss performed more poorly than their hearing peers in Spanish word retention and in Arabic word learning and retention. Conclusions: Children with hearing loss who grew up exposed to Spanish showed neither higher nor lower word-learning abilities than monolingual children with hearing loss exposed to English only; oral bilingualism was therefore neither an advantage nor a disadvantage for word learning. Hearing loss negatively affected performance in monolingual and bilingual children when learning words in languages other than English (the dominant language). Monolingual and bilingual children with hearing loss are equally at risk for word-learning difficulties, and vocabulary size matters for word learning.
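A minimal sketch of the analysis of covariance described here, using statsmodels; the data frame, scores, and column names are invented placeholders, not the study's dataset.

```python
# ANCOVA sketch: word-learning score by language group and hearing status,
# controlling for age and maternal education. All values are placeholders.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.DataFrame({
    "score":    [14, 11, 15, 9, 13, 10, 12, 8, 14, 12, 11, 9],
    "language": ["mono", "bi"] * 6,
    "hearing":  ["nh"] * 6 + ["hl"] * 6,
    "age":      [6.1, 6.5, 7.0, 6.8, 7.2, 6.3, 6.9, 7.1, 6.4, 7.3, 6.6, 7.0],
    "mat_edu":  [12, 16, 14, 12, 18, 16, 12, 14, 16, 12, 14, 18],
})

model = smf.ols("score ~ C(language) * C(hearing) + age + mat_edu",
                data=df).fit()
print(anova_lm(model, typ=2))   # covariate-adjusted group effects
```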


Revista CEFAC ◽  
2021 ◽  
Vol 23 (2) ◽  
Author(s):  
Rayane Ferreira da Silva ◽  
Karina Paes Advíncula ◽  
Priscila Aliança Gonçalves ◽  
Gabrielle Araújo Leite ◽  
Liliane Desgualdo Pereira ◽  
...  

ABSTRACT Purpose: to investigate the auditory recognition of intermittent speech in relation to different modulation rates and ages. Methods: the participants were 20 young people, 20 middle-aged adults, and 16 older adults, all with auditory thresholds equal to or lower than 25 dB HL up to 4000 Hz. They were submitted to intermittent speech recognition tests presented at three modulation rates: 4 Hz, 10 Hz, and 64 Hz. The percentages of correct answers were compared between age groups and modulation rates; ANOVA with post hoc tests and a mixed linear regression model were used to investigate the modulation rate effect (p < 0.001). Results: regarding the age effect, the data showed a significant difference between young people and older adults, and between middle-aged and older adults. As for the modulation rate effect, the percentages of correct answers were significantly lower at the slowest rate (4 Hz) in all three age groups. Conclusion: an age effect was verified on intermittent speech recognition, with older adults having greater difficulty. A modulation rate effect was also observed in all three age groups: the higher the rate, the better the performance.
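A minimal sketch of the mixed-model portion of this analysis, assuming a random intercept per listener; all scores below are invented and only illustrate the reported pattern (older adults worse, slower rates worse).

```python
# Mixed-model sketch: recognition scores at three modulation rates
# (4, 10, 64 Hz) for three age groups, random intercept per listener.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for pid in range(12):
    group = ["young", "middle", "older"][pid % 3]
    base = {"young": 80, "middle": 78, "older": 65}[group]  # older worse
    for rate, bonus in [(4, 0), (10, 8), (64, 14)]:         # faster, better
        rows.append({"pid": pid, "group": group, "rate": rate,
                     "correct": base + bonus + rng.normal(0, 3)})
df = pd.DataFrame(rows)

model = smf.mixedlm("correct ~ C(rate) + C(group)", df, groups=df["pid"]).fit()
print(model.summary())   # rate and age-group effects with listener intercepts
```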


2020 ◽  
pp. 026565902096996
Author(s):  
Damaris F Estrella-Castillo ◽  
Héctor Rubio-Zapata ◽  
Lizzette Gómez-de-Regil

Profound hearing loss can have serious and irreversible consequences for oral language development in children, affecting spoken and written language acquisition. Auditory-verbal therapy has been widely applied to children with hearing loss with promising results, mainly in developed countries where cochlear implants are available. Auditory perception was evaluated in 25 children aged 5 to 8 years with profound hearing loss who used 4- or 5-channel hearing aids and were enrolled in a personalized auditory-verbal therapy program. Regarding initial auditory perception skills, children performed better on the Noises and Sounds block than on the Language block. By subscale, top performance was observed for auditory analysis (Noises and Sounds) and auditory recognition (Language). A series of t-tests showed significant improvement after auditory-verbal therapy in the global scores for the Noises and Sounds and Language blocks, regardless of sex, urban or rural community of origin, and nuclear or extended family. The study provides evidence of deficiencies in auditory perception in children with profound bilateral hearing loss and of how these might improve after auditory-verbal therapy. Nevertheless, the descriptive study design prevents conclusions regarding the effectiveness of the therapy. Subsequent research must take into account intrinsic and environmental factors that might mediate the benefits of auditory-verbal therapy for auditory perception.
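A minimal sketch of the pre/post comparison reported above, assuming paired t-tests on global block scores; the scores are invented examples, not the study's data.

```python
# Paired t-test sketch: global Language-block scores before and after
# auditory-verbal therapy. All values are made up for illustration.
from scipy.stats import ttest_rel

pre_language  = [10, 12, 9, 14, 11, 13, 8, 10]    # before therapy (made up)
post_language = [15, 16, 13, 18, 15, 17, 12, 14]  # after therapy (made up)

t, p = ttest_rel(pre_language, post_language)
print(f"t = {t:.2f}, p = {p:.4f}")                # small p -> improvement
```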


Ear & Hearing ◽  
2020 ◽  
Vol 41 (6) ◽  
pp. 1483-1491
Author(s):  
Sherri L. Smith ◽  
David B. Ryan ◽  
M. Kathleen Pichora-Fuller

Author(s):  
Amin Ebrahimi ◽  
Mohammad Ebrahim Mahdavi ◽  
Hamid Jalilvand

Background and Aim: Digits are suitable speech materials for evaluating recognition of speech in noise in clients with a wide range of language abilities. The Farsi Auditory Recognition of Digit-in-Noise (FARDIN) test has been developed and validated in learning-disabled children showing a dichotic listening deficit. This study was conducted to further validate FARDIN and to survey the effect of noise type on recognition performance in individuals with sensorineural hearing impairment. Methods: Persian monosyllabic digits 1−10 were extracted from the audio file of the FARDIN test. Ten lists were compiled using a random order of the triplets. The first five lists were mixed with multi-talker babble noise (MTBN) and the second five lists with speech-spectrum noise (SSN). The signal-to-noise ratio (SNR) varied from +5 to −15 dB in 5 dB steps. Twenty normal-hearing and 19 hearing-impaired individuals participated in the study. Results: Both types of noise differentiated hearing-impaired from normal-hearing listeners. The hearing-impaired group showed weaker digit recognition in both MTBN and SSN and needed a 4−5.6 dB higher SNR (50%) than the normal-hearing group. MTBN was more challenging than SSN for normal-hearing listeners. Conclusion: FARDIN is a valid test for estimating SNR (50%) in clients with hearing loss. SSN seems more appropriate as a background noise for testing auditory recognition of digits in noise. Keywords: auditory recognition; hearing loss; speech perception in noise; digit recognition in noise
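A minimal sketch of how SNR (50%) can be estimated from such a test: fit a psychometric function to the proportion of digits recognized at each tested SNR and read off the 50% point. The scores below are invented and the logistic form is an assumption; the FARDIN scoring details may differ.

```python
# SNR(50%) estimation sketch: fit a logistic psychometric function to
# recognition scores at the tested SNRs (+5 to -15 dB in 5 dB steps).
import numpy as np
from scipy.optimize import curve_fit

def logistic(snr, snr50, slope):
    """Proportion correct as a function of SNR; snr50 is the 50% point."""
    return 1.0 / (1.0 + np.exp(-slope * (snr - snr50)))

snrs   = np.array([5, 0, -5, -10, -15], dtype=float)
scores = np.array([0.98, 0.90, 0.62, 0.25, 0.05])  # proportion correct (made up)

(snr50, slope), _ = curve_fit(logistic, snrs, scores, p0=[-5.0, 0.5])
print(f"SNR(50%) = {snr50:.1f} dB")                # recognition threshold
```

Comparing the fitted snr50 between groups reproduces the kind of 4−5.6 dB threshold difference the abstract reports.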

