recognition score
Recently Published Documents


TOTAL DOCUMENTS

44
(FIVE YEARS 13)

H-INDEX

7
(FIVE YEARS 1)

2021 ◽  
Vol 7 (2) ◽  

Background: Deep band modulation (DBM) is an envelope enhancement strategy that enhances temporal modulation and may provide a cue for speech understanding in individuals with temporal processing deficits. Objective: To investigate the effect of deep band modulation on phrase recognition scores at different signal-to-noise ratios (SNRs) in older adults with hearing loss, classified as good and poor performers based on temporal resolution ability. Method: Phrase recognition scores were obtained for unprocessed and DBM phrases at three SNRs (4, 5, and −4 dB) in 25 older adults (age range 60 to 82 years, mean age 71.48 years) with bilateral mild to moderately severe sloping sensorineural hearing loss. The gap detection test was also administered to all participants. Results: Significantly better recognition scores were obtained for DBM phrases than for unprocessed phrases. The magnitude of improvement from DBM was not the same across participants, so participants were classified into good and poor performers based on their temporal processing ability. The mean unprocessed and DBM phrase recognition scores at each SNR were higher for the good performers than for the poor performers. The benefit of deep band modulation was evident for the good performers, especially at high SNR, and was moderately correlated with age and temporal processing ability. Conclusion: The benefit from DBM on recognition score for the good performers is predicted by temporal resolution ability and age; for the poor performers, the benefit in noise is minuscule.
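Speech-in-noise testing at fixed SNRs, as in this study, requires mixing the speech and the masker at a controlled level. The abstract does not describe the actual stimulus preparation, so the following is only an illustrative sketch of one common approach: scale the noise so the speech-to-noise power ratio matches the target SNR.

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`,
    then return the mixture. Illustrative sketch, not the study's method."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Solve p_speech / (scale**2 * p_noise) = 10**(snr_db / 10) for scale.
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise
```

At 0 dB SNR the scaled noise carries the same average power as the speech; at −4 dB it carries more.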


2021 ◽  
Vol 34 (4) ◽  
pp. 130-141
Author(s):  
Atheel Sabih Shaker

The brain's magnetic resonance imaging (MRI) analysis is tasked with finding the pixels or voxels that establish where the brain is in a medical image. A Convolutional Neural Network (CNN) can process curved baselines of the kind that frequently occur in scanned documents. Next, the lines are separated into characters. For fonts with a fixed width, the gaps are analyzed and split; otherwise, a limited region above the baseline is analyzed, separated, and classified. The words with the lowest recognition score are split into further characters until the result improves. If this does not improve the recognition score, contours are merged and classified again to check the change in the recognition score. The features for classification are extracted from small fixed-size patches over neighboring contours and matched against the trained deep-learning representations. This approach enables Tesseract to easily handle MRI sample results broken into multiple parts, which would be impossible if each contour were processed separately. The CNN Inception network seems a suitable choice for evaluating the synthetic MRI samples, with 3,000 features and 12,000 image samples from data augmentation; augmentation favors data similar to the original training set and is thus unlikely to contain new information. The network reached an accuracy of 98.68%, i.e., an error of only 1.32%; the most significant reduction in error comes from increasing the number of training samples.
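The score-driven split-and-merge loop described above can be sketched as follows. Note that `split_fn`, `merge_fn`, and `classify` are hypothetical placeholders for illustration, not Tesseract's actual API.

```python
def refine(word, score, split_fn, merge_fn, classify):
    """One refinement step driven by the recognition score:
    split the lowest-scoring word; if the score does not improve,
    merge contours and re-classify instead.
    All callables are illustrative stand-ins, not a real OCR API."""
    pieces = split_fn(word)
    split_score = classify(pieces)
    if split_score > score:
        return pieces, split_score          # splitting helped: keep it
    merged = merge_fn(word)                 # otherwise try merging contours
    return merged, classify(merged)
```

In a full recognizer this step would be repeated over the words with the lowest scores until no candidate split or merge improves the result.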


ORL ◽  
2021 ◽  
pp. 1-7
Author(s):  
Elizabeth Ritter ◽  
Craig Miller ◽  
Justin Morse ◽  
Princess Onuorah ◽  
Abdullah Zeaton ◽  
...  

<b><i>Introduction:</i></b> The coronavirus disease 2019 (COVID-19) pandemic has altered how modern healthcare is delivered to patients. Concerns have been raised that masks may hinder effective communication, particularly in patients with hearing loss. The purpose of this study is to determine the effect of masks on speech recognition in adult patients with and without self-reported hearing loss in a clinical setting. <b><i>Methods:</i></b> Adult patients presenting to an otolaryngology clinic were recruited. A digital recording of 36 spondaic words was presented to each participant in a standard clinical exam room. Each word was recorded in 1 of 3 conditions: no mask, surgical mask, or N95 mask. Participants were instructed to repeat back the word. The word recognition score was determined by the percent correctly repeated. <b><i>Results:</i></b> A total of 45 participants were included in this study. Overall, the mean word recognition score was 87% without a mask, 78% with a surgical mask, and 61% with an N95 mask. Among the 23 subjects (51.1%) with self-reported hearing loss, the average word recognition score was 46% with an N95 mask compared to 79% in patients who reported normal hearing (<i>p</i> &#x3c; 0.001). <b><i>Conclusion:</i></b> Our results suggest that masks significantly decrease word recognition, and this effect is exacerbated with N95 masks, particularly in patients with hearing loss. As masks are essential to allow for safe patient-physician interactions, it is imperative that clinicians are aware they may create a barrier to effective communication.
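The scoring rule used here is simply the percentage of presented words that were repeated back correctly, which can be sketched in a few lines (the word lists are illustrative examples, not the study's stimuli):

```python
def word_recognition_score(presented, responses):
    """Word recognition score: percent of presented words repeated correctly."""
    correct = sum(p == r for p, r in zip(presented, responses))
    return 100.0 * correct / len(presented)
```

With 36 spondees per condition, each missed word costs about 2.8 percentage points, which is why per-condition means such as 87% vs. 61% reflect a difference of roughly nine words.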


PLoS ONE ◽  
2021 ◽  
Vol 16 (7) ◽  
pp. e0254623
Author(s):  
Sayaka Wada ◽  
Motoyasu Honma ◽  
Yuri Masaoka ◽  
Masaki Yoshida ◽  
Nobuyoshi Koiwa ◽  
...  

Emotion recognition is known to change with age, but associations between this change and brain atrophy are not well understood. In the current study, atrophied brain regions associated with emotion recognition were investigated in elderly and younger participants. Group comparison showed no difference in emotion recognition score; the score was associated with years of education, not with age. We measured the gray matter volume of 18 regions of interest, including the bilateral precuneus, supramarginal gyrus, orbital gyrus, straight gyrus, superior temporal sulcus, inferior frontal gyrus, insular cortex, amygdala, and hippocampus, which have been associated with social function and emotion recognition. Volume reductions were observed in the elderly group in all regions except the left inferior frontal gyrus, left straight gyrus, right orbital gyrus, right inferior frontal gyrus, and right supramarginal gyrus. Path analysis was performed using the following variables: age, years of education, emotion recognition score, and the 5 regions that did not differ between the groups. The analysis revealed that years of education were associated with the volumes of the right orbital gyrus, right inferior frontal gyrus, and right supramarginal gyrus. Furthermore, the right supramarginal gyrus volume was associated with the emotion recognition score. These results suggest that the amount of education received helps maintain the right supramarginal gyrus volume and thereby indirectly affects emotion recognition ability.


Author(s):  
E McCarty Walsh ◽  
D R Morrison ◽  
W J McFeely

Abstract Objectives This study aimed to evaluate hearing outcomes and device safety in a large, single-surgeon experience with totally implantable active middle-ear implants. Methods This was a retrospective case series review of 116 patients with moderate-to-severe sensorineural hearing loss undergoing implantation of active middle-ear implants. Results Mean baseline unaided pure tone average improved from 57.6 dB before surgery to 34.1 dB post-operatively, signifying a mean gain in pure tone average of 23.5 dB (p = 0.0002). Phonetically balanced maximum word recognition score improved slightly from 70.5 per cent to 75.8 per cent (p = 0.416), and word recognition score at a hearing level of 50 dB increased substantially from 14.4 per cent to 70.4 per cent (p < 0.0001). Both revision and explant rates were low and dropped with increasing surgeon experience over time. Conclusion This study showed excellent post-operative hearing results with active middle-ear implants with regard to pure tone average and word recognition score at a hearing level of 50 dB. Complication rates in this case series were significantly lower with increasing experience of the surgeon. Active middle-ear implants should be considered in appropriate patients with moderate-to-severe sensorineural hearing loss who have struggled with conventional amplification and are good surgical candidates.
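The reported hearing gain is simply the difference between pre- and post-operative pure tone averages, where a pure tone average (PTA) is the mean threshold across the test frequencies. A minimal sketch, using the study's reported means (the threshold list in the usage example is illustrative, as the abstract does not state which frequencies were averaged):

```python
def pure_tone_average(thresholds_db):
    """Mean hearing threshold (dB HL) across the test frequencies."""
    return sum(thresholds_db) / len(thresholds_db)

def pta_gain(pre_db, post_db):
    """Improvement in PTA after surgery (positive = better hearing)."""
    return round(pre_db - post_db, 1)
```

For the study's reported means, `pta_gain(57.6, 34.1)` reproduces the stated 23.5 dB gain.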


2021 ◽  
Author(s):  
Nina Suess ◽  
Anne Hauswald ◽  
Verena Zehentner ◽  
Jessica Depireux ◽  
Gudrun Herzog ◽  
...  

Visual input is crucial for understanding speech under noisy conditions, but there are hardly any tools to assess individual lip reading ability. With this study, we wanted to (1) investigate how linguistic characteristics of the language on the one hand and hearing impairment on the other affect lip reading ability, and (2) provide a tool to assess lip reading ability in German speakers. 170 participants (22 prelingually deaf) completed the online assessment, which consisted of a subjective hearing impairment scale and silent videos of different item categories (numbers, words, and sentences); the participants' task was to recognize the spoken stimuli by visual inspection alone. We used different versions of one test and investigated the impact on the recognition score of item category, word frequency in the spoken language, articulation, sentence frequency in the spoken language, sentence length, and differences between speakers. We found an effect of item category, articulation, sentence frequency, and sentence length on the recognition score, but no effect of word frequency or test version. With respect to hearing impairment, we found that higher subjective hearing impairment is associated with a higher test score. We did not find any evidence that prelingually deaf individuals show enhanced lip reading skills over people with postlingually acquired hearing impairment. However, we see an effect of education on enhanced lip reading skills only in the prelingually deaf, not in the population with postlingually acquired hearing loss. This points to the fact that different factors contribute to enhanced lip reading ability depending on the onset of hearing impairment. Overall, lip reading skills vary strongly in the general population, independent of hearing impairment. Based on our findings, we constructed a new and efficient lip reading assessment tool (SaLT) that can be used to test behavioural lip reading ability in the German-speaking population.


Author(s):  
E A Guneri ◽  
A Cakir Cetin

Abstract Objective To compare the results of endoscopic and microscopic ossicular chain reconstruction surgery. Methods Patients undergoing ossicular chain reconstruction surgery via an endoscopic (n = 31) or microscopic (n = 34) technique were analysed for age, gender, Middle Ear Risk Index, ossicular chain defect, incision type, ossicular chain reconstruction surgery material, mean air conduction threshold, air–bone gap, air–bone gap gain, word recognition score, mean operation duration and mean post-operative follow up. Results Post-operative air conduction, air–bone gap and word recognition score improved significantly in both groups (within-subject p < 0.001 for air conduction and air–bone gap, and 0.026 for word recognition score); differences between groups were not significant (between-subject p = 0.192 for air conduction, 0.102 for air–bone gap, and 0.709 for word recognition score). Other parameters were similar between groups, except for incision type. However, endoscopic ossicular chain reconstruction surgery was associated with a significantly shorter operation duration (p < 0.001). Conclusion Endoscopic ossicular chain reconstruction surgery can achieve comparable surgical and audiological outcomes to those of microscopic ossicular chain reconstruction surgery in a shorter time.


HNO ◽  
2020 ◽  
Author(s):  
T. Rahne ◽  
S. K. Plontke ◽  
D. Vordermark ◽  
C. Strauss ◽  
C. Scheller

Abstract Background Hearing function in patients with vestibular schwannoma is often classified according to Gardner and Robertson (1988) or the guidelines of the American Academy of Otolaryngology – Head and Neck Surgery (AAO-HNS, 1995). These classification systems are based on English-language speech tests; no German-language equivalent exists. The aim of this work is to investigate the influence of different target parameters on the hearing classification and to derive a recommendation for the use of German-language test procedures. Materials and methods The rules based on English-language speech audiometry tests were adapted for German speech material. On this basis, pure-tone hearing thresholds, speech recognition threshold, and speech recognition at various sound pressure levels were measured in a cohort of 91 patients with vestibular schwannoma, and hearing was categorized according to the Gardner–Robertson (1988) and AAO-HNS (1995) classifications. Results In both the Gardner–Robertson and the AAO-HNS classification, the number of patients in the hearing classes with serviceable hearing (measured as the pure tone average of three (3PTA) or four frequencies (4PTA)) was highest when the 3PTA0.5;1;2 kHz was used, followed by the 4PTA0.5;1;2;3 kHz, 4PTA0.5;1;2;4 kHz, and 4PTA0.5;1;2;"3" kHz. If the maximum word recognition score (WRSmax) is used instead of the WRS at 40 dB sensation level (WRS40SL), the number of patients in the hearing classes with serviceable hearing increases slightly, regardless of the pure-tone average used. Conclusion The Gardner–Robertson and AAO-HNS classifications of hearing function can be applied in German-speaking countries. The Freiburg monosyllable test can be used to determine speech recognition and maximum speech recognition.
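The competing PTA definitions compared in this abstract differ only in which frequencies enter the mean. The quoted "3" kHz variant is commonly estimated by interpolating between the 2 and 4 kHz thresholds; that reading is an assumption here, since the abstract does not define it. A sketch:

```python
def pta(thresholds, freqs_khz):
    """Mean threshold (dB HL) over the given frequencies (kHz).
    `thresholds` maps frequency in kHz to threshold in dB HL."""
    return sum(thresholds[f] for f in freqs_khz) / len(freqs_khz)

def pta_estimated_3k(thresholds):
    """4PTA with '3' kHz interpolated as the mean of the 2 and 4 kHz
    thresholds -- a common convention when 3 kHz is not measured,
    assumed here rather than taken from the abstract."""
    est = (thresholds[2] + thresholds[4]) / 2
    return pta({**thresholds, 3: est}, [0.5, 1, 2, 3])
```

Because the variants include progressively higher frequencies, a sloping high-frequency loss yields a higher (worse) 4PTA than 3PTA, which is why the 3PTA0.5;1;2 kHz places the most patients in the serviceable-hearing classes.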

