Auditory speech processing for scale-shift covariance and its evaluation in automatic speech recognition

Author(s):  
Roy D. Patterson ◽  
Thomas C. Walters ◽  
Jessica Monaghan ◽  
Christian Feldbauer ◽  
Toshio Irino
2017 ◽  
Vol 60 (9) ◽  
pp. 2394-2405 ◽  
Author(s):  
Lionel Fontan ◽  
Isabelle Ferrané ◽  
Jérôme Farinas ◽  
Julien Pinquier ◽  
Julien Tardieu ◽  
...  

Purpose The purpose of this article is to assess speech processing for listeners with simulated age-related hearing loss (ARHL) and to investigate whether the observed performance can be replicated using an automatic speech recognition (ASR) system. The long-term goal of this research is to develop a system that will assist audiologists/hearing-aid dispensers in the fine-tuning of hearing aids. Method Sixty young participants with normal hearing listened to speech materials mimicking the perceptual consequences of ARHL at different levels of severity. Two intelligibility tests (repetition of words and sentences) and 1 comprehension test (responding to oral commands by moving virtual objects) were administered. Several language models were developed and used by the ASR system in order to match human performance. Results Strong significant positive correlations were observed between human and ASR scores, with coefficients up to .99. However, the spectral smearing used to simulate losses in frequency selectivity caused larger declines in ASR performance than in human performance. Conclusion Both intelligibility and comprehension scores for listeners with simulated ARHL are highly correlated with the performance of an ASR-based system. It remains to be determined whether the ASR system is similarly successful in predicting speech processing in noise and in older listeners with ARHL.
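
For illustration, a minimal sketch (not the authors' code) of the kind of human-machine comparison reported here: correlating mean human intelligibility scores with ASR scores across simulated ARHL severity levels. The score values are invented placeholders, and the use of SciPy's pearsonr is an assumption made only for the example.

```python
# Illustrative sketch: correlating human intelligibility scores with ASR
# scores across simulated ARHL severity levels. All numbers are invented.
import numpy as np
from scipy.stats import pearsonr

# Hypothetical mean word-repetition scores (% correct) for five severity levels.
human_scores = np.array([98.0, 92.0, 81.0, 64.0, 41.0])
asr_scores = np.array([96.0, 88.0, 74.0, 52.0, 28.0])

r, p = pearsonr(human_scores, asr_scores)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```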


Author(s):  
Tim Arnold ◽  
Helen J. A. Fuller

Automatic speech recognition (ASR) systems and speech interfaces are becoming increasingly prevalent, including their growing and expanding use to support work in health care. Computer-based speech processing has been extensively studied and developed over decades, and speech processing tools have been fine-tuned through the work of speech and language researchers. Researchers have described, and continue to describe, speech processing errors in medicine. The discussion provided in this paper proposes an ergonomic framework for speech recognition to expand and further describe this view of speech processing in supporting clinical work. With this end in mind, we hope to build on previous work and emphasize the need for increased human factors involvement in this area, while also facilitating the discussion of speech recognition in contexts that have been explored in the human factors domain. Human factors expertise can contribute by proactively describing and designing these critical, interconnected socio-technical systems with error tolerance in mind.


2009 ◽  
pp. 128-148
Author(s):  
Eric Petajan

Automatic Speech Recognition (ASR) is the most natural input modality from humans to machines. When the hands are busy or a full keyboard is not available, speech input is especially in demand. Since the most compelling application scenarios for ASR include noisy environments (mobile phones, public kiosks, cars), visual speech processing must be incorporated to provide robust performance. This chapter motivates and describes the MPEG-4 Face and Body Animation (FBA) standard for representing visual speech data as part of a whole virtual human specification. The super low bit-rate FBA codec included with the standard enables thin clients to access processing and communication services over any network including enhanced visual communication, animated entertainment, man-machine dialog, and audio/visual speech recognition.
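
To make the bandwidth argument concrete, here is a hypothetical sketch of packing a per-frame set of quantized face-animation parameters. It is not the actual MPEG-4 FBA bitstream syntax; the parameter count, scaling, and frame rate are assumptions chosen only to illustrate why a parameter-based representation of visual speech can run at a few kilobits per second rather than video bit rates.

```python
# Hypothetical sketch (not MPEG-4 FBA syntax): quantize a per-frame set of
# face-animation parameters to 16-bit integers and estimate the raw bit rate.
import struct

def encode_frame(params, scale=100):
    """Quantize parameters (roughly in [-1, 1]) to signed 16-bit integers."""
    return struct.pack(f"<{len(params)}h", *(int(round(p * scale)) for p in params))

# A made-up 10-parameter visual-speech frame (e.g., jaw opening, lip spread, ...).
frame = [0.12, -0.05, 0.30, 0.0, 0.07, -0.22, 0.15, 0.01, -0.09, 0.04]
payload = encode_frame(frame)

fps = 25
print(f"{len(payload)} bytes/frame -> {len(payload) * 8 * fps} bits/s "
      f"at {fps} fps (before entropy coding)")
```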


2020 ◽  
Vol 4 (Supplement_1) ◽  
pp. 493-493
Author(s):  
Nancy Hodgson ◽  
Ani Nencova ◽  
Laura Gitlin ◽  
Emily Summerhayes

Abstract Careful fidelity monitoring is critical to implementing evidence-based interventions in dementia care settings, to ensure that the intervention is delivered consistently and as intended. Most approaches to fidelity monitoring rely on human coding of the content covered during a session or of stylistic aspects of the intervention, including rapport, empathy, and enthusiasm, and are unrealistic to implement on a large scale in real-world settings. Technological advances in automatic speech recognition and in language and speech processing offer potential solutions to overcome these barriers. We compare three commercial automatic speech recognition tools on spoken content drawn from dementia care interactions to determine the accuracy of recognition and the privacy guarantees offered by each provider. Data were obtained from recorded sessions of the Dementia Behavior Study intervention trial (NCT01892579). We find that, despite their impressive performance in general applications, automatic speech recognition systems work less well for older adults and people of color. We outline a plan for automating fidelity monitoring of interaction style and content, which would be integrated into an online program for training dementia care providers.
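
As a concrete reference point for the accuracy comparison described above, a hedged sketch of word error rate (WER), the standard measure for scoring an ASR hypothesis against a human reference transcript. The example sentences are invented and are not drawn from the study recordings.

```python
# Sketch: word error rate (WER) between a reference transcript and an ASR
# hypothesis, via a standard edit-distance dynamic program. Example strings
# are invented, not data from the Dementia Behavior Study.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("let's try a different activity now", "lets try different activity no"))  # 0.5
```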


2013 ◽  
Vol 24 (07) ◽  
pp. 535-543 ◽  
Author(s):  
Stephanie Nagle ◽  
Frank E. Musiek ◽  
Eric H. Kossoff ◽  
George Jallo ◽  
Dana Boatman-Reich

Background: The role of the right temporal lobe in processing speech is not well understood. Although the left temporal lobe has long been recognized as critical for speech perception, there is growing evidence for right hemisphere involvement. To investigate whether the right temporal lobe is critical for auditory speech processing, we prospectively studied a normal-hearing patient who underwent consecutive right temporal lobe resections for treatment of medically intractable seizures. Purpose: To test the hypothesis that the right temporal lobe is critical for auditory speech processing. Research Design: We used a prospective, repeated-measures, single-case design. Auditory processing was evaluated using behavioral tests of speech recognition (words, sentences) under multiple listening conditions (e.g., quiet, background noise). Auditory processing of nonspeech sounds was measured by pitch pattern sequencing and environmental sound recognition tasks. Data Collection: Repeat behavioral testing was performed at four time points over a 2 yr period: before and after consecutive right temporal lobe resection surgeries. Results: Before surgery, the patient demonstrated normal speech recognition in quiet and under real-world listening conditions (background noise, filtered speech). After the initial right anterior temporal resection, speech recognition scores declined under adverse listening conditions, especially for the left ear, but remained largely within normal limits. Following resection of the right superior temporal gyrus 1 yr later, speech recognition in quiet and nonspeech sound processing (pitch patterns, environmental sounds) remained intact. However, speech recognition under adverse listening conditions was severely impaired. Conclusions: The right superior temporal gyrus appears to be critical for auditory processing of speech under real-world listening conditions.


2017 ◽  
Author(s):  
Thomas Schatz ◽  
Francis Bach ◽  
Emmanuel Dupoux

We test the potential of standard Automatic Speech Recognition (ASR) systems trained on large corpora of continuous speech as quantitative models of human speech processing. In human adults, speech perception is attuned to efficiently process native speech sounds, at the expense of difficulties in processing non-native sounds. We use ABX-discriminability measures to test whether ASR models can account for the patterns of confusion between speech sounds observed in humans. We show that ASR models reproduce some well-documented effects in non-native phonetic perception. Beyond the immediate results, our methodology opens up the possibility of a more systematic investigation of phonetic category perception in humans.
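
A minimal sketch of the ABX logic used here: a trial is correct when token X is closer to a token A of its own category than to a token B of the other category. The random embeddings and the Euclidean distance are placeholders, not the representations and distances used by the authors.

```python
# Minimal ABX discrimination sketch: X shares its category with A, and a
# trial counts as correct when X is closer to A than to B. Embeddings and
# the distance function are placeholders, not the paper's representations.
import numpy as np

def abx_score(tokens_a, tokens_b):
    correct, total = 0, 0
    for i, x in enumerate(tokens_a):
        for j, a in enumerate(tokens_a):
            if i == j:
                continue
            for b in tokens_b:
                if np.linalg.norm(x - a) < np.linalg.norm(x - b):
                    correct += 1
                total += 1
    return correct / total

rng = np.random.default_rng(0)
cat_a = rng.normal(0.0, 1.0, size=(10, 8))  # tokens of phone category A
cat_b = rng.normal(0.8, 1.0, size=(10, 8))  # tokens of phone category B
print(f"ABX score: {abx_score(cat_a, cat_b):.2f}")  # 1.0 = fully discriminable, 0.5 = chance
```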


2021 ◽  
Author(s):  
Daniel Senkowski ◽  
James K. Moran

Abstract Objectives: People with schizophrenia (SZ) show deficits in auditory and audiovisual speech recognition. It is possible that these deficits are related to aberrant early sensory processing, combined with an impaired ability to utilize visual cues to improve speech recognition. In this electroencephalography study we tested this by having SZ and healthy controls (HC) identify different unisensory auditory and bisensory audiovisual syllables at different auditory noise levels. Methods: SZ (N = 24) and HC (N = 21) identified one of three different syllables (/da/, /ga/, /ta/) at three different noise levels (no, low, high). Half the trials were unisensory auditory, and the other half provided additional visual input of moving lips. Task-evoked mediofrontal N1 and P2 brain potentials triggered to the onset of the auditory syllables were derived and related to behavioral performance. Results: In comparison to HC, SZ showed speech recognition deficits for unisensory and bisensory stimuli. These deficits were primarily found in the no-noise condition. Paralleling these observations, reduced N1 amplitudes to unisensory and bisensory stimuli in SZ were found in the no-noise condition. In HC the N1 amplitudes were positively related to speech recognition performance, whereas no such relationship was found in SZ. Moreover, no group differences in multisensory speech recognition benefits and N1 suppression effects for bisensory stimuli were observed. Conclusion: Our study shows that reduced N1 amplitudes relate to auditory and audiovisual speech processing deficits in SZ. The finding that the amplitude effects were confined to salient speech stimuli, together with the attenuated relationship with behavioral performance compared to HC, indicates diminished decoding of auditory speech signals in SZ. Our study also revealed intact multisensory benefits in SZ, which indicates that the observed auditory and audiovisual speech recognition deficits were primarily related to aberrant auditory speech processing. Highlights: Speech processing deficits in schizophrenia are related to reduced N1 amplitudes. The audiovisual suppression effect in N1 is preserved in schizophrenia. SZ showed weakened P2 components specifically in audiovisual processing.
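
For readers unfamiliar with the ERP measure, a hedged sketch of how an N1 amplitude can be taken from averaged epochs as the most negative deflection in a post-stimulus window. The sampling rate, window, channel selection, and simulated data are assumptions, not the study's actual pipeline.

```python
# Sketch of the kind of ERP measure reported here: the N1 is taken as the most
# negative deflection in a post-stimulus window of the trial-averaged waveform
# at one mediofrontal channel. Shapes, rate, and window are assumptions.
import numpy as np

fs = 500                           # sampling rate (Hz), assumed
t = np.arange(-0.2, 0.6, 1 / fs)   # epoch time axis: -200 ms to 600 ms

def n1_amplitude(epochs, window=(0.08, 0.15)):
    """epochs: (n_trials, n_samples) single-channel data; returns N1 in microvolts."""
    erp = epochs.mean(axis=0)                    # average over trials
    mask = (t >= window[0]) & (t <= window[1])   # N1 search window
    return erp[mask].min()

# Simulated epochs: noise plus a small negative deflection around 100 ms.
rng = np.random.default_rng(1)
epochs = rng.normal(0, 5, size=(60, t.size)) - 4 * np.exp(-((t - 0.1) / 0.02) ** 2)
print(f"N1 amplitude: {n1_amplitude(epochs):.1f} microvolts")
```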


Electronics ◽  
2021 ◽  
Vol 10 (3) ◽  
pp. 235
Author(s):  
Natalia Bogach ◽  
Elena Boitsova ◽  
Sergey Chernonog ◽  
Anton Lamtev ◽  
Maria Lesnichaya ◽  
...  

This article contributes to the discourse on how contemporary computer and information technology may help improve foreign language learning, not only by supporting a better and more flexible workflow and digitizing study materials, but also by creating completely new use cases made possible by technological improvements in signal processing algorithms. We discuss an approach and propose a holistic solution to teaching the phonological phenomena that are crucial for correct pronunciation: the phonemes; the energy and duration of syllables and pauses, which build up the phrasal rhythm; and the tone movement within an utterance, i.e., the phrasal intonation. The working prototype of the StudyIntonation Computer-Assisted Pronunciation Training (CAPT) system is a tool for mobile devices which offers a set of tasks based on a “listen and repeat” approach and gives audio-visual feedback in real time. The present work summarizes the efforts taken to enrich the current version of this CAPT tool with two new functions: phonetic transcription and rhythmic patterns of model and learner speech. Both are built on top of the third-party automatic speech recognition (ASR) library Kaldi, which was incorporated into the StudyIntonation signal-processing core. We also examine the scope of ASR applicability within the CAPT system workflow and evaluate the Levenshtein distance between the transcription made by human experts and the one obtained automatically by our code. We developed an algorithm for rhythm reconstruction using the acoustic and language models of the ASR. It is also shown that, even with sufficiently correct production of phonemes, learners often fail to produce correct phrasal rhythm and intonation; therefore, the joint training of sounds, rhythm and intonation within a single learning environment is beneficial. To mitigate recording imperfections, voice activity detection (VAD) is applied to all processed speech recordings. The try-outs showed that StudyIntonation can create transcriptions and process rhythmic patterns, but some specific problems with connected-speech transcription were detected. The learner feedback for pronunciation assessment was also updated: a conventional mechanism based on dynamic time warping (DTW) was combined with a cross-recurrence quantification analysis (CRQA) approach, which resulted in better discriminating ability. The CRQA metrics, combined with those of DTW, were shown to add to the accuracy of learner performance estimation. The major implications for computer-assisted English pronunciation teaching are discussed.
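
As an illustration of the DTW-based comparison underlying this kind of pronunciation feedback, a minimal sketch that aligns a learner pitch contour with a model contour and reports a length-normalized alignment cost. The synthetic contours and the normalization are assumptions, not the StudyIntonation implementation.

```python
# Minimal DTW sketch: align a learner's pitch contour to the model contour
# and use the accumulated cost as a (dis)similarity score. Contours are
# synthetic; this is not the StudyIntonation implementation.
import numpy as np

def dtw_cost(x, y):
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)  # length-normalized alignment cost

t = np.linspace(0, 1, 100)
model_f0 = 120 + 40 * np.sin(2 * np.pi * t)              # model intonation contour (Hz)
learner_f0 = 118 + 25 * np.sin(2 * np.pi * (t - 0.05))   # flatter, slightly shifted contour
print(f"normalized DTW cost: {dtw_cost(model_f0, learner_f0):.2f}")
```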

