automatic transcription
Recently Published Documents

Total documents: 156 (five years: 22)
H-index: 13 (five years: 1)

2021
Author(s): Thomas Soroski, Thiago da Cunha Vasco, Sally Newton-Mason, Saffrin Granby, Caitlin Lewis, ...

BACKGROUND Speech data for medical research can be collected noninvasively and in large volumes. Speech analysis has shown promise in diagnosing neurodegenerative disease. To effectively leverage speech data, transcription is important, as valuable information is contained in lexical content. Manual transcription, while highly accurate, limits the potential scalability and cost savings associated with language-based screening.

OBJECTIVE To better understand the use of automatic transcription for classification of neurodegenerative disease (Alzheimer's disease [AD], mild cognitive impairment [MCI], or subjective memory complaints [SMC] versus healthy controls), we compared automatically generated transcripts against transcripts that went through manual correction.

METHODS We recruited individuals from a memory clinic ("patients") with a diagnosis of mild-to-moderate AD (n=44), MCI (n=20), or SMC (n=8), as well as healthy controls living in the community (n=77). Participants were asked to describe a standardized picture, read a paragraph, and recall a pleasant life experience. We compared transcripts generated with Google speech-to-text software against manually verified transcripts by examining transcription confidence scores, transcription error rates, and machine learning classification accuracy. For the classification tasks, logistic regression, Gaussian naive Bayes, and random forests were used.

RESULTS The transcription software showed higher confidence scores (P<.001) and lower error rates (P>.05) for speech from healthy controls than for speech from patients. Classification models using human-verified transcripts significantly (P<.001) outperformed models using automatically generated transcripts for both spontaneous speech tasks; no difference was observed for the reading task. Manually adding pauses to transcripts had no impact on classification performance, whereas manually correcting both spontaneous speech tasks led to significantly higher performance of the machine learning models.

CONCLUSIONS We found that automatically transcribed speech data could be used to distinguish patients with a diagnosis of AD, MCI, or SMC from controls. We recommend a human verification step to improve the performance of automatic transcripts, especially for spontaneous tasks. This verification can focus on correcting errors and adding punctuation; the manual addition of pauses is not needed, which simplifies the verification step and allows large volumes of speech data to be processed more efficiently.
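For readers unfamiliar with this kind of setup, the sketch below shows how transcript text can feed one of the classifiers named above (logistic regression over lexical features). It is a minimal Python/scikit-learn illustration, not the authors' pipeline; the example transcripts, labels, and feature choices are invented for the illustration.

```python
# Minimal sketch: classify speakers from transcript text using lexical features.
# Transcripts and labels below are invented; the feature extraction and model
# choices are illustrative, not the paper's exact configuration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical picture-description transcripts (1 = patient with AD/MCI/SMC, 0 = control).
transcripts = [
    "the boy is on the stool reaching for the cookie jar",
    "um the the boy he is uh taking cookies I think",
    "the woman is washing dishes while the sink overflows",
    "there is a a sink and um water on the floor maybe",
]
labels = [0, 1, 0, 1]

# Word/bigram TF-IDF features feed a logistic regression classifier; Gaussian
# naive Bayes or a random forest could be swapped in the same way.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(transcripts, labels)

# The same pipeline would be trained once on automatic transcripts and once on
# manually corrected ones to compare classification accuracy.
print(model.predict(["the boy reaches for the cookie jar and the stool tips"]))
```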


2021, Vol 11 (11), pp. 4894
Author(s): Anna Scius-Bertrand, Michael Jungo, Beat Wolf, Andreas Fischer, Marc Bui

The current state of the art for automatic transcription of historical manuscripts is typically limited by the requirement of human-annotated learning samples, which are necessary to train specific machine learning models for specific languages and scripts. Transcription alignment is a simpler task that aims to find a correspondence between text in the scanned image and its existing Unicode counterpart, a correspondence which can then be used as training data. The alignment task can be approached with heuristic methods dedicated to certain types of manuscripts, or with weakly trained systems that reduce the required amount of annotation. In this article, we propose a novel learning-based alignment method based on fully convolutional object detection that does not require any human annotation at all. Instead, the object detection system is initially trained on synthetic printed pages rendered with a font and then adapted to the real manuscripts by means of self-training. On a dataset of historical Vietnamese handwriting, we demonstrate the feasibility of annotation-free alignment as well as the positive impact of self-training on the character detection accuracy, reaching a detection accuracy of 96.4% with a YOLOv5m model without using any human annotation.
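The self-training idea described above can be summarized as a loop that alternates between detection and retraining. The Python sketch below is schematic only; the callables it takes (render_pages, train_detector, detect) are hypothetical placeholders for the synthetic-page renderer, the YOLO-style training routine, and the inference step, and the threshold value is an assumption.

```python
# Schematic self-training loop for annotation-free character detection, mirroring
# the approach described above. All injected callables are hypothetical placeholders.

def self_train(unicode_texts, font, manuscript_images,
               render_pages, train_detector, detect,
               rounds=3, conf_threshold=0.5):
    # 1. Render synthetic printed pages from the existing Unicode transcription and
    #    a font; the renderer knows where it placed every character, so bounding
    #    boxes come for free and no human annotation is needed.
    pages, boxes = render_pages(unicode_texts, font)
    detector = train_detector(pages, boxes)

    # 2. Adapt to the real manuscripts by self-training: run the detector, keep only
    #    confident character detections, and retrain on these pseudo-labels.
    for _ in range(rounds):
        pseudo_labels = []
        for image in manuscript_images:
            detections = detect(detector, image)
            pseudo_labels.append([d for d in detections if d.confidence >= conf_threshold])
        detector = train_detector(manuscript_images, pseudo_labels, init=detector)

    # 3. The final character detections can be aligned with the Unicode text to
    #    yield training data for a transcription model.
    return detector
```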


2021, Vol 1 (2)
Author(s): Oliver Adams

This paper reports on progress in integrating the speech recognition toolkit ESPnet into Elpis, a web front-end originally designed to provide access to the Kaldi automatic speech recognition toolkit. The goal of this work is to make end-to-end speech recognition models available to language workers via a user-friendly graphical interface. Encouraging results are reported on (i) the development of an ESPnet recipe for use in Elpis, with preliminary results on data sets previously used to train acoustic models with the Persephone toolkit, along with a new data set that had not previously been used in speech recognition, and (ii) the incorporation of ESPnet into Elpis, together with UI enhancements and a CUDA-supported Dockerfile.
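Under the hood, an ESPnet2-trained model can be queried directly from Python; Elpis wraps this kind of call behind its graphical interface. The sketch below uses the espnet2 Speech2Text inference class; the config and checkpoint paths and the audio file name are placeholders, and it assumes a model has already been trained.

```python
# Minimal sketch of ESPnet2 inference, assuming a trained ASR model exists.
# The paths and the audio file below are placeholders, not Elpis defaults.
from espnet2.bin.asr_inference import Speech2Text
import soundfile

speech2text = Speech2Text(
    asr_train_config="exp/asr_train/config.yaml",      # placeholder config path
    asr_model_file="exp/asr_train/valid.acc.ave.pth",   # placeholder checkpoint path
)

# Transcribe one recording and print the best hypothesis.
speech, sample_rate = soundfile.read("recording.wav")
nbest_hypotheses = speech2text(speech)
text, tokens, token_ids, hypothesis = nbest_hypotheses[0]
print(text)
```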


Author(s): Daniel Schneider, Nikolaus Korfhage, Markus Mühling, Peter Lüttig, Bernd Freisleben

2020, Vol 65 (1), pp. 37-52
Author(s): Adinel C. Dincă, Emil Ștețco

"The objective of the present paper is to introduce to a wider audience, at a very early stage of development, the initial results of a Romanian joint initiative of AI software engineers and palaeographers in an experimental project aiming to assist and improve the transcription effort of medieval texts with AI software solutions, uniquely designed and trained for the task. Our description will start by summarizing the previous attempts and the mixed-results achieved in e-palaeography so far, a continuously growing field of combined scholarship at an international level. The second part of the study describes the specific project, developed by Zetta Cloud, with the aim of demonstrating that, by applying state of the art AI Computer Vision algorithms, it is possible to automatically binarize and segment text images with the final scope of intelligently extracting the content from a sample set of medieval handwritten text pages. Keywords: Middle Ages, Latin writing, palaeography, Artificial Intelligence, Computer Vision, automatic transcription."


Author(s): Patrick Meyer, Samy Elshamy, Tim Fingscheidt

Microphone leakage or crosstalk is a common problem in multichannel close-talk audio recordings (e.g., meetings or live music performances): a target signal couples not only into its dedicated microphone but also into all other microphone channels. For further signal processing, such as automatic transcription of a meeting, multichannel speaker interference reduction is required to eliminate the interfering speech signals in the microphone channels. The contribution of this paper is twofold. First, we consider multichannel close-talk recordings of a three-person meeting scenario with various crosstalk levels. To eliminate the crosstalk in the target microphone channel, we extend a multichannel Wiener filter approach that considers all individual microphone channels. To this end, we integrate an adaptive filter method, originally proposed for acoustic echo cancellation (AEC), to obtain a well-performing estimate of the interferer (noise) component. This improves the speech-to-interferer ratio by up to 2.7 dB at constant or even better speech component quality. Second, since an AEC method typically requires clean reference channels, we investigate and report why the AEC algorithm is able to successfully estimate the interfering signals and the room impulse responses between the microphones of the interferer and the target speakers even though the reference signals are themselves disturbed by crosstalk in the considered meeting scenario.
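The AEC-style interferer estimation can be illustrated with a normalized least-mean-squares (NLMS) adaptive filter, which estimates the crosstalk that an interfering speaker's microphone signal contributes to the target channel. The NumPy sketch below is a generic single-reference illustration, not the paper's multichannel Wiener filter system; the signal names, filter length, and step size are assumptions.

```python
# Minimal NLMS adaptive filter sketch: estimate the crosstalk component that an
# interfering speaker's microphone contributes to the target microphone, in the
# spirit of AEC-style reference filtering. Parameters are illustrative only.
import numpy as np

def nlms_crosstalk_estimate(target_mic, interferer_mic, filter_len=256, mu=0.5, eps=1e-8):
    weights = np.zeros(filter_len)          # estimated impulse response (interferer mic -> target mic)
    estimate = np.zeros_like(target_mic)    # estimated crosstalk component in the target channel
    for n in range(filter_len, len(target_mic)):
        x = interferer_mic[n - filter_len:n][::-1]   # most recent reference samples, newest first
        estimate[n] = weights @ x
        error = target_mic[n] - estimate[n]          # residual: target speech plus estimation error
        weights += mu * error * x / (x @ x + eps)    # normalized LMS weight update
    return estimate, weights

# Usage: subtracting the estimate from the target channel raises the
# speech-to-interferer ratio, e.g. enhanced = target_mic - estimate.
```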

