Accuracy of Speech Recognition System’s Medical Report and Physicians' Experience in Hospitals

2019 · Vol 8 (1) · pp. 19
Author(s): Zahra Karbasi, Kambiz Bahaadinbeigy, Leila Ahmadian, Reza Khajouei, Moghaddameh Mirzaee

Introduction: Speech recognition (SR) technology has existed for more than two decades, but it has rarely been used in health care institutions and has not been applied uniformly across clinical domains. The aim of this study was to investigate the accuracy of a speech recognition system in four different situations in a real health care environment. We also report physicians' experience of using speech recognition technology. Method: The NEVISA SR Professional v.3 software was installed on the computers of expert physicians. A pre-designated medical report was dictated by the physicians in four different modes: slow speech in a silent environment, slow speech in a crowded environment, rapid speech in a silent environment, and rapid speech in a crowded environment. After 15 physicians had used the speech recognition software in hospitals, a questionnaire designed for the study was distributed among them. Results: The highest average accuracy of the speech recognition software occurred with slow speech in the silent environment, and the lowest average accuracy with rapid speech in the crowded environment. Of all the participants in the study, 53.3% of the physicians believed that the use of the speech recognition system improved their workflow. Conclusion: We found that the software's accuracy was generally higher than expected, although its use required upgrades to the system and its operation. To achieve the highest recognition rate and reduce errors, influential factors such as environmental noise, the type of software and hardware, and the training and experience of participants should also be considered.
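The abstract does not state how recognition accuracy was scored; a common convention is word error rate (WER), the word-level edit distance between the reference report and the transcript, with accuracy reported as 1 − WER. A minimal sketch under that assumption (the metric choice and function name are ours, not the study's):

```python
# Word-error-rate sketch; the metric is an assumption, not something
# the study above specifies.
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard Levenshtein dynamic program over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

wer = word_error_rate("patient presents with acute chest pain",
                      "patient presents with a cute chest pain")
print(f"accuracy = {1 - wer:.2%}")
```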

2019 · Vol 9 (10) · pp. 2166
Author(s): Mohamed Tamazin, Ahmed Gouda, Mohamed Khedr

Many new consumer applications are based on automatic speech recognition (ASR) systems, such as voice command interfaces, speech-to-text applications, and data entry processes. Although ASR systems have improved remarkably in recent decades, their performance still degrades significantly in noisy environments. Developing a robust ASR system that works under real-world noise and other acoustic distortions is therefore an attractive research topic. Many advanced algorithms have been developed in the literature to deal with this problem, most of them based on modeling how the human auditory system perceives noisy speech. In this research, the power-normalized cepstral coefficient (PNCC) system is modified to increase robustness against different types of environmental noise: a new technique based on gammatone channel filtering combined with channel bias minimization is used to suppress noise effects. The TIDIGITS database is used to evaluate the proposed system against state-of-the-art techniques in the presence of additive white Gaussian noise (AWGN) and seven types of environmental noise; each word is recognized from a set of only 11 possibilities. The experimental results show that the proposed method significantly improves recognition accuracy at low signal-to-noise ratios (SNRs). In the case of subway noise at SNR = 5 dB, the proposed method outperforms the mel-frequency cepstral coefficient (MFCC) and relative spectral (RASTA)–perceptual linear predictive (PLP) methods by 55% and 47%, respectively. Moreover, its recognition rate exceeds the gammatone frequency cepstral coefficient (GFCC) and PNCC methods in the case of car noise: it is 40% higher than the GFCC method at SNR = 0 dB and 20% higher than the PNCC method at SNR = −5 dB.
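For readers unfamiliar with the PNCC pipeline, the sketch below illustrates the two ideas named above, gammatone-weighted spectral analysis and channel bias minimization, in simplified form. The filter approximation, the quantile-based bias estimate, and all constants are illustrative assumptions, not the authors' exact system:

```python
import numpy as np
from scipy.fft import dct

def erb(f):
    """Equivalent rectangular bandwidth (Hz) at center frequency f."""
    return 24.7 + 0.108 * f

def gammatone_weights(n_filters, n_fft, sr, fmin=100.0):
    """Approximate 4th-order gammatone magnitude responses on FFT bins."""
    freqs = np.linspace(0, sr / 2, n_fft // 2 + 1)
    centers = np.geomspace(fmin, 0.9 * sr / 2, n_filters)
    b = 1.019 * erb(centers)
    w = (1 + ((freqs[None, :] - centers[:, None]) / b[:, None]) ** 2) ** -2
    return w / w.sum(axis=1, keepdims=True)

def pncc_like(signal, sr=16000, frame=400, hop=160, n_filters=40, n_ceps=13):
    n_fft = 512
    frames = np.lib.stride_tricks.sliding_window_view(signal, frame)[::hop]
    power = np.abs(np.fft.rfft(frames * np.hanning(frame), n_fft)) ** 2
    energies = power @ gammatone_weights(n_filters, n_fft, sr).T
    # Channel bias minimization (simplified): subtract a per-channel
    # power floor so stationary noise and channel gain are suppressed.
    floor = np.quantile(energies, 0.1, axis=0, keepdims=True)
    cleaned = np.maximum(energies - floor, 1e-8)
    nonlin = cleaned ** (1.0 / 15.0)  # PNCC's power-law nonlinearity
    return dct(nonlin, type=2, axis=1, norm="ortho")[:, :n_ceps]
```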


2013 · Vol 846-847 · pp. 1380-1383
Author(s): Xian Yi Rui, Yi Biao Yu, Ying Jiang

Because Chinese words are monosyllabic and many Chinese pronunciations are easily confused, connected Mandarin digit speech recognition (CMDSR) is a challenging task in the field of speech recognition. This paper applies a novel acoustic representation of speech, the acoustic universal structure (AUS), in which non-linguistic variations such as vocal tract length, transmission channel, and noise are well removed. A two-layer matching strategy based on AUS models of speech, including digit and string AUS models, is proposed for connected Mandarin digit speech recognition. The recognition system for connected Mandarin digits is described in detail, and the experimental results show that the proposed method achieves a higher recognition rate.
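The AUS representation itself can be sketched briefly: each acoustic event is modeled as a Gaussian over features, and the utterance is summarized by the matrix of pairwise Bhattacharyya distances, which is invariant to invertible affine transforms of the feature space (hence the removal of speaker- and channel-dependent variation). The paper's two-layer strategy applies such matching first at the digit level and then at the string level. A hypothetical diagonal-covariance illustration:

```python
import numpy as np

def bhattacharyya(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two diagonal-covariance Gaussians."""
    v = (var1 + var2) / 2
    term1 = 0.125 * np.sum((mu1 - mu2) ** 2 / v)
    term2 = 0.5 * np.sum(np.log(v) - 0.5 * (np.log(var1) + np.log(var2)))
    return term1 + term2

def structure_matrix(events):
    """events: list of (mean, variance) pairs, one per acoustic event."""
    n = len(events)
    s = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            s[i, j] = s[j, i] = bhattacharyya(*events[i], *events[j])
    return s

def structure_distance(s1, s2):
    """Match two utterances by the distance between their AUS matrices."""
    return np.sqrt(np.sum((s1 - s2) ** 2))
```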


2021 · Vol 2021 · pp. 1-12
Author(s): Lifang He, Gaimin Jin, Sang-Bing Tsai

This article uses a Field Programmable Gate Array (FPGA) as the carrier and IP cores to build a System on Programmable Chip (SOPC) English speech recognition system. The SOPC system uses a modular hardware design method: apart from the independently developed hardware acceleration module and its control module, the modules are implemented in software or with IP provided by the Xilinx development tools. The hardware acceleration IP adopts a top-down design, provides multiple operation components working in parallel, and uses pipelining, so that a new result is produced every operation cycle. On the recognition side, a more effective training algorithm is proposed, the Genetic Continuous Hidden Markov Model (GA_CHMM), which uses a genetic algorithm to train the CHMM directly: the CHMM parameter values are encoded as chromosomes, and selection, crossover, and mutation operations driven by a fitness function search for the optimal model. The decoded optimal parameter values define the CHMM, which then performs English speech recognition through the standard CHMM algorithm. This approach saves a great deal of training time, improving both the recognition rate and the recognition speed. The paper also studies the optimization of the embedded system software: by converting the algorithms to fixed-point arithmetic and optimizing the system storage space, the real-time response time of the system was reduced from about 10 seconds to an average of 220 milliseconds. Optimizing the CHMM algorithm improved real-time performance further, significantly shortening the average recognition time. The system achieves a recognition rate of over 90% when the English vocabulary contains fewer than 200 words.
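The GA_CHMM training loop follows the classic genetic-algorithm pattern: chromosomes encode model parameters, fitness is the likelihood of the training data under the decoded model, and selection, crossover, and mutation search the parameter space. The toy sketch below substitutes a single Gaussian for the full CHMM to stay self-contained; all names and constants are illustrative, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(2.0, 0.5, 500)   # stand-in training features

def fitness(chrom):
    """Log-likelihood of the data under the decoded model.
    Here the 'model' is one Gaussian; in GA_CHMM it is a full CHMM."""
    mu, log_sigma = chrom
    sigma = np.exp(log_sigma)
    return -0.5 * np.sum(((data - mu) / sigma) ** 2) - len(data) * np.log(sigma)

pop = rng.normal(0, 1, (40, 2))    # initial population of chromosomes
for generation in range(100):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-20:]]        # selection: fitter half
    pairs = rng.integers(0, 20, (20, 2))
    children = (parents[pairs[:, 0]] + parents[pairs[:, 1]]) / 2  # crossover
    children += rng.normal(0, 0.05, children.shape)               # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(c) for c in pop])]
print("estimated mean/std:", best[0], np.exp(best[1]))  # ~2.0, ~0.5
```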


2012 · Vol 2012 · pp. 1-9
Author(s): Peng Dai, Ing Yann Soon, Rui Tao

A new log-power-domain feature enhancement algorithm named NLPS is developed. It consists of two parts: direct solution of the nonlinear system model, and log-power subtraction. In contrast to other methods, the proposed algorithm does not need a prior speech/noise statistical model; instead, it works by directly solving the nonlinear function derived from the speech recognition system. A separate step, the log-power subtraction that forms the second part of the algorithm, refines the accuracy of the estimated cepstrum. The algorithm resolves the discontinuity that traditional spectral-subtraction algorithms introduce into the speech probability density function (PDF). Its effectiveness is evaluated extensively on the standard AURORA2 database. The results show that significant improvement can be achieved by incorporating the proposed algorithm: it reaches a recognition rate of over 86% for noisy speech (averaged over SNRs from 0 dB to 20 dB), a 48% error reduction over the baseline mel-frequency cepstral coefficient (MFCC) system.
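The log-power subtraction stage can be sketched as follows; estimating the noise from leading frames and imposing a spectral floor are simplifying assumptions of ours, not the authors' NLPS implementation:

```python
import numpy as np

def log_power_subtract(log_power, n_noise_frames=10, floor_db=-40.0):
    """log_power: (frames, bins) array of log power spectra.
    Subtract an estimated noise power in the power domain, then
    floor the result so the log stays finite (avoiding the PDF
    discontinuity that hard spectral subtraction creates)."""
    power = np.exp(log_power)
    noise = power[:n_noise_frames].mean(axis=0)   # leading-frame noise estimate
    floor = power.max() * 10 ** (floor_db / 10)   # spectral floor
    cleaned = np.maximum(power - noise, floor)
    return np.log(cleaned)
```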


Author(s): Wening Mustikarini, Risanuri Hidayat, Agus Bejo

Abstract — Automatic speech recognition (ASR) is a technology that uses machines to process and recognize human speech. One way to increase the recognition rate is to use a model of the language to be recognized. In this paper, a speech recognition application is introduced that recognizes the words "atas" (up), "bawah" (down), "kanan" (right), and "kiri" (left). The research used 400 speech samples: 75 samples of each word for training and 25 samples of each word for testing. The system was designed using 13 Mel-frequency cepstral coefficients (MFCC) as features and a support vector machine (SVM) as the classifier. The system was tested with linear and RBF kernels, various cost values, and three training-set sizes (n = 25, 50, 75 samples per class). The best average accuracy was obtained with a linear-kernel SVM, a cost value of 100, and a training set of 75 samples per class. During the training phase, the system showed an F1-score (the harmonic mean of precision and recall) of 80% for the word "atas", 86% for "bawah", 81% for "kanan", and 100% for "kiri". Using the 25 new samples per class in the testing phase, the F1-score was 76% for the "atas" class, 54% for "bawah", 44% for "kanan", and 100% for "kiri".
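The described pipeline is straightforward to reconstruct with common libraries. In the sketch below the file layout and helper names are assumptions, while the 13 MFCCs, linear kernel, and C = 100 follow the abstract:

```python
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.metrics import classification_report

def mfcc_features(path, n_mfcc=13):
    """Load a recording and average its MFCC frames into one vector."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)   # -> 13-dimensional feature vector

classes = ["atas", "bawah", "kanan", "kiri"]
# Hypothetical layout: data/<word>/train_XX.wav and data/<word>/test_XX.wav
X_train = [mfcc_features(f"data/{w}/train_{i:02d}.wav")
           for w in classes for i in range(75)]
y_train = [w for w in classes for _ in range(75)]
X_test = [mfcc_features(f"data/{w}/test_{i:02d}.wav")
          for w in classes for i in range(25)]
y_test = [w for w in classes for _ in range(25)]

clf = SVC(kernel="linear", C=100).fit(np.array(X_train), y_train)
print(classification_report(y_test, clf.predict(np.array(X_test))))  # per-class F1
```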


Author(s): Md Mijanur Rahman, Fatema Khatun

This research is devoted to the development of a speech recognition system for the Bengali language that is speaker-independent, isolated-word, and subword-unit-based. In this work, original Bangla speech words were recorded and stored as RIFF (.wav) files. The words were classified into three groups according to their number of syllables, and the grouped speech signals were converted to digital form for feature extraction. Features were extracted by Mel-frequency cepstral coefficient (MFCC) analysis, and the recognition system uses a direct Euclidean distance measurement technique. The test database contained 600 distinct Bangla speech words, each recorded from six different speakers. The software was written in Turbo C and includes common features of today's software. The system achieved a recognition rate of about 96% for a single speaker and 84.28% for multiple speakers. Keywords: MFCC; syllable-based grouping; speaker independence; end-point detection; Euclidean distance. DOI: http://dx.doi.org/10.3329/diujst.v6i1.9331 DIUJST 2011; 6(1): 30-35
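The recognition step, nearest-template matching by Euclidean distance, fits in a few lines; the use of fixed-length (mean-pooled) MFCC templates and the example words are simplifications of ours:

```python
import numpy as np

def nearest_word(test_vec, templates):
    """templates: dict mapping each vocabulary word to a reference
    MFCC vector; return the word whose template is closest."""
    dists = {w: np.linalg.norm(test_vec - t) for w, t in templates.items()}
    return min(dists, key=dists.get)

# Toy templates for two hypothetical Bangla words.
templates = {"ek": np.array([1.0, 0.2]), "dui": np.array([-0.5, 1.1])}
print(nearest_word(np.array([0.9, 0.3]), templates))  # -> "ek"
```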


Author(s): Lery Sakti Ramba

The purpose of this research is to design a home automation system that can be controlled by voice commands. The research was conducted by studying related work, discussing with competent parties, designing the system, testing it, and analyzing the results of those tests. The voice recognition system was designed using a deep-learning convolutional neural network (DL-CNN), and the CNN model was trained to recognize several kinds of voice commands. The result is a speech recognition system that can be used to control several electronic devices connected to the system. The system achieved a 100% success rate in a room with a background noise intensity of 24 dB (silent), 67.67% with a background noise intensity of 42 dB, and only 51.67% with a background noise intensity of 52 dB (noisy). The success rate is therefore strongly influenced by the intensity of background noise in the room; to obtain optimal results, the system is better suited to rooms with low-intensity background noise.
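A minimal CNN of the kind described, classifying commands from spectrogram patches, might look like the following; the input shape, layer sizes, and number of commands are illustrative assumptions, not the author's DL-CNN architecture:

```python
import tensorflow as tf

num_commands = 10   # e.g., on/off commands for five devices (assumed)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),  # log-mel spectrogram patch
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(num_commands, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_spectrograms, train_labels, epochs=20) on recorded commands
```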

