Development of Speech Recognition Algorithm and LabView Model for Voice Command Control of Mobile Robot Motion

Author(s):  
Snejana Pleshkova ◽  
Zahari Zahariev ◽  
Alexander Bekiarski
2020 ◽  
Vol 164 ◽  
pp. 10015
Author(s):  
Irina Gurtueva ◽  
Olga Nagoeva ◽  
Inna Pshenokova

This paper proposes a new approach to the development of speech recognition systems based on multi-agent neurocognitive modeling. Its foundations draw on the theory of cognitive psychology and neuroscience, as well as advances in computer science. The purpose of this work is to develop general theoretical principles of sound image recognition by an intelligent robot and, consequently, a universal automatic speech recognition system that is robust to speech variability, not only with respect to the individual characteristics of the speaker but also with respect to the diversity of accents. Based on the analysis of experimental data from behavioral studies, together with theoretical models of the mechanisms of speech recognition from the standpoint of psycholinguistics, a machine learning algorithm robust to a variety of accents has been developed that imitates the formation of human phonemic hearing.
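The abstract does not specify the multi-agent neurocognitive model itself, so the following is only a toy Python illustration of the underlying intuition it appeals to: a phoneme classifier becomes less sensitive to accent when its training data pools examples from several accents. All names, feature dimensions, and accent offsets below are hypothetical stand-ins, not the authors' method.

# Toy sketch: accent robustness via pooled multi-accent training data.
# This is NOT the authors' multi-agent neurocognitive model.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def fake_mfcc(accent_shift, phoneme, n=200, dim=13):
    # Stand-in for real MFCC features; each accent shifts the distribution.
    return rng.normal(loc=phoneme + accent_shift, scale=1.0, size=(n, dim))

phonemes = [0, 1, 2]            # hypothetical phoneme labels
accents = [0.0, 0.7, -0.5]      # hypothetical accent offsets

# Pool labelled examples from all training accents into one training set.
X = np.vstack([fake_mfcc(a, p) for a in accents for p in phonemes])
y = np.concatenate([[p] * 200 for _ in accents for p in phonemes])

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300).fit(X, y)

# Evaluate on an accent the classifier never saw during training.
X_new = np.vstack([fake_mfcc(0.3, p, n=50) for p in phonemes])
y_new = np.concatenate([[p] * 50 for p in phonemes])
print("accuracy on unseen accent:", clf.score(X_new, y_new))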


Author(s):  
Jiahao Chen ◽  
Ryota Nishimura ◽  
Norihide Kitaoka

Many end-to-end, large-vocabulary, continuous speech recognition systems now achieve better recognition performance than conventional systems. However, most of these approaches are based on bidirectional networks and sequence-to-sequence modeling, so automatic speech recognition (ASR) systems using such techniques must wait for an entire segment of voice input before they can begin processing, resulting in a lengthy time lag that can be a serious drawback in some applications. An obvious solution to this problem is a speech recognition algorithm capable of processing streaming data. Therefore, in this paper we explore the possibility of a streaming, online ASR system for Japanese, using a model based on unidirectional LSTMs trained with connectionist temporal classification (CTC) criteria and local attention. Such an approach has not been well investigated for Japanese, as most Japanese-language ASR systems employ bidirectional networks. The best result achieved by our proposed system during experimental evaluation was a character error rate of 9.87%.
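To make the streaming idea concrete, here is a minimal PyTorch sketch of a unidirectional-LSTM character model trained with CTC loss and decoded greedily chunk by chunk. It is not the authors' model (in particular, it omits their local-attention component), and all layer sizes and names are assumptions for illustration only.

# Minimal sketch (assumed architecture, not the paper's exact model):
# unidirectional LSTM + CTC for streaming character recognition.
import torch
import torch.nn as nn

class StreamingCTCModel(nn.Module):
    def __init__(self, n_mels=80, hidden=256, n_chars=100):
        super().__init__()
        # Unidirectional LSTM: each output frame depends only on past input,
        # so hypotheses can be emitted while audio is still arriving.
        self.lstm = nn.LSTM(n_mels, hidden, num_layers=3, batch_first=True)
        self.proj = nn.Linear(hidden, n_chars + 1)   # +1 for the CTC blank

    def forward(self, feats, state=None):
        out, state = self.lstm(feats, state)         # (B, T, hidden)
        return self.proj(out), state                 # per-frame character logits

model = StreamingCTCModel()
ctc_loss = nn.CTCLoss(blank=0, zero_infinity=True)

# Dummy batch: 2 utterances, 120 frames of 80-dim log-mel features each.
feats = torch.randn(2, 120, 80)
targets = torch.randint(1, 101, (2, 20))             # character indices (0 = blank)
logits, _ = model(feats)
log_probs = logits.log_softmax(-1).transpose(0, 1)   # CTCLoss expects (T, B, C)
loss = ctc_loss(log_probs, targets,
                torch.full((2,), 120, dtype=torch.long),
                torch.full((2,), 20, dtype=torch.long))
loss.backward()

# Streaming greedy decode: feed feature chunks, carry the LSTM state between
# them, then collapse repeats and drop blanks (standard CTC greedy decoding).
def greedy_stream(model, chunks):
    state, hyp, prev = None, [], 0
    with torch.no_grad():
        for chunk in chunks:                          # each chunk: (1, T_chunk, 80)
            logits, state = model(chunk, state)
            for idx in logits.argmax(-1).squeeze(0).tolist():
                if idx != 0 and idx != prev:
                    hyp.append(idx)
                prev = idx
    return hyp

Because the LSTM state is carried across chunks, partial hypotheses are available as soon as each chunk is processed, which is the property the paper targets for low-latency recognition.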


2011 ◽  
Vol 28 (4) ◽  
pp. 50-55 ◽  
Author(s):  
乔兵 QIAO Bing ◽  
吴庆林 WU Qing-lin ◽  
阴玉梅 YIN Yu-mei

1996 ◽  
Vol 100 (4) ◽  
pp. 2788-2788
Author(s):  
Kazuo Nakata ◽  
Khoji Matsumoto
