A High Accuracy Multiple-Command Speech Recognition ASIC Based on Configurable One-Dimension Convolutional Neural Network

Author(s):  
Lindong Wu ◽  
Zongwei Wang ◽  
Ming Zhao ◽  
Wei Hu ◽  
Yimao Cai ◽  
...  
2020 ◽  
Vol 4 (1) ◽  
pp. 34

Author(s):  
Duman Care Khrisne ◽  
Theresia Hendrawati

Games are considered a promising learning medium for helping teachers teach children to pronounce the Indonesian alphabet in early literacy, and in this study we build one aspect of such a game. Our approach is speech recognition using the convolutional neural network (CNN) method. The results of this study indicate that a CNN can recognize speech when the input data is sound. We extract MFCC feature vectors from the input sound and arrange them into a 3-dimensional matrix that serves as the CNN input. We use a sequential CNN architecture, a simple 10-layer neural network, which produces a small model of only about 6 MB with high accuracy (84%) and an F-measure of 0.91.
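The abstract above turns sound into MFCC feature vectors before feeding a CNN. As a hypothetical illustration only (the function name `mfcc_sketch`, the frame sizes, and the filterbank parameters are our assumptions, not taken from the paper), a minimal NumPy sketch of the standard MFCC pipeline might look like:

```python
import numpy as np

def mfcc_sketch(signal, sr=16000, n_fft=512, hop=160, n_mels=26, n_mfcc=13):
    # 1) Frame the signal and apply a Hann window
    frames = np.array([signal[s:s + n_fft] * np.hanning(n_fft)
                       for s in range(0, len(signal) - n_fft + 1, hop)])
    # 2) Power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # 3) Triangular mel filterbank
    hz_to_mel = lambda f: 2595 * np.log10(1 + f / 700)
    mel_to_hz = lambda m: 700 * (10 ** (m / 2595) - 1)
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        fbank[i - 1, bins[i - 1]:bins[i]] = np.linspace(
            0, 1, bins[i] - bins[i - 1], endpoint=False)
        fbank[i - 1, bins[i]:bins[i + 1]] = np.linspace(
            1, 0, bins[i + 1] - bins[i], endpoint=False)
    # 4) Log mel energies, then DCT-II to decorrelate (the "cepstral" step)
    logmel = np.log(power @ fbank.T + 1e-10)
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_mfcc), 2 * n + 1) / (2 * n_mels))
    return logmel @ dct.T  # shape: (num_frames, n_mfcc)
```

Stacking such 2-D frame-by-coefficient matrices with delta features is one common way to obtain the 3-dimensional CNN input the abstract mentions, though the paper's exact layout is not specified here.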


Author(s):  
Shi-bo Pan ◽  
Di-lin Pan ◽  
Nan Pan ◽  
Xiao Ye ◽  
Miaohan Zhang

Traditional gun-archiving methods mostly rely on physical examination or photography of bullets; they are inefficient, difficult to trace, and cannot meet the needs of large-scale archiving. To address these problems, a rapid bullet-archiving technology based on a graph convolutional neural network has been studied and developed. First, a spot laser is used to sample circle points along the bullet's rifling traces. The acquired data are filtered and noise-reduced to produce the corresponding line graph, and a convolutional neural network model combined with the dynamic time warping (DTW) algorithm is then applied to the processed data. This not only matches similarity but also accomplishes rapid matching of the bullet's rifling. Experimental comparisons show that this technology offers rapid archiving and high accuracy. Furthermore, it can process large numbers of bullets at the same time, making it more suitable for practical promotion and application.
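The abstract above combines a neural model with dynamic time warping (DTW) to compare rifling-trace profiles that may be stretched or shifted relative to each other. As a generic sketch of the classic DTW distance (not the paper's specific implementation), assuming 1-D profiles:

```python
import numpy as np

def dtw_distance(a, b):
    # Dynamic-programming cost matrix: D[i, j] is the cheapest alignment
    # of the first i points of a with the first j points of b.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # A step may advance in a, in b, or in both (the "warping")
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Because DTW allows one sample in one profile to align with several in the other, two traces of the same rifling scanned at slightly different speeds still score as near-identical.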


Author(s):  
Kenta Shirane ◽  
Takahiro Yamamoto ◽  
Hiroyuki Tomiyama

In this paper, we present a case study on approximate multipliers for an MNIST convolutional neural network (CNN). We apply approximate multipliers of different bit-widths to the convolution layer in the MNIST CNN, evaluate the MNIST classification accuracy, and analyze the trade-off among the multipliers' area, critical-path delay, and accuracy. Based on the results of this evaluation and analysis, we propose a design methodology for approximate multipliers. The approximate multipliers consist of a subset of partial products, carefully selected according to the CNN input. With this methodology, we further reduce the area and delay of the multipliers while keeping the MNIST classification accuracy high.
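The methodology above builds a multiplier from a selected subset of partial products. As a purely illustrative software model (the function `approx_mul` and its parameters are our assumptions; the paper's hardware selection criterion is not reproduced here), dropping the partial products of the low-order bits of one operand trades accuracy for fewer adder rows:

```python
def approx_mul(a, b, kept_bits=4, width=8):
    # Exact shift-and-add multiplication sums one partial product
    # (a << j) for every set bit j of b. The approximation keeps only
    # the kept_bits most significant partial products, so the result
    # never exceeds the exact product.
    result = 0
    for j in range(width - 1, width - 1 - kept_bits, -1):
        if (b >> j) & 1:
            result += a << j
    return result
```

With `kept_bits=width` the model is exact; shrinking `kept_bits` removes adder rows (area and delay in hardware) at the cost of an underestimate whose magnitude depends on the dropped low-order bits.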


Inventions ◽  
2021 ◽  
Vol 6 (4) ◽  
pp. 70
Author(s):  
Elena Solovyeva ◽  
Ali Abdullah

In this paper, the structure of a separable convolutional neural network, consisting of an embedding layer, separable convolutional layers, a convolutional layer, and global average pooling, is presented for binary and multiclass text classification. The advantage of the proposed structure is the absence of multiple fully connected layers, which are commonly used to increase classification accuracy but raise the computational cost. The combination of low-cost separable convolutional layers with a convolutional layer is proposed to achieve high accuracy and, simultaneously, to reduce the complexity of the neural classifiers. These advantages are demonstrated on binary and multiclass classification of written texts using the proposed networks with sigmoid and softmax activation functions in the convolutional layer. In both binary and multiclass classification, the accuracy obtained by the separable convolutional neural networks is higher than that of the investigated types of recurrent neural networks and fully connected networks.
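The cost saving claimed above comes from factoring a standard 1-D convolution into a per-channel (depthwise) filter followed by a 1x1 (pointwise) channel mixer. A minimal NumPy sketch of that factorization (the function `sepconv1d` and its shapes are our assumptions, not the paper's exact layer):

```python
import numpy as np

def sepconv1d(x, depthwise, pointwise):
    # x:         (T, C)     sequence of T embedding vectors, C channels
    # depthwise: (k, C)     one length-k filter per input channel
    # pointwise: (C, C_out) 1x1 convolution mixing channels
    k, C = depthwise.shape
    T = x.shape[0]
    out = np.zeros((T - k + 1, C))
    for t in range(T - k + 1):
        # Depthwise step: each channel is filtered independently
        out[t] = np.sum(x[t:t + k] * depthwise, axis=0)
    # Pointwise step: mix channels with a 1x1 convolution
    return out @ pointwise
```

The parameter count drops from k*C*C_out for a standard convolution to k*C + C*C_out for the separable version, which is the complexity reduction the abstract refers to.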

