Multilingual Convolutional, Long Short-Term Memory, Deep Neural Networks for Low Resource Speech Recognition

2017 ◽  
Vol 107 ◽  
pp. 842-847 ◽  
Author(s):  
Danish Bukhari ◽  
Yutian Wang ◽  
Hui Wang
2015 ◽  
Vol 40 (2) ◽  
pp. 191-195 ◽  
Author(s):  
Łukasz Brocki ◽  
Krzysztof Marasek

Abstract This paper describes a hybrid of a Deep Belief Neural Network (DBNN) and a Bidirectional Long Short-Term Memory (LSTM) network used as an acoustic model for speech recognition. Many independent researchers have demonstrated that DBNNs outperform other known machine learning frameworks in terms of speech recognition accuracy, a superiority that stems from their deep learning architecture. However, a trained DBNN is simply a feed-forward network with no internal memory, unlike Recurrent Neural Networks (RNNs), which are Turing complete and do possess internal memory, allowing them to exploit longer context. In this paper, an experiment is performed in which a DBNN is combined with an advanced bidirectional RNN that processes its output. Results show that using the new DBNN-BLSTM hybrid as the acoustic model for Large Vocabulary Continuous Speech Recognition (LVCSR) increases word recognition accuracy. However, the new model has many parameters and in some cases may suffer performance issues in real-time applications.
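The architecture the abstract describes, a memoryless feed-forward stage whose per-frame outputs are re-scored by a bidirectional recurrent stage, can be illustrated with a minimal numpy sketch. This is a hedged simplification, not the authors' implementation: the layer sizes, random weights, and single-layer LSTM are illustrative assumptions, and real acoustic models would be trained rather than randomly initialized.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, W1, b1, W2, b2):
    """Feed-forward 'DBNN' stage: maps each frame independently,
    with no memory of neighbouring frames."""
    h = np.tanh(x @ W1 + b1)
    return np.tanh(h @ W2 + b2)

def lstm_step(x, h, c, Wx, Wh, b):
    """One LSTM step: input, forget, and output gates plus a
    candidate cell update, computed from x and the previous state."""
    z = x @ Wx + h @ Wh + b
    i, f, o, g = np.split(z, 4)
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    c = sig(f) * c + sig(i) * np.tanh(g)
    h = sig(o) * np.tanh(c)
    return h, c

def bilstm(frames, params_fwd, params_bwd, H):
    """Run one LSTM left-to-right and another right-to-left over the
    DBNN outputs, then concatenate the two hidden states per frame,
    giving each frame access to both past and future context."""
    def run(seq, params):
        h, c = np.zeros(H), np.zeros(H)
        out = []
        for x in seq:
            h, c = lstm_step(x, h, c, *params)
            out.append(h)
        return out
    fwd = run(frames, params_fwd)
    bwd = run(frames[::-1], params_bwd)[::-1]
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

# Illustrative sizes: T frames of D acoustic features, M DBNN outputs,
# H hidden units per LSTM direction (all assumed, not from the paper).
T, D, M, H = 5, 8, 6, 4
W1, b1 = rng.standard_normal((D, 16)), np.zeros(16)
W2, b2 = rng.standard_normal((16, M)), np.zeros(M)
mk = lambda: (rng.standard_normal((M, 4 * H)),
              rng.standard_normal((H, 4 * H)), np.zeros(4 * H))
frames = rng.standard_normal((T, D))
posteriors = mlp_forward(frames, W1, b1, W2, b2)   # DBNN stage
context = bilstm(list(posteriors), mk(), mk(), H)  # BLSTM stage
```

Each output vector in `context` has length `2 * H`, one half per direction, which is why the hybrid's parameter count grows quickly, echoing the abstract's caveat about real-time performance.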


This study presents a new hybrid model based on deep neural networks to predict the direction and magnitude of short-term Forex market movement. The overall model is based on the scalping strategy and is designed for high-frequency trading. The proposed hybrid combines three deep-neural-network models. The first is a multi-input deep network built from Long Short-Term Memory layers. The second is a multi-input deep network built from one-dimensional Convolutional Neural Network layers. The third has a simpler structure: a multi-input model of Multi-Layer Perceptron layers. The overall model combines the three by majority vote. The study showed that the models based on Long Short-Term Memory layers outperformed the other models, including the hybrids, reaching more than 70% accuracy.
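The majority-vote combination of the three models can be sketched in a few lines. The encoding of direction as +1 (up) and -1 (down) and the function names are assumptions for illustration; with three voters on a binary direction call, a tie cannot occur.

```python
from collections import Counter

def majority_vote(votes):
    """Return the direction (+1 up, -1 down) predicted by most models."""
    return Counter(votes).most_common(1)[0][0]

def ensemble(lstm_preds, cnn_preds, mlp_preds):
    """Combine the per-timestep direction calls of the three models
    (LSTM, 1-D CNN, MLP) by majority vote."""
    return [majority_vote(v) for v in zip(lstm_preds, cnn_preds, mlp_preds)]

# Example: three models disagree on some timesteps; the vote decides.
combined = ensemble([1, 1, -1], [1, -1, -1], [-1, 1, -1])
```

For instance, at the second timestep the LSTM and MLP both predict up while the CNN predicts down, so the ensemble outputs up.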

