Parallel and Scalable Deep Learning Algorithms for High Performance Computing Architectures

Author(s): Sunil Pandey, Naresh Kumar Nagwani, Shrish Verma
2017, Vol. 9 (11), pp. 1105
Author(s): Uttam Kumar, Sangram Ganguly, Ramakrishna R. Nemani, Kumar S. Raja, Cristina Milesi, ...

Author(s): Rene Avalloni de Morais, Baidya Nath Saha

Deep learning algorithms have made dramatic progress in natural language processing and automatic speech recognition. However, the accuracy of deep learning models depends on the amount and quality of the training data, and training deep models requires high-performance computing resources. Against this backdrop, this paper addresses an end-to-end speech recognition system in which we fine-tune the Mozilla DeepSpeech architecture on two different datasets: the LibriSpeech clean dataset and the Harvard speech dataset. We train Long Short-Term Memory (LSTM) based deep Recurrent Neural Network (RNN) models on the Google Colab platform, using its GPU resources. Extensive experimental results demonstrate that the Mozilla DeepSpeech model can be fine-tuned on different audio datasets to recognize speech successfully.
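As a concrete illustration of the training setup the abstract describes, the sketch below builds a small LSTM-based acoustic model trained with CTC loss, the same family of model that DeepSpeech trains. This is not Mozilla DeepSpeech's own code: the feature dimension, alphabet size, and layer sizes are illustrative assumptions, and it assumes TensorFlow 2.x.

    import tensorflow as tf

    # Illustrative assumptions (not taken from the paper):
    NUM_FEATURES = 26   # audio features (e.g. MFCCs) per frame
    NUM_CLASSES = 29    # 26 letters + space + apostrophe + CTC blank
    LSTM_UNITS = 512    # hidden size of each recurrent layer

    # LSTM-based acoustic model: per-frame audio features in,
    # per-frame character logits out, as in DeepSpeech-style
    # end-to-end speech recognition.
    inputs = tf.keras.Input(shape=(None, NUM_FEATURES))  # (time, features)
    x = tf.keras.layers.LSTM(LSTM_UNITS, return_sequences=True)(inputs)
    x = tf.keras.layers.LSTM(LSTM_UNITS, return_sequences=True)(x)
    logits = tf.keras.layers.Dense(NUM_CLASSES)(x)
    model = tf.keras.Model(inputs, logits)

    optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)

    @tf.function
    def train_step(features, labels, label_length, logit_length):
        # CTC loss aligns the unsegmented transcript (labels) with the
        # per-frame predictions, so no frame-level alignment is needed.
        with tf.GradientTape() as tape:
            frame_logits = model(features, training=True)
            loss = tf.reduce_mean(
                tf.nn.ctc_loss(
                    labels=labels,
                    logits=frame_logits,
                    label_length=label_length,
                    logit_length=logit_length,
                    logits_time_major=False,
                    blank_index=NUM_CLASSES - 1,
                )
            )
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        return loss

In practice, fine-tuning DeepSpeech itself is done by resuming its released training checkpoints with its own training script rather than rebuilding the network by hand; the sketch only shows the shape of the computation being fine-tuned.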

