Analyzing and Visualizing Deep Neural Networks for Speech Recognition with Saliency-Adjusted Neuron Activation Profiles

Electronics ◽  
2021 ◽  
Vol 10 (11) ◽  
pp. 1350
Author(s):  
Andreas Krug ◽  
Maral Ebrahimzadeh ◽  
Jost Alemann ◽  
Jens Johannsmeier ◽  
Sebastian Stober

Deep Learning-based Automatic Speech Recognition (ASR) models are very successful, but hard to interpret. To gain a better understanding of how Artificial Neural Networks (ANNs) accomplish their tasks, several introspection methods have been proposed. However, established introspection techniques are mostly designed for computer vision tasks and rely on the data being visually interpretable, which limits their usefulness for understanding speech recognition models. To overcome this limitation, we developed a novel neuroscience-inspired technique for visualizing and understanding ANNs, called Saliency-Adjusted Neuron Activation Profiles (SNAPs). SNAPs provide a flexible framework for analyzing and visualizing Deep Neural Networks that does not depend on visually interpretable data. In this work, we demonstrate how to utilize SNAPs for understanding fully-convolutional ASR models, including visualizing acoustic concepts learned by the model and comparatively analyzing their representations across the model's layers.
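As a rough illustration of the general idea (not the authors' implementation), the sketch below computes a saliency-adjusted activation profile for one convolutional layer of a toy ASR model: each neuron's activations are weighted by the magnitude of their gradient with respect to the output score and then averaged over time. The toy model, the layer choice, and the gradient-times-activation weighting are assumptions for illustration only.

```python
# Minimal sketch of a saliency-adjusted activation profile for a 1D conv layer.
# ToyASR and the weighting scheme are illustrative assumptions, not SNAP itself.
import torch
import torch.nn as nn

class ToyASR(nn.Module):
    def __init__(self, n_mels=40, n_chars=29):
        super().__init__()
        self.conv1 = nn.Conv1d(n_mels, 64, kernel_size=5, padding=2)
        self.conv2 = nn.Conv1d(64, 128, kernel_size=5, padding=2)
        self.head = nn.Conv1d(128, n_chars, kernel_size=1)

    def forward(self, x):
        h1 = torch.relu(self.conv1(x))      # layer whose neurons we profile
        h2 = torch.relu(self.conv2(h1))
        return self.head(h2), h1

model = ToyASR()
spectrogram = torch.randn(1, 40, 200)       # (batch, mel bands, frames)

logits, h1 = model(spectrogram)
# Saliency of the per-frame winning character scores w.r.t. layer-1 activations
score = logits.max(dim=1).values.sum()
grads = torch.autograd.grad(score, h1)[0]

# Saliency-adjusted activations: emphasise neurons that influenced the output,
# then average over time to obtain one profile value per neuron.
snap_profile = (h1 * grads.abs()).mean(dim=2).squeeze(0)   # shape: (64,)
print(snap_profile.shape)
```

In practice such profiles would be averaged over many utterances and grouped by acoustic label to compare representations across layers.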

Author(s):  
Ramy Mounir ◽  
Redwan Alqasemi ◽  
Rajiv Dubey

This work focuses on research into enabling individuals with speech impairments to use speech-to-text software to recognize and dictate their speech. Automatic Speech Recognition (ASR) is a challenging problem for researchers because of the wide range of speech variability, including differences in accent, pronunciation, speaking rate, and volume. Training an end-to-end speech recognition model on impaired speech is very difficult due to the lack of sufficiently large datasets and the difficulty of generalizing a speech disorder pattern across all users with speech impediments. This work highlights the different deep learning techniques used to achieve ASR and how they can be modified to recognize and dictate speech from individuals with speech impediments.
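One strategy consistent with the techniques surveyed here is transfer learning: start from an encoder pre-trained on typical speech and fine-tune only its upper layers on a small impaired-speech dataset. The sketch below illustrates this with placeholder components (PretrainedEncoder, a toy CTC training step); it is not a specific system from the paper.

```python
# Hedged sketch: freeze lower, general-purpose layers of a "pre-trained" encoder
# and adapt the upper layers plus a new output head to impaired speech.
import torch
import torch.nn as nn

class PretrainedEncoder(nn.Module):          # stands in for any pre-trained ASR encoder
    def __init__(self, n_mels=40, hidden=256):
        super().__init__()
        self.lower = nn.Sequential(nn.Conv1d(n_mels, hidden, 5, padding=2), nn.ReLU())
        self.upper = nn.GRU(hidden, hidden, batch_first=True)

    def forward(self, feats):                # feats: (batch, mels, frames)
        h = self.lower(feats).transpose(1, 2)
        out, _ = self.upper(h)
        return out                           # (batch, frames, hidden)

encoder = PretrainedEncoder()                # imagine weights loaded from typical speech
head = nn.Linear(256, 29)                    # new character classifier for the target user

for p in encoder.lower.parameters():         # keep generic acoustic features fixed
    p.requires_grad = False

optimizer = torch.optim.Adam(list(encoder.upper.parameters()) + list(head.parameters()), lr=1e-4)
ctc_loss = nn.CTCLoss(blank=0)

feats = torch.randn(2, 40, 120)                       # toy impaired-speech batch
targets = torch.randint(1, 29, (2, 20))               # toy transcripts
logits = head(encoder(feats)).log_softmax(dim=-1)     # (batch, frames, chars)
loss = ctc_loss(logits.transpose(0, 1), targets,
                input_lengths=torch.full((2,), 120),
                target_lengths=torch.full((2,), 20))
loss.backward()
optimizer.step()
```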


2020 ◽  
Vol 10 (2) ◽  
pp. 57-65
Author(s):  
Kaan Karakose ◽  
Metin Bilgin

In recent years, deep neural networks have been successful in both industry and academia, especially for computer vision tasks. Humans and animals learn much better when concepts and increasingly complex samples are presented gradually in a meaningful order rather than randomly. The use of such training strategies in the context of artificial neural networks is called curriculum learning. In this study, a strategy was developed for curriculum learning. Using the CIFAR-10 and CIFAR-100 training sets, the last few layers of an Xception model pre-trained on ImageNet were trained to retain the training-set knowledge in the model's weights. Finally, a much smaller model was trained with the presented sample-sorting methods based on these difficulty levels. The findings show that the difference between the accuracy obtained with the proposed method and the accuracy obtained with randomly mixed data was more than 1% for each epoch.

Keywords: curriculum learning, model distillation, deep learning, academia, neural networks.
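A minimal sketch of the easy-to-hard idea described above: a larger pre-trained "teacher" scores each training sample by its per-sample loss, the samples are sorted from easy to hard, and a smaller model is trained in that order. The toy models, scoring rule, and batch schedule are illustrative assumptions, not the authors' code.

```python
# Hedged curriculum-learning sketch: sort samples by teacher difficulty, train student in order.
import torch
import torch.nn as nn
import torch.nn.functional as F

def difficulty_scores(teacher, images, labels):
    """Per-sample cross-entropy under the teacher: low loss means an easy sample."""
    teacher.eval()
    with torch.no_grad():
        return F.cross_entropy(teacher(images), labels, reduction="none")

def curriculum_order(teacher, images, labels):
    return torch.argsort(difficulty_scores(teacher, images, labels))  # easiest first

# Toy stand-ins for the pre-trained Xception-style teacher and the small student.
teacher = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
student = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64), nn.ReLU(), nn.Linear(64, 10))

images = torch.randn(256, 3, 32, 32)          # CIFAR-10-sized toy batch
labels = torch.randint(0, 10, (256,))

order = curriculum_order(teacher, images, labels)
optimizer = torch.optim.SGD(student.parameters(), lr=0.01)

for start in range(0, len(order), 32):        # present easy batches before hard ones
    idx = order[start:start + 32]
    loss = F.cross_entropy(student(images[idx]), labels[idx])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```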


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Rama K. Vasudevan ◽  
Maxim Ziatdinov ◽  
Lukas Vlcek ◽  
Sergei V. Kalinin

Deep neural networks ('deep learning') have emerged as a technology of choice to tackle problems in speech recognition, computer vision, finance, etc. However, adoption of deep learning in physical domains brings substantial challenges stemming from the correlative nature of deep learning methods compared to the causal, hypothesis-driven nature of modern science. We argue that the broad adoption of Bayesian methods incorporating prior knowledge, the development of solutions with incorporated physical constraints, parsimonious structural descriptors, and generative models, and ultimately the adoption of causal models offers a path forward for fundamental and applied research.
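As a small, generic illustration of the kind of Bayesian approach argued for here (not an example from the paper), the sketch below fits a one-parameter linear physical model in which prior knowledge about the parameter enters explicitly through a prior mean and variance; the toy data and prior values are assumptions.

```python
# Hedged sketch: conjugate Bayesian fit of a single physical parameter with an
# explicit, physics-informed prior rather than a purely correlative estimate.
import numpy as np

rng = np.random.default_rng(0)

# Toy "measurement": y = a * x + noise, where prior physics suggests a is near 2.0.
x = np.linspace(0, 1, 20)
y = 2.1 * x + rng.normal(scale=0.1, size=x.size)

prior_mean, prior_var = 2.0, 0.25    # physics-informed prior on the slope a
noise_var = 0.1 ** 2

# Conjugate Gaussian posterior for a single coefficient (design matrix is just x).
post_var = 1.0 / (1.0 / prior_var + (x @ x) / noise_var)
post_mean = post_var * (prior_mean / prior_var + (x @ y) / noise_var)

print(f"posterior a = {post_mean:.3f} +/- {np.sqrt(post_var):.3f}")
```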


Author(s):  
Xuyến

Deep Neural Networks are a machine learning technique, an extension of multi-layer Artificial Neural Networks for learning representations of objects. This paper presents a method for automatic spike detection, addressing the problem physicians face when analyzing the enormous amount of data collected from EEG recordings to identify the region of the brain that causes epilepsy. Millions of manually analyzed samples were used to retrain the model to find consecutive spikes emitted from the affected brain region. To evaluate the proposed method, the author built a system in which several deep learning models were tested to support physicians in the early detection and diagnosis of the disease.
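A hedged sketch of an automatic spike detector of the general kind described: a small 1D convolutional network that classifies fixed-length EEG windows as spike or background. The architecture, channel count, and window length are assumptions for illustration, not the system evaluated in the paper.

```python
# Hedged sketch: 1D CNN that labels EEG windows as spike vs. background.
import torch
import torch.nn as nn

class SpikeDetector(nn.Module):
    def __init__(self, n_channels=19):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, 2)    # spike vs. background

    def forward(self, eeg):                   # eeg: (batch, channels, samples)
        h = self.features(eeg).squeeze(-1)
        return self.classifier(h)

detector = SpikeDetector()
windows = torch.randn(8, 19, 256)             # eight toy one-second windows at 256 Hz
logits = detector(windows)
print(logits.argmax(dim=1))                   # predicted class per window
```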


2021 ◽  
Vol 13 (0) ◽  
pp. 1-5
Author(s):  
Mantas Tamulionis

Methods based on artificial neural networks (ANNs) are widely used in various audio signal processing tasks. This provides opportunities to optimize processes and save computational resources. One of the main objects needed to numerically capture the acoustics of a room is the room impulse response (RIR). Increasingly, researchers choose not to record these impulse responses in a real room but to generate them using ANNs, as this gives them the freedom to prepare training datasets of unlimited size. Neural networks are also used to augment the generated impulse responses to make them more similar to actually recorded ones. The widest use of ANNs so far is observed in the evaluation of the generated results, for example, in automatic speech recognition (ASR) tasks. This review also describes datasets of recorded RIRs commonly found in various studies that are used as training data for neural networks.
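As a brief, generic illustration of how a RIR is used once it has been recorded or generated (for example, to augment ASR training data), the sketch below convolves a dry signal with a synthetic RIR; the exponentially decaying noise used here is a crude stand-in, not a measured or ANN-generated response.

```python
# Hedged sketch: simulate a reverberant signal by convolving dry audio with a toy RIR.
import numpy as np
from scipy.signal import fftconvolve

fs = 16000
rng = np.random.default_rng(1)

dry_signal = rng.normal(size=fs)                      # 1 s placeholder "speech" signal
t = np.arange(int(0.4 * fs)) / fs                     # 400 ms impulse response
rir = rng.normal(size=t.size) * np.exp(-t / 0.1)      # decaying-noise RIR model
rir /= np.max(np.abs(rir))

reverberant = fftconvolve(dry_signal, rir)[: dry_signal.size]
reverberant /= np.max(np.abs(reverberant))            # normalise before use as training data
print(reverberant.shape)
```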


2021 ◽  
pp. 27-38
Author(s):  
Rafaela Carvalho ◽  
João Pedrosa ◽  
Tudor Nedelcu

Skin cancer is one of the most common types of cancer and, with its increasing incidence, accurate early diagnosis is crucial to improve the prognosis of patients. In the process of visual inspection, dermatologists follow specific dermoscopic algorithms and identify important features to provide a diagnosis. This process can be automated, as such characteristics can be extracted by computer vision techniques. Although deep neural networks can extract useful features from digital images for skin lesion classification, performance can be improved by providing additional information. The extracted pseudo-features can be used as input (multimodal) or as output (multi-tasking) to train a robust deep learning model. This work investigates multimodal and multi-tasking techniques for more efficient training, given the joint optimization of several related tasks in the latter, and for generating better diagnosis predictions. Additionally, the role of lesion segmentation is also studied. Results show that multi-tasking improves the learning of beneficial features, which leads to better predictions, and that pseudo-features inspired by the ABCD rule provide readily available, helpful information about the skin lesion.
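A minimal multi-tasking sketch along the lines described: a shared encoder with a diagnosis head and an auxiliary head that regresses ABCD-inspired pseudo-features (asymmetry, border, colour, diameter). The architecture, pseudo-feature targets, and loss weighting are illustrative assumptions, not the authors' exact model.

```python
# Hedged sketch: shared encoder, classification head, auxiliary pseudo-feature head.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskLesionNet(nn.Module):
    def __init__(self, n_classes=2, n_pseudo=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.diagnosis = nn.Linear(32, n_classes)      # main diagnosis task
        self.pseudo = nn.Linear(32, n_pseudo)          # auxiliary ABCD-style task

    def forward(self, img):
        h = self.encoder(img)
        return self.diagnosis(h), self.pseudo(h)

model = MultiTaskLesionNet()
images = torch.randn(4, 3, 64, 64)                    # toy dermoscopy batch
labels = torch.randint(0, 2, (4,))
pseudo_targets = torch.rand(4, 4)                     # toy ABCD pseudo-features

cls_logits, pseudo_pred = model(images)
loss = F.cross_entropy(cls_logits, labels) + 0.5 * F.mse_loss(pseudo_pred, pseudo_targets)
loss.backward()
```

Jointly optimizing both heads is what lets the auxiliary pseudo-feature task shape the shared features used for diagnosis.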


PLoS ONE ◽  
2018 ◽  
Vol 13 (10) ◽  
pp. e0205355 ◽  
Author(s):  
Doroteo T. Toledano ◽  
María Pilar Fernández-Gallego ◽  
Alicia Lozano-Diez
