Human EEG and Recurrent Neural Networks Exhibit Common Temporal Dynamics During Speech Recognition

2021 ◽  
Vol 15 ◽  
Author(s):  
Saeedeh Hashemnia ◽  
Lukas Grasse ◽  
Shweta Soni ◽  
Matthew S. Tata

Recent deep-learning artificial neural networks have shown remarkable success in recognizing natural human speech; however, the reasons for their success are not entirely understood. Part of this success may lie in the fact that state-of-the-art networks use recurrent layers or dilated convolutional layers, which give the network access to a time-dependent feature space. The importance of time-dependent features in human cortical mechanisms of speech perception, measured by electroencephalography (EEG) and magnetoencephalography (MEG), has also been of particular recent interest. It is possible that recurrent neural networks (RNNs) achieve their success by emulating aspects of cortical dynamics, albeit through very different computational mechanisms. In that case, we should observe commonalities between the temporal dynamics of deep-learning models, particularly in recurrent layers, and brain electrical activity (EEG) during speech perception. We explored this prediction by presenting the same sentences to both human listeners and the Deep Speech RNN and comparing the temporal dynamics of the EEG and RNN units for identical sentences. We tested whether the recently discovered phenomenon of envelope phase tracking in the human EEG is also evident in RNN hidden layers. We furthermore predicted that the clustering of dissimilarity between model representations of pairs of stimuli would be similar in both RNN and EEG dynamics. We found that the dynamics of both the recurrent layer of the network and human EEG signals exhibit envelope phase tracking with similar time lags. We also computed the representational distance matrices (RDMs) of brain and network responses to speech stimuli. The model RDMs became more similar to the brain RDM when going from early network layers to later ones, and eventually peaked at the recurrent layer. These results suggest that the Deep Speech RNN captures a representation of temporal features of speech in a manner similar to the human brain.
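As a rough illustration of the RDM comparison described above (a minimal sketch, not the authors' exact pipeline), the following Python snippet builds a correlation-distance RDM for each system and compares the two RDMs; the response arrays, their shapes, and the choice of Spearman correlation for RDM comparison are assumptions:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

# Assumed shapes: one response vector per sentence, for each system.
# eeg_resp / rnn_resp: (n_sentences, n_features) arrays of responses
# to the same set of sentences.
def rdm(responses: np.ndarray) -> np.ndarray:
    # Pairwise dissimilarity (1 - Pearson correlation) between stimuli.
    return squareform(pdist(responses, metric="correlation"))

def rdm_similarity(rdm_a: np.ndarray, rdm_b: np.ndarray) -> float:
    # Compare RDMs via Spearman correlation of their upper triangles,
    # ignoring the zero diagonal.
    iu = np.triu_indices_from(rdm_a, k=1)
    return spearmanr(rdm_a[iu], rdm_b[iu]).correlation
```

Computing `rdm_similarity(rdm(layer_resp), rdm(eeg_resp))` per network layer would trace the layer-by-layer trend the abstract reports.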

2020 ◽  
Author(s):  
Dean Sumner ◽  
Jiazhen He ◽  
Amol Thakkar ◽  
Ola Engkvist ◽  
Esben Jannik Bjerrum

SMILES randomization, a form of data augmentation, has previously been shown to increase the performance of deep learning models compared to non-augmented baselines. Here, we propose a novel data augmentation method we call “Levenshtein augmentation”, which considers local SMILES sub-sequence similarity between reactants and their respective products when creating training pairs. The performance of Levenshtein augmentation was tested using two state-of-the-art models: a transformer and a sequence-to-sequence recurrent neural network with attention. Levenshtein augmentation demonstrated increased performance over both non-augmented data and conventional SMILES-randomization-augmented data when used for training of baseline models. Furthermore, Levenshtein augmentation seemingly results in what we define as “attentional gain”: an enhancement in the pattern-recognition capabilities of the underlying network with respect to molecular motifs.
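The abstract does not give implementation details, but one plausible reading of Levenshtein augmentation is to prefer, among randomized reactant SMILES, the string closest in edit distance to its product, so that training pairs share local sub-sequence structure. A minimal Python sketch under that assumption (the function names and selection rule are hypothetical):

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance between two strings.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def pick_training_pair(randomized_reactants: list[str], product: str) -> str:
    # Keep the randomized reactant SMILES most similar to the product SMILES.
    return min(randomized_reactants, key=lambda s: levenshtein(s, product))
```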


Sensors ◽  
2020 ◽  
Vol 20 (16) ◽  
pp. 4493
Author(s):  
Rui Silva ◽  
António Araújo

Condition monitoring is a fundamental part of machining, as well as of other manufacturing processes in which, generally, there are parts that wear out and have to be replaced. Devising proper condition monitoring has been a concern of many researchers, but existing systems still lack robustness and efficiency, most often hindered by system complexity or limited by the inherently noisy signals characteristic of industrial processes. The vast majority of condition monitoring approaches do not take the temporal sequence into account when modelling, and hence lose an intrinsic part of the context of an actual time-dependent process, which is fundamental to processes such as cutting. The proposed system uses a multisensory approach to gather information from the cutting process, which is then modelled by a recurrent neural network that captures the evolving pattern of wear over time. The system was tested under realistic cutting conditions, and the results show high effectiveness and accuracy with just a few cutting tests. The use of recurrent neural networks demonstrates the potential of such an approach for other time-dependent industrial processes under noisy conditions.
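As a hedged illustration of the kind of recurrent model such a system might use (the sensor count, layer sizes, and architecture here are assumptions, not the authors' design), a minimal PyTorch sketch mapping a multisensor time window to a wear estimate:

```python
import torch
import torch.nn as nn

class WearLSTM(nn.Module):
    """Illustrative LSTM regressor: multisensor window -> wear estimate."""
    def __init__(self, n_sensors: int = 4, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_sensors, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):             # x: (batch, time, n_sensors)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # wear estimate from the last time step
```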


2021 ◽  
Vol 17 (9) ◽  
pp. e1009345
Author(s):  
Zhengqiao Zhao ◽  
Stephen Woloszynek ◽  
Felix Agbavor ◽  
Joshua Chang Mell ◽  
Bahrad A. Sokhansanj ◽  
...  

Recurrent neural networks with memory and attention mechanisms are widely used in natural language processing because they can capture short- and long-term sequential information for diverse tasks. We propose an integrated deep learning model for microbial DNA sequence data, which exploits convolutional neural networks, recurrent neural networks, and attention mechanisms to predict taxonomic classifications and sample-associated attributes, such as the relationship between the microbiome and host phenotype, at the read/sequence level. In this paper, we develop this novel deep learning approach and evaluate its application to amplicon sequences. We apply our approach to short DNA reads and full sequences of 16S ribosomal RNA (rRNA) marker genes, which identify the heterogeneity of a microbial community sample. We demonstrate that our implementation of a novel attention-based deep network architecture, Read2Pheno, achieves read-level phenotypic prediction. Trained Read2Pheno models encode sequences (reads) into dense, meaningful representations: learned embedding vectors output from an intermediate layer of the network, which can provide biological insight when visualized. The attention layer of Read2Pheno models can also automatically identify nucleotide regions in reads/sequences that are particularly informative for classification. As such, this novel approach can avoid the pre-/post-processing and manual interpretation required by conventional approaches to microbiome sequence classification. We further show, as proof of concept, that aggregating read-level information can robustly predict microbial community properties, host phenotype, and taxonomic classification, with performance at least comparable to conventional approaches. An implementation of the attention-based deep learning network is available at https://github.com/EESI/sequence_attention (a Python package) and https://github.com/EESI/seq2att (a command-line tool).
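The actual Read2Pheno implementation lives in the repositories above; purely as an illustration of attention pooling over per-base embeddings (dimensions and naming are assumptions), a minimal PyTorch sketch that returns both a read-level vector and the per-position weights used to flag informative nucleotide regions:

```python
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Sketch of additive attention pooling over per-base embeddings."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # one scalar score per position

    def forward(self, h):                                     # h: (batch, length, dim)
        w = torch.softmax(self.score(h).squeeze(-1), dim=1)   # (batch, length)
        pooled = torch.einsum("bl,bld->bd", w, h)             # weighted sum over positions
        return pooled, w  # w highlights nucleotide regions driving the prediction
```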


Author(s):  
Hajar Maseeh Yasin ◽  
Adnan Mohsin Abdulazeez

Image compression is an essential technology for encoding and improving various forms of images in the digital era. Researchers have extended the principles of deep learning, one of the most exciting machine learning methods, to many kinds of neural networks, showing it to be a versatile way to analyze, classify, and compress images. Several classes of neural networks have been applied to image compression, including deep neural networks, artificial neural networks, recurrent neural networks, and convolutional neural networks. This review paper therefore discusses how deep learning is applied across these network types to obtain better image compression, with high accuracy, minimal loss, and superior visual quality of the reconstructed image, and analyzes its application to different types of images.
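As a minimal sketch of the learned-compression idea this review surveys (a toy convolutional autoencoder, not any specific published architecture; channel counts and layer choices are assumptions), in PyTorch:

```python
import torch
import torch.nn as nn

class TinyCompressor(nn.Module):
    """Toy convolutional autoencoder: the low-resolution bottleneck stands
    in for the compressed representation of the image."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 8, 3, stride=2, padding=1),   # 8-channel code at 1/4 resolution
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(8, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):              # x: (batch, 3, H, W), values in [0, 1]
        return self.decode(self.encode(x))
```

Training such a model against a reconstruction loss (e.g., mean squared error) trades bottleneck size against visual quality, which is the central tension the review discusses.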

