Interaffection of Multiple Datasets with Neural Networks in Speech Emotion Recognition

2020
Author(s):  
Ronnypetson Da Silva ◽  
Valter M. Filho ◽  
Mario Souza

Many works that apply Deep Neural Networks (DNNs) to Speech Emotion Recognition (SER) use single datasets, or train and evaluate the models separately when using multiple datasets. Those datasets are constructed with specific guidelines, and the subjective nature of SER labels makes it difficult to obtain robust and general models. We investigate how DNNs learn shared representations for different datasets in both multi-task and unified setups. We also analyse how each dataset benefits from the others in different combinations of datasets and popular neural network architectures. We show that the longstanding belief that more data results in more general models does not always hold for SER, as different combinations of datasets and meta-parameters yield the best result for each of the analysed datasets.
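As a rough illustration of the multi-task setup described above, the sketch below shares one acoustic encoder across datasets and attaches a separate classification head per dataset; the encoder type, feature dimensions, and dataset names are illustrative assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

class MultiTaskSER(nn.Module):
    """Hypothetical multi-task SER model: one shared encoder, one head per dataset."""
    def __init__(self, n_mels=40, hidden=128, head_sizes=None):
        super().__init__()
        # Dataset names and class counts are placeholders for illustration.
        head_sizes = head_sizes or {"iemocap": 4, "ravdess": 8}
        self.encoder = nn.GRU(n_mels, hidden, batch_first=True)
        # One output layer per dataset, since the label sets differ.
        self.heads = nn.ModuleDict({name: nn.Linear(hidden, n_classes)
                                    for name, n_classes in head_sizes.items()})

    def forward(self, x, dataset):
        _, h = self.encoder(x)              # x: (batch, time, n_mels)
        return self.heads[dataset](h[-1])   # logits for the chosen dataset

model = MultiTaskSER()
logits = model(torch.randn(2, 100, 40), dataset="iemocap")
```

A unified setup, by contrast, would map all corpora onto one shared label set and use a single head.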

2021
Vol 22 (1)
Author(s):  
Anand Ramachandran ◽  
Steven S. Lumetta ◽  
Eric W. Klee ◽  
Deming Chen

Abstract
Background: Modern Next-Generation and Third-Generation sequencing methods, such as the Illumina and PacBio Circular Consensus Sequencing platforms, provide accurate sequencing data. Parallel developments in Deep Learning have enabled the application of Deep Neural Networks to variant calling, surpassing the accuracy of classical approaches in many settings. DeepVariant, arguably the most popular among such methods, transforms the problem of variant calling into one of image recognition, where a Deep Neural Network analyzes sequencing data formatted as images, achieving high accuracy. In this paper, we explore an alternative approach to designing Deep Neural Networks for variant calling: we use meticulously designed Deep Neural Network architectures and customized variant inference functions that account for the underlying nature of sequencing data, instead of converting the problem to one of image recognition.
Results: Results from 27 whole-genome variant calling experiments spanning Illumina, PacBio and hybrid Illumina-PacBio settings suggest that our method allows vastly smaller Deep Neural Networks to outperform the Inception-v3 architecture used in DeepVariant for indel and substitution-type variant calls. For example, our method reduces the number of indel call errors by up to 18%, 55% and 65% for Illumina, PacBio and hybrid Illumina-PacBio variant calling respectively, compared to a similarly trained DeepVariant pipeline. In these cases, our models are between 7 and 14 times smaller.
Conclusions: We believe that the improved accuracy and problem-specific customization of our models will enable more accurate pipelines and further method development in the field. HELLO is available at https://github.com/anands-repo/hello
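The sketch below illustrates the general idea of feeding sequencing-derived features directly into a compact network rather than rendering pileups as images; the feature layout, layer sizes, and names are assumptions for illustration and do not reproduce the HELLO architecture.

```python
import torch
import torch.nn as nn

class SmallVariantCaller(nn.Module):
    """Toy caller: per-site read features (e.g., base counts, qualities,
    strand counts per pileup column) go into a small 1D conv network."""
    def __init__(self, n_features=10, n_alleles=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),       # pool over the genomic window
        )
        self.out = nn.Linear(32, n_alleles)  # per-allele support scores

    def forward(self, x):                     # x: (batch, n_features, window)
        return self.out(self.conv(x).squeeze(-1))

scores = SmallVariantCaller()(torch.randn(8, 10, 51))
```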


2020
Vol 32 (2)
Author(s):  
Marelie Hattingh Davel

No framework exists that can explain and predict the generalisation ability of deep neural networks in general circumstances. In fact, this question has not been answered for some of the least complicated of neural network architectures: fully-connected feedforward networks with rectified linear activations and a limited number of hidden layers. For such an architecture, we show how adding a summary layer to the network makes it more amenable to analysis, and allows us to define the conditions that are required to guarantee that a set of samples will all be classified correctly. This process does not describe the generalisation behaviour of these networks, but produces a number of metrics that are useful for probing their learning and generalisation behaviour. We support the analytical conclusions with empirical results, both to confirm that the mathematical guarantees hold in practice, and to demonstrate the use of the analysis process.
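For orientation, the sketch below builds the architecture class discussed here: a fully-connected feedforward network with rectified linear activations and a small number of hidden layers, with one extra linear layer standing in for the summary layer. The abstract does not specify how the summary layer is constructed, so its placement and size here are assumptions only.

```python
import torch.nn as nn

class ReLUFeedforward(nn.Module):
    """Fully-connected ReLU feedforward net with an added 'summary' layer
    before the output (the summary layer's exact form is an assumption)."""
    def __init__(self, in_dim=784, hidden=(256, 128), summary_dim=64, n_classes=10):
        super().__init__()
        layers, prev = [], in_dim
        for h in hidden:
            layers += [nn.Linear(prev, h), nn.ReLU()]
            prev = h
        self.body = nn.Sequential(*layers)
        self.summary = nn.Linear(prev, summary_dim)   # assumed summary layer
        self.out = nn.Linear(summary_dim, n_classes)

    def forward(self, x):
        return self.out(self.summary(self.body(x)))
```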


2019
Vol 63 (7)
pp. 1031-1038
Author(s):  
Zongjie Ma ◽  
Abdul Sattar ◽  
Jun Zhou ◽  
Qingliang Chen ◽  
Kaile Su

Abstract Dropout has been proven to be an effective technique for regularizing and preventing the co-adaptation of neurons in deep neural networks (DNNs). It randomly drops units with a probability of p during the training stage of a DNN to avoid overfitting. The working mechanism of dropout can be interpreted as approximately and efficiently combining exponentially many different neural network architectures, leading to a powerful ensemble. In this work, we propose a novel diversification strategy for dropout, which aims at generating more different neural network architectures in fewer iterations. Units dropped in the previous forward propagation are marked. Units selected for dropping in the current forward propagation are then retained if they were marked in the previous pass; that is, only the units from the last forward propagation are marked, so no unit is dropped in two consecutive passes. We call this new regularization scheme Tabu dropout; its significance lies in that it introduces no extra parameters compared with the standard dropout strategy and is computationally efficient as well. Experiments conducted on four public datasets show that Tabu dropout improves on standard dropout, yielding better generalization capability.
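A minimal sketch of the scheme as described above, in which a boolean mask remembers the units dropped in the previous forward pass and exempts them from dropping in the current one; the function and variable names are ours, not the authors' implementation.

```python
import torch

def tabu_dropout(x, last_dropped, p=0.5, training=True):
    """Drop units as in standard dropout, but never drop a unit that was
    already dropped in the previous forward pass (the tabu mask).
    Returns the dropped-out activations and the new tabu mask."""
    if not training or p == 0.0:
        return x, torch.zeros_like(x, dtype=torch.bool)
    # Candidate drops, sampled as in standard dropout.
    candidates = torch.rand_like(x) < p
    # Retain (do not drop) any candidate that was dropped last time.
    drop = candidates & ~last_dropped
    keep = ~drop
    # Scale kept units as in inverted dropout.
    return x * keep / (1 - p), drop

# Hypothetical usage across successive forward passes:
h = torch.randn(4, 16)
tabu = torch.zeros_like(h, dtype=torch.bool)
for step in range(3):
    h_dropped, tabu = tabu_dropout(h, tabu, p=0.5)
```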


Author(s):  
Dr. Abul Bashar

Deep learning, a subcategory of machine learning, follows the human instinct of learning by example to produce accurate results. Deep learning trains a computer framework to classify tasks directly from documents available in the form of text, images, or sound. Most often, deep learning uses neural networks to perform the classification, and such models are referred to as deep neural networks. One of the most common deep neural networks, used in a broad range of applications, is the convolutional neural network, which provides an automated way of extracting features by learning them directly from images or text, unlike classical machine learning, where features are extracted manually. This enables deep learning neural networks to reach state-of-the-art accuracy that often surpasses even human performance. This paper presents a survey of the deep learning neural network architectures used in various applications for accurate classification with automated feature extraction.
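As a small illustration of the automated feature extraction mentioned above, the sketch below stacks convolutional layers that learn their own feature detectors from raw images before a final classifier; all layer sizes are arbitrary assumptions.

```python
import torch
import torch.nn as nn

# Convolutional layers learn feature detectors directly from the raw image,
# so no hand-crafted feature extraction step is needed before the classifier.
cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 10),      # classifier over the learned features
)
logits = cnn(torch.randn(1, 1, 28, 28))   # raw 28x28 image in, class scores out
```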


Author(s):  
Syed Asif Ahmad Qadri ◽  
Teddy Surya Gunawan ◽  
Taiba Majid Wani ◽  
Eliathamby Ambikairajah ◽  
Mira Kartiwi ◽  
...  

Author(s):  
Vikas Verma ◽  
Alex Lamb ◽  
Juho Kannala ◽  
Yoshua Bengio ◽  
David Lopez-Paz

We introduce Interpolation Consistency Training (ICT), a simple and computationally efficient algorithm for training Deep Neural Networks in the semi-supervised learning paradigm. ICT encourages the prediction at an interpolation of unlabeled points to be consistent with the interpolation of the predictions at those points. In classification problems, ICT moves the decision boundary to low-density regions of the data distribution. Our experiments show that ICT achieves state-of-the-art performance when applied to standard neural network architectures on the CIFAR-10 and SVHN benchmark datasets.
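A minimal sketch of the ICT consistency term under the description above: the prediction at a mixup interpolation of two unlabeled examples is pushed toward the same interpolation of their individual predictions. The helper name is ours, and for brevity the targets are computed with the same model rather than a separate teacher copy.

```python
import torch
import torch.nn.functional as F

def ict_loss(model, x_unlabeled, alpha=1.0):
    """Consistency term on unlabeled data: prediction at an interpolated
    input should match the interpolation of the individual predictions."""
    # Pair each unlabeled example with a shuffled partner.
    perm = torch.randperm(x_unlabeled.size(0))
    x1, x2 = x_unlabeled, x_unlabeled[perm]
    lam = torch.distributions.Beta(alpha, alpha).sample()
    x_mix = lam * x1 + (1 - lam) * x2
    with torch.no_grad():                       # targets are not backpropagated
        p1 = F.softmax(model(x1), dim=1)
        p2 = F.softmax(model(x2), dim=1)
        target = lam * p1 + (1 - lam) * p2
    p_mix = F.softmax(model(x_mix), dim=1)
    return F.mse_loss(p_mix, target)
```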

