Bayesian Deep Neural Networks for Supervised Learning of Single-View Depth

Author(s):  
Javier Rodriguez-Puigvert ◽  
Ruben Martinez-Cantin ◽  
Javier Civera


Author(s):  
Vikas Verma ◽  
Alex Lamb ◽  
Juho Kannala ◽  
Yoshua Bengio ◽  
David Lopez-Paz

We introduce Interpolation Consistency Training (ICT), a simple and computationally efficient algorithm for training deep neural networks in the semi-supervised learning paradigm. ICT encourages the prediction at an interpolation of unlabeled points to be consistent with the interpolation of the predictions at those points. In classification problems, ICT moves the decision boundary to low-density regions of the data distribution. Our experiments show that ICT achieves state-of-the-art performance when applied to standard neural network architectures on the CIFAR-10 and SVHN benchmark datasets.
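As a rough illustration of the consistency term described above, here is a minimal PyTorch sketch (not the authors' implementation): `model`, `u1`, `u2`, and `alpha` are assumed names, and for brevity the interpolation targets are produced by the model itself with gradients detached, whereas ICT as published obtains them from a mean-teacher (exponential moving average) copy of the model.

```python
import torch
import torch.nn.functional as F

def ict_consistency_loss(model, u1, u2, alpha=1.0):
    """Sketch of the ICT consistency term.

    u1, u2: two batches of unlabeled inputs.
    The prediction at the mixed input should match the same mixture
    of the (detached) predictions at the original inputs.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    with torch.no_grad():                       # targets are not back-propagated through
        p1 = torch.softmax(model(u1), dim=1)
        p2 = torch.softmax(model(u2), dim=1)
        target = lam * p1 + (1 - lam) * p2      # interpolation of the predictions
    mixed = lam * u1 + (1 - lam) * u2           # interpolation of the inputs
    pred = torch.softmax(model(mixed), dim=1)
    return F.mse_loss(pred, target)             # consistency penalty
```

In a full training loop, this term would be added to the usual supervised cross-entropy loss on the labeled batch, typically weighted by a ramp-up coefficient on the consistency term.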


Author(s):  
Yusuke Taguchi ◽  
Hideitsu Hino ◽  
Keisuke Kameyama

There are many situations in supervised learning where the acquisition of data is very expensive and sometimes determined by a user's budget. One way to address this limitation is active learning. In this study, we focus on a fixed-budget regime and propose a novel algorithm for the pool-based active learning problem. The proposed method performs active learning with a pre-trained acquisition function so that the maximum performance can be achieved when the number of data points that can be acquired is fixed. To implement this active learning algorithm, the proposed method uses reinforcement learning based on deep neural networks as a pre-trained acquisition function tailored to the fixed-budget situation. Using the pre-trained deep Q-learning-based acquisition function, we can realize an active learner that selects samples for annotation from the pool of unlabeled samples while taking the fixed budget into account. The proposed method is experimentally shown to be comparable with or superior to existing active learning methods, suggesting the effectiveness of the proposed approach for fixed-budget active learning.
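To make the selection step concrete, the following is a small, hypothetical PyTorch sketch of a Q-network acting as an acquisition function over a pool of unlabeled samples; the architecture, the state encoding (candidate features plus remaining budget), and all names are illustrative assumptions rather than the authors' formulation.

```python
import torch
import torch.nn as nn

class AcquisitionQNet(nn.Module):
    """Hypothetical Q-network scoring unlabeled candidates for annotation."""
    def __init__(self, feature_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim + 1, hidden),  # +1 input for the remaining budget
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, candidate_features, budget_left):
        # Append the remaining budget to every candidate's feature vector.
        b = torch.full((candidate_features.size(0), 1), float(budget_left))
        return self.net(torch.cat([candidate_features, b], dim=1)).squeeze(1)

def select_next_sample(qnet, pool_features, budget_left):
    """Greedy selection: return the pool index with the highest Q-value."""
    with torch.no_grad():
        q_values = qnet(pool_features, budget_left)
    return int(torch.argmax(q_values).item())
```

In this sketch the Q-network would be trained beforehand (the "pre-trained acquisition function") and then used greedily at annotation time, re-scoring the shrinking pool as the budget counts down.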


2021 ◽  
Vol 12 ◽  
Author(s):  
Patrick Thiam ◽  
Heinke Hihn ◽  
Daniel A. Braun ◽  
Hans A. Kestler ◽  
Friedhelm Schwenker

Traditional pain assessment approaches, ranging from self-reporting methods to observational scales, rely on the ability of an individual to accurately assess and successfully report observed or experienced pain episodes. Automatic pain assessment tools are therefore highly desirable in cases where this ability is negatively affected by various psycho-physiological dispositions, as well as by distinct physical traits, as in the case of professional athletes, who usually have a higher pain tolerance than regular individuals. Hence, several approaches have been proposed over the past decades for the implementation of an autonomous and effective pain assessment system. These approaches range from more conventional supervised and semi-supervised learning techniques applied to carefully hand-designed feature representations, to deep neural networks applied to preprocessed signals. Among the most prominent advantages of deep neural networks are the ability to automatically learn relevant features and the inherent adaptability of trained networks to related inference tasks. Yet significant drawbacks remain, such as the large amounts of data required to train deep models and the risk of over-fitting. Both problems are especially relevant in pain intensity assessment, where labeled data is scarce and generalization is of utmost importance. In the following work, we address these shortcomings by introducing several novel multi-modal deep learning approaches (characterized by specific supervised as well as self-supervised learning techniques) for the assessment of pain intensity based on measurable bio-physiological data. While the proposed supervised deep learning approach attains state-of-the-art inference performance, our self-supervised approach significantly improves the data efficiency of the proposed architecture by automatically generating physiological data and simultaneously fine-tuning an architecture previously trained on a significantly smaller amount of data.
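As an illustration of the kind of multi-modal architecture described above, here is a hypothetical late-fusion PyTorch sketch over a few bio-physiological channels; the branch design, signal length, and modality names are assumptions and not the authors' model.

```python
import torch
import torch.nn as nn

class MultiModalPainRegressor(nn.Module):
    """Hypothetical late-fusion network over bio-physiological channels
    (e.g., EDA, ECG, EMG), each encoded by a small 1-D CNN branch."""
    def __init__(self, n_modalities=3, hidden=32):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(1, hidden, kernel_size=7, padding=3),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),   # one descriptor per channel
                nn.Flatten(),
            )
            for _ in range(n_modalities)
        ])
        self.head = nn.Linear(n_modalities * hidden, 1)  # pain intensity score

    def forward(self, signals):
        # signals: list of tensors, each of shape (batch, 1, signal_length)
        feats = [branch(x) for branch, x in zip(self.branches, signals)]
        return self.head(torch.cat(feats, dim=1)).squeeze(1)
```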


2020 ◽  
Vol 17 (1) ◽  
pp. 182-188 ◽  
Author(s):  
Mohit Sewak ◽  
Sanjay K. Sahay ◽  
Hemant Rathore

The recent wide application of deep learning in multiple fields has shown great progress, but to perform optimally it requires the adjustment of various architectural features and hyper-parameters. Moreover, deep learning can be used with multiple varieties of architecture aimed at different objectives; for example, autoencoders are popular in unsupervised learning applications for reducing the dimensionality of a dataset, while deep neural networks are popular in supervised learning applications such as classification and regression. Besides the type of deep learning architecture, several other decisions and parameter selections are required: the size of each layer, the number of layers, the activation and loss functions for different layers, the optimizer algorithm, regularization, and so on. Thus, this paper aims to cover the different choices available under each of these major and minor decision criteria for devising a neural network and training it optimally, so that objectives such as malware detection, natural language processing, and image recognition can be achieved effectively.
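To ground these decision criteria, the snippet below shows one concrete and purely illustrative assignment of them in PyTorch for a small supervised classifier; the specific layer sizes, activation, loss, optimizer, and regularization values are arbitrary example choices, not recommendations from the paper.

```python
import torch
import torch.nn as nn

# Illustrative only: one possible assignment of the decision criteria the
# paper enumerates (layer sizes, activations, loss, optimizer, regularization).
model = nn.Sequential(
    nn.Linear(64, 128),   # layer size: 64 inputs -> 128 hidden units
    nn.ReLU(),            # activation choice for the hidden layer
    nn.Dropout(p=0.2),    # regularization choice
    nn.Linear(128, 10),   # output layer for a 10-class problem
)
loss_fn = nn.CrossEntropyLoss()                           # loss function for classification
optimizer = torch.optim.Adam(model.parameters(),
                             lr=1e-3, weight_decay=1e-4)  # optimizer + L2 regularization
```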


2018 ◽  
Vol 2018 (15) ◽  
pp. 131-1-131-6 ◽  
Author(s):  
Yan Zhang ◽  
G. M. Dilshan Godaliyadda ◽  
Nicola Ferrier ◽  
Emine B. Gulsoy ◽  
Charles A. Bouman ◽  
...  
