Deep neural networks trained with heavier data augmentation learn features closer to representations in hIT

Author(s):  
Alex Hernández-García ◽  
Johannes Mehrer ◽  
Nikolaus Kriegeskorte ◽  
Peter König ◽  
Tim C. Kietzmann
2019 ◽  
Vol 134 ◽  
pp. 53-65 ◽  
Author(s):  
Paolo Vecchiotti ◽  
Giovanni Pepe ◽  
Emanuele Principi ◽  
Stefano Squartini

2021 ◽  
Vol 5 (3) ◽  
pp. 1-10
Author(s):  
Melih Öz ◽  
Taner Danışman ◽  
Melih Günay ◽  
Esra Zekiye Şanal ◽  
Özgür Duman ◽  
...  

The human eye contains valuable information about an individual’s identity and health, so segmenting the eye into distinct regions is an essential step towards extracting this information precisely. The main challenges in segmenting the human eye include low-light conditions, reflections on the eye, variations in the eyelids, and head positions that make an eye image hard to segment. Deep neural networks are therefore preferred for this task because of their success in segmentation problems. However, deep neural networks need a large amount of manually annotated data to be trained, and manual annotation is a labor-intensive task. To tackle this problem, we use image augmentation methods on synthetic data. In this paper, we explore whether, with limited data, performance can be improved by adding similar-context data processed with image augmentation methods. Our training set consists of 3D synthetic eye images generated with the UnityEyes application, and our test set consists of manually annotated real-life eye images. We examine the effect of training the DeepLabv3+ network with synthetic eye images under different conditions, applying image augmentation methods to the synthetic data. In our experiments, the network trained with processed synthetic images alongside real-life images produced better mIoU results than the network trained only with the real-life images of the Base dataset. We also observed an mIoU increase on a test set we created from MICHE II competition images.
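As an illustration of paired image–mask augmentation of synthetic eye images of the kind described above, here is a minimal sketch. The abstract does not specify the exact transforms; the library (albumentations) and all parameters below are assumptions, not the authors' pipeline.

```python
import albumentations as A
import cv2

# Hypothetical augmentation pipeline for synthetic UnityEyes images and their
# segmentation masks; transforms and parameters are illustrative assumptions.
augment = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.Rotate(limit=15, border_mode=cv2.BORDER_CONSTANT, p=0.5),
    A.RandomBrightnessContrast(p=0.5),  # vary illumination, as in low-light capture
    A.GaussNoise(p=0.3),                # mimic sensor noise in real-life images
])

def augment_pair(image, mask):
    """Apply the same spatial transforms to an eye image and its label mask."""
    out = augment(image=image, mask=mask)
    return out["image"], out["mask"]
```

Geometric transforms (flip, rotation) are applied to both the image and the mask, while purely photometric ones (brightness, noise) only touch the image, which keeps the segmentation labels consistent.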


2020 ◽  
Vol 12 (15) ◽  
pp. 2353
Author(s):  
Henning Heiselberg

Classification of ships and icebergs in the Arctic from satellite images is an important problem. We study how to train deep neural networks to improve the discrimination of ships and icebergs in multispectral satellite images, and we analyze synthetic-aperture radar (SAR) images for comparison. The annotated datasets of ships and icebergs are collected from multispectral Sentinel-2 data and taken from the C-CORE dataset of Sentinel-1 SAR images. Convolutional neural networks with a range of hyperparameters are tested and optimized. Classification accuracies are considerably better for deep neural networks than for support vector machines. Deeper neural nets improve the accuracy per epoch, but at the cost of longer processing time. Extending the datasets with semi-supervised data from Greenland improves the accuracy considerably, whereas data augmentation by rotating and flipping the images has little effect. The resulting classification accuracies for ships and icebergs are 86% for the SAR data and 96% for the MSI data, the latter owing to the better resolution and larger number of spectral bands. The size and quality of the datasets are essential for training the deep neural networks, and methods to improve them are discussed. The reduced false-alarm rates and the exploitation of multisensor data are important for Arctic search and rescue services.
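The rotation-and-flip augmentation mentioned above amounts to generating the eight dihedral variants of each image chip. A minimal sketch follows; the function name and the (height, width, bands) array layout are assumptions for illustration, not details from the paper.

```python
import numpy as np

def rotate_flip_variants(patch):
    """Return the eight rotation/flip variants of a multispectral image patch.

    `patch` is assumed to be a (height, width, bands) array, e.g. a Sentinel-2 chip.
    """
    variants = []
    for k in range(4):                              # 0, 90, 180, 270 degree rotations
        rotated = np.rot90(patch, k=k, axes=(0, 1))
        variants.append(rotated)
        variants.append(np.flip(rotated, axis=1))   # add the mirrored version
    return variants
```

Each training chip then contributes eight samples per epoch; as the abstract notes, this particular augmentation had little effect compared with enlarging the dataset itself.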


2021 ◽  
Author(s):  
Helin Wang ◽  
Yuexian Zou ◽  
Wenwu Wang

In this paper, we present SpecAugment++, a novel data augmentation method for deep neural network based acoustic scene classification (ASC). Different from other popular data augmentation methods such as SpecAugment and mixup that only work on the input space, SpecAugment++ is applied to both the input space and the hidden space of the deep neural networks to enhance the input and the intermediate feature representations. For an intermediate hidden state, the augmentation techniques consist of masking blocks of frequency channels and masking blocks of time frames, which improves generalization by enabling a model to attend not only to the most discriminative parts of the feature, but also to the entire feature. Apart from using zeros for masking, we also examine two approaches for masking based on the use of other samples within the mini-batch, which helps introduce noise to the networks to make them more discriminative for classification. The experimental results on the DCASE 2018 Task 1 dataset and the DCASE 2019 Task 1 dataset show that our proposed method can obtain 3.6% and 4.7% accuracy gains, respectively, over a strong baseline without augmentation (i.e., CP-ResNet), and outperforms other previous data augmentation methods.
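To make the masking idea concrete, below is a rough sketch of masking one block of frequency channels and one block of time frames in a batch of spectrogram-like features, filling the block either with zeros or with the corresponding region from another sample in the mini-batch. Mask widths, the number of masks, and the layers at which masking is applied are assumptions; the authors' exact scheme, including its two mini-batch-based variants, is described in the paper.

```python
import torch

def mask_time_freq(x, freq_width=8, time_width=20, mode="zero"):
    """Mask a random block of frequency bins and of time frames in a batch.

    x: (batch, channels, freq, time) input features or hidden-layer activations.
    mode: "zero" fills the block with zeros; "batch" copies the block from a
    randomly permuted sample of the same mini-batch.
    """
    b, c, f, t = x.shape
    out = x.clone()
    f0 = torch.randint(0, max(f - freq_width, 1), (1,)).item()   # frequency block start
    t0 = torch.randint(0, max(t - time_width, 1), (1,)).item()   # time block start
    if mode == "zero":
        fill_f = torch.zeros_like(out[:, :, f0:f0 + freq_width, :])
        fill_t = torch.zeros_like(out[:, :, :, t0:t0 + time_width])
    else:
        perm = torch.randperm(b)                                  # shuffle the batch
        fill_f = x[perm][:, :, f0:f0 + freq_width, :]
        fill_t = x[perm][:, :, :, t0:t0 + time_width]
    out[:, :, f0:f0 + freq_width, :] = fill_f
    out[:, :, :, t0:t0 + time_width] = fill_t
    return out
```

In training, such a function would be called on the network input and again on selected intermediate feature maps, so that both the input space and the hidden space are augmented.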


2020 ◽  
Vol 32 (19) ◽  
pp. 15503-15531 ◽  
Author(s):  
Xiang Wang ◽  
Kai Wang ◽  
Shiguo Lian
