Improving wheat yield estimates using data augmentation models and remotely sensed biophysical indices within deep neural networks in the Guanzhong Plain, PR China

2022 ◽  
Vol 192 ◽  
pp. 106616
Author(s):  
Jingqi Zhang ◽  
Huiren Tian ◽  
Pengxin Wang ◽  
Kevin Tansey ◽  
Shuyu Zhang ◽  
...  
2021 ◽  
Author(s):  
Lucas Ribeiro de Abreu

RoboCup Soccer is one of the largest research initiatives in robotics. It frames the soccer match as a challenge for robots and aims, by the year 2050, to field a team of robots able to win a match against humans. The vision module is a critical system because the robot needs to quickly locate and classify objects of interest in order to choose its next best action. This work evaluates deep neural networks for detecting the ball and robots. Five convolutional neural network architectures were trained using data augmentation and transfer learning techniques. The models were evaluated on a test set, yielding promising results in precision and frames per second. The best model achieved an mAP of 0.98 at 14.7 frames per second, running on a CPU.
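Detection quality in work like this is usually summarized as mAP, which rests on intersection-over-union (IoU) matching between predicted and ground-truth boxes. A minimal sketch of the IoU computation, assuming the common `(x1, y1, x2, y2)` corner format (the function name and box format are illustrative, not taken from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

A prediction is typically counted as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5; mAP then averages precision over recall levels and classes.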


Author(s):  
Alex Hernández-García ◽  
Johannes Mehrer ◽  
Nikolaus Kriegeskorte ◽  
Peter König ◽  
Tim C. Kietzmann

2019 ◽  
Vol 134 ◽  
pp. 53-65 ◽  
Author(s):  
Paolo Vecchiotti ◽  
Giovanni Pepe ◽  
Emanuele Principi ◽  
Stefano Squartini

2021 ◽  
Vol 5 (3) ◽  
pp. 1-10
Author(s):  
Melih Öz ◽  
Taner Danışman ◽  
Melih Günay ◽  
Esra Zekiye Şanal ◽  
Özgür Duman ◽  
...  

The human eye contains valuable information about an individual's identity and health. Segmenting the eye into distinct regions is therefore an essential step towards gathering this information precisely. The main challenges in segmenting the human eye include low-light conditions, reflections on the eye, and variations in eyelid and head position that make an eye image hard to segment. Deep neural networks are preferred for this task because of their success in segmentation problems; however, they need a large amount of manually annotated training data, and manual annotation is labor-intensive. To tackle this problem, we used data augmentation methods to improve synthetic data. In this paper, we explore whether, when data is limited, performance can be enhanced by adding similar-context data processed with image augmentation methods. Our training and test sets consist of 3D synthetic eye images generated with the UnityEyes application and manually annotated real-life eye images, respectively. We examined the effect of using synthetic eye images with the DeepLabv3+ network under different conditions, applying image augmentation methods to the synthetic data. In our experiments, the network trained with processed synthetic images alongside real-life images produced better mIoU results than the network trained only with real-life images in the Base dataset. We also observed an mIoU increase on the test set we created from MICHE II competition images.
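The mIoU metric reported above averages per-class intersection-over-union between predicted and ground-truth label maps. A minimal NumPy sketch (the function name is illustrative; skipping classes absent from both masks is one common convention and may differ from the paper's evaluation):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union between two integer label maps."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:          # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))
```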


Author(s):  
Shweta Dabetwar ◽  
Stephen Ekwaro-Osire ◽  
João Paulo Dias

Composite materials have numerous applications across fields, so an efficient damage detection method is important to avoid catastrophic failures. Because multiple damage modes exist and data is available in different formats, efficient techniques are needed that account for all damage types. Deep neural networks have shown the ability to address similarly complex problems. The research question in this work is: can data fusion improve damage classification using a convolutional neural network? The specific aims were to (1) assess the performance of image encoding algorithms, (2) classify damage using data from separate experimental coupons, and (3) classify damage using mixed data from multiple experimental coupons. Two different experimental measurements were taken from the NASA Ames Prognostics Data Repository for carbon fiber reinforced polymer. To enable data fusion, the piezoelectric signals were converted into images using the Gramian Angular Field (GAF) and the Markov Transition Field. Using data fusion techniques, the input dataset was created for a convolutional neural network with three hidden layers to determine the damage states. The accuracies of all the image encoding algorithms were compared. The analysis showed that data fusion provided better results, as it contained more information on the damage modes that occur in composite materials. Additionally, GAF performed the best. Thus, the combination of data fusion and deep neural network techniques provides an efficient method for damage detection in composite materials.
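The Gramian Angular Field encoding used above has a compact form: rescale the signal to [-1, 1], map each sample to an angle via arccos, and take pairwise cosines of angle sums. A minimal NumPy sketch of the summation-field variant (the function name is illustrative; the paper's exact preprocessing may differ):

```python
import numpy as np

def gramian_angular_field(x):
    """Encode a 1-D signal as a Gramian Angular Summation Field image."""
    x = np.asarray(x, dtype=float)
    # Min-max rescale the series to [-1, 1]
    x_scaled = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0
    # Polar encoding: each sample becomes an angle in [0, pi];
    # clip guards against floating-point drift outside arccos's domain
    phi = np.arccos(np.clip(x_scaled, -1.0, 1.0))
    # G[i, j] = cos(phi_i + phi_j) yields an n x n image
    return np.cos(phi[:, None] + phi[None, :])

signal = np.sin(np.linspace(0, 2 * np.pi, 64))
image = gramian_angular_field(signal)  # shape (64, 64)
```

The resulting matrices can then be stacked with Markov Transition Field images to form the fused multi-channel input described above.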


2020 ◽  
Vol 12 (15) ◽  
pp. 2353
Author(s):  
Henning Heiselberg

Classification of ships and icebergs in Arctic satellite images is an important problem. We study how to train deep neural networks to improve the discrimination of ships and icebergs in multispectral satellite images, and we analyze synthetic-aperture radar (SAR) images for comparison. The annotated datasets of ships and icebergs are collected from multispectral Sentinel-2 data and taken from the C-CORE dataset of Sentinel-1 SAR images. Convolutional neural networks with a range of hyperparameters are tested and optimized. Classification accuracies are considerably better for deep neural networks than for support vector machines. Deeper networks improve the accuracy per epoch, but at the cost of longer processing time. Extending the datasets with semi-supervised data from Greenland improves accuracy considerably, whereas data augmentation by rotating and flipping the images has little effect. The resulting classification accuracies for ships and icebergs are 86% for the SAR data and 96% for the MSI data, the latter owing to better resolution and more multispectral bands. The size and quality of the datasets are essential for training the deep neural networks, and methods to improve them are discussed. The reduced false-alarm rates and the exploitation of multisensor data are important for Arctic search and rescue services.
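The rotation-and-flip augmentation tested above amounts to generating the eight dihedral variants of each image chip. A minimal NumPy sketch (the function name is illustrative):

```python
import numpy as np

def dihedral_augment(image):
    """Return the 8 rotation/flip variants of a 2-D image array."""
    variants = []
    for k in range(4):                       # 0, 90, 180, 270 degree rotations
        rotated = np.rot90(image, k)
        variants.append(rotated)
        variants.append(np.fliplr(rotated))  # plus a horizontal flip of each
    return variants

chips = dihedral_augment(np.arange(9).reshape(3, 3))
```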


2018 ◽  
Vol 8 (12) ◽  
pp. 2512 ◽  
Author(s):  
Ghouthi Boukli Hacene ◽  
Vincent Gripon ◽  
Nicolas Farrugia ◽  
Matthieu Arzel ◽  
Michel Jezequel

Deep learning-based methods have reached state-of-the-art performance by relying on large quantities of available data and computational power. Yet such methods remain poorly suited to a major open machine learning problem: learning new classes and examples incrementally over time. Combining the outstanding performance of deep neural networks (DNNs) with the flexibility of incremental learning techniques is a promising avenue of research. In this contribution, we introduce Transfer Incremental Learning using Data Augmentation (TILDA). TILDA builds on pre-trained DNNs as feature extractors, robust selection of feature vectors in subspaces using a nearest-class-mean technique, majority votes, and data augmentation at both the training and prediction stages. Experiments on challenging vision datasets demonstrate the ability of the proposed method to perform low-complexity incremental learning while achieving significantly better accuracy than existing incremental counterparts.
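The nearest-class-mean component at TILDA's core can be sketched independently of the feature extractor: keep a running sum and count per class and classify by the closest class mean, which makes adding new examples, or entirely new classes, trivial. This is a deliberately simplified illustration, not TILDA itself (which additionally partitions features into subspaces, applies data augmentation, and takes majority votes):

```python
import numpy as np

class NearestClassMean:
    """Incremental nearest-class-mean classifier over pre-extracted features."""

    def __init__(self):
        self.sums = {}    # class label -> running sum of feature vectors
        self.counts = {}  # class label -> number of examples seen

    def partial_fit(self, feature, label):
        """Add one example; new classes can appear at any time."""
        if label not in self.sums:
            self.sums[label] = np.zeros_like(feature, dtype=float)
            self.counts[label] = 0
        self.sums[label] += feature
        self.counts[label] += 1

    def predict(self, feature):
        """Assign to the class whose mean is nearest in Euclidean distance."""
        return min(
            self.sums,
            key=lambda c: np.linalg.norm(feature - self.sums[c] / self.counts[c]),
        )

clf = NearestClassMean()
clf.partial_fit(np.array([0.0, 0.0]), "a")
clf.partial_fit(np.array([1.0, 1.0]), "b")
```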

