Evaluation of Deep Learning-Based Neural Network Methods for Cloud Detection and Segmentation

Energies ◽  
2021 ◽  
Vol 14 (19) ◽  
pp. 6156
Author(s):  
Stefan Hensel ◽  
Marin B. Marinov ◽  
Michael Koch ◽  
Dimitar Arnaudov

This paper presents a systematic machine learning (ML) approach for accurate short-term cloud coverage prediction. Local training and evaluation data sets were created with a newly built omnidirectional ground-based sky camera system and used to train several state-of-the-art deep neural networks for object detection and segmentation. For this purpose, the camera generated a full hemispherical image every 30 min over two months in daylight conditions with a fish-eye lens. From this data set, a subset of images was selected for training and evaluation according to various criteria. Deep neural networks based on the two-stage R-CNN architecture were trained and compared with a U-Net segmentation approach implemented by CloudSegNet. All chosen deep networks were then evaluated and compared under the local conditions.
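
As an illustration of the kind of two-stage setup described above, the sketch below shows how a pretrained Mask R-CNN from torchvision could be adapted to a single "cloud" class; the class count, weight choice, and head sizes are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch: adapting a pretrained two-stage Mask R-CNN to one "cloud" class.
# The number of classes and the pretrained weights are assumptions, not the authors' setup.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_cloud_maskrcnn(num_classes: int = 2):  # background + cloud
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    # replace the box-classification head for the new class set
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    # replace the mask-prediction head accordingly
    in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, num_classes)
    return model
```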

Author(s):  
Aydin Ayanzadeh ◽  
Sahand Vahidnia

In this paper, we leverage state-of-the-art models pretrained on the ImageNet data set. We use the pretrained models and their learned weights to extract features from the dog breed identification data set. Afterwards, we apply fine-tuning and data augmentation to increase the test accuracy in the classification of dog breeds. The performance of the proposed approaches is compared with state-of-the-art ImageNet models such as ResNet-50, DenseNet-121, DenseNet-169, and GoogLeNet. We achieved 89.66%, 85.37%, 84.01%, and 82.08% test accuracy, respectively, which shows the superior performance of the proposed method compared with previous work on the Stanford Dogs data set.
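
The transfer-learning recipe described here (pretrained ImageNet backbone, new classifier head, fine-tuning with data augmentation) could look roughly like the following PyTorch sketch; the 120-class head for Stanford Dogs and the hyperparameters are assumptions, not values from the paper.

```python
# Minimal transfer-learning sketch, assuming a 120-class dog-breed problem.
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_BREEDS = 120  # Stanford Dogs has 120 breed classes

train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),           # data augmentation
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_BREEDS)  # new head for dog breeds

# fine-tune with a small learning rate so the pretrained features are preserved
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()
```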


mSphere ◽  
2020 ◽  
Vol 5 (5) ◽  
Author(s):  
Artur Yakimovich ◽  
Moona Huttunen ◽  
Jerzy Samolej ◽  
Barbara Clough ◽  
Nagisa Yoshida ◽  
...  

ABSTRACT The use of deep neural networks (DNNs) for analysis of complex biomedical images shows great promise but is hampered by a lack of large verified data sets for rapid network evolution. Here, we present a novel strategy, termed “mimicry embedding,” for rapid application of neural network architecture-based analysis of pathogen imaging data sets. Embedding of a novel host-pathogen data set, such that it mimics a verified data set, enables efficient deep learning using high expressive capacity architectures and seamless architecture switching. We applied this strategy across various microbiological phenotypes, from superresolved viruses to in vitro and in vivo parasitic infections. We demonstrate that mimicry embedding enables efficient and accurate analysis of two- and three-dimensional microscopy data sets. The results suggest that transfer learning from pretrained network data may be a powerful general strategy for analysis of heterogeneous pathogen fluorescence imaging data sets. IMPORTANCE In biology, the use of deep neural networks (DNNs) for analysis of pathogen infection is hampered by a lack of large verified data sets needed for rapid network evolution. Artificial neural networks detect handwritten digits with high precision thanks to large data sets, such as MNIST, that allow nearly unlimited training. Here, we developed a novel strategy we call mimicry embedding, which allows artificial intelligence (AI)-based analysis of variable pathogen-host data sets. We show that deep learning can be used to detect and classify single pathogens based on small differences.
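
The mimicry-embedding procedure itself is specific to the paper, but the underlying idea of re-embedding a host-pathogen data set so that it matches the shape and intensity range of a verified reference set can be sketched as follows; the MNIST-like 28x28 grayscale target format is purely an assumed example.

```python
# Illustrative sketch only: re-embed a fluorescence microscopy crop so it "mimics"
# a verified reference set (assumed here to be MNIST-like 28x28 images in [0, 1]).
import numpy as np
from skimage.transform import resize

def mimic_mnist(crop: np.ndarray) -> np.ndarray:
    """Map a single-channel microscopy crop to an MNIST-like 28x28 float image."""
    img = resize(crop.astype(np.float32), (28, 28), anti_aliasing=True)
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)  # rescale to [0, 1]
    return img
```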


2018 ◽  
Vol 6 (3) ◽  
pp. 134-146
Author(s):  
Daniil Igorevich Mikhalchenko ◽  
Arseniy Ivin ◽  
Dmitrii Malov

Purpose: Single-image depth prediction allows depth information to be extracted from an ordinary 2D image without special sensors such as laser scanners or stereo cameras. The purpose of this paper is to solve the problem of obtaining depth information from a 2D image by applying deep neural networks (DNNs). Design/methodology/approach: Several experiments and topologies are presented: a DNN that uses three inputs (a sequence of 2D images from a video stream) and a DNN that uses only one input. However, no data set exists that contains a video stream with a corresponding depth map for every frame, so a technique for creating such data sets using the Blender software is presented in this work. Findings: In addition to the problem of an insufficient amount of available data, the problem of overfitting was encountered. Although the created models work on the data sets, they remain overfitted and cannot predict a correct depth map for arbitrary images that were not included in those data sets. Originality/value: Existing techniques of depth image creation are tested using DNNs.
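
A minimal sketch of the single-input variant, assuming a small encoder-decoder network trained with an L1 loss against rendered depth maps; the actual topologies used in the paper are not reproduced here.

```python
# Minimal sketch (not the authors' exact topology): a small encoder-decoder that maps
# a 2D RGB frame to a dense depth map, supervised by rendered ground-truth depth.
import torch
import torch.nn as nn

class TinyDepthNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),  # one depth channel
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyDepthNet()
rgb = torch.randn(1, 3, 128, 128)          # placeholder input frame
depth_pred = model(rgb)                    # (1, 1, 128, 128) predicted depth map
loss = nn.functional.l1_loss(depth_pred, torch.rand_like(depth_pred))  # placeholder target
```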


Electronics ◽  
2021 ◽  
Vol 10 (11) ◽  
pp. 1367
Author(s):  
Raghida El Saj ◽  
Ehsan Sedgh Gooya ◽  
Ayman Alfalou ◽  
Mohamad Khalil

Privacy-preserving deep neural networks have become essential and have attracted the attention of many researchers due to the need to maintain the privacy and confidentiality of personal and sensitive data. The importance of privacy-preserving networks has increased with the widespread use of neural networks as a service in unsecured cloud environments. Different methods have been proposed and developed to solve the privacy-preserving problem using deep neural networks on encrypted data. In this article, we review some of the most relevant and well-known computational and perceptual image encryption methods. These methods and their results are presented and compared, and the conditions of their use, as well as the durability and robustness of some of them against attacks, are discussed. Some of the mentioned methods have demonstrated an ability to hide information and make it difficult for adversaries to retrieve while maintaining high classification accuracy. Based on the obtained results, it is suggested that some of the cited privacy-preserving methods be developed and used in applications other than classification.
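
As a concrete illustration of one family of perceptual image encryption covered by such reviews, the sketch below shows key-driven block scrambling; the block size, key handling, and function name are assumptions for illustration, not a description of any specific method from the article.

```python
# Hedged sketch of key-driven block scrambling: the image content is hidden,
# yet the scrambled images retain a fixed structure a classifier can be trained on.
import numpy as np

def block_scramble(img: np.ndarray, block: int = 16, key: int = 42) -> np.ndarray:
    """Shuffle non-overlapping blocks of an HxWxC image with a keyed permutation."""
    h, w, c = img.shape
    assert h % block == 0 and w % block == 0, "image must divide into whole blocks"
    blocks = (img.reshape(h // block, block, w // block, block, c)
                 .transpose(0, 2, 1, 3, 4)
                 .reshape(-1, block, block, c))
    perm = np.random.default_rng(key).permutation(len(blocks))  # key-driven shuffle
    scrambled = blocks[perm].reshape(h // block, w // block, block, block, c)
    return scrambled.transpose(0, 2, 1, 3, 4).reshape(h, w, c)
```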


2020 ◽  
Vol 6 ◽  
Author(s):  
Jaime de Miguel Rodríguez ◽  
Maria Eugenia Villafañe ◽  
Luka Piškorec ◽  
Fernando Sancho Caparrini

Abstract This work presents a methodology for the generation of novel 3D objects resembling wireframes of building types. These result from the reconstruction of interpolated locations within the learnt distribution of variational autoencoders (VAEs), a deep generative machine learning model based on neural networks. The data set used features a geometry-representation scheme based on a ‘connectivity map’ that is especially well suited to express the wireframe objects that compose it. Additionally, the input samples are generated through ‘parametric augmentation’, a strategy proposed in this study that creates coherent variations among data by enabling a set of parameters to alter representative features of a given building type. In the experiments described in this paper, more than 150,000 input samples belonging to two building types have been processed during the training of a VAE model. The main contribution of this paper has been to explore parametric augmentation for the generation of large data sets of 3D geometries, showcasing its problems and limitations in the context of neural networks and VAEs. Results show that the generation of interpolated hybrid geometries is a challenging task. Despite the difficulty of the endeavour, promising advances are presented.
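
The connectivity-map encoding and decoder belong to the paper and are not reproduced here; the sketch below only illustrates the latent-space interpolation step that produces the hybrid geometries, with the latent size and the decoder left as placeholders.

```python
# Sketch under assumptions: latent_dim and the decoder are placeholders; the point
# is the linear interpolation in VAE latent space that yields hybrid wireframes.
import torch

def interpolate_latents(z_a: torch.Tensor, z_b: torch.Tensor, steps: int = 8):
    """Linearly interpolate between two encoded building samples."""
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1)
    return (1 - alphas) * z_a + alphas * z_b        # (steps, latent_dim)

latent_dim = 64                                     # assumed latent size
z_a, z_b = torch.randn(1, latent_dim), torch.randn(1, latent_dim)
z_path = interpolate_latents(z_a, z_b)
# wireframes = decoder(z_path)  # decode each code back into a connectivity map
```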


1998 ◽  
Vol 27 (3) ◽  
pp. 351-369 ◽  
Author(s):  
MICHAEL NOBLE ◽  
SIN YI CHEUNG ◽  
GEORGE SMITH

This article briefly reviews American and British literature on welfare dynamics and examines the concepts of welfare dependency and ‘dependency culture’ with particular reference to lone parents. Using UK benefit data sets, the welfare dynamics of lone mothers are examined to explore the extent to which they inform the debates. Evidence from Housing Benefit data shows that, even over a relatively short time period, there is significant turnover in the benefits-dependent lone parent population, with movement in and out of income support as well as movement into other family structures. Younger lone parents and owner-occupiers tend to leave the data set while older lone parents and council tenants are most likely to stay. Some owner-occupier lone parents may be relatively well off and on income support for a relatively short time between separation and a financial settlement being reached. They may also represent a more highly educated and highly skilled group with easier access to the labour market than renters. Any policy moves paralleling those in the United States to time-limit benefits will disproportionately affect older lone parents.


2020 ◽  
Vol 34 (04) ◽  
pp. 5620-5627 ◽  
Author(s):  
Murat Sensoy ◽  
Lance Kaplan ◽  
Federico Cerutti ◽  
Maryam Saleki

Deep neural networks are often ignorant about what they do not know and overconfident when they make uninformed predictions. Some recent approaches quantify classification uncertainty directly by training the model to output high uncertainty for data samples close to class boundaries or from outside the training distribution. These approaches use an auxiliary data set during training to represent out-of-distribution samples. However, selection or creation of such an auxiliary data set is non-trivial, especially for high-dimensional data such as images. In this work, we develop a novel neural network model that is able to express both aleatoric and epistemic uncertainty to distinguish decision-boundary and out-of-distribution regions of the feature space. To this end, variational autoencoders and generative adversarial networks are incorporated to automatically generate out-of-distribution exemplars for training. Through extensive analysis, we demonstrate that the proposed approach provides better estimates of uncertainty for in- and out-of-distribution samples, and adversarial examples on well-known data sets against state-of-the-art approaches including recent Bayesian approaches for neural networks and anomaly detection methods.
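
The full VAE/GAN exemplar-generation pipeline is not reproduced here; the sketch below only illustrates one plausible evidential-style output head in which non-negative evidence parameterises a Dirichlet distribution and the uncertainty score grows for boundary and out-of-distribution inputs. The layer sizes and the softplus activation are assumptions.

```python
# Hedged sketch, not the authors' full pipeline: a Dirichlet/evidence output head
# whose total uncertainty is K / sum(alpha), large when little evidence is produced.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialHead(nn.Module):
    def __init__(self, in_features: int, num_classes: int):
        super().__init__()
        self.fc = nn.Linear(in_features, num_classes)

    def forward(self, features):
        evidence = F.softplus(self.fc(features))          # non-negative evidence per class
        alpha = evidence + 1.0                            # Dirichlet parameters
        prob = alpha / alpha.sum(dim=1, keepdim=True)     # expected class probabilities
        uncertainty = alpha.shape[1] / alpha.sum(dim=1)   # high for OOD / boundary samples
        return prob, uncertainty
```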


2021 ◽  
pp. 1-17
Author(s):  
Luis Sa-Couto ◽  
Andreas Wichert

Abstract Convolutional neural networks (CNNs) evolved from Fukushima's neocognitron model, which is based on the ideas of Hubel and Wiesel about the early stages of the visual cortex. Unlike other branches of neocognitron-based models, the typical CNN is based on end-to-end supervised learning by backpropagation and removes the focus from built-in invariance mechanisms, using pooling not as a way to tolerate small shifts but as a regularization tool that decreases model complexity. These properties of end-to-end supervision and flexibility of structure allow the typical CNN to become highly tuned to the training data, leading to extremely high accuracies on typical visual pattern recognition data sets. However, in this work, we hypothesize that there is a flip side to this capability, a hidden overfitting. More concretely, a supervised, backpropagation-based CNN will outperform a neocognitron/map transformation cascade (MTCCXC) when trained and tested inside the same data set. Yet if we take both trained models and test them on the same task but on another data set (without retraining), the overfitting appears. Other neocognitron descendants like the What-Where model go in a different direction. In these models, learning remains unsupervised, but more structure is added to capture invariance to typical changes. Knowing that, we further hypothesize that if we repeat the same experiments with this model, the lack of supervision may make it worse than the typical CNN inside the same data set, but the added structure will make it generalize even better to another one. To put our hypotheses to the test, we choose the simple task of handwritten digit classification and take two well-known data sets: MNIST and ETL-1. To make the two data sets as similar as possible, we experiment with several types of preprocessing. However, regardless of the type in question, the results align exactly with expectation.
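
The cross-data-set protocol described above reduces to training once and then evaluating on a second data set without any retraining; a minimal sketch follows, with the MNIST and ETL-1 data loaders left as placeholders.

```python
# Minimal sketch of cross-data-set evaluation: the model is trained on one digit set
# and then scored on another without retraining, exposing any "hidden overfitting".
# The loader names below are placeholders, not part of the paper's code.
import torch

@torch.no_grad()
def cross_dataset_accuracy(model, loader, device="cpu"):
    """Accuracy of an already-trained model on an unseen digit data set."""
    model.eval()
    correct = total = 0
    for images, labels in loader:
        preds = model(images.to(device)).argmax(dim=1)
        correct += (preds == labels.to(device)).sum().item()
        total += labels.numel()
    return correct / total

# acc_same  = cross_dataset_accuracy(cnn, mnist_test_loader)
# acc_cross = cross_dataset_accuracy(cnn, etl1_test_loader)  # no retraining in between
```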


2019 ◽  
Vol 154 ◽  
pp. 369-376 ◽  
Author(s):  
Mehmet Oguz Kelek ◽  
Nurullah Calik ◽  
Tulay Yildirim

2018 ◽  
Vol 15 (9) ◽  
pp. 1451-1455 ◽  
Author(s):  
Grant J. Scott ◽  
Kyle C. Hagan ◽  
Richard A. Marcum ◽  
James Alex Hurt ◽  
Derek T. Anderson ◽  
...  
