Deep Learning for Three-Dimensional Volumetric Recovery of Cloud Fields

Author(s):  
Yael Sde-Chen ◽  
Yoav Y. Schechner ◽  
Vadim Holodovsky ◽  
Eshkol Eytan

Clouds are a key factor in Earth's energy budget and thus significantly affect climate and weather predictions. These effects are dominated by shallow warm clouds (Sherwood et al., 2014; Zelinka et al., 2020), which tend to be small and heterogeneous. Therefore, remote sensing of clouds and three-dimensional (3D) volumetric reconstruction of their internal properties are of significant importance.

Recovery of the volumetric information of clouds relies on 3D radiative transfer, which models 3D multiple scattering. This model is complex and nonlinear, so inverting it poses a major challenge and typically requires a simplification. A common relaxation assumes that clouds are horizontally uniform and infinitely broad, leading to one-dimensional modeling. However, this assumption is generally invalid, since clouds are naturally highly heterogeneous. A novel alternative is to perform cloud retrieval using tools of 3D scattering tomography. Then, multiple satellite images of the clouds are acquired from different points of view. For example, simultaneous multi-view radiometric images of clouds are proposed by the CloudCT project, funded by the ERC. Unfortunately, 3D scattering tomography requires high computational resources. In practice, this results in slow run times and prevents large-scale analysis. Moreover, existing scattering tomography is based on iterative optimization, which is sensitive to initialization.

In this work we introduce a deep neural network for 3D volumetric reconstruction of clouds. In recent years, supervised learning using deep neural networks has led to remarkable results in fields ranging from computer vision to medical imaging. However, these deep learning techniques have not been extensively studied in the context of volumetric atmospheric science, and specifically cloud research.

We present a convolutional neural network (CNN) whose architecture is inspired by the physical nature of clouds. Due to the lack of real-world datasets, we train the network in a supervised manner using a physics-based simulator that generates realistic volumetric cloud fields. In addition, we propose a hybrid approach that combines the proposed neural network with an iterative physics-based optimization technique.

We demonstrate the recovery performance of our proposed method on cloud fields. At a single-cloud scale, our resulting quality is comparable to state-of-the-art methods, while run time improves by orders of magnitude. In contrast to existing physics-based methods, our network offers scalability, which enables the reconstruction of wider cloud fields. Finally, we show that the hybrid approach leads to improved retrieval in a fast process.
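The hybrid idea described above (a fast network estimate used to initialize a physics-based iterative refinement) can be illustrated on a toy inverse problem. Everything below is a hypothetical stand-in: the linear forward operator, the ridge-regression "network", and the step size have nothing to do with the paper's actual 3D radiative-transfer model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "forward model" standing in for 3D radiative transfer:
# images = A @ volume (the real model is nonlinear multiple scattering).
A = rng.normal(size=(40, 20))
true_volume = np.abs(rng.normal(size=20))
images = A @ true_volume

def network_guess(y):
    """Stand-in for the CNN: a cheap regularized least-squares estimate."""
    return np.linalg.solve(A.T @ A + 1e-1 * np.eye(20), A.T @ y)

def refine(x0, y, steps=200, lr=1e-3):
    """Iterative physics-based refinement: gradient descent on the data fit."""
    x = x0.copy()
    for _ in range(steps):
        x -= lr * A.T @ (A @ x - y)  # gradient of 0.5 * ||A x - y||^2
    return x

x_net = network_guess(images)     # fast initial reconstruction
x_hybrid = refine(x_net, images)  # refined by the physics loop

err_net = np.linalg.norm(x_net - true_volume)
err_hybrid = np.linalg.norm(x_hybrid - true_volume)
print(err_hybrid < err_net)  # refinement should reduce the error here
```

The point of the sketch is the division of labor: the learned estimator supplies a good initialization cheaply, and the sensitive iterative optimization then starts close to the answer.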

Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2852
Author(s):  
Parvathaneni Naga Srinivasu ◽  
Jalluri Gnana SivaSai ◽  
Muhammad Fazal Ijaz ◽  
Akash Kumar Bhoi ◽  
Wonjoon Kim ◽  
...  

Deep learning models are efficient at learning features that help in understanding complex patterns precisely. This study proposes a computerized process for classifying skin disease through deep learning based on MobileNet V2 and Long Short-Term Memory (LSTM). The MobileNet V2 model proved efficient, with good accuracy, and can run on lightweight computational devices, while the proposed model maintains stateful information for precise predictions. A grey-level co-occurrence matrix is used to assess the progress of diseased growth. The performance has been compared against other state-of-the-art models such as Fine-Tuned Neural Networks (FTNN), Convolutional Neural Networks (CNN), the Very Deep Convolutional Networks for Large-Scale Image Recognition developed by the Visual Geometry Group (VGG), and a convolutional neural network architecture extended with a few changes. On the HAM10000 dataset, the proposed method outperformed the other methods with more than 85% accuracy. It recognizes the affected region faster, with almost 2× fewer computations than the conventional MobileNet model, resulting in minimal computational effort. Furthermore, a mobile application is designed for instant and proper action; it helps patients and dermatologists identify the type of disease from an image of the affected region at the initial stage of the skin disease. These findings suggest that the proposed system can help general practitioners efficiently and effectively diagnose skin conditions, thereby reducing further complications and morbidity.
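A grey-level co-occurrence matrix (GLCM), mentioned above for assessing diseased growth, counts how often pairs of pixel intensities occur at a fixed spatial offset; texture statistics such as contrast are then derived from it. A minimal numpy sketch follows; the offset and number of grey levels are arbitrary illustrative choices, not the paper's settings.

```python
import numpy as np

def glcm(image, levels=4, offset=(0, 1)):
    """Grey-level co-occurrence matrix for one pixel offset (dy, dx)."""
    dy, dx = offset
    h, w = image.shape
    m = np.zeros((levels, levels), dtype=np.int64)
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            m[image[y, x], image[y + dy, x + dx]] += 1
    return m

def contrast(m):
    """GLCM contrast: sum of p(i, j) * (i - j)^2 over the normalized matrix."""
    p = m / m.sum()
    i, j = np.indices(m.shape)
    return float((p * (i - j) ** 2).sum())

# Tiny quantized image (grey levels 0..3).
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
m = glcm(img)
print(m.sum())       # 12 horizontal pixel pairs in a 4x4 image
print(contrast(m))
```

In practice a library routine (e.g. scikit-image's `graycomatrix`) would be used, but the counting logic is exactly this.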


Author(s):  
Yuheng Hu ◽  
Yili Hong

Residents often rely on newspapers and television to gather hyperlocal news for community awareness and engagement. More recently, social media have emerged as an increasingly important source of hyperlocal news. Thus far, the literature on using social media to create desirable societal benefits, such as civic awareness and engagement, is still in its infancy. One key challenge in this research stream is to timely and accurately distill information from noisy social media data streams to community members. In this work, we develop SHEDR (social media–based hyperlocal event detection and recommendation), an end-to-end neural event detection and recommendation framework with a particular use case for Twitter to facilitate residents’ information seeking of hyperlocal events. The key model innovation in SHEDR lies in the design of the hyperlocal event detector and the event recommender. First, we harness the power of two popular deep neural network models, the convolutional neural network (CNN) and long short-term memory (LSTM), in a novel joint CNN-LSTM model to characterize spatiotemporal dependencies for capturing unusualness in a region of interest, which is classified as a hyperlocal event. Next, we develop a neural pairwise ranking algorithm for recommending detected hyperlocal events to residents based on their interests. To alleviate the sparsity issue and improve personalization, our algorithm incorporates several types of contextual information covering topic, social, and geographical proximities. We perform comprehensive evaluations based on two large-scale data sets comprising geotagged tweets covering Seattle and Chicago. We demonstrate the effectiveness of our framework in comparison with several state-of-the-art approaches. We show that our hyperlocal event detection and recommendation models consistently and significantly outperform other approaches in terms of precision, recall, and F-1 scores. 
Summary of Contribution: In this paper, we focus on a novel and important, yet largely underexplored application of computing—how to improve civic engagement in local neighborhoods via local news sharing and consumption based on social media feeds. To address this question, we propose two new computational and data-driven methods: (1) a deep learning–based hyperlocal event detection algorithm that scans spatially and temporally to detect hyperlocal events from geotagged Twitter feeds; and (2) a personalized deep learning–based hyperlocal event recommender system that systematically integrates several contextual cues, such as topical, geographical, and social proximity, to recommend the detected hyperlocal events to potential users. We conduct a series of experiments to examine our proposed models. The outcomes demonstrate that our algorithms are significantly better than the state-of-the-art models and can provide users with more relevant information about the local neighborhoods they live in, which in turn may boost their community engagement.
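The pairwise-ranking idea behind the recommender (an engaged user–event pair should outscore an unobserved pair, scored from topical, social, and geographical proximities) can be sketched as follows. The features, sampling, and learning rate are illustrative only, not SHEDR's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative contextual features per (user, event) pair:
# [topical proximity, social proximity, geographical proximity], each in [0, 1].
pos = rng.uniform(0.5, 1.0, size=(100, 3))  # events the user engaged with
neg = rng.uniform(0.0, 0.5, size=(100, 3))  # sampled unengaged events

w = np.zeros(3)  # linear scoring weights over the proximity features

def pairwise_loss(w):
    """BPR-style ranking loss: positive pairs should outscore negatives."""
    margin = pos @ w - neg @ w
    return float(np.mean(np.log1p(np.exp(-margin))))

# Gradient descent on the pairwise ranking loss.
for _ in range(300):
    margin = pos @ w - neg @ w
    g = -((1.0 / (1.0 + np.exp(margin)))[:, None] * (pos - neg)).mean(axis=0)
    w -= 0.5 * g

print(pairwise_loss(w) < np.log(2))  # log(2) is the loss at w = 0
```

The real system replaces the linear scorer with a neural network and draws negatives from undetected or unclicked events, but the ranking objective has this shape.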


2021 ◽  
Vol 10 (9) ◽  
pp. 25394-25398
Author(s):  
Chitra Desai

Deep learning models have demonstrated improved efficacy in image classification since the ImageNet Large Scale Visual Recognition Challenge began in 2010. Image classification in computer vision has been further advanced by the advent of transfer learning. Training a model on a huge dataset demands substantial computational resources and adds considerable cost to learning. Transfer learning reduces the cost of learning and helps avoid reinventing the wheel. Several pretrained models, such as VGG16, VGG19, ResNet50, InceptionV3, and EfficientNet, are widely used. This paper demonstrates image classification using the pretrained deep neural network model VGG16, which is trained on images from the ImageNet dataset. After obtaining the convolutional base model, a new deep neural network model is built on top of it for image classification based on a fully connected network. This classifier uses features extracted from the convolutional base model.
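The pattern described (a frozen convolutional base feeding a newly trained fully connected classifier) can be sketched without the actual VGG16 weights. Below, a fixed random ReLU projection is a deliberate stand-in for the pretrained base; only the new head is updated, which is the essence of feature-extraction transfer learning.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for the frozen convolutional base: a fixed random ReLU projection,
# never updated. In the real setup this would be VGG16's convolutional
# layers carrying ImageNet weights, marked untrainable.
dim, n_feat = 16, 8
W_base = rng.normal(size=(dim, n_feat))

def base_features(x):
    return np.maximum(x @ W_base / np.sqrt(dim), 0.0)

# Toy two-class data standing in for flattened images.
x = rng.normal(size=(200, dim))
y = (x[:, 0] > 0).astype(float)

# Trainable fully connected head on top of the frozen features.
w, b = np.zeros(n_feat), 0.0
losses = []
f = base_features(x)  # features extracted once; the base never changes
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(f @ w + b)))   # sigmoid output of the head
    losses.append(float(np.mean(-y * np.log(p + 1e-12)
                                - (1 - y) * np.log(1 - p + 1e-12))))
    g = p - y                                 # logistic-loss gradient
    w -= 0.1 * (f.T @ g) / len(y)             # only the head is updated
    b -= 0.1 * g.mean()

print(losses[0] > losses[-1])  # the new head learns from frozen features
```

With Keras the same structure is typically two lines of intent: load the base with `include_top=False`, set it untrainable, and stack a dense classifier on top.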


Electronics ◽  
2021 ◽  
Vol 10 (22) ◽  
pp. 2868
Author(s):  
Wenxuan Zhao ◽  
Yaqin Zhao ◽  
Liqi Feng ◽  
Jiaxi Tang

The purpose of image dehazing is to reduce the image degradation caused by suspended particles, in support of high-level visual tasks. Besides the atmospheric scattering model, convolutional neural networks (CNNs) have been used for image dehazing. However, existing image dehazing algorithms are limited when faced with unevenly distributed haze and dense haze in real-world scenes. In this paper, we propose a novel end-to-end convolutional neural network called the attention-enhanced serial Unet++ dehazing network (AESUnet) for single image dehazing. We build a serial Unet++ structure that adopts a serial strategy of two pruned Unet++ blocks based on residual connections. Compared with a simple encoder–decoder structure, the serial Unet++ module makes better use of the features extracted by the encoders and promotes contextual information fusion at different resolutions. In addition, we make several improvements to the Unet++ module, such as pruning, introducing a convolutional module with a ResNet structure, and a residual learning strategy. Thus, the serial Unet++ module can generate more realistic images with less color distortion. Furthermore, following the serial Unet++ blocks, an attention mechanism is introduced to pay different attention to haze regions of different concentrations by learning weights in the spatial and channel domains. Experiments are conducted on two representative datasets: the large-scale synthetic dataset RESIDE and the small-scale real-world datasets I-HAZY and O-HAZY. The experimental results show that the proposed dehazing network is not only comparable to state-of-the-art methods on the RESIDE synthetic dataset, but also surpasses them by a very large margin on the I-HAZY and O-HAZY real-world datasets.
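The attention idea (learned per-channel and per-pixel weights that emphasize regions of denser haze) can be sketched in numpy. The pooling-plus-sigmoid gating below is a generic formulation of channel and spatial attention, not the exact AESUnet module.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(feat, w):
    """Reweight channels using globally pooled statistics.

    feat: (C, H, W) feature map; w: (C, C) learned projection."""
    pooled = feat.mean(axis=(1, 2))      # global average pooling -> (C,)
    gate = sigmoid(w @ pooled)           # per-channel weights in (0, 1)
    return feat * gate[:, None, None]

def spatial_attention(feat, w):
    """Reweight pixels using cross-channel statistics.

    w: scalar weight on the channel-mean map (a 1x1 conv in practice)."""
    pooled = feat.mean(axis=0)           # channel mean -> (H, W)
    gate = sigmoid(w * pooled)           # per-pixel weights in (0, 1)
    return feat * gate[None, :, :]

rng = np.random.default_rng(3)
feat = rng.normal(size=(4, 8, 8))
out = spatial_attention(channel_attention(feat, rng.normal(size=(4, 4))), 1.0)
print(out.shape)  # (4, 8, 8): attention preserves the feature-map shape
```

In a trained network the gates learn to pass dense-haze regions through with higher weight; here the weights are random purely to demonstrate the data flow.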


Author(s):  
Brahim Jabir ◽  
Noureddine Falih

In precision farming, identifying weeds is an essential first step in planning an integrated pest management program in cereals. By knowing the species present, we can determine which herbicides to use to control them, especially in non-weeded crops where mechanical methods (tillage, hand weeding, hoeing, and mowing) are not effective. Deep learning based on convolutional neural networks (CNNs) can therefore help to automatically identify weeds, after which an intelligent system achieves localized spraying of herbicides, avoiding their large-scale use and preserving the environment. In this article we propose a smart system based on object detection models, implemented on a Raspberry Pi, that seeks to identify the presence of relevant objects (weeds) in an area (a wheat crop) in real time and classify those objects for decision support, including spot-spraying with a herbicide chosen according to the weed detected.


Nanomaterials ◽  
2018 ◽  
Vol 8 (9) ◽  
pp. 739 ◽  
Author(s):  
Hiroki Itasaka ◽  
Ken-Ichi Mimura ◽  
Kazumi Kato

Assembly of nanocrystals into ordered two- or three-dimensional arrays is an essential technology to achieve their application in novel functional devices. Among a variety of assembly techniques, evaporation-induced self-assembly (EISA) is one of the prospective approaches because of its simplicity. Although EISA has shown its potential to form highly ordered nanocrystal arrays, the formation of uniform nanocrystal arrays over large areas remains a challenging subject. Here, we introduce a new EISA method and demonstrate the formation of large-scale highly ordered monolayers of barium titanate (BaTiO3, BT) nanocubes at the air-water interface. In our method, the addition of an extra surfactant to a water surface assists the EISA of BT nanocubes with a size of 15–20 nm into a highly ordered arrangement. We reveal that the compression pressure exerted by the extra surfactant on BT nanocubes during the solvent evaporation is a key factor in the self-assembly in our method. The BT nanocube monolayers transferred to substrates have sizes up to the millimeter scale and a high out-of-plane crystal orientation, containing almost no microcracks and voids.


Author(s):  
Zuo Dai ◽  
Jianzhong Cha

Abstract Artificial neural networks, particularly the Hopfield-Tank network, have been effectively applied to a variety of tasks formulated as large-scale combinatorial optimization problems, such as the Travelling Salesman Problem and the N-Queens Problem [1]. The problem of optimally packing a set of geometries into a space with finite dimensions arises frequently in many applications and is far more difficult than the general NP-complete problems listed in [2]. Until now, within accepted time limits, it could only be solved with heuristic methods for very simple cases (e.g., 2D layout). In this paper we propose a heuristic-based Hopfield neural network designed to solve the rectangular packing problem in two dimensions, which is still NP-complete [3]. By comparing the adequacy and efficiency of the results with those obtained by several other exact and heuristic approaches, we conclude that the proposed method has great potential for solving 2D packing problems.
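The Hopfield-style formulation encodes constraints as an energy function over binary units and descends that energy. A toy assignment constraint (each of N items occupies exactly one of N slots) can be sketched as below; the penalty coefficients and update rule are illustrative, not the paper's packing formulation.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 5  # N items, N slots; v[i, j] = 1 means item i occupies slot j

def energy(v, A=2.0, B=2.0):
    """Penalty energy: each row and each column should sum to exactly 1."""
    return float(A * ((v.sum(axis=1) - 1) ** 2).sum()
                 + B * ((v.sum(axis=0) - 1) ** 2).sum())

v = (rng.uniform(size=(N, N)) > 0.5).astype(float)
e = energy(v)

# Asynchronous updates: flip a random unit, keep the flip only if the
# energy does not increase (greedy descent with plateau moves).
for _ in range(200):
    i, j = rng.integers(N), rng.integers(N)
    v[i, j] = 1.0 - v[i, j]
    e_new = energy(v)
    if e_new <= e:
        e = e_new
    else:
        v[i, j] = 1.0 - v[i, j]  # reject the flip

print(e)  # energy never increases; 0 would mean a valid one-to-one assignment
```

A packing network adds further energy terms for overlap and boundary violations; the heuristic guidance in the paper steers such a descent away from poor local minima.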


2020 ◽  
Vol 2020 ◽  
pp. 1-13 ◽  
Author(s):  
Jordan Ott ◽  
Mike Pritchard ◽  
Natalie Best ◽  
Erik Linstead ◽  
Milan Curcic ◽  
...  

Implementing artificial neural networks is commonly achieved via high-level programming languages such as Python and easy-to-use deep learning libraries such as Keras. These software libraries come preloaded with a variety of network architectures, provide autodifferentiation, and support GPUs for fast and efficient computation. As a result, a deep learning practitioner will favor training a neural network model in Python, where these tools are readily available. However, many large-scale scientific computation projects are written in Fortran, making it difficult to integrate with modern deep learning methods. To alleviate this problem, we introduce a software library, the Fortran-Keras Bridge (FKB). This two-way bridge connects environments where deep learning resources are plentiful with those where they are scarce. The paper describes several unique features offered by FKB, such as customizable layers, loss functions, and network ensembles. The paper concludes with a case study that applies FKB to address open questions about the robustness of an experimental approach to global climate simulation, in which subgrid physics are outsourced to deep neural network emulators. In this context, FKB enables a hyperparameter search of one hundred plus candidate models of subgrid cloud and radiation physics, initially implemented in Keras, to be transferred and used in Fortran. Such a process allows the model’s emergent behavior to be assessed, i.e., when fit imperfections are coupled to explicit planetary-scale fluid dynamics. The results reveal a previously unrecognized strong relationship between offline validation error and online performance, in which the choice of the optimizer proves unexpectedly critical. This in turn reveals many new neural network architectures that produce considerable improvements in climate model stability including some with reduced error, for an especially challenging training dataset.
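The offline stage of the workflow FKB supports (train many Keras candidates, rank them by offline validation error, then port the best to Fortran) can be sketched in outline. The configuration space and the toy error function below are purely illustrative; only the observation that the optimizer choice matters is taken from the study.

```python
import random

random.seed(0)

# Illustrative hyperparameter space for candidate emulator networks.
SPACE = {
    "layers": [2, 4, 8],
    "width": [32, 64, 128, 256],
    "optimizer": ["sgd", "adam", "rmsprop"],  # the study found this critical
}

def offline_error(cfg):
    """Stand-in for training a candidate and measuring validation error."""
    base = {"sgd": 0.30, "adam": 0.10, "rmsprop": 0.15}[cfg["optimizer"]]
    return base + 0.01 * cfg["layers"] / cfg["width"] * 32

def random_search(n_trials=100):
    """Sample configurations at random and keep the lowest-error one."""
    best_cfg, best_err = None, float("inf")
    for _ in range(n_trials):
        cfg = {k: random.choice(v) for k, v in SPACE.items()}
        err = offline_error(cfg)
        if err < best_err:
            best_cfg, best_err = cfg, err
    return best_cfg, best_err

cfg, err = random_search()
print(cfg["optimizer"])  # the search settles on the lowest-error optimizer
```

In the real pipeline each trial trains a Keras model, and the winning architecture's weights are converted for use inside the Fortran climate model, where its online (coupled) behavior is then assessed.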


2019 ◽  
Vol 277 ◽  
pp. 02007
Author(s):  
Qingzhi Zhang ◽  
Panfeng Wu ◽  
Xiaohui Du ◽  
Hualiang Sun ◽  
Lijia Yu

With the extensive application of deep learning in the field of human rehabilitation, skeleton-based rehabilitation recognition is receiving increasing attention, supported by large-scale skeleton datasets. The key factors in this task are the intra-frame representation of joint co-occurrences and the inter-frame representation. In this paper, an inter-frame representation method based on RNNs is proposed. The position of each joint is encoded, and the encodings are assembled into semantic representations in both the spatial and temporal domains. We introduce a global spatial aggregation scheme, which is able to learn superior joint co-occurrence features compared with local aggregation.

