Extensibility of U-Net Neural Network Model for Hydrographic Feature Extraction and Implications for Hydrologic Modeling

2021 ◽  
Vol 13 (12) ◽  
pp. 2368
Author(s):  
Lawrence V. Stanislawski ◽  
Ethan J. Shavers ◽  
Shaowen Wang ◽  
Zhe Jiang ◽  
E. Lynn Usery ◽  
...  

Accurate maps of regional surface water features are integral for advancing ecologic, atmospheric, and land development studies. The only comprehensive surface water feature map of Alaska is the National Hydrography Dataset (NHD). NHD features are often digitized representations of historic topographic map blue lines and may be outdated. Here we test deep learning methods to automatically extract surface water features from airborne interferometric synthetic aperture radar (IfSAR) data in order to update and validate Alaska hydrographic databases. U-net artificial neural networks (ANNs) and high-performance computing (HPC) are used for supervised hydrographic feature extraction within a study area comprising 50 contiguous watersheds in Alaska. Surface water features derived from elevation through automated flow-routing and manual editing are used as training data. Model extensibility is tested with a series of 16 U-net models trained with increasing percentages of the study area, from about 3 to 35 percent. Hydrography is predicted by each of the models for all watersheds not used in training. Input raster layers are derived from digital terrain models, digital surface models, and intensity images from the IfSAR data. Results indicate that about 15 percent of the study area is required to optimally train the ANN to extract hydrography, at which point F1-scores for tested watersheds average between 66 and 68 percent. Little benefit is gained by training beyond 15 percent of the study area. Fully connected hydrographic networks are generated from the U-net predictions using a novel method that constrains a D-8 flow-routing algorithm to follow the U-net predictions. This work demonstrates the ability of deep learning to derive surface water feature maps from complex terrain over a broad area.
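The final step, constraining D-8 flow routing to follow model predictions, can be sketched as follows. This is a minimal illustration of the D-8 idea, not the paper's implementation: each cell drains to its steepest-descent neighbor, and cells flagged as water by the network are made artificially lower so derived flow paths follow the predictions. The function name `d8_flow` and the `water_bonus` parameter are assumptions for illustration.

```python
import numpy as np

# Eight neighbor offsets for D-8 flow routing (row, col).
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def d8_flow(dem, water_mask, water_bonus=0.5):
    """For each interior cell, pick the neighbor index with steepest descent.
    Cells flagged as water are lowered by water_bonus so that flow is
    biased to follow the predicted channel network. -1 marks pits/edges."""
    eff = dem - water_bonus * water_mask   # lower predicted-water cells
    rows, cols = dem.shape
    direction = -np.ones((rows, cols), dtype=int)
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            drops = [eff[r, c] - eff[r + dr, c + dc] for dr, dc in OFFSETS]
            best = int(np.argmax(drops))
            if drops[best] > 0:            # only route if there is descent
                direction[r, c] = best
    return direction
```

On a flat surface, only the water bonus creates descent, so flow follows the predicted channel; on sloped terrain the bonus merely nudges the steepest-descent choice.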

2020 ◽  
Vol 10 (16) ◽  
pp. 5582
Author(s):  
Xiaochen Yuan ◽  
Tian Huang

In this paper, a novel approach that uses a deep learning technique is proposed to detect and identify a variety of image operations. First, we propose the spatial domain-based nonlinear residual (SDNR) feature extraction method, which constructs residual values from locally supported filters in the spatial domain. Applying minimum and maximum operators introduces diversity and nonlinearity; moreover, this construction brings asymmetry to the distribution of SDNR samples. Then, we propose applying a deep learning technique to the extracted SDNR features to detect and classify a variety of image operations. Many experiments have been conducted to verify the performance of the proposed approach, and the results indicate that the proposed method performs well in detecting and identifying common image postprocessing operations. Furthermore, comparisons with existing methods show the superiority of the proposed approach.
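The min/max residual construction can be illustrated with a small sketch. The two directional prediction-error filters below are assumptions chosen for illustration, not the paper's exact filters; the point is that taking elementwise minimum and maximum of two residual maps yields nonlinear, asymmetrically distributed features.

```python
import numpy as np

def minmax_residuals(img):
    """Compute two locally supported prediction residuals (horizontal and
    vertical second differences), then combine them with min/max operators
    in the spirit of SDNR features. Returns (min_map, max_map)."""
    img = img.astype(float)
    rh = img[:, 1:-1] - 0.5 * (img[:, :-2] + img[:, 2:])   # horizontal residual
    rv = img[1:-1, :] - 0.5 * (img[:-2, :] + img[2:, :])   # vertical residual
    rh = rh[1:-1, :]                                       # crop both to the
    rv = rv[:, 1:-1]                                       # common interior
    return np.minimum(rh, rv), np.maximum(rh, rv)
```

Because min and max are not symmetric around zero, the two output maps carry complementary, asymmetric information about local texture, which is the property the abstract highlights.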


2019 ◽  
Author(s):  
Evan C Carter

Meta-analysis represents the promise of cumulative science: that each successive study brings us greater understanding of a given phenomenon. As such, meta-analyses are highly influential and gaining in popularity. However, there are well-known threats to the validity of meta-analytic results, such as publication bias and questionable research practices, which can cause researchers to massively overestimate the evidence in support of a claim. There are many statistical methods to correct for such bias, but no single method has been found to be robust in all realistic conditions. Here, I describe a method that merges statistical simulation and deep learning to achieve an unprecedented level of robust meta-analytic estimation in the face of numerous forms of bias and other historically problematic conditions. Furthermore, the resulting estimator, called DeepMA, has the unique property that it can easily evolve: as new conditions requiring robustness are identified, DeepMA can be re-trained to maintain high performance. Given the weaknesses that have been identified for meta-analysis, the current consensus is that it should serve as simply another data point rather than residing at the top of the hierarchy of evidence. The novel approach I describe, however, holds the potential to eliminate these weaknesses, possibly solidifying meta-analysis as the platinum standard in scientific debate.
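The simulation half of such a method can be sketched with a toy model of publication bias: studies with nonsignificant effects are less likely to enter the literature, so the naive meta-analytic mean overestimates the true effect. All parameters below (effect size, noise, the 0.25 significance stand-in, the publication probability) are illustrative assumptions, not values from the paper; a DeepMA-style estimator would be trained on many such simulated literatures.

```python
import random
import statistics

def simulate_literature(true_effect, n_studies, pub_prob_null=0.2, seed=0):
    """Simulate a published literature under publication bias: significant
    results always publish; nonsignificant ones publish with low probability."""
    rng = random.Random(seed)
    published = []
    while len(published) < n_studies:
        est = rng.gauss(true_effect, 0.3)   # one study's noisy effect estimate
        significant = est > 0.25            # crude stand-in for p < .05
        if significant or rng.random() < pub_prob_null:
            published.append(est)
    return published

# The naive average of published effects overshoots the true effect of 0.1.
naive_mean = statistics.mean(simulate_literature(0.1, 500))
```

Generating labeled pairs of (simulated biased literature, known true effect) is what makes supervised training of a bias-robust estimator possible.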


2019 ◽  
Vol 11 (5) ◽  
pp. 593 ◽  
Author(s):  
Andy Hardy ◽  
Georgina Ettritch ◽  
Dónall Cross ◽  
Pete Bunting ◽  
Francis Liywalii ◽  
...  

Providing timely and accurate maps of surface water is valuable for mapping malaria risk and targeting disease control interventions. Radar satellite remote sensing has the potential to provide this information, but current approaches are not suitable for mapping African malarial mosquito aquatic habitats, which tend to be highly dynamic and often contain emergent vegetation. We present a novel approach for mapping both open and vegetated water bodies using serial Sentinel-1 imagery for Western Zambia. This region is dominated by the seasonally inundated Upper Zambezi floodplain, which suffers from a number of public health challenges. The approach uses open-source segmentation and machine learning (an extra trees classifier), applied to training data that are automatically derived from freely available ancillary data. Refinement is implemented through a consensus approach and Otsu thresholding to eliminate false positives due to dry, flat, sandy areas. The results indicate a high degree of accuracy (mean overall accuracy 92%, standard deviation 3.6%), providing a tractable solution for operationally mapping water bodies in similar large, unforested river floodplain environments. For the period studied, 70% of the total water extent mapped was attributed to vegetated water, highlighting the importance of mapping both open and vegetated water bodies for surface water mapping.


Author(s):  
Nikhil Krishnaswamy ◽  
Scott Friedman ◽  
James Pustejovsky

Many modern machine learning approaches require vast amounts of training data to learn new concepts; conversely, human learning often requires few examples, sometimes only one, from which the learner can abstract structural concepts. We present a novel approach to introducing new spatial structures to an AI agent, combining deep learning over qualitative spatial relations with various heuristic search algorithms. The agent extracts spatial relations from a sparse set of noisy examples of block-based structures and trains convolutional and sequential models on those relation sets. To create novel examples of similar structures, the agent begins placing blocks on a virtual table; after each placement it uses a CNN to predict the most similar complete example structure, an LSTM to predict the most likely set of remaining moves needed to complete it, and heuristic search to recommend one of those moves. We verify that the agent has learned the concept by observing its virtual block-building activities, wherein it ranks each potential subsequent action toward building its learned concept. We empirically assess this approach with human participants' ratings of the block structures. Initial results and qualitative evaluations of structures generated by the trained agent show where it has generalized concepts from the training data, which heuristics perform best within the search space, and how we might improve learning and execution.
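The ranking-and-recommendation step can be sketched abstractly. In the paper the scores come from the CNN/LSTM models; here a simple set-overlap heuristic against a target structure stands in for them, and the function and cell coordinates are invented for illustration.

```python
# Hypothetical sketch: score each candidate block placement against a target
# structure (sets of grid cells) and recommend the best-scoring move.

def recommend_move(placed, candidates, target):
    """Return the candidate cell that best advances the partial structure
    toward the target: reward overlap, penalize off-target blocks."""
    def score(cell):
        trial = placed | {cell}
        return len(trial & target) - len(trial - target)
    return max(candidates, key=score)

target = {(0, 0), (1, 0), (2, 0)}       # a 3-block column as the learned concept
placed = {(0, 0)}                       # one block already on the table
move = recommend_move(placed, [(1, 0), (5, 5), (2, 0)], target)
```

In the actual system this greedy scorer would be replaced by model-driven rankings searched heuristically, but the control flow (enumerate candidates, score, recommend) is the same.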


Author(s):  
Y. Li ◽  
M. Sakamoto ◽  
T. Shinohara ◽  
T. Satoh

Abstract. Label placement is one of the most essential tasks in the fields of cartography and geographic information systems. Numerous studies have been conducted on automatic label placement over the past few decades. In this study, we focus on automatic label placement for area-features, which has been studied relatively less than that for point-features and line-features. Most existing approaches adopt rule-based algorithms, and there are limits to expressing the characteristics of label placement for area-features of various shapes using handcrafted rules, criteria, objective functions, etc. Hence, we propose a novel approach to automatic label placement for area-features based on deep learning. The aim of the proposed approach is to learn the complex and implicit characteristics of manual area-feature label placement directly and automatically from training data. First, the area-features in vector format are converted into a binary image. Then a key-point detection model, which simultaneously detects and localizes specific key-points in an image, is applied to the binary image to estimate candidate label positions. Finally, the label placement position for each area-feature is determined via simple post-processing. To evaluate the proposed approach, experiments with cadastral data were conducted. The experimental results show that the ratios of estimation errors within 1.2 m (corresponding to one pixel of the input image) were 92.6% and 94.5% for the center and upper-left placement styles, respectively. This implies that the proposed approach can place labels for area-features automatically and accurately.
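For intuition about the center placement style, a classical non-learned baseline is to place the label at the interior cell farthest from the feature boundary. The brute-force sketch below stands in for the learned key-point detector and is an illustrative assumption, not the paper's method.

```python
import numpy as np

def center_label_point(mask):
    """Given a binary area-feature image, return the interior cell that
    maximizes distance to the nearest non-feature cell (a simple
    'center' label position). Brute force; fine for small rasters."""
    inside = np.argwhere(mask)
    outside = np.argwhere(~mask)
    # Pairwise distances from every interior cell to every exterior cell.
    d = np.sqrt(((inside[:, None, :] - outside[None, :, :]) ** 2).sum(-1)).min(1)
    return tuple(inside[np.argmax(d)])
```

A learned detector replaces this geometric rule precisely because cartographers' actual placements encode conventions that a single distance criterion cannot capture.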


Author(s):  
Shamik Tiwari

Epiluminescence microscopy, or more simply dermatoscopy, is a process that uses imaging to examine skin lesions. Various sorts of skin ailments, for example melanoma, may be differentiated via these skin images. Because malignant melanoma can be fatal, an early diagnosis can affect the survival, length, and quality of life of the affected patient. Image recognition-based detection of different tissue classes is significant for implementing computer-aided diagnosis via histological images. Conventional image recognition requires handcrafted feature extraction before the application of machine learning. Today, deep learning offers significant alternatives, as advances in artificial learning overcome the complications of handcrafted feature extraction methods. A deep learning-based approach for the recognition of melanoma via a Capsule network is proposed here. The novel approach is compared with a multi-layer perceptron and a convolutional network, with the Capsule network model yielding a classification accuracy of 98.9%.
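Capsule networks replace scalar activations with vectors whose length encodes class probability, using the "squash" nonlinearity from Sabour et al.'s original capsule formulation: short vectors shrink toward zero, long vectors approach unit length. A NumPy sketch of that one building block (not the full melanoma classifier) is:

```python
import numpy as np

def squash(v, axis=-1, eps=1e-8):
    """Capsule squash nonlinearity: preserves vector direction while
    mapping length into (0, 1), so length can act as a probability."""
    sq = (v ** 2).sum(axis=axis, keepdims=True)      # squared length
    return (sq / (1.0 + sq)) * v / np.sqrt(sq + eps) # scale toward unit norm
```

This length-as-probability property is what lets a capsule layer report "melanoma present" as the norm of its output vector.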


Author(s):  
K. Suzuki ◽  
M. Claesen ◽  
H. Takeda ◽  
B. De Moor

Deep learning has recently been in the spotlight owing to its victories at major competitions, which has pushed 'shallow' machine learning methods, the relatively simple and handy algorithms commonly used by industrial engineers, into the background despite their advantages, such as the small amount of time and data required for training. Taking a practical point of view, we use shallow learning algorithms to construct a learning pipeline that operators can apply without special knowledge, an expensive computation environment, or a large amount of labelled data. The proposed pipeline automates the whole classification process: feature selection, feature weighting, and the selection of the most suitable classifier with optimized hyperparameters. The configuration uses particle swarm optimization, a well-known metaheuristic algorithm chosen for its generally fast and fine optimization, which enables us not only to optimize (hyper)parameters but also to determine appropriate features and a classifier for the problem, choices that have conventionally been made a priori from domain knowledge or handled with naive methods such as grid search. In experiments with the MNIST and CIFAR-10 datasets, common computer vision benchmarks for character recognition and object recognition respectively, our automated learning approach provides high performance considering its simple, non-specialized setting, small amount of training data, and practical learning time. Moreover, the performance remains robust almost without modification even on a remote sensing object recognition problem, which indicates that our approach is likely to contribute to general classification problems.
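The particle swarm optimization at the heart of the pipeline can be sketched in a few lines. Here it minimizes a 1-D quadratic standing in for validation error over a hyperparameter; the inertia and acceleration coefficients are the commonly used defaults, not necessarily the paper's settings.

```python
import random

def pso(objective, lo, hi, n_particles=10, iters=50, seed=1):
    """Minimal 1-D particle swarm optimizer: particles track their own best
    position and are pulled toward the swarm-wide best."""
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    best = pos[:]                              # per-particle best positions
    gbest = min(best, key=objective)           # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vel[i] = (0.7 * vel[i]                       # inertia
                      + 1.5 * r1 * (best[i] - pos[i])    # cognitive pull
                      + 1.5 * r2 * (gbest - pos[i]))     # social pull
            pos[i] = min(max(pos[i] + vel[i], lo), hi)   # clamp to bounds
            if objective(pos[i]) < objective(best[i]):
                best[i] = pos[i]
        gbest = min(best, key=objective)
    return gbest

# Stand-in "validation error" with optimum at x = 3.
opt = pso(lambda x: (x - 3.0) ** 2, -10, 10)
```

In the full pipeline, the particle position would encode discrete choices (which features, which classifier) alongside continuous hyperparameters, with cross-validation error as the objective.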


2021 ◽  
Vol 11 (21) ◽  
pp. 10462
Author(s):  
Omar Aboulola ◽  
Mashael Khayyat ◽  
Basma Al-Harbi ◽  
Mohammed Saleh Ali Muthanna ◽  
Ammar Muthanna ◽  
...  

The emerging technology of the Internet of Connected Vehicles (IoCV) has introduced many new solutions for accident prevention and traffic safety by monitoring the behavior of drivers. Monitoring drivers' behavior to reduce accidents has attracted considerable attention from industry and academic researchers in recent years; however, many issues remain unaddressed owing to limitations in feature extraction. To this end, in this paper we propose the multimodal driver analysis Internet of Connected Vehicles (MODAL-IoCV) approach for analyzing drivers' behavior using a deep learning method. The approach comprises three consecutive phases. In the first phase, a hidden Markov model (HMM) predicts vehicle motion and lane changes. In the second phase, SqueezeNet performs feature extraction from these classes. In the final phase, a tri-agent-based soft actor critic (TA-SAC) performs recommendation and route planning, in which each driver is handled by an edge node for personalized assistance. Detailed experimental results show that the proposed MODAL-IoCV method achieves high performance in terms of latency, accuracy, false alarm rate, and motion prediction error compared to existing works.
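The first phase, HMM-based motion prediction, can be illustrated with a tiny two-state model decoded by the Viterbi algorithm. The states, observations, and probabilities below are invented for illustration; the paper's HMM would be trained on real vehicle data.

```python
import numpy as np

# Hypothetical two-state driving HMM: observation 0 = small lateral offset,
# observation 1 = large lateral offset.
STATES = ["keep_lane", "change_lane"]
start = np.array([0.9, 0.1])
trans = np.array([[0.95, 0.05],
                  [0.30, 0.70]])
emit = np.array([[0.9, 0.1],
                 [0.2, 0.8]])

def viterbi(obs):
    """Most likely hidden state sequence for an observation sequence."""
    logp = np.log(start) + np.log(emit[:, obs[0]])
    back = []
    for o in obs[1:]:
        scores = logp[:, None] + np.log(trans)   # [prev state, next state]
        back.append(scores.argmax(0))            # best predecessor per state
        logp = scores.max(0) + np.log(emit[:, o])
    path = [int(logp.argmax())]
    for bp in reversed(back):                    # backtrack
        path.append(int(bp[path[-1]]))
    return [STATES[s] for s in reversed(path)]
```

A sustained run of large lateral offsets flips the decoded state to a lane change, which downstream phases could then featurize and act on.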


2021 ◽  
Author(s):  
Gargi Mishra ◽  
Supriya Bajpai

It is highly challenging to obtain high performance with limited and unconstrained data in real-time face recognition applications. Sparse approximation is a fast and computationally efficient method for this application, as it requires no training time compared to deep learning methods. It eliminates training by assuming that the test image can be approximated by a sum of individual contributions from training images of different classes, with the class making the maximum contribution being closest to the test image. The efficiency of the sparse approximation method can be further increased by providing high-quality features as input for classification. Hence, we propose integrating a pre-trained CNN architecture to extract highly discriminative features from the image dataset for sparse classification. The proposed approach performs better than existing methods even with one training image per class in complex environments. A highlight of the present approach is that the accuracies obtained on the LFW dataset with one and thirteen training images per class are 84.86% and 96.14%, respectively, whereas existing deep learning methods require large amounts of training data to achieve comparable results.
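The classification rule can be sketched as follows: approximate the test feature vector from each class's training features and assign the class with the smallest residual. This is a minimal residual-based sketch (using least squares per class rather than any particular sparse solver); in the paper the feature vectors would come from the pre-trained CNN, whereas here they are synthetic.

```python
import numpy as np

def classify(x, class_dicts):
    """Assign x to the class whose training features best reconstruct it.
    class_dicts: list of (n_features, n_samples) arrays, one per class."""
    residuals = []
    for A in class_dicts:
        coef, *_ = np.linalg.lstsq(A, x, rcond=None)   # best approximation
        residuals.append(np.linalg.norm(x - A @ coef)) # reconstruction error
    return int(np.argmin(residuals))
```

Because the per-class solve is cheap and needs no training phase, adding a new identity only means adding its feature columns, which is what makes the approach attractive with one image per class.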

