Synthesizing Disparate LiDAR and Satellite Datasets through Deep Learning to Generate Wall-to-Wall Regional Inventories for the Complex, Mixed-Species Forests of the Eastern United States

2021 ◽  
Vol 13 (24) ◽  
pp. 5113
Author(s):  
Elias Ayrey ◽  
Daniel J. Hayes ◽  
John B. Kilbride ◽  
Shawn Fraver ◽  
John A. Kershaw ◽  
...  

Light detection and ranging (LiDAR) has become a commonly used tool for generating remotely sensed forest inventories. However, LiDAR-derived forest inventories have remained uncommon at a regional scale due to varying parameters among LiDAR data acquisitions and the limited availability of sufficient calibration data. Here, we present a model using a 3-D convolutional neural network (CNN), a form of deep learning capable of scanning a LiDAR point cloud, combined with coincident satellite data (spectral, phenology, and disturbance history). We compared this approach to the traditional modeling used for making forest predictions from LiDAR data (height metrics and random forest) and found that the CNN had consistently lower uncertainty. We then applied the CNN to public data over six New England states in the USA, generating maps of 14 forest attributes at a 10 m resolution over 85% of the region. Aboveground biomass estimates produced a root mean square error of 36 Mg ha⁻¹ (44%) and were within the 97.5% confidence interval of independent county-level estimates for 33 of the 38 counties examined (86.8%). CNN predictions for stem density and percent-conifer attributes were moderately successful, while predictions for detailed species groupings were less successful. The approach shows promise for improving the prediction of forest attributes from regional LiDAR data and for combining disparate LiDAR datasets into a common framework for large-scale estimation.
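Below is a minimal sketch (PyTorch) of the kind of model this abstract describes: the plot-level point cloud is voxelized into an occupancy grid, passed through a small 3-D CNN, and concatenated with a vector of satellite-derived covariates before a regression head predicts an attribute such as aboveground biomass. Grid size, layer widths, and variable names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: voxelize a LiDAR point cloud and regress a forest attribute
# (e.g., aboveground biomass) with a small 3-D CNN plus satellite covariates.
# All layer sizes, grid dimensions, and names are illustrative assumptions.
import torch
import torch.nn as nn


def voxelize(points, grid=(32, 32, 32), extent=(10.0, 10.0, 30.0)):
    """Bin an (N, 3) tensor of x, y, z coordinates (meters, plot-local origin)
    into a binary occupancy grid of shape (1, D, H, W)."""
    grid_t = torch.tensor(grid)
    idx = (points / torch.tensor(extent) * grid_t).long()
    idx = torch.minimum(idx, grid_t - 1).clamp(min=0)
    vox = torch.zeros(grid)
    vox[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return vox.unsqueeze(0)


class ForestCNN(nn.Module):
    def __init__(self, n_satellite_features=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Sequential(
            nn.Linear(64 + n_satellite_features, 64), nn.ReLU(),
            nn.Linear(64, 1),  # e.g., aboveground biomass in Mg/ha
        )

    def forward(self, voxels, satellite):
        x = self.conv(voxels).flatten(1)        # (B, 64) point-cloud features
        x = torch.cat([x, satellite], dim=1)    # append spectral/phenology covariates
        return self.head(x).squeeze(1)


# Toy usage with random data
points = torch.rand(1000, 3) * torch.tensor([10.0, 10.0, 30.0])
vox = voxelize(points).unsqueeze(0)             # (1, 1, 32, 32, 32)
sat = torch.rand(1, 8)
print(ForestCNN()(vox, sat).shape)              # torch.Size([1])
```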

2019 ◽  
Author(s):  
Elias Ayrey ◽  
Daniel J. Hayes ◽  
John B. Kilbride ◽  
Shawn Fraver ◽  
John A. Kershaw ◽  
...  

Light detection and ranging (LiDAR) has become a commonly used tool for generating remotely sensed forest inventories. However, LiDAR-derived forest inventories have remained uncommon at a regional scale due to varying parameters between LiDAR datasets, such as pulse density. Here we develop a regional model using a three-dimensional convolutional neural network (CNN), a form of deep learning capable of scanning a LiDAR point cloud as well as coincident satellite data, identifying features useful for predicting forest attributes, and then making a series of predictions. We compare this to the standard modeling approach for making forest predictions from LiDAR data, and find that the CNN outperformed the standard approach by a large margin in many cases. We then apply our model to publicly available data over New England, generating maps of fourteen forest attributes at a 10 m resolution over 85% of the region. Our estimates of attributes that quantify tree size were most successful; in assessing aboveground biomass, for example, we achieved a root mean square error of 36 Mg/ha (44%). Our county-level mapped estimates of biomass were in good agreement with federal estimates. Estimates of attributes quantifying stem density and percent conifer were moderately successful, with a tendency to underestimate extreme values and to show banding in low-density LiDAR acquisitions. Estimates of attributes quantifying detailed species groupings were less successful. Ultimately, we believe these maps will be useful to forest managers, wildlife ecologists, and climate modelers in the region.
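For context, the "standard modeling approach" referred to here typically means summarizing each field plot's LiDAR returns into height metrics and fitting a random forest regressor. A minimal scikit-learn sketch under that assumption, with synthetic data and illustrative metric choices:

```python
# Minimal sketch of the conventional baseline: summarize each plot's LiDAR
# return heights into a few metrics and fit a random forest regressor.
# Data and metric choices are illustrative assumptions, not the authors' pipeline.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score


def height_metrics(z):
    """Common plot-level summaries of return heights z (meters above ground)."""
    return [
        z.mean(), z.std(), np.percentile(z, 25), np.percentile(z, 50),
        np.percentile(z, 75), np.percentile(z, 95), (z > 2.0).mean(),  # canopy-cover proxy
    ]


rng = np.random.default_rng(0)
plots = [rng.gamma(shape=2.0, scale=5.0, size=500) for _ in range(200)]   # fake return heights
X = np.array([height_metrics(z) for z in plots])
y = np.array([z.mean() * 10 + rng.normal(0, 15) for z in plots])          # fake biomass (Mg/ha)

rf = RandomForestRegressor(n_estimators=500, random_state=0)
rmse = -cross_val_score(rf, X, y, scoring="neg_root_mean_squared_error", cv=5).mean()
print(f"cross-validated RMSE: {rmse:.1f} Mg/ha")
```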


2020 ◽  
Vol 9 (5) ◽  
pp. 293 ◽  
Author(s):  
Wouter B. Verschoof-van der Vaart ◽  
Karsten Lambers ◽  
Wojtek Kowalczyk ◽  
Quentin P.J. Bourgeois

This paper presents WODAN2.0, a workflow that uses deep learning for the automated detection of multiple archaeological object classes in LiDAR data from the Netherlands. WODAN2.0 was developed to rapidly and systematically map archaeology in large and complex datasets. To investigate its practical value, a large, random test dataset was developed alongside a small, non-random dataset; the large, random dataset better represents the real-world situation of scarce archaeological objects in different types of complex terrain. To reduce the number of false positives caused by specific regions in the research area, a novel approach called Location-Based Ranking was developed and implemented. Experiments show that WODAN2.0 achieves a performance of circa 70% for barrows and Celtic fields on the small, non-random test dataset, while performance on the large, random test dataset is lower: circa 50% for barrows, circa 46% for Celtic fields, and circa 18% for charcoal kilns. The results show that the introduction of Location-Based Ranking and bagging leads to an improvement in performance of between 17% and 35%. However, WODAN2.0 does not reach or exceed general human performance when compared with the results of a citizen science project conducted in the same research area.
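The abstract does not spell out the mechanics of Location-Based Ranking, but the underlying idea, down-weighting or discarding detections that fall in terrain where an object class is implausible, can be sketched as follows. The detection format, terrain classes, and suitability weights are hypothetical, not the WODAN2.0 implementation.

```python
# Minimal sketch of a location-based re-ranking step: detection confidences are
# down-weighted (or detections dropped) when they fall in terrain classes where
# the target object class is unlikely. All weights and names are illustrative.
import numpy as np

# Per-terrain-class prior weights for, e.g., barrows (hypothetical values).
SUITABILITY = {"heathland": 1.0, "forest": 0.8, "arable": 0.4, "wetland": 0.1}


def location_based_ranking(detections, terrain_map, threshold=0.5):
    """detections: list of dicts with 'bbox' (x0, y0, x1, y1) and 'score';
    terrain_map: 2-D array of terrain-class labels aligned with the image."""
    ranked = []
    for det in detections:
        x0, y0, x1, y1 = det["bbox"]
        cx, cy = int((x0 + x1) / 2), int((y0 + y1) / 2)
        weight = SUITABILITY.get(terrain_map[cy, cx], 0.5)
        score = det["score"] * weight
        if score >= threshold:
            ranked.append({**det, "score": score})
    return sorted(ranked, key=lambda d: d["score"], reverse=True)


# Toy usage: only the detection in suitable terrain survives re-ranking.
terrain = np.full((100, 100), "heathland", dtype=object)
terrain[:, 50:] = "wetland"
dets = [{"bbox": (10, 10, 20, 20), "score": 0.9},
        {"bbox": (60, 60, 70, 70), "score": 0.9}]
print(location_based_ranking(dets, terrain))
```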


2020 ◽  
Vol 12 (3) ◽  
pp. 547 ◽  
Author(s):  
Aaron E. Maxwell ◽  
Pariya Pourmohammadi ◽  
Joey D. Poyner

Modern elevation-determining remote sensing technologies such as light detection and ranging (LiDAR) produce a wealth of topographic information that is increasingly being used in a wide range of disciplines, including archaeology and geomorphology. However, automated methods for mapping topographic features have remained a significant challenge. The deep learning (DL) Mask region-based convolutional neural network (Mask R-CNN), which provides context-based instance mapping, offers the potential to overcome many of the difficulties of previous approaches to topographic mapping. We therefore explore the application of Mask R-CNN to extract valley fill faces (VFFs), which are a product of mountaintop removal (MTR) coal mining in the Appalachian region of the eastern United States. LiDAR-derived slopeshades are provided as the only predictor variable in the model. Model generalization is evaluated by mapping multiple study sites outside the training data region. A range of assessment methods, including precision, recall, and F1 score based on VFF counts, as well as area- and fuzzy area-based user's and producer's accuracy, indicate that the model was successful in mapping VFFs in new geographic regions using elevation data derived from different LiDAR sensors. Averaged across all study areas characterized with LiDAR data, precision, recall, and F1-score values were above 0.85 using VFF counts, while user's and producer's accuracies were above 0.75 and 0.85 using the area- and fuzzy area-based methods, respectively. Because LiDAR data have become widely available only relatively recently, we also assessed how well the model generalizes to terrain data created using photogrammetric methods that characterize past terrain conditions. Unfortunately, the model was not sufficiently general to allow successful mapping of VFFs using photogrammetrically derived slopeshades, as all assessment metrics were lower than 0.60; however, this may be partially attributed to the quality of the photogrammetric data. The overall results suggest that the combination of Mask R-CNN and LiDAR has great potential for mapping anthropogenic and natural landscape features. To realize this vision, however, research on the mapping of other topographic features is needed, as well as the development of large topographic training datasets that include a variety of features for calibrating and testing new methods.
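A minimal sketch of the general setup, assuming the standard torchvision fine-tuning pattern for Mask R-CNN: the box and mask heads are replaced for a single "valley fill face" class, and the single-band slopeshade is repeated to the three channels the backbone expects. Class count, chip size, and the toy target are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch (PyTorch/torchvision): fine-tune Mask R-CNN for one VFF class
# on slopeshade rasters. The toy image and target are placeholders.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

NUM_CLASSES = 2  # background + valley fill face

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box and mask heads for the two-class problem.
in_feats = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_feats, NUM_CLASSES)
in_feats_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feats_mask, 256, NUM_CLASSES)

# A slopeshade chip is single-band; repeat it to the 3 channels the backbone expects.
slopeshade = torch.rand(1, 512, 512)            # placeholder raster chip in [0, 1]
image = slopeshade.repeat(3, 1, 1)

target = {                                      # one toy VFF instance
    "boxes": torch.tensor([[100.0, 100.0, 300.0, 300.0]]),
    "labels": torch.tensor([1]),
    "masks": torch.zeros(1, 512, 512, dtype=torch.uint8),
}
target["masks"][0, 100:300, 100:300] = 1

model.train()
losses = model([image], [target])               # dict of classification/box/mask losses
print(sum(losses.values()))
```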


2020 ◽  
Author(s):  
Anusha Ampavathi ◽  
Vijaya Saradhi T

Big data approaches are broadly useful in the healthcare and biomedical sectors for predicting disease. For patients with trivial symptoms, the difficulty lies in being able to see a doctor in the hospital at any time; big data instead provides essential information about diseases on the basis of a patient's symptoms. For many medical organizations, disease prediction is important for making the best feasible health care decisions. Conversely, the conventional medical care model offers structured input and requires more accurate and consistent prediction. This paper develops multi-disease prediction using an improved deep learning concept. Datasets pertaining to "Diabetes, Hepatitis, lung cancer, liver tumor, heart disease, Parkinson's disease, and Alzheimer's disease" are gathered from the benchmark UCI repository for conducting the experiments. The proposed model involves three phases: (a) data normalization, (b) weighted normalized feature extraction, and (c) prediction. Initially, the dataset is normalized so that each attribute lies in a common range. Next, weighted feature extraction is performed, in which a weight function is multiplied with each attribute value to enlarge the deviation among feature values. The weight function is optimized using a combination of two meta-heuristic algorithms, termed the Jaya Algorithm-based Multi-Verse Optimization algorithm (JA-MVO). The optimally extracted features are fed to hybrid deep learning algorithms, namely a "Deep Belief Network (DBN) and Recurrent Neural Network (RNN)". As a modification to the hybrid deep learning architecture, the weights of both the DBN and the RNN are optimized using the same hybrid optimization algorithm. Finally, comparative evaluation of the proposed prediction approach against existing models confirms its effectiveness across various performance measures.
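A minimal sketch of the normalization and weighted-feature idea described above: attributes are min-max normalized and multiplied by per-attribute weights chosen to maximize a simple fitness score. In the paper the weights come from the JA-MVO meta-heuristic and feed a DBN/RNN hybrid; here a random search and a logistic-regression fitness stand in as placeholders, and all data and names are illustrative.

```python
# Minimal sketch: min-max normalization, per-attribute weighting, and a
# placeholder search over the weights (standing in for JA-MVO).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score


def normalize(X):
    """Min-max normalization so every attribute lies in [0, 1]."""
    return (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-9)


def fitness(weights, X, y):
    """Classification accuracy of a simple model on the weighted features."""
    return cross_val_score(LogisticRegression(max_iter=1000), X * weights, y, cv=3).mean()


rng = np.random.default_rng(0)
X = normalize(rng.normal(size=(200, 10)))       # placeholder patient attributes
y = rng.integers(0, 2, size=200)                # placeholder disease labels

# Placeholder for JA-MVO: random search over candidate weight vectors.
best_w, best_fit = None, -np.inf
for _ in range(50):
    w = rng.uniform(0.1, 2.0, size=X.shape[1])
    f = fitness(w, X, y)
    if f > best_fit:
        best_w, best_fit = w, f

X_weighted = X * best_w                         # features passed on to the DBN/RNN stage
print(best_fit, best_w.round(2))
```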


2017 ◽  
Vol 14 (9) ◽  
pp. 1513-1517 ◽  
Author(s):  
Rodrigo F. Berriel ◽  
Andre Teixeira Lopes ◽  
Alberto F. de Souza ◽  
Thiago Oliveira-Santos

Author(s):  
Mathieu Turgeon-Pelchat ◽  
Samuel Foucher ◽  
Yacine Bouroubi

2020 ◽  
pp. bjophthalmol-2020-317825
Author(s):  
Yonghao Li ◽  
Weibo Feng ◽  
Xiujuan Zhao ◽  
Bingqian Liu ◽  
Yan Zhang ◽  
...  

Background/aims: To apply deep learning technology to develop an artificial intelligence (AI) system that can identify vision-threatening conditions in high myopia patients based on optical coherence tomography (OCT) macular images. Methods: In this cross-sectional, prospective study, a total of 5505 qualified OCT macular images obtained from 1048 high myopia patients admitted to Zhongshan Ophthalmic Centre (ZOC) from 2012 to 2017 were selected for the development of the AI system. The independent test dataset included 412 images obtained from 91 high myopia patients recruited at ZOC from January 2019 to May 2019. We adopted the InceptionResnetV2 architecture to train four independent convolutional neural network (CNN) models to identify the following four vision-threatening conditions in high myopia: retinoschisis, macular hole, retinal detachment and pathological myopic choroidal neovascularisation. Focal Loss was used to address class imbalance, and optimal operating thresholds were determined according to the Youden Index. Results: In the independent test dataset, the areas under the receiver operating characteristic curves were high for all conditions (0.961 to 0.999). Our AI system achieved sensitivities equal to or even better than those of retina specialists as well as high specificities (greater than 90%). Moreover, our AI system provided a transparent and interpretable diagnosis with heatmaps. Conclusions: We used OCT macular images for the development of CNN models to identify vision-threatening conditions in high myopia patients. Our models achieved reliable sensitivities and high specificities, comparable to those of retina specialists, and may be applied for large-scale high myopia screening and patient follow-up.
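A minimal sketch (tf.keras and scikit-learn) of the ingredients named in the abstract: an InceptionResNetV2 backbone with a sigmoid head, binary focal loss for class imbalance (available as a built-in loss in recent TensorFlow releases), and an operating threshold chosen by the Youden index on validation scores. Image size, loss parameters, and the toy data are assumptions.

```python
# Minimal sketch: one binary classifier for a single vision-threatening condition.
import numpy as np
import tensorflow as tf
from sklearn.metrics import roc_curve

base = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights=None, input_shape=(299, 299, 3), pooling="avg")
model = tf.keras.Sequential([base, tf.keras.layers.Dense(1, activation="sigmoid")])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.BinaryFocalCrossentropy(gamma=2.0),  # addresses class imbalance
    metrics=[tf.keras.metrics.AUC()],
)


# After training, pick the operating threshold that maximizes Youden's J = TPR - FPR.
def youden_threshold(y_true, y_score):
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    return thresholds[np.argmax(tpr - fpr)]


# Toy validation labels and predicted scores
y_val = np.array([0, 0, 1, 1, 1, 0, 1, 0])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.9, 0.3])
print(youden_threshold(y_val, scores))
```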


2020 ◽  
Vol 31 (6) ◽  
pp. 681-689
Author(s):  
Jalal Mirakhorli ◽  
Hamidreza Amindavar ◽  
Mojgan Mirakhorli

Functional magnetic resonance imaging, a neuroimaging technique used in studies of brain disorders and dysfunction, has been improved in recent years by mapping the topology of brain connections, known as connectopic mapping. Because healthy and unhealthy brain regions and functions differ only slightly, studying the complex topology of the functional and structural networks in the human brain is complicated, particularly given the growing number of evaluation measures. One application of deep learning on irregular graphs is analyzing human cognitive functions related to gene expression and the associated distributed spatial patterns. Since a variety of brain states can be held dynamically in the brain's neuronal networks, with different activity patterns and functional connectivity, both node-centric and graph-centric tasks are involved in this application. In this study, we used an individual generative model and high-order graph analysis to recognize regions of interest in the brain with abnormal connections during certain tasks and in the resting state, or to decompose irregular observations. Accordingly, we propose a high-order framework of a Variational Graph Autoencoder with a Gaussian distribution to analyze functional data in brain imaging studies, in which a Generative Adversarial Network is employed to optimize the latent space while learning strong non-rigid graphs from large-scale data. Furthermore, the possible modes of correlation among abnormal brain connections were distinguished. Our goal was to find the degree of correlation between the affected regions and their simultaneous occurrence over time. This can be used to diagnose brain diseases or to show the nervous system's ability to modify brain topology, and its plasticity, in response to input stimuli. In this study, we focused in particular on Alzheimer's disease.
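A minimal sketch (PyTorch Geometric) of a variational graph autoencoder over a functional-connectivity graph, in the spirit of the framework described here; the adversarial (GAN) regularization of the latent space is omitted, and the graph construction and dimensions are illustrative assumptions.

```python
# Minimal sketch: a VGAE over a toy brain graph (90 regions, random edges).
import torch
from torch_geometric.nn import GCNConv, VGAE
from torch_geometric.utils import erdos_renyi_graph


class Encoder(torch.nn.Module):
    def __init__(self, in_dim, hidden, latent):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv_mu = GCNConv(hidden, latent)
        self.conv_logstd = GCNConv(hidden, latent)

    def forward(self, x, edge_index):
        h = self.conv1(x, edge_index).relu()
        return self.conv_mu(h, edge_index), self.conv_logstd(h, edge_index)


# Toy brain graph: 90 regions with 16-dimensional node features.
x = torch.randn(90, 16)
edge_index = erdos_renyi_graph(90, edge_prob=0.05)

model = VGAE(Encoder(16, 32, 8))
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(5):
    optimizer.zero_grad()
    z = model.encode(x, edge_index)
    loss = model.recon_loss(z, edge_index) + (1 / x.size(0)) * model.kl_loss()
    loss.backward()
    optimizer.step()
print(z.shape)  # latent embedding per brain region
```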


2021 ◽  
Vol 13 (8) ◽  
pp. 1509
Author(s):  
Xikun Hu ◽  
Yifang Ban ◽  
Andrea Nascetti

Accurate burned area information is needed to assess the impacts of wildfires on people, communities, and natural ecosystems. Various burned area detection methods have been developed using satellite remote sensing measurements with wide coverage and frequent revisits. Our study aims to evaluate the capability of deep learning (DL) models for automatically mapping burned areas from uni-temporal multispectral imagery. Specifically, several semantic segmentation network architectures, i.e., U-Net, HRNet, Fast-SCNN, and DeepLabv3+, and machine learning (ML) algorithms were applied to Sentinel-2 imagery and Landsat-8 imagery at three wildfire sites in two different local climate zones. The validation results show that the DL algorithms outperform the ML methods in two of the three cases, those with compact burned scars, while the ML methods seem to be more suitable for mapping dispersed burns in boreal forests. Using Sentinel-2 images, U-Net and HRNet exhibit comparable performance with higher kappa (around 0.9) at a heterogeneous Mediterranean fire site in Greece, while Fast-SCNN performs better than the others, with kappa over 0.79, at a compact boreal forest fire with varying burn severity in Sweden. Furthermore, when the trained models are transferred directly to the corresponding Landsat-8 data, HRNet performs best among the DL models in the three test sites and preserves high accuracy. The results demonstrate that DL models can make full use of contextual information and capture spatial details at multiple scales from fire-sensitive spectral bands to map burned areas. Using only a post-fire image, the DL methods not only provide an automatic, accurate, and bias-free large-scale mapping option with cross-sensor applicability, but also have the potential to be used for onboard processing on upcoming Earth observation satellites.
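A minimal sketch (PyTorch) of a small U-Net-style encoder-decoder for binary burned-area segmentation from a uni-temporal multispectral chip; the band count, depth, and channel widths are illustrative assumptions rather than the configurations benchmarked in the study.

```python
# Minimal sketch: compact U-Net-style network, post-fire chip in, burned/unburned logit out.
import torch
import torch.nn as nn


def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(),
    )


class MiniUNet(nn.Module):
    def __init__(self, in_bands=6):                  # e.g., fire-sensitive Sentinel-2 bands
        super().__init__()
        self.enc1, self.enc2 = block(in_bands, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = block(64, 32)
        self.out = nn.Conv2d(32, 1, 1)               # burned / unburned logit per pixel

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.out(d1)


chip = torch.rand(1, 6, 256, 256)                    # toy post-fire image chip
print(MiniUNet()(chip).shape)                        # torch.Size([1, 1, 256, 256])
```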


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Ling-Ping Cen ◽  
Jie Ji ◽  
Jian-Wei Lin ◽  
Si-Tong Ju ◽  
Hong-Jie Lin ◽  
...  

Retinal fundus diseases can lead to irreversible visual impairment without timely diagnosis and appropriate treatment. Single-disease deep learning algorithms have been developed for the detection of diabetic retinopathy, age-related macular degeneration, and glaucoma. Here, we developed a deep learning platform (DLP) capable of detecting multiple common referable fundus diseases and conditions (39 classes) by using 249,620 fundus images marked with 275,543 labels from heterogeneous sources. Our DLP achieved a frequency-weighted average F1 score of 0.923, sensitivity of 0.978, specificity of 0.996, and area under the receiver operating characteristic curve (AUC) of 0.9984 for multi-label classification on the primary test dataset, reaching the average level of retina specialists. External multi-hospital testing, public dataset testing, and a tele-reading application also showed high efficiency in detecting multiple retinal diseases and conditions. These results indicate that our DLP can be applied for retinal fundus disease triage, especially in remote areas around the world.
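A minimal sketch (scikit-learn) of the multi-label evaluation style reported here: a frequency-weighted F1 score, macro-averaged AUC, and per-class sensitivity and specificity. The toy three-class labels and scores are illustrative assumptions (the study uses 39 classes).

```python
# Minimal sketch: multi-label metrics on synthetic predictions.
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score

rng = np.random.default_rng(0)
n_images, n_classes = 500, 3                     # the study uses 39 classes
y_true = rng.integers(0, 2, size=(n_images, n_classes))
y_score = np.clip(y_true * 0.6 + rng.uniform(0, 0.6, size=y_true.shape), 0, 1)
y_pred = (y_score >= 0.5).astype(int)

print("weighted F1:", f1_score(y_true, y_pred, average="weighted"))
print("macro AUC:", roc_auc_score(y_true, y_score, average="macro"))

for c in range(n_classes):                       # per-class sensitivity and specificity
    tp = np.sum((y_pred[:, c] == 1) & (y_true[:, c] == 1))
    tn = np.sum((y_pred[:, c] == 0) & (y_true[:, c] == 0))
    fp = np.sum((y_pred[:, c] == 1) & (y_true[:, c] == 0))
    fn = np.sum((y_pred[:, c] == 0) & (y_true[:, c] == 1))
    print(f"class {c}: sensitivity={tp / (tp + fn):.3f}, specificity={tn / (tn + fp):.3f}")
```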

