Novel Techniques for Void Filling in Glacier Elevation Change Data Sets

2020, Vol 12 (23), pp. 3917
Author(s): Thorsten Seehaus, Veniamin I. Morgenshtern, Fabian Hübner, Eberhard Bänsch, Matthias H. Braun

The increasing availability of digital elevation models (DEMs) facilitates the monitoring of glacier mass balances on local and regional scales. Geodetic glacier mass balances are obtained by differencing DEMs. However, these computations are usually affected by voids in the derived elevation change data sets. Different approaches, using spatial statistics or interpolation techniques, have been developed to account for these voids in glacier mass balance estimations. In this study, we apply novel void filling techniques, which are typically used for the reconstruction and retouching of images and photographs, to elevation change maps for the first time. We selected 6210 km² of glacier area in southeast Alaska, USA, covered by two void-free DEMs, as the study site to test different inpainting methods. Different artificially voided setups were generated using manually defined voids and a correlation mask based on stereoscopic processing of Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) acquisitions. Three “novel” (Telea, Navier–Stokes, and shearlet) as well as three “classical” (bilinear interpolation, local and global hypsometric) void filling approaches for glacier elevation data sets were implemented and evaluated. The hypsometric approaches showed, in general, the worst performance, leading to high average and local offsets. Telea and Navier–Stokes void filling showed overall stable and reasonable quality. The best results were obtained for shearlet and bilinear void filling, provided certain criteria are met. Considering also computational cost and feasibility, we recommend using the bilinear void filling method in glacier volume change analyses. Moreover, we propose and validate a formula to estimate the uncertainties caused by void filling in glacier volume change computations. The formula is transferable to other study sites where no ground truth data on the void areas exist, and leads to more accurate error estimates for void-filled areas. In the spirit of reproducible research, we publish a software repository with the implementation of the novel void filling algorithms and the code reproducing the statistical analysis of the data, along with the data sets themselves.
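To make the compared approaches concrete, the sketch below fills artificial voids in an elevation change raster with two of the tested families of methods: a bilinear-style fill (via SciPy's Delaunay-based linear interpolation over the valid pixels, a common stand-in for bilinear void filling) and OpenCV's Telea/Navier–Stokes image inpainting. The array names, the 8-bit rescaling workaround required by `cv2.inpaint`, and all parameters are assumptions for this illustration, not the authors' published implementation.

```python
import numpy as np
import cv2
from scipy.interpolate import griddata

def fill_bilinear(dh, void_mask):
    """Fill voids in an elevation change raster by linear interpolation
    over the valid pixels (a simple stand-in for bilinear void filling)."""
    rows, cols = np.indices(dh.shape)
    valid = ~void_mask
    filled = dh.copy()
    filled[void_mask] = griddata(
        points=np.column_stack([rows[valid], cols[valid]]),
        values=dh[valid],
        xi=np.column_stack([rows[void_mask], cols[void_mask]]),
        method="linear",
    )
    return filled

def fill_inpaint(dh, void_mask, method=cv2.INPAINT_TELEA, radius=5):
    """Fill voids with OpenCV image inpainting (Telea or Navier-Stokes).
    cv2.inpaint expects 8-bit input, so the float raster is rescaled to
    0-255 and mapped back afterwards; the rescaling loses precision and
    only illustrates the idea."""
    lo, hi = np.nanmin(dh), np.nanmax(dh)
    work = np.where(void_mask, lo, dh)                 # placeholder under voids
    scaled = ((work - lo) / (hi - lo) * 255).astype(np.uint8)
    inpainted = cv2.inpaint(scaled, void_mask.astype(np.uint8), radius, method)
    filled = dh.copy()
    filled[void_mask] = inpainted[void_mask].astype(np.float64) / 255 * (hi - lo) + lo
    return filled

# Example: a synthetic, smoothly varying elevation change map with one void.
yy, xx = np.mgrid[0:200, 0:200]
dh = (-1.5 + 0.01 * yy - 0.005 * xx).astype(np.float32)
void = np.zeros(dh.shape, dtype=bool)
void[80:120, 60:140] = True
dh_masked = dh.copy()
dh_masked[void] = np.nan
print(np.abs(fill_bilinear(dh_masked, void) - dh)[void].mean())  # ~0 for a planar signal
print(np.abs(fill_inpaint(dh_masked, void) - dh)[void].mean())   # limited by the 8-bit rescaling
```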

Algorithms, 2021, Vol 14 (7), pp. 212
Author(s): Youssef Skandarani, Pierre-Marc Jodoin, Alain Lalande

Deep learning methods are the de facto solutions to a multitude of medical image analysis tasks. Cardiac MRI segmentation is one such application, which, like many others, requires a large amount of annotated data so that a trained network can generalize well. Unfortunately, having a large number of images manually curated by medical experts is both slow and extremely expensive. In this paper, we set out to explore whether expert knowledge is a strict requirement for the creation of annotated data sets on which machine learning can successfully be trained. To do so, we gauged the performance of three segmentation models, namely U-Net, Attention U-Net, and ENet, trained with different loss functions on expert and non-expert ground truth for cardiac cine-MRI segmentation. Evaluation was done with classic segmentation metrics (Dice index and Hausdorff distance) as well as clinical measurements, such as the ventricular ejection fractions and the myocardial mass. The results reveal that the generalization performance of a segmentation neural network trained on non-expert ground truth data is, for all practical purposes, as good as that of one trained on expert ground truth data, particularly when the non-expert receives a decent level of training. This highlights an opportunity for the efficient and cost-effective creation of annotations for cardiac data sets.
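A minimal sketch of the two segmentation metrics named above, computed with NumPy and SciPy on binary masks. The mask names and the set-based (rather than contour-based) Hausdorff convention are assumptions for illustration, not the paper's evaluation code.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_index(pred, gt):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def hausdorff_distance(pred, gt):
    """Symmetric Hausdorff distance between the foreground pixel sets of
    two binary masks, in pixels (boundary-based variants are also common)."""
    p = np.argwhere(pred.astype(bool))
    g = np.argwhere(gt.astype(bool))
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])

# Example on a toy pair of masks.
gt = np.zeros((64, 64), dtype=np.uint8)
gt[20:40, 20:40] = 1
pred = np.zeros_like(gt)
pred[22:42, 21:41] = 1
print(f"Dice: {dice_index(pred, gt):.3f}, Hausdorff: {hausdorff_distance(pred, gt):.1f} px")
```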


Author(s): L. Girod, C. Nuth, A. Kääb

The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) system on board the Terra (EOS AM-1) satellite has been a source of stereoscopic images covering the whole globe at 15 m resolution and consistent quality for over 15 years. The potential of these data for geomorphological analysis and change detection in three dimensions is unrivaled and needs to be exploited. However, the DEMs and ortho-images currently delivered by NASA (ASTER DMO products) are often of insufficient quality for a number of applications, such as mountain glacier mass balance. For this study, the use of Ground Control Points (GCPs) or other ground truth was rejected because of the global, "big data" type of processing that we hope to perform on the ASTER archive. We have therefore developed a tool to compute Rational Polynomial Coefficient (RPC) models from the ASTER metadata, and a method that improves the quality of the matching by identifying and correcting jitter-induced cross-track parallax errors. Our method outputs more accurate DEMs with fewer unmatched areas and reduced overall noise. The algorithms were implemented in the open-source photogrammetric library and software suite MicMac.
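To make the RPC idea concrete, the sketch below fits one rational-polynomial image coordinate (e.g. the row) as a ratio of two cubic polynomials in normalized ground coordinates, using linear least squares. The monomial ordering, the normalization conventions, and the synthetic test data are assumptions for illustration only; the authors' MicMac-based tool derives its RPC models from the ASTER metadata rather than from a generic fit like this.

```python
import itertools
import numpy as np

def cubic_terms(x, y, z):
    """All monomials x^i * y^j * z^k with i + j + k <= 3 (20 terms);
    the first term is the constant 1. Ordering here is generic, not RPC00B."""
    return np.stack([x**i * y**j * z**k
                     for i, j, k in itertools.product(range(4), repeat=3)
                     if i + j + k <= 3], axis=-1)

def fit_rpc_coordinate(img, x, y, z):
    """Least-squares fit of img ~ P(x,y,z) / Q(x,y,z) with Q's constant
    term fixed to 1. Inputs are assumed normalized to roughly [-1, 1].
    Linearization: P(t) - img * (Q(t) - 1) = img."""
    T = cubic_terms(x, y, z)                          # (N, 20) monomial basis
    A = np.hstack([T, -img[:, None] * T[:, 1:]])      # (N, 39) design matrix
    coeffs, *_ = np.linalg.lstsq(A, img, rcond=None)
    num = coeffs[:20]
    den = np.concatenate([[1.0], coeffs[20:]])
    return num, den

def apply_rpc(num, den, x, y, z):
    T = cubic_terms(x, y, z)
    return (T @ num) / (T @ den)

# Synthetic check: recover a known rational mapping from 500 sample points.
rng = np.random.default_rng(0)
x, y, z = rng.uniform(-1, 1, (3, 500))
row = (0.8 * x - 0.3 * y + 0.05 * z) / (1.0 + 0.1 * x * y)
num, den = fit_rpc_coordinate(row, x, y, z)
print(np.abs(apply_rpc(num, den, x, y, z) - row).max())  # near-zero residual expected
```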


Author(s): N. Soyama, K. Muramatsu, M. Daigo, F. Ochiai, N. Fujiwara

Validating the accuracy of land cover products using a reliable reference dataset is an important task. A reliable reference dataset is produced with information derived from ground truth data. Recently, the amount of ground truth data derived from information collected by volunteers has been increasing globally. The acquisition of volunteer-based reference data shows great potential. However, the information given by volunteers contains only limited vegetation information that is useful for producing a complete reference dataset based on plant functional types (PFT) with five specialized forest classes. In this study, we examined the availability and applicability of FLUXNET information for producing reference data with a higher level of reliability. FLUXNET information was especially useful for interpreting the forest classes, in comparison with the reference dataset based on information given by volunteers.


2020, Vol 12 (1), pp. 9-12
Author(s): Arjun G. Koppad, Syeda Sarfin, Anup Kumar Das

This study was conducted for land use and land cover (LULC) classification using SAR data. It examined ALOS-2 PALSAR L-band quad-pol (HH, HV, VH, and VV) SAR data for LULC classification. The SAR data were first pre-processed, which included multilooking, radiometric calibration, geometric correction, speckle filtering, and polarimetric decomposition. For the land use land cover classification of the ALOS-2 PALSAR data sets, a supervised random forest classifier was used. Training samples were selected with the help of ground truth data. The area was classified into 7 classes: dense forest, moderate dense forest, scrub/sparse forest, plantation, agriculture, water body, and settlements. Among them, the largest area was covered by dense forest (108,647 ha), followed by horticulture plantation (57,822 ha) and scrub/sparse forest (49,238 ha), and the smallest area was covered by moderate dense forest (11,589 ha). Accuracy assessment was performed after classification. The overall accuracy of the SAR classification was 80.36% and the Kappa coefficient was 0.76. Different land use classes were identified based on SAR backscattering mechanisms such as single-bounce, double-bounce, and volumetric scattering.
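A minimal sketch of the classification and accuracy-assessment step described above, assuming the polarimetric backscatter values and ground truth labels have already been extracted into per-pixel feature vectors. The feature layout, train/test split, and hyperparameters are illustrative assumptions, not the study's settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

# Assumed inputs: X holds per-pixel features (e.g. HH, HV, VH, VV backscatter
# and decomposition components), y holds class labels from ground truth samples.
rng = np.random.default_rng(42)
X = rng.normal(size=(5000, 6))          # placeholder feature matrix
y = rng.integers(0, 7, size=5000)       # placeholder labels for 7 LULC classes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

print(f"Overall accuracy: {accuracy_score(y_test, y_pred):.4f}")
print(f"Kappa coefficient: {cohen_kappa_score(y_test, y_pred):.4f}")
```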


2016
Author(s): Roshni Cooper, Shaul Yogev, Kang Shen, Mark Horowitz

Abstract
Motivation: Microtubules (MTs) are polarized polymers that are critical for cell structure and axonal transport. They form a bundle in neurons, but beyond that, their organization is relatively unstudied.
Results: We present MTQuant, a method for quantifying MT organization using light microscopy, which distills three parameters from MT images: the spacing of MT minus-ends, their average length, and the average number of MTs in a cross-section of the bundle. This method allows for robust and rapid in vivo analysis of MTs, rendering it more practical and more widely applicable than commonly-used electron microscopy reconstructions. MTQuant was successfully validated with three ground truth data sets and applied to over 3000 images of MTs in a C. elegans motor neuron.
Availability: MATLAB code is available at http://roscoope.github.io/MTQuant
Contact: [email protected]
Supplementary information: Supplementary data are available at Bioinformatics online.


2019, Vol 11 (2), pp. 187
Author(s): Julian Podgórski, Christophe Kinnard, Michał Pętlicki, Roberto Urrutia

The TanDEM-X digital elevation model (DEM) is a global DEM released by the German Aerospace Center (DLR) at an outstanding resolution of 12 m. However, the procedure for its creation involves combining several DEMs from acquisitions spread between 2011 and 2014, which casts doubt on its value for precise glaciological change detection studies. In this work we present the TanDEM-X DEM as a high-quality product ready for use in glaciological studies. We compare it to an Aerial Laser Scanning (ALS)-based dataset from April 2013 (1 m), used as the ground-truth reference, and to the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) V003 DEM and the SRTM v3 DEM (both 30 m), which serve as representations of past glacier states. We use the sub-pixel DEM co-registration method of Nuth and Kääb (2011) to determine the geometric accuracy of the products. In addition, we propose a slope-aspect heatmap-based workflow to remove the errors resulting from radar shadowing over steep terrain. Elevation difference maps obtained by subtracting the DEMs are analyzed to obtain accuracy assessments and glacier mass balance reconstructions. The vertical accuracy (± standard deviation) of the TanDEM-X DEM over the non-glacierized area is very good, at 0.02 ± 3.48 m. Nevertheless, steep areas introduce large errors and must be filtered out for reliable results. The 30 m version of the TanDEM-X DEM performs worse than the finer product, but its accuracy, −0.08 ± 7.57 m, is better than that of SRTM and ASTER. The ASTER DEM contains errors, possibly resulting from imperfect DEM creation from stereo pairs over a uniform ice surface. Universidad Glacier lost mass at a rate of −0.44 ± 0.08 m of water equivalent per year between 2000 and 2013. This value is in general agreement with previously reported mass balances estimated with the glaciological method for 2012–2014.
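The co-registration step cited above fits the Nuth and Kääb (2011) relation dh / tan(slope) = a·cos(b − aspect) + c over stable (non-glacierized) terrain and converts the fitted parameters into a horizontal shift vector. The sketch below shows a single iteration of that fit with SciPy's curve_fit; the array names, the slope threshold, the sign conventions (which depend on how aspect and dh are defined), and the omission of the usual iterate-until-convergence loop are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def nuth_kaab_shift(dh, slope, aspect):
    """Estimate the horizontal (east, north) shift between two DEMs from
    elevation differences dh over stable terrain, using the cosine relation
    of Nuth and Kaab (2011). slope and aspect are in radians, aspect
    measured clockwise from north."""
    ok = np.isfinite(dh) & (slope > np.deg2rad(5))   # avoid dividing by ~0 slope
    y = dh[ok] / np.tan(slope[ok])
    psi = aspect[ok]

    def model(psi, a, b, c):
        return a * np.cos(b - psi) + c

    (a, b, c), _ = curve_fit(model, psi, y, p0=[1.0, 0.0, 0.0])
    shift_east = a * np.sin(b)
    shift_north = a * np.cos(b)
    return shift_east, shift_north, c

# Example with synthetic stable-terrain samples and a known (3 m, 2 m) shift.
rng = np.random.default_rng(1)
slope = rng.uniform(np.deg2rad(5), np.deg2rad(40), 10000)
aspect = rng.uniform(0, 2 * np.pi, 10000)
a_true = np.hypot(3.0, 2.0)
b_true = np.arctan2(3.0, 2.0)
dh = a_true * np.cos(b_true - aspect) * np.tan(slope) + rng.normal(0, 0.5, 10000)
print(nuth_kaab_shift(dh, slope, aspect))   # approx (3.0, 2.0, ~0)
```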


2019, Vol 13 (1), pp. 120-126
Author(s): K. Bhavanishankar, M. V. Sudhamani

Objective: Lung cancer is proving to be one of the deadliest diseases haunting mankind in recent years. Timely detection of lung nodules would surely enhance the survival rate. This paper focuses on the classification of candidate lung nodules into nodules/non-nodules in a CT scan of the patient. A deep learning approach, an autoencoder, is used for the classification. Investigation/Methodology: Candidate lung nodule patches obtained as the result of lung segmentation are considered as input to the autoencoder model. The ground truth data from the LIDC repository are prepared and submitted to the autoencoder training module. After a series of experiments, it was decided to use a 4-layer stacked autoencoder. The model was trained on over 600 LIDC cases and the trained module was tested on the remaining data sets. Results: The results of the classification are evaluated with respect to performance measures such as sensitivity, specificity, and accuracy. The results obtained are also compared with other related works, and the proposed approach was found to be better by 6.2% with respect to accuracy. Conclusion: In this paper, a deep learning approach, an autoencoder, has been used for the classification of candidate lung nodules into nodules/non-nodules. The performance of the proposed approach was evaluated with respect to sensitivity, specificity, and accuracy, and the obtained values are 82.6%, 91.3%, and 87.0%, respectively. This result was then compared with existing related works, and an improvement of 6.2% with respect to accuracy was observed.
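A minimal sketch of the stacked-autoencoder idea described above, written with PyTorch: each encoder layer is greedily pretrained to reconstruct its input, and the stacked encoder is then topped with a small classifier for the nodule/non-nodule decision. Layer sizes, the flattened patch dimension, and all training settings are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

PATCH_DIM = 32 * 32                            # assumed flattened candidate-patch size
LAYER_SIZES = [PATCH_DIM, 512, 256, 128, 64]   # 4 stacked encoder layers

class StackedAutoencoderClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        encoders, decoders = [], []
        for d_in, d_out in zip(LAYER_SIZES[:-1], LAYER_SIZES[1:]):
            encoders.append(nn.Sequential(nn.Linear(d_in, d_out), nn.ReLU()))
            decoders.append(nn.Linear(d_out, d_in))
        self.encoders = nn.ModuleList(encoders)
        self.decoders = nn.ModuleList(decoders)
        self.classifier = nn.Linear(LAYER_SIZES[-1], 2)   # nodule vs non-nodule

    def encode(self, x, depth=None):
        for enc in self.encoders[:depth]:
            x = enc(x)
        return x

    def forward(self, x):
        return self.classifier(self.encode(x))

def pretrain_layer(model, layer_idx, patches, epochs=5, lr=1e-3):
    """Greedy layer-wise pretraining: reconstruct the output of the previously
    trained layers through encoder/decoder pair layer_idx."""
    enc, dec = model.encoders[layer_idx], model.decoders[layer_idx]
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=lr)
    loss_fn = nn.MSELoss()
    with torch.no_grad():
        inputs = model.encode(patches, depth=layer_idx)
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(dec(enc(inputs)), inputs)
        loss.backward()
        opt.step()

# Toy usage with random "patches"; real inputs would be LIDC candidate patches.
model = StackedAutoencoderClassifier()
patches = torch.rand(256, PATCH_DIM)
for i in range(len(model.encoders)):
    pretrain_layer(model, i, patches)
labels = torch.randint(0, 2, (256,))
clf_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
clf_opt.zero_grad()
loss = nn.CrossEntropyLoss()(model(patches), labels)   # supervised fine-tuning step
loss.backward()
clf_opt.step()
```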


Sensor Review, 2019, Vol 39 (2), pp. 288-306
Author(s): Guan Yuan, Zhaohui Wang, Fanrong Meng, Qiuyan Yan, Shixiong Xia

Purpose: Currently, ubiquitous smartphones embedded with various sensors provide a convenient way to collect raw sequence data. These data bridge the gap between human activity and multiple sensors. Human activity recognition has been widely used in many aspects of daily life, such as medical security, personal safety, and living assistance. Design/methodology/approach: To provide an overview, the authors survey and summarize some important technologies and key issues of human activity recognition, including activity categorization, feature engineering, and typical algorithms presented in recent years. In this paper, the authors first introduce the characteristics of embedded sensors and discuss their features, and survey some data labeling strategies for obtaining ground truth labels. Then, following the process of human activity recognition, the authors discuss the methods and techniques of raw data preprocessing and feature extraction, and summarize some popular algorithms used in model training and activity recognition. Third, the authors introduce some interesting application scenarios of human activity recognition and provide some available data sets as ground truth data for validating proposed algorithms. Findings: The authors summarize their viewpoints on human activity recognition, discuss the main challenges, and point out some potential research directions. Originality/value: It is hoped that this work will serve as a stepping stone for those interested in advancing human activity recognition.
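As a concrete illustration of the preprocessing and feature-extraction stage surveyed above, the sketch below segments a raw tri-axial accelerometer stream into fixed-length windows, computes a few common statistical features per window, and feeds them to an off-the-shelf classifier. The window length, feature set, and classifier choice are illustrative assumptions, not recommendations from the survey.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def window_features(signal, window=128, step=64):
    """Slide a fixed-length window over a (T, 3) accelerometer stream and
    compute per-axis mean/std plus magnitude statistics for each window."""
    feats = []
    for start in range(0, len(signal) - window + 1, step):
        w = signal[start:start + window]
        mag = np.linalg.norm(w, axis=1)
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0),
                                     [mag.mean(), mag.std()]]))
    return np.array(feats)

# Toy usage: two synthetic "activities" with different signal statistics.
rng = np.random.default_rng(3)
active = rng.normal(0, 1.0, (4096, 3))     # e.g. walking-like, high variance
still = rng.normal(0, 0.1, (4096, 3))      # e.g. standing-like, low variance
X = np.vstack([window_features(active), window_features(still)])
y = np.array([0] * (len(X) // 2) + [1] * (len(X) - len(X) // 2))
clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print(clf.score(X, y))
```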

