Automated snow avalanche release area delineation in data-sparse, remote, and forested regions

2021 ◽  
Author(s):  
John Sykes ◽  
Pascal Haegeli ◽  
Yves Bühler

Abstract. Potential avalanche release area (PRA) modelling is critical for generating automated avalanche terrain maps, which provide low-cost, large-scale spatial representations of snow avalanche hazard for both infrastructure planning and recreational applications. Current methods are not applicable in mountainous terrain where high-resolution elevation models are unavailable, and they do not include an efficient way to account for avalanche release in forested terrain. This research focuses on expanding an existing PRA model to better incorporate forested terrain using satellite imagery and presents a novel approach for validating the model using local expertise, thereby broadening its application to numerous mountain ranges worldwide. The study area is a remote portion of the Columbia Mountains in southeastern British Columbia, Canada, which has no pre-existing high-resolution spatial data sets. Our research documents an open-source workflow to generate high-resolution DEM and forest land cover data sets from optical satellite data. We validate the PRA model by collecting a polygon dataset of observed potential release areas from local guides, using a method that accounts for the uncertainty of human recollection and the variability of avalanche release. The validation dataset allows us to perform a quantitative analysis of the PRA model accuracy and to optimize the PRA model input parameters to the snowpack and terrain characteristics of our study area. Compared to the original PRA model, our implementation of forested terrain and local optimization improved the percentage of validation polygons accurately modelled by 11.7 percentage points and reduced the number of validation polygons that were underestimated by 14.8 percentage points. Our methods demonstrate substantial improvement in the performance of the PRA model in forested terrain and provide the means to generate the requisite input and validation data sets needed to apply and evaluate the PRA model in vastly more mountainous regions worldwide than was previously possible.
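The PRA concept lends itself to a simple raster sketch. The fragment below, assuming a DEM and a fractional forest-cover grid are available as NumPy arrays, flags cells whose slope falls in a release-prone band and whose canopy cover is below a threshold; the slope limits, forest threshold, and function names are illustrative assumptions, not the parameters of the published model.

```python
import numpy as np

def slope_degrees(dem, cell_size):
    """Approximate slope angle (degrees) from a DEM via central differences."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

def candidate_release_mask(dem, forest_cover, cell_size=5.0,
                           slope_min=30.0, slope_max=60.0, forest_max=0.5):
    """Boolean mask of candidate release cells (illustrative thresholds only).

    The full PRA model also uses curvature, roughness, and ruggedness criteria
    that are omitted here for brevity.
    """
    slope = slope_degrees(dem, cell_size)
    return (slope >= slope_min) & (slope <= slope_max) & (forest_cover <= forest_max)
```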

2014 ◽  
Vol 11 (6) ◽  
pp. 6139-6166 ◽  
Author(s):  
T. R. Marthews ◽  
S. J. Dadson ◽  
B. Lehner ◽  
S. Abele ◽  
N. Gedney

Abstract. Modelling land surface water flow is of critical importance for simulating land-surface fluxes, predicting runoff and water table dynamics, and for many other applications of Land Surface Models. Many approaches are based on the popular hydrology model TOPMODEL, and the most important parameter of this model is the well-known topographic index. Here we present new, high-resolution parameter maps of the topographic index for all ice-free land pixels, calculated from hydrologically conditioned HydroSHEDS data sets using the GA2 algorithm. At 15 arcsec resolution, these layers are 4× finer than the resolution of the previously best-available topographic index layers, the Compound Topographic Index of HYDRO1k (CTI). For the largest river catchments on each continent, we found that CTI values were up to 20% higher than our revised values, e.g. in the Amazon. We found the highest catchment means for the Murray-Darling and Nelson-Saskatchewan rather than for the Amazon and St. Lawrence as found from the CTI. We believe these new index layers represent the most robust existing global-scale topographic index values and hope that they will be widely used in land surface modelling applications in the future.
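For reference, the quantity being mapped is the standard TOPMODEL index ln(a / tan β), with a the specific upslope contributing area and β the local slope. The sketch below, assuming a flow-accumulation grid and a slope grid are already available, only illustrates that formula; it does not reproduce the GA2 algorithm or the HydroSHEDS conditioning used in the paper.

```python
import numpy as np

def topographic_index(flow_accumulation, slope_radians, cell_size, min_slope=1e-4):
    """Illustrative ln(a / tan(beta)) computation.

    flow_accumulation : upslope cell count per pixel (e.g. from a D8 routing step)
    slope_radians     : local slope angle in radians
    cell_size         : grid spacing in metres
    """
    a = (flow_accumulation + 1) * cell_size                   # specific catchment area, cell itself included
    tan_beta = np.maximum(np.tan(slope_radians), min_slope)   # guard against flat cells
    return np.log(a / tan_beta)
```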


Author(s):  
A. Nakamura

To facilitate disease-modifying clinical trials for Alzheimer's disease (AD), a blood-based amyloid-β (Aβ) biomarker that can accurately detect an early pathological signature of AD at prodromal or preclinical stages has long been sought, because it would be simpler, less invasive and less costly than PET or lumbar puncture. Although plasma Aβ biomarkers have been extensively investigated, most studies failed to demonstrate clinical utility (1, 2), and at the end of 2016 there was a rather pessimistic mood that this objective might be impossible to realize (3). Since the latter half of 2017, however, the situation appears to have changed dramatically, with several groups reporting potential clinical utility of plasma Aβ biomarkers using different methodologies (4-7). In particular, immunoprecipitation followed by mass spectrometry (IP-MS) assays have shown promising, converging evidence. In 2014, we, the National Center for Geriatrics and Gerontology (NCGG) and the Koichi Tanaka Mass Spectrometry Research Laboratory at Shimadzu Corporation (Shimadzu), reported that the plasma ratio of a novel APP669-711 fragment to Aβ1-42 (APP669-711/Aβ1-42) as determined by IP-MS could discriminate high-Aβ (Aβ+) individuals from low-Aβ (Aβ-) individuals (classified using PiB-PET) with more than 90% accuracy (n=62) (8). In 2017, the Washington University group analyzed the detailed kinetics of plasma Aβ species and reported that Aβ42/Aβ40 as measured by IP-MS could distinguish Aβ+ from Aβ- individuals with an area under the curve of 88.7% (n=41) (5). Very recently, in collaboration with the Australian Imaging, Biomarker and Lifestyle Study of Aging (AIBL), we demonstrated that the plasma biomarkers APP669-711/Aβ1-42 and Aβ1-40/Aβ1-42, and their composite (composite biomarker), generated by an improved IP-MS methodology, perform very well in larger independent datasets: a discovery dataset (NCGG, n=121) and a validation dataset (AIBL, n=252, of which 111 individuals underwent PiB-PET and 141 were scanned with other ligands), both of which included individuals with normal cognition, MCI and AD. In particular, the composite biomarker showed very high AUCs in both datasets (discovery 96.7%, n=121; validation 94.1%, n=111), with accuracy of ca. 90% when using PiB-PET as the standard of truth. The findings of the study were considered robust, reproducible and reliable because biomarker performance was validated in a blinded manner using independent data sets (Japan and Australia) and involved an established large-scale multicenter cohort (AIBL).
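The performance figures quoted above are ROC-based. As a minimal sketch of how such numbers are computed, the snippet below standardizes two hypothetical IP-MS ratios, averages them into a simple composite, and evaluates each against a PiB-PET amyloid status label with scikit-learn; the data are synthetic placeholders and the averaging rule is an assumption for illustration, not the published composite definition.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
app_ratio = rng.random(121)                   # hypothetical APP669-711/Abeta1-42 values
ab_ratio = rng.random(121)                    # hypothetical Abeta1-40/Abeta1-42 values
amyloid_positive = rng.integers(0, 2, 121)    # hypothetical PiB-PET status (1 = Abeta+)

def zscore(x):
    return (x - x.mean()) / x.std()

# Simple composite: mean of the standardized ratios (illustrative assumption).
composite = (zscore(app_ratio) + zscore(ab_ratio)) / 2

for name, score in [("APP669-711/Abeta1-42", app_ratio),
                    ("Abeta1-40/Abeta1-42", ab_ratio),
                    ("composite", composite)]:
    print(name, "AUC:", roc_auc_score(amyloid_positive, score))
```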


2019 ◽  
Vol 7 (3) ◽  
pp. SE113-SE122 ◽  
Author(s):  
Yunzhi Shi ◽  
Xinming Wu ◽  
Sergey Fomel

Salt boundary interpretation is important for the understanding of salt tectonics and for velocity model building for seismic migration. Conventional methods consist of computing salt attributes and extracting salt boundaries. We have formulated the problem as 3D image segmentation and evaluated an efficient approach based on deep convolutional neural networks (CNNs) with an encoder-decoder architecture. To train the model, we design a data generator that extracts randomly positioned subvolumes from a large-scale 3D training data set, applies data augmentation, and feeds a large number of subvolumes into the network, using salt/nonsalt binary labels generated by thresholding the velocity model as ground truth. We test the model on validation data sets and compare the blind-test predictions with the ground truth. Our results indicate that our method is capable of automatically capturing subtle salt features from the 3D seismic image with little or no need for manual input. We further test the model on a field example to demonstrate the generalization of this deep CNN method across different data sets.
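A minimal sketch of such a data generator is given below, assuming the seismic image and velocity model are co-registered NumPy volumes; the patch size, salt-velocity threshold, and the single flip augmentation are illustrative assumptions rather than the authors' training configuration.

```python
import numpy as np

def random_subvolumes(seismic, velocity, patch=(64, 64, 64),
                      salt_velocity=4.4, n_samples=16, rng=None):
    """Yield randomly positioned training subvolumes with binary salt labels.

    Labels are obtained by thresholding the velocity model (salt_velocity in km/s
    is an illustrative value). Volumes must be at least `patch` in each dimension.
    """
    rng = rng or np.random.default_rng()
    labels = (velocity >= salt_velocity).astype(np.float32)
    for _ in range(n_samples):
        i, j, k = (rng.integers(0, s - p + 1) for s, p in zip(seismic.shape, patch))
        sl = (slice(i, i + patch[0]), slice(j, j + patch[1]), slice(k, k + patch[2]))
        x, y = seismic[sl], labels[sl]
        if rng.random() < 0.5:            # simple augmentation: flip one horizontal axis
            x, y = x[:, ::-1], y[:, ::-1]
        yield x, y
```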


2016 ◽  
Vol 9 (6) ◽  
pp. 1187-1213 ◽  
Author(s):  
Petra Schneidhofer ◽  
Erich Nau ◽  
Alois Hinterleitner ◽  
Agata Lugmayr ◽  
Jan Bill ◽  
...  

2019 ◽  
Vol 11 (7) ◽  
pp. 755 ◽  
Author(s):  
Xiaodong Zhang ◽  
Kun Zhu ◽  
Guanzhou Chen ◽  
Xiaoliang Tan ◽  
Lifei Zhang ◽  
...  

Object detection on very-high-resolution (VHR) remote sensing imagery has attracted considerable attention in the field of automatic image interpretation. Region-based convolutional neural networks (CNNs) have been widely adopted in this domain; they first generate candidate regions and then accurately classify and locate the objects within them. However, the very large images, the complex image backgrounds and the uneven size and quantity distribution of training samples make the detection tasks more challenging, especially for small and dense objects. To solve these problems, an effective region-based VHR remote sensing imagery object detection framework named Double Multi-scale Feature Pyramid Network (DM-FPN) is proposed in this paper, which utilizes inherent multi-scale pyramidal features and combines strong-semantic, low-resolution features with weak-semantic, high-resolution features. DM-FPN consists of a multi-scale region proposal network and a multi-scale object detection network; these two modules share convolutional layers and can be trained end-to-end. We propose several multi-scale training strategies to increase the diversity of the training data and overcome the size restrictions of the input images. We also propose multi-scale inference and adaptive categorical non-maximum suppression (ACNMS) strategies to improve detection performance, especially for small and dense objects. Extensive experiments and comprehensive evaluations on the large-scale DOTA dataset demonstrate the effectiveness of the proposed framework, which achieves a mean average precision (mAP) of 0.7927 on the validation dataset and a best mAP of 0.793 on the testing dataset.
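To make the class-wise suppression step concrete, here is a minimal sketch of non-maximum suppression applied independently per category with class-specific IoU thresholds; the threshold values and function names are assumptions for illustration, not the ACNMS formulation from the paper.

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, all as [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def per_class_nms(boxes, scores, classes, thresholds, default_thr=0.5):
    """Suppress overlapping boxes independently for each class label."""
    keep = []
    for c in np.unique(classes):
        idx = np.where(classes == c)[0]
        order = idx[np.argsort(-scores[idx])]       # highest score first
        while order.size:
            keep.append(int(order[0]))
            if order.size == 1:
                break
            overlaps = iou(boxes[order[0]], boxes[order[1:]])
            order = order[1:][overlaps <= thresholds.get(c, default_thr)]
    return np.array(keep)
```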


2018 ◽  
Vol 11 (7) ◽  
pp. 4153-4170
Author(s):  
Fanny Jeanneret ◽  
Giovanni Martucci ◽  
Simon Pinnock ◽  
Alexis Berne

Abstract. The validation of long-term cloud data sets retrieved from satellites is challenging due to their worldwide coverage going back as far as the 1980s. A trustworthy reference cannot be found easily at every location and every time. Mountainous regions present a particular problem since ground-based measurements are sparse. Moreover, as retrievals from passive satellite radiometers are difficult in winter due to the presence of snow on the ground, it is particularly important to develop new ways to evaluate and to correct satellite data sets over elevated areas. In winter, for ground levels above 1000 m (a.s.l.) in Switzerland, the cloud occurrence of the newly released cloud property data sets of the ESA Climate Change Initiative Cloud_cci Project (Advanced Very High Resolution Radiometer afternoon series (AVHRR-PM) and Moderate-Resolution Imaging Spectroradiometer (MODIS) Aqua series) is 132 to 217 % that of surface synoptic (SYNOP) observations, corresponding to a rate of false cloud detections between 24 and 54 %. Furthermore, the overestimations increase with the altitude of the sites and are associated with particular retrieved cloud properties. In this study, a novel post-processing approach is proposed to reduce the number of false cloud detections in the satellite data sets. A combination of ground-based downwelling longwave and shortwave radiation and temperature measurements is used to provide independent validation of the cloud cover over 41 locations in Switzerland. An agreement of 85 % is obtained when the cloud cover is compared to surface synoptic observations (90 % within ± 1 okta difference). The validation data are then co-located with the satellite observations, and a decision tree model is trained to automatically detect the overestimations in the satellite cloud masks. Cross-validated results show that 62±13 % of these overestimations can be identified by the model, reducing the systematic error in the satellite data sets from 14.4±15.5 % to 4.3±2.8 %. The number of errors is lower and, importantly, their distribution is also more homogeneous. These corrections come at the cost of a global increase of 7±2 % in missed clouds. Using this model, it is possible to significantly improve the cloud detection reliability in elevated areas in the Cloud_cci AVHRR-PM and MODIS-Aqua products.
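As a rough sketch of the correction step, the snippet below trains a scikit-learn decision tree to flag false cloud detections from co-located ground measurements; the synthetic feature set (downwelling longwave and shortwave radiation, temperature, elevation) and tree depth are assumptions for illustration, not the exact predictors or settings used in the study.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 5000
# Hypothetical co-located samples (placeholders for real station data).
X = np.column_stack([
    rng.uniform(150, 400, n),     # downwelling longwave radiation (W m-2)
    rng.uniform(0, 1000, n),      # downwelling shortwave radiation (W m-2)
    rng.uniform(-25, 20, n),      # 2 m air temperature (deg C)
    rng.uniform(1000, 3500, n),   # site elevation (m a.s.l.)
])
# 1 = satellite reports cloud while the ground-based estimate disagrees.
is_false_cloud = rng.integers(0, 2, n)

model = DecisionTreeClassifier(max_depth=6, class_weight="balanced")
recall = cross_val_score(model, X, is_false_cloud, cv=5, scoring="recall")
print("cross-validated fraction of overestimations identified:", recall.mean())
```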


2008 ◽  
Vol 08 (02) ◽  
pp. 243-263 ◽  
Author(s):  
BENJAMIN A. AHLBORN ◽  
OLIVER KREYLOS ◽  
SOHAIL SHAFII ◽  
BERND HAMANN ◽  
OLIVER G. STAADT

We introduce a system that adds a foveal inset to large-scale projection displays. The effective resolution of the foveal inset projection is higher than the original display resolution, allowing the user to see more details and finer features in large data sets. The foveal inset is generated by projecting a high-resolution image onto a mirror mounted on a pan-tilt unit that is controlled by the user with a laser pointer. Our implementation is based on Chromium and supports many OpenGL applications without modifications. We present experimental results using high-resolution image data from medical imaging and aerial photography.
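The steering geometry behind such a setup can be sketched with basic vector reflection: the mirror normal must bisect the incoming and reflected beam directions. The helper below, assuming the projector, mirror, and desired inset positions are 3D vectors in a common coordinate frame, is an illustrative calculation only and is not part of the Chromium-based system described above.

```python
import numpy as np

def mirror_normal(projector_pos, mirror_pos, target_pos):
    """Unit normal a steerable mirror must adopt so a beam from the projector
    reflects toward the target point on the display surface."""
    m = np.asarray(mirror_pos, float)
    incoming = m - np.asarray(projector_pos, float)
    outgoing = np.asarray(target_pos, float) - m
    incoming /= np.linalg.norm(incoming)
    outgoing /= np.linalg.norm(outgoing)
    n = outgoing - incoming          # law of reflection: normal bisects the two unit directions
    return n / np.linalg.norm(n)

# Example: projector at the origin, mirror 1 m in front of it, inset target on a wall.
print(mirror_normal([0, 0, 0], [0, 0, 1], [2, 0, 3]))
```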


2021 ◽  
Vol 13 (4) ◽  
pp. 692
Author(s):  
Yuwei Jin ◽  
Wenbo Xu ◽  
Ce Zhang ◽  
Xin Luo ◽  
Haitao Jia

Convolutional Neural Networks (CNNs), such as U-Net, have shown competitive performance in the automatic extraction of buildings from Very High-Resolution (VHR) aerial images. However, due to unstable multi-scale context aggregation, the insufficient combination of multi-level features and the lack of consideration of the semantic boundary, most existing CNNs produce incomplete segmentations for large-scale buildings and predictions with high uncertainty at building boundaries. This paper presents a novel network with a special boundary-aware loss embedded, called the Boundary-Aware Refined Network (BARNet), to address these gaps. The unique properties of the proposed BARNet are the gated-attention refined fusion unit, the denser atrous spatial pyramid pooling module, and the boundary-aware loss. The performance of the BARNet is tested on two popular data sets that include various urban scenes and diverse patterns of buildings. Experimental results demonstrate that the proposed method outperforms several state-of-the-art approaches in both visual interpretation and quantitative evaluations.
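The core idea of a boundary-aware loss, up-weighting pixels near object boundaries in the segmentation loss, can be sketched in a few lines. The NumPy/SciPy version below uses a dilate-minus-erode boundary band and a fixed weight, which are illustrative assumptions and not the actual BARNet loss formulation.

```python
import numpy as np
from scipy import ndimage

def boundary_weighted_bce(pred, target, boundary_width=3, boundary_weight=5.0, eps=1e-7):
    """Binary cross-entropy with extra weight on pixels near mask boundaries.

    pred   : predicted building probabilities in [0, 1]
    target : binary ground-truth building mask
    """
    mask = target.astype(bool)
    # Boundary band: pixels within `boundary_width` of the building outline.
    band = ndimage.binary_dilation(mask, iterations=boundary_width) ^ \
           ndimage.binary_erosion(mask, iterations=boundary_width)
    weights = np.where(band, boundary_weight, 1.0)
    pred = np.clip(pred, eps, 1 - eps)
    bce = -(mask * np.log(pred) + (~mask) * np.log(1 - pred))
    return (weights * bce).sum() / weights.sum()
```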


2013 ◽  
Vol 10 (10) ◽  
pp. 12793-12827 ◽  
Author(s):  
W. Gossel ◽  
R. Laehne

Abstract. Time series analysis methods are compared based on four geoscientific datasets. New methods such as wavelet analysis, the short-time Fourier transform (STFT) and period scanning bridge the gap between high-resolution analysis of periodicities and non-equidistant data sets. The sample studies include not only time series but also spatial data. The application of variograms, in addition to or instead of autocorrelation, opens new research possibilities for storage parameters.
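For readers unfamiliar with the variogram mentioned above, a minimal sketch of the classical (Matheron) empirical estimator for irregularly spaced samples is shown below; the binning scheme and function name are illustrative assumptions.

```python
import numpy as np

def empirical_variogram(coords, values, lag_width, n_lags):
    """Classical empirical semivariogram gamma(h) for irregularly spaced data.

    coords : (n, d) array of sample locations
    values : (n,) array of the measured variable
    Returns lag-bin centres and the semivariance in each bin (NaN if empty).
    """
    coords = np.asarray(coords, float)
    values = np.asarray(values, float)
    i, j = np.triu_indices(len(values), k=1)          # all unique sample pairs
    dist = np.linalg.norm(coords[i] - coords[j], axis=1)
    sq_diff = (values[i] - values[j]) ** 2

    edges = np.arange(n_lags + 1) * lag_width
    gamma = np.full(n_lags, np.nan)
    for b in range(n_lags):
        in_bin = (dist >= edges[b]) & (dist < edges[b + 1])
        if in_bin.any():
            gamma[b] = 0.5 * sq_diff[in_bin].mean()
    return edges[:-1] + lag_width / 2, gamma
```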

