A random forest model to assess snow instability from simulated snow stratigraphy

Author(s):
Stephanie Mayer
Alec van Herwijnen
Jürg Schweizer

Numerical snow cover models enable simulating present or future snow stratigraphy based on meteorological input data from automatic weather stations, numerical weather prediction or climate models. To assess avalanche danger for short-term forecasts or with respect to long-term trends induced by a warming climate, modeled snow stratigraphy has to be interpreted in terms of mechanical instability. Several instability metrics describing the mechanical processes of avalanche release have been implemented in the detailed snow cover model SNOWPACK. However, no readily available method combines these metrics to predict snow instability.

To overcome this issue, we compared a comprehensive dataset of almost 600 manual snow profiles with SNOWPACK simulations. The manual profiles were observed in the region of Davos over 17 different winter seasons and include a Rutschblock stability test as well as a local assessment of avalanche danger. To simulate snow stratigraphy at the locations of the manual profiles, we interpolated meteorological input data from a network of automatic weather stations. In each simulated profile, we manually identified the layer corresponding to the weakest layer indicated by the Rutschblock test in the observed profile. We then used the subgroups of the most unstable and the most stable profiles to train a random forest (RF) classification model on the observed stability, described by a binary target variable (unstable vs. stable).

As potential explanatory variables, we considered all implemented stability indices calculated for the manually picked weak layers in the simulated profiles, as well as further weak layer and slab properties (e.g. weak layer grain size or slab density). After selecting the six most decisive features and tuning the hyper-parameters of the RF, the model was able to distinguish between unstable and stable profiles with a five-fold cross-validated accuracy of 88%.

Our RF model provides the probability of instability (POI) for any simulated snow layer, given the features of this layer and the overlying slab. Applying the RF model to each layer of a complete snow profile thus enables detecting the most unstable layers as the local maxima of the POI among all layers of the profile. To analyze the evolution of snow instability over a complete winter season, the RF model can provide the daily maximum POI values for a time series of snow profiles; comparing this series of POI values with observed avalanche activity allows the RF model to be validated.

The resulting statistical model is an important step towards exploiting numerical snow cover models for snow instability assessment.
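The per-layer POI workflow described above can be sketched in a few lines. This is a hedged illustration, not the authors' implementation: the six features are synthetic stand-ins (the paper's actual selected features are stability indices and weak layer/slab properties), and the local-maximum rule is a minimal version of the peak detection described in the abstract.

```python
# Hypothetical sketch: train an RF classifier on layer features, then flag
# candidate weak layers as local maxima of the probability of instability (POI).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic training data: 6 features per layer (stand-ins for the paper's
# selected features, e.g. stability indices, grain size, slab density).
X = rng.normal(size=(600, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=600) > 0).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# A "profile": one feature row per simulated layer, top to bottom.
profile = rng.normal(size=(25, 6))
poi = rf.predict_proba(profile)[:, 1]  # probability of instability per layer

# Local maxima of POI mark the most unstable candidate layers.
is_peak = np.r_[False, (poi[1:-1] > poi[:-2]) & (poi[1:-1] > poi[2:]), False]
weak_layer_idx = np.flatnonzero(is_peak)
```

Repeating the last step for each day's simulated profile yields the daily maximum POI series mentioned in the abstract.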

2020
Author(s):
Stephanie Mayer
Alec van Herwijnen
Mathias Bavay
Bettina Richter
Jürg Schweizer

Numerical snow cover models enable simulating present or future snow stratigraphy based on meteorological input data from automatic weather stations, numerical weather prediction or climate models. To assess avalanche danger for short-term forecasts or with respect to long-term trends induced by a warming climate, the modeled vertical layering of the snowpack has to be interpreted in terms of mechanical instability. In recent years, improvements in our understanding of dry-snow slab avalanche formation have led to the introduction of new metrics describing the fracture processes leading to avalanche release. Even though these instability metrics have been implemented in the detailed snow cover model SNOWPACK, validated threshold values that discriminate rather stable from rather unstable snow conditions are not readily available.

To overcome this issue, we compared a comprehensive dataset of almost 600 manual snow profiles with simulations. The manual profiles were observed in the region of Davos over 17 different winters and include stability tests such as the Rutschblock test as well as observations of signs of instability. To simulate snow stratigraphy at the locations of the manual profiles, we obtained meteorological input data by interpolating measurements from a network of automatic weather stations. By matching simulated snow layers with the layers from traditional snow profiles, we established a method to detect potential weak layers in the simulated profiles and to determine their degree of instability. To this end, thresholds for failure initiation (skier stability index) and crack propagation (critical crack length) were calibrated using the observed stability test results and the signs of instability recorded in the manual observations. The resulting instability criteria are an important step towards exploiting numerical snow cover models for snow instability assessment.
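The threshold calibration described here can be illustrated with a toy example. This is a sketch under stated assumptions, not the paper's calibration procedure: the synthetic "metric" stands in for one instability index (e.g. the skier stability index), the labels stand in for the observed stability test results, and the threshold is chosen by maximizing Youden's J statistic, one common way to discriminate two classes with a single cutoff.

```python
# Hedged sketch: calibrate a discrimination threshold on one instability
# metric against observed stable/unstable labels by maximizing J = TPR - FPR.
import numpy as np

rng = np.random.default_rng(1)
# Synthetic metric values: unstable profiles tend to have lower index values.
metric = np.r_[rng.normal(0.8, 0.3, 300), rng.normal(1.6, 0.4, 300)]
unstable = np.r_[np.ones(300, bool), np.zeros(300, bool)]

candidates = np.quantile(metric, np.linspace(0.05, 0.95, 91))

def youden(thr):
    pred = metric < thr  # low index -> predicted unstable
    tpr = (pred & unstable).sum() / unstable.sum()
    fpr = (pred & ~unstable).sum() / (~unstable).sum()
    return tpr - fpr

best_thr = max(candidates, key=youden)
```

The same scan can be run for each metric (failure initiation and crack propagation) separately, as the abstract indicates.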


2021
Author(s):
Benjamin Reuter
Léo Viallon-Galinier
Stephanie Mayer
Pascal Hagenmuller
Samuel Morin

Snow cover models have mostly been developed to support avalanche forecasting. Recently developed snow instability metrics can help interpret modeled snow cover data. However, snow cover models presently cannot forecast the relevant avalanche problem types, an essential element in describing avalanche danger. We present an approach to detect, track and assess weak layers in snow cover model output data and to assess the related avalanche problem type. We demonstrate the applicability of this approach with both SNOWPACK and CROCUS snow cover model output for one winter season at Weissfluhjoch. We introduce a classification scheme for four commonly used avalanche problem types (new snow, wind slabs, persistent weak layers and wet snow), so that different avalanche situations during a winter season can be classified based on weak layer type and meteorological conditions. According to the modeled avalanche problem types and snow instability metrics, both models produced weaknesses in the modeled stratigraphy during similar periods. For instance, in late December 2014 both models picked up a non-persistent as well as a persistent weak layer; both layers were observed in the field and caused widespread instability in the area. Times when avalanches released naturally were recorded with two seismic avalanche detection systems and coincided reasonably well with periods of low modeled stability. Moreover, the approach provides the avalanche problem types that relate to the observed natural instability, which makes the interpretation of modeled snow instability metrics easier. As the approach is process-based, it is applicable to any model in any snow avalanche climate and could be used to anticipate changes in avalanche problem type due to a changing climate. It is also suited to support the interpretation of snow stratigraphy data in operational forecasting.
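A classification scheme of this kind is naturally rule-based. The sketch below is purely illustrative: the grain-type codes, thresholds and rule ordering are placeholders, not the paper's calibrated scheme, but it shows how weak layer type and meteorological conditions can jointly determine one of the four problem types.

```python
# Illustrative rule-based problem-type scheme (thresholds and grain-type
# codes are placeholders, not the paper's calibrated values).
def problem_type(grain_type, age_days, new_snow_24h_cm, wind_speed_ms, wet):
    """Classify a modeled weakness into one of four avalanche problem types."""
    if wet:
        return "wet snow"
    # Persistent grain forms (surface hoar, depth hoar, facets) that survive
    # beyond a couple of days indicate a persistent weak layer problem.
    if grain_type in {"SH", "DH", "FC"} and age_days > 2:
        return "persistent weak layer"
    # Substantial recent snowfall: wind decides between wind slab and new snow.
    if new_snow_24h_cm > 20 and wind_speed_ms > 5:
        return "wind slab"
    if new_snow_24h_cm > 20:
        return "new snow"
    return "none"
```

Applied day by day to tracked weak layers in SNOWPACK or CROCUS output, such rules yield a season-long sequence of problem types that can be compared with detected avalanche activity.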


2019
Vol 13 (12)
pp. 3353-3366
Author(s):
Bettina Richter
Jürg Schweizer
Mathias W. Rotach
Alec van Herwijnen

Abstract. Observed snow stratigraphy and snow stability are of key importance for avalanche forecasting. Such observations are rare, and snow cover models can improve their spatial and temporal resolution. To evaluate snow stability, both failure initiation and crack propagation have to be considered. Recently, a new stability criterion relating to crack propagation, namely the critical crack length, was implemented in the snow cover model SNOWPACK. The critical crack length can also be measured in the field with a propagation saw test, which allows for an unambiguous comparison. To validate and improve the parameterization of the critical crack length, we used data from 3 years of field experiments performed close to two automatic weather stations above Davos, Switzerland. We monitored seven distinct weak layers and performed in total 157 propagation saw tests on a weekly basis. Comparing modeled to measured critical crack lengths revealed discrepancies stemming from model assumptions. Hence, we replaced two variables of the original parameterization, namely the weak layer shear modulus and thickness, with a fit factor depending on weak layer density and grain size. With these adjustments, the normalized root-mean-square error between modeled and observed critical crack lengths decreased from 1.80 to 0.28. As the improved parameterization accounts for grain size, critical crack lengths for snow layers consisting of small grains, which in general are not weak layers, become larger. In turn, critical weak layers appear more prominently in the vertical profile of critical crack length simulated with SNOWPACK, and minima in modeled critical crack length better match observed weak layers. The improved parameterization may be useful both for weak layer detection in simulated snow stratigraphy and for providing more realistic snow stability information, and hence may improve avalanche forecasting.
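The evaluation statistic used above is easily made explicit. A minimal sketch, assuming the common convention of normalizing the RMSE by the mean of the observations (the abstract does not state which normalization was used):

```python
# Sketch: normalized root-mean-square error between modeled and measured
# critical crack lengths (normalization by the observation mean is assumed).
import numpy as np

def nrmse(observed, modeled):
    observed = np.asarray(observed, dtype=float)
    modeled = np.asarray(modeled, dtype=float)
    rmse = np.sqrt(np.mean((modeled - observed) ** 2))
    return rmse / observed.mean()
```

With this metric, the reported improvement corresponds to the score dropping from 1.80 (original parameterization) to 0.28 (adjusted parameterization).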


2020
Vol 59 (12)
pp. 2001-2019
Author(s):
Niilo Siljamo
Otto Hyvärinen
Aku Riihelä
Markku Suomalainen

Abstract. Snow cover plays a significant role in the weather and climate system by affecting the energy and mass transfer between the surface and the atmosphere. It also has far-reaching effects on the ecosystems of snow-covered areas. Therefore, timely global snow-cover observations are needed, and satellite-based instruments can be utilized to produce snow-cover information suitable for these needs. Highly variable surface and snow-cover features suggest that operational snow extent algorithms may benefit from an at least partly empirical approach based on carefully analyzed training data. Here, a new two-phase snow-cover algorithm utilizing data from the Advanced Very High Resolution Radiometer (AVHRR) on board the MetOp satellites of the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) is introduced and evaluated. This algorithm is used to produce the MetOp/AVHRR H32 snow extent product for the Satellite Application Facility on Support to Operational Hydrology and Water Management (H SAF). The algorithm aims at direct detection of snow-covered and snow-free pixels without preceding cloud masking. Pixels that cannot be reliably classified as snow or snow-free, because of clouds or other reasons, are left unclassified. This reduces the coverage but increases the accuracy of the algorithm. More than four years of snow-depth and state-of-the-ground observations from weather stations were used to validate the product. The validation results show that the algorithm produces high-quality snow coverage data suitable for numerical weather prediction, hydrological modeling and other applications.
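The coverage/accuracy trade-off created by an "unclassified" class can be demonstrated with synthetic data. This is not the H32 algorithm itself; it is a toy sketch in which a per-pixel snow score is thresholded twice, and everything between the two thresholds is left unclassified, so accuracy is scored only on the classified pixels.

```python
# Sketch of the coverage/accuracy trade-off: pixels with ambiguous scores
# are left unclassified; accuracy is computed on classified pixels only.
import numpy as np

def classify(score, lo=0.3, hi=0.7):
    """score in [0,1]: high -> snow, low -> snow-free, middle -> unclassified."""
    out = np.full(score.shape, "unclassified", dtype=object)
    out[score >= hi] = "snow"
    out[score <= lo] = "snow-free"
    return out

rng = np.random.default_rng(2)
truth = rng.random(10_000) < 0.5                     # True = snow pixel
score = np.clip(truth * 0.6 + rng.normal(0.2, 0.15, truth.size), 0, 1)

pred = classify(score)
classified = pred != "unclassified"
coverage = classified.mean()                          # fraction of pixels decided
correct = (pred[classified]
           == np.where(truth[classified], "snow", "snow-free")).mean()
```

Widening the unclassified band (raising `hi`, lowering `lo`) lowers coverage but raises accuracy on the remaining pixels, which is the design choice the abstract describes.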


2009
Vol 55 (193)
pp. 761-768
Author(s):
Michael Schirmer
Michael Lehning
Jürg Schweizer

Abstract. In the past, numerical prediction of regional avalanche danger using statistical methods with meteorological input variables has shown insufficiently accurate results, possibly due to the lack of snow-stratigraphy data. Detailed snow-cover data were rarely used because they were not readily available (manual observations). With the development and increasing use of snow-cover models, this deficiency can now be rectified and model output can be used as input for forecasting models. We used the output of the physically based snow-cover model SNOWPACK combined with meteorological variables to investigate and establish a link to regional avalanche danger. Snow stratigraphy was simulated for the location of an automatic weather station near Davos, Switzerland, over nine winters. Only dry-snow situations were considered. A variety of selection algorithms was used to identify the most important simulated snow variables. Data mining and statistical methods, including classification trees, artificial neural networks, support vector machines, hidden Markov models and nearest-neighbour methods, were trained on the forecast regional avalanche danger (European avalanche danger scale). The best results were achieved with a nearest-neighbour method that used the avalanche danger level of the previous day as additional input; it obtained a cross-validated accuracy (hit rate) of 73%. This study suggests that modelled snow-stratigraphy variables, as provided by SNOWPACK, are able to improve numerical avalanche forecasting.
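The best-performing setup (nearest neighbours with the previous day's danger level as an extra feature) can be sketched as follows. All features here are synthetic stand-ins for the SNOWPACK and meteorological variables; only the structure (lagged danger level appended to the feature matrix, cross-validated hit rate) mirrors the abstract.

```python
# Minimal sketch: k-nearest-neighbour danger-level classifier with the
# previous day's danger level included as an additional input feature.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)
n = 400
danger = rng.integers(1, 5, n)                            # danger levels 1-4
snow_vars = danger[:, None] + rng.normal(0, 1.0, (n, 3))  # stand-in model output
prev_danger = np.r_[danger[0], danger[:-1]]               # lagged danger level

X = np.column_stack([snow_vars, prev_danger])
hit_rate = cross_val_score(KNeighborsClassifier(5), X, danger, cv=5).mean()
```

In the real setting the lagged feature carries strong persistence information, which is why it improved the nearest-neighbour results.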


1957
Vol 3 (21)
pp. 72-77
Author(s):
Miloš Vrba
Bedřich Urbánek

Abstract. This paper gives a brief account of the results so far obtained in research in Czechoslovakia on the crystallographic, stratigraphical and thermal properties of snow cover, and the use of these data in avalanche investigations. Avalanche danger is predicted by comparing the penetration resistance of snow layers, measured with a rammsonde, with resistance graphs of typical avalanche situations.


Energies
2021
Vol 14 (7)
pp. 1809
Author(s):
Mohammed El Amine Senoussaoui
Mostefa Brahami
Issouf Fofana

Machine learning is widely used in many engineering applications, including the condition assessment of power transformers. Most failure statistics identify insulation degradation as the main cause of transformer failure. Thus, a new, simple and effective machine-learning approach was proposed to monitor the condition of transformer oils based on several aging indicators. The proposed approach was used to compare the performance of two machine-learning classifiers: the J48 decision tree and random forest. The service-aged transformer oils were classified into four groups: oils that can be maintained in service, oils that should be reconditioned or filtered, oils that should be reclaimed, and oils that must be discarded. Of the two algorithms, random forest exhibited better performance and high accuracy with only a small amount of data. The good performance was achieved not only through the proposed algorithm but also through the data preprocessing. Before being fed to the classification model, the available data were transformed using the simple k-means method. Subsequently, the obtained data were filtered through correlation-based feature selection (CfsSubset). The resulting features were retransformed by principal component analysis and passed through the CfsSubset filter again. The transformation and filtration of the data improved the classification performance of the adopted algorithms, especially random forest. Another advantage of the proposed method is the reduced amount of data required for the condition assessment of transformer oils, which is valuable for transformer condition monitoring.
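The preprocessing chain described above (k-means transformation, correlation-based selection, PCA, classification) can be sketched end to end. This is a loose analogue, not the authors' Weka pipeline: Weka's CfsSubset filter is replaced here by a simple correlation-with-target ranking, and the data are synthetic stand-ins for the oil aging indicators.

```python
# Hedged sketch of the preprocessing chain: k-means discretization ->
# correlation-based feature filter -> PCA -> random forest classification.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 8))                 # stand-in oil aging indicators
y = (X[:, 0] - X[:, 1] > 0).astype(int)       # toy oil-condition class

# 1) k-means transformation: discretize each feature into cluster labels.
Xk = np.column_stack([KMeans(3, n_init=10, random_state=0)
                      .fit_predict(X[:, [j]]) for j in range(X.shape[1])])

# 2) keep the features most correlated with the target (CfsSubset analogue).
corr = np.abs([np.corrcoef(Xk[:, j], y)[0, 1] for j in range(Xk.shape[1])])
Xf = Xk[:, np.argsort(corr)[-4:]]

# 3) PCA re-transformation, then random forest classification.
Xp = PCA(n_components=3).fit_transform(Xf)
acc = cross_val_score(RandomForestClassifier(random_state=0), Xp, y, cv=5).mean()
```

The value of such a chain is that each stage reduces noise or redundancy before the classifier sees the data, which is the effect the abstract reports for random forest.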


Agriculture
2021
Vol 11 (4)
pp. 371
Author(s):
Yu Jin
Jiawei Guo
Huichun Ye
Jinling Zhao
Wenjiang Huang
...

The remote sensing extraction of large areas of arecanut (Areca catechu L.) planting plays an important role in investigating the distribution of the arecanut planting area and in the subsequent adjustment and optimization of regional planting structures. Satellite imagery has previously been used to investigate and monitor agricultural and forestry vegetation in Hainan. However, the monitoring accuracy is affected by the cloudy and rainy climate of this region, as well as by the high level of land fragmentation. In this paper, we used PlanetScope imagery at a 3 m spatial resolution over the Hainan arecanut planting area to investigate high-precision extraction of the arecanut planting distribution based on feature space optimization. First, spectral and textural feature variables were selected to form the initial feature space, and the random forest algorithm was then applied to optimize this feature space. Arecanut planting area extraction models based on the support vector machine (SVM), BP neural network (BPNN) and random forest (RF) classification algorithms were then constructed. The overall classification accuracies of the SVM, BPNN and RF models with RF-optimized features were 74.82%, 83.67% and 88.30%, with kappa coefficients of 0.680, 0.795 and 0.853, respectively. The RF model with optimized features exhibited the highest overall classification accuracy and kappa coefficient. The overall accuracy of the SVM, BPNN and RF models after feature optimization improved by 3.90%, 7.77% and 7.45%, respectively, compared with the corresponding unoptimized models; the kappa coefficients also improved. The results demonstrate the ability of PlanetScope satellite imagery to extract the arecanut planting distribution. Furthermore, the RF algorithm is shown to effectively optimize the initial feature space, composed of spectral and textural feature variables, further improving the extraction accuracy. This work can serve as a theoretical and technical reference for the agricultural and forestry industries.
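The RF-based feature space optimization can be sketched generically. The features below are synthetic stand-ins for the spectral and textural variables; the structure (rank the initial feature space by random-forest importance, keep the top subset, train the final classifiers on the reduced space) follows the abstract, but the ranking criterion and subset size are assumptions.

```python
# Sketch: rank features by random-forest importance, keep the top subset,
# then train the final classifiers (here SVM and RF) on the reduced space.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 12))                     # stand-in spectral/textural bands
y = (X[:, 0] + X[:, 1] - X[:, 2] > 0).astype(int)  # toy arecanut / other label

ranker = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top = np.argsort(ranker.feature_importances_)[-5:]  # optimized feature subset

acc_svm = cross_val_score(SVC(), X[:, top], y, cv=5).mean()
acc_rf = cross_val_score(RandomForestClassifier(random_state=0),
                         X[:, top], y, cv=5).mean()
```

Comparing the cross-validated accuracies with and without the subset selection reproduces, in miniature, the optimized-versus-unoptimized comparison reported in the paper.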


Sensors
2021
Vol 21 (10)
pp. 3553
Author(s):
Jeremy Watts
Anahita Khojandi
Rama Vasudevan
Fatta B. Nahab
Ritesh A. Ramdhani

Parkinson’s disease medication treatment planning is generally based on subjective data obtained through clinical, physician-patient interactions. The Personal KinetiGraph™ (PKG) and similar wearable sensors have shown promise in enabling objective, continuous remote health monitoring for Parkinson’s patients. In this proof-of-concept study, we propose to use objective sensor data from the PKG and apply machine learning to cluster patients based on levodopa regimens and response. The resulting clusters are then used to enhance treatment planning by providing improved initial treatment estimates to supplement a physician’s initial assessment. We apply k-means clustering to a dataset of within-subject Parkinson’s medication changes—clinically assessed by the MDS-Unified Parkinson’s Disease Rating Scale-III (MDS-UPDRS-III) and the PKG sensor for movement staging. A random forest classification model was then used to predict patients’ cluster allocation based on their respective demographic information, MDS-UPDRS-III scores, and PKG time-series data. Clinically relevant clusters were partitioned by levodopa dose, medication administration frequency, and total levodopa equivalent daily dose—with the PKG providing similar symptomatic assessments to physician MDS-UPDRS-III scores. A random forest classifier trained on demographic information, MDS-UPDRS-III scores, and PKG time-series data was able to accurately classify subjects of the two most demographically similar clusters with an accuracy of 86.9%, an F1 score of 90.7%, and an AUC of 0.871. A model that relied solely on demographic information and PKG time-series data provided the next best performance with an accuracy of 83.8%, an F1 score of 88.5%, and an AUC of 0.831, hence further enabling fully remote assessments. 
These computational methods demonstrate the feasibility of using sensor-based data to cluster patients based on their medication responses with further potential to assist with medication recommendations.
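The cluster-then-classify structure of this study can be sketched with synthetic data. The feature names and values below are hypothetical (toy stand-ins for levodopa regimen and response variables, not PKG or MDS-UPDRS-III data); only the two-stage pattern (unsupervised k-means clustering, then a random forest predicting cluster membership from other measurements) mirrors the abstract.

```python
# Conceptual sketch: k-means clusters patients by medication regimen/response;
# a random forest then predicts cluster allocation from separate features.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n = 300
response = np.column_stack([
    rng.normal(rng.choice([400, 700, 1000], n), 60),  # toy levodopa-equivalent dose
    rng.normal(rng.choice([3, 4, 5], n), 0.3),        # toy doses per day
])

# Stage 1: unsupervised clustering on regimen/response variables.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(response)

# Stage 2: predict cluster allocation from other data (here: noisy views
# of the same variables, standing in for demographics and sensor summaries).
features = response + rng.normal(0, 30, response.shape)
acc = cross_val_score(RandomForestClassifier(random_state=0),
                      features, clusters, cv=5).mean()
```

In the study, stage 2 is what allows a new patient's likely cluster, and hence an initial treatment estimate, to be predicted before a full clinical assessment.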


2021
Author(s):
Dieter Issler

On physical grounds, the rate of bed entrainment in gravity mass flows should be determined by the properties of the bed material and the dynamical variables of the flow. Due to the complexity of the process, most entrainment formulas proposed in the literature contain some ad hoc parameter not tied to measurable snow properties. Among the very few models without free parameters are the Eglit–Grigorian–Yakimov (EGY) model of frontal entrainment from the 1960s and two formulas for basal entrainment: one from the 1970s due to Grigorian and Ostroumov (GO) and one (IJ) implemented in NGI's flow code MoT-Voellmy. A common feature of these three approaches is that they treat erosion as a shock and exploit jump conditions for mass and momentum across the erosion front. The erosion or entrainment rate is determined by the difference between the avalanche-generated stress at the erosion front and the strength of the snow cover. The models differ in how the shock is oriented and which momentum components are considered. The present contribution shows that each of the three models has shortcomings: the EGY model is ambiguous if the avalanche pressure is too small to entrain the entire snow layer, the IJ model neglects normal stresses, and the GO model disregards shear stresses and the acceleration of the eroded mass. As they stand, neither the GO nor the IJ model captures situations, observed experimentally by means of profiling radar, in which the snow cover is not eroded progressively but suddenly fails on a buried weak layer as the avalanche flows over it. We suggest a way to resolve the ambiguity in the EGY model and sketch a more comprehensive model combining all three approaches, capturing gradual entrainment from the snow-cover surface together with erosion along a buried weak layer.
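The shared shock treatment can be written out schematically. The following is a generic sketch of jump conditions across a basal erosion front, not the exact formulation of any of the three models: the front advances into the bed at speed $w_e$, the bed has density $\rho_b$, the flow moves at speed $u$, $\tau_b$ is the avalanche-generated shear stress at the front, and $\tau_c$ is the shear strength of the snow cover.

```latex
% Schematic jump conditions across a basal erosion front:
% mass eroded per unit area and time, and the momentum required
% to accelerate the eroded mass to the flow speed u.
\begin{align*}
  \dot m &= \rho_b\, w_e , \\
  \dot m\, u &= \tau_b - \tau_c
  \quad\Longrightarrow\quad
  w_e = \frac{\tau_b - \tau_c}{\rho_b\, u} .
\end{align*}
```

Entrainment thus stops when the avalanche-generated stress drops to the snow-cover strength ($\tau_b \le \tau_c$), which is the common physical content of the parameter-free models discussed above.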

