On the relation between avalanche occurrence and avalanche danger level

2020 ◽  
Vol 14 (2) ◽  
pp. 737-750 ◽  
Author(s):  
Jürg Schweizer ◽  
Christoph Mitterer ◽  
Frank Techel ◽  
Andreas Stoffel ◽  
Benjamin Reuter

Abstract. In many countries with seasonally snow-covered mountain ranges, warnings are issued to alert the public about imminent avalanche danger, mostly employing an ordinal, five-level danger scale. However, as avalanche danger cannot be measured, the characterization of avalanche danger remains qualitative. The probability of avalanche occurrence, in combination with the expected avalanche type and size, determines the degree of danger in a given forecast region (≳100 km2). To describe avalanche occurrence probability, the snowpack stability and its spatial distribution need to be assessed. To quantify the relation between avalanche occurrence and avalanche danger level, we analyzed a large data set of visually observed avalanches (13 918 in total) from the region of Davos (eastern Swiss Alps, ∼300 km2), all with mapped outlines, and we compared the avalanche activity to the forecast danger level on the day of occurrence (3533 danger ratings). The number of avalanches per day strongly increased with increasing danger level, confirming that not only the release probability but also the frequency of locations with a weakness in the snowpack, where avalanches may initiate, increases within a region. Avalanche size did not generally increase with increasing avalanche danger level, suggesting that avalanche size may be of secondary importance compared to snowpack stability and its distribution when assessing the danger level. Moreover, the frequency of wet-snow avalanches was found to be higher than the frequency of dry-snow avalanches for a given day and danger level; also, wet-snow avalanches tended to be larger. This finding may indicate that the danger scale is not used consistently with regard to avalanche type. Even though observed avalanche occurrence and avalanche danger level are subject to uncertainties, our findings on the characteristics of avalanche activity suggest reworking the definitions of the European avalanche danger scale.
The description of the danger levels can be improved, in particular by quantifying some of the many proportional quantifiers. For instance, based on our analyses, “many avalanches”, expected at danger level 4-High, means on the order of at least 10 avalanches per 100 km2. While our data set is one of the most comprehensive, visually observed avalanche records are known to be inherently incomplete, so our results often refer to a lower limit and should be confirmed using other similarly comprehensive data sets.
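The reported relation between danger level and per-area avalanche counts amounts to a simple grouped aggregation. A minimal sketch, using invented daily counts for a region of comparable size (all numbers illustrative, not the Davos data set):

```python
from collections import defaultdict
from statistics import median

# Hypothetical daily records for a ~300 km2 region: (danger_level, count).
records = [
    (1, 0), (1, 1), (2, 2), (2, 5), (3, 12), (3, 20), (4, 45), (4, 60),
]

REGION_AREA_KM2 = 300.0

def counts_per_100km2_by_level(records, area_km2):
    """Median daily avalanche count per 100 km2, grouped by danger level."""
    by_level = defaultdict(list)
    for level, count in records:
        by_level[level].append(count * 100.0 / area_km2)
    return {level: median(vals) for level, vals in sorted(by_level.items())}

print(counts_per_100km2_by_level(records, REGION_AREA_KM2))
```

With these invented counts, danger level 4 comes out well above the "on the order of 10 avalanches per 100 km2" threshold mentioned above, while level 1 stays near zero.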

2019 ◽  
Author(s):  
Jürg Schweizer ◽  
Christoph Mitterer ◽  
Frank Techel ◽  
Andreas Stoffel ◽  
Benjamin Reuter

Abstract. In many countries with seasonally snow-covered mountain ranges, warnings are issued to alert the public about imminent avalanche danger, mostly employing a five-level danger scale. However, as avalanche danger cannot be measured, the characterization of avalanche danger remains qualitative. The probability of avalanche occurrence in combination with the expected avalanche type and size determines the degree of danger in a given forecast region (≳ 100 km2). To describe avalanche occurrence probability, the snowpack stability and its spatial distribution need to be assessed. To quantify the relation between avalanche occurrence and avalanche danger level, we analyzed a large data set of visually observed avalanches from the region of Davos (eastern Swiss Alps), all with mapped outlines, and compared the avalanche activity to the forecast danger level on the day of occurrence. The number of avalanches per day strongly increased with increasing danger level, confirming that not only the release probability but also the frequency of locations with a weakness in the snowpack, where avalanches may initiate, increases within a region. Avalanche size did not generally increase with increasing avalanche danger level, suggesting that avalanche size may be of secondary importance compared to snowpack stability and its distribution when assessing the danger level. Moreover, the frequency of wet-snow avalanches was found to be higher than the frequency of dry-snow avalanches on a given day; also, wet-snow avalanches tended to be larger. This finding may indicate that the danger scale is not used consistently with regard to avalanche type. Although observed avalanche occurrence and avalanche danger level are subject to uncertainties, our findings on the characteristics of avalanche activity may allow revisiting the definitions of the European avalanche danger scale.
The description of the danger levels can be improved, in particular by quantifying some of the many proportional quantifiers. For instance, ‘many avalanches’, expected at danger level 4-High, means on the order of 10 avalanches per 100 km2. While our data set is one of the most comprehensive, visually observed avalanche records are known to be inherently incomplete, so our results often refer to a lower limit and should be confirmed using other similarly comprehensive data sets.


2020 ◽  
Vol 14 (10) ◽  
pp. 3503-3521
Author(s):  
Frank Techel ◽  
Karsten Müller ◽  
Jürg Schweizer

Abstract. Consistency in assigning an avalanche danger level when forecasting or locally assessing avalanche hazard is essential but challenging to achieve, as relevant information is often scarce and must be interpreted in light of uncertainties. Furthermore, the definitions of the danger levels, an ordinal variable, are vague and leave room for interpretation. Decision tools developed to assist in assigning a danger level are primarily experience-based due to a lack of data. Here, we address this lack of quantitative evidence by exploring a large data set of stability tests (N=9310) and avalanche observations (N=39 017) from two countries related to the three key factors that characterize avalanche danger: snowpack stability, the frequency distribution of snowpack stability, and avalanche size. We show that the frequency of the most unstable locations increases with increasing danger level. However, a similarly clear relation between avalanche size and danger level was not found. Only for the higher danger levels did the size of the largest avalanche per day and warning region increase. Furthermore, we derive stability distributions typical for the danger levels 1-Low to 4-High using four stability classes (very poor, poor, fair, and good) and define frequency classes describing the frequency of the most unstable locations (none or nearly none, a few, several, and many). Combining snowpack stability, the frequency of stability classes and avalanche size in a simulation experiment, typical descriptions for the four danger levels are obtained. Finally, using the simulated stability distributions together with the largest avalanche size in a stepwise approach, we present a data-driven look-up table for avalanche danger assessment. Our findings may aid in refining the definitions of the avalanche danger scale and in fostering its consistent usage.
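A data-driven look-up table of the kind described can be sketched as a small function. The frequency classes below are those named in the abstract; the mapping and the size rule are purely hypothetical, not the published table:

```python
# Frequency classes for the most unstable ("very poor") locations,
# as named in the abstract.
FREQUENCY_CLASSES = ["none or nearly none", "a few", "several", "many"]

def danger_level(frequency_class, largest_size):
    """Hypothetical look-up: frequency_class is an index into
    FREQUENCY_CLASSES; largest_size is the largest avalanche size (1-5).
    Returns a danger level from 1-Low to 4-High."""
    base = {0: 1, 1: 2, 2: 3, 3: 4}[frequency_class]
    # Illustrative rule: large avalanches (size >= 4) raise the
    # assessment by one level, capped at 4-High.
    if largest_size >= 4 and base < 4:
        base += 1
    return base

print(danger_level(2, 4))  # "several" very-poor locations plus a size-4 avalanche
```

The stepwise shape (stability frequency first, then avalanche size as a modifier) mirrors the approach described in the abstract, but the concrete thresholds here are assumptions for illustration only.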


2020 ◽  
Author(s):  
Frank Techel ◽  
Karsten Müller ◽  
Jürg Schweizer

Abstract. Consistency in assigning an avalanche danger level when forecasting or locally assessing avalanche hazard is essential but challenging to achieve, as relevant information is often scarce and must be interpreted in the light of uncertainties. Furthermore, the definitions of the danger levels, an ordinal variable, are vague and leave room for interpretation. Decision tools developed to assist in assigning a danger level are primarily experience-based due to a lack of data. Here, we address this lack of quantitative evidence by exploring a large data set of stability tests (N = 10,125) and avalanche observations (N = 39,017) from two countries related to the three key factors that characterize avalanche danger: snowpack stability, its frequency distribution and avalanche size. We show that the frequency of the most unstable locations increases with increasing danger level. However, a similarly clear relation between avalanche size and danger level was not found. Only for the higher danger levels did the size of the largest avalanche per day and warning region increase. Furthermore, we derive stability distributions typical for the danger levels 1-Low to 4-High using four stability classes (very poor, poor, fair and good), and define frequency classes (none or nearly none, a few, several and many) describing the frequency of the most unstable locations. Combining snowpack stability, its frequency and avalanche size in a simulation experiment, typical descriptions for the four danger levels are obtained. Finally, using the simulated snowpack distributions together with the largest avalanche size in a step-wise approach, as proposed in the Conceptual Model of Avalanche Hazard, we present an example of a data-driven look-up table for avalanche danger assessment. Our findings may aid in refining the definitions of the avalanche danger scale and in fostering its consistent usage.


2021 ◽  
Author(s):  
Cristina Pérez-Guillén ◽  
Frank Techel ◽  
Martin Hendrick ◽  
Michele Volpi ◽  
Alec van Herwijnen ◽  
...  

Abstract. Even today, the assessment of avalanche danger is by and large a subjective, albeit data-based, decision-making process. Human experts analyze large volumes of heterogeneous data, diverse in scale, and infer the avalanche scenario based on their experience. Nowadays, modern machine learning methods and the rise in computing power, in combination with physical snow cover modelling, open up new possibilities for developing decision support tools for operational avalanche forecasting. Therefore, we developed a fully data-driven approach to predict the regional avalanche danger level, the key component in public avalanche forecasts, for dry-snow conditions in the Swiss Alps. Using a large data set of more than 20 years of meteorological data measured by a network of automated weather stations located at the elevation of potential avalanche starting zones, together with snow cover simulations driven by these weather data, we trained two random forest (RF) classifiers. The first classifier (RF #1) was trained on the forecast danger levels published in the avalanche bulletin. Given the uncertainty related to using a forecast danger level as a target variable, we trained a second classifier (RF #2) on a quality-controlled subset of danger level labels. We optimized the RF classifiers by selecting the best set of input features combining meteorological variables and features extracted from the simulated profiles. The accuracy of the danger level predictions ranged between 74 % and 76 % for RF #1, and between 72 % and 78 % for RF #2, with both models achieving better performance than previously developed methods. To assess the accuracy of the forecast, and thus the quality of our labels, we relied on nowcast assessments of avalanche danger by well-trained observers. The performance of both models was similar to the accuracy of the current experience-based Swiss avalanche forecasts (estimated at 76 %).
The models performed consistently well throughout the Swiss Alps, and thus in different climatic regions, albeit with some regional differences. A prototype model with the RF classifiers was tested in a semi-operational setting by the Swiss avalanche warning service during the winter of 2020-2021. The promising results suggest that the model has the potential to become a valuable, supplementary decision support tool for avalanche forecasters when assessing avalanche hazard.
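The accuracy figures quoted here amount to simple agreement rates between predicted danger levels and reference labels (such as observer nowcasts). A minimal sketch with invented labels:

```python
def accuracy(predicted, reference):
    """Fraction of days on which the predicted danger level matches the
    reference label (e.g. a nowcast assessment by a trained observer)."""
    assert len(predicted) == len(reference)
    hits = sum(p == r for p, r in zip(predicted, reference))
    return hits / len(predicted)

# Illustrative labels only; the RF models in the study reached 72-78 %.
forecast = [2, 3, 3, 2, 4, 1, 3, 2]
nowcast  = [2, 3, 2, 2, 4, 1, 3, 3]
print(f"accuracy = {accuracy(forecast, nowcast):.2f}")  # prints "accuracy = 0.75"
```

For an ordinal target like danger level, exact agreement is a strict metric; the study's comparison against nowcasts follows the same match/no-match logic.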


Author(s):  
Lior Shamir

Abstract Several recent observations using large data sets of galaxies have shown a non-random distribution of the spin directions of spiral galaxies, even when the galaxies are too far from each other to interact gravitationally. Here, a data set of $\sim8.7\cdot10^3$ spiral galaxies imaged by the Hubble Space Telescope (HST) is used to test and profile a possible asymmetry between galaxy spin directions. The asymmetry between galaxies with opposite spin directions is compared to the asymmetry of galaxies from the Sloan Digital Sky Survey (SDSS). The two data sets contain different galaxies at different redshift ranges, and each was annotated using a different method. Both data sets exhibit a similar asymmetry in the COSMOS field, which is covered by both telescopes. Fitting the asymmetry of the galaxies to a cosine dependence yields a dipole axis with significance of $\sim2.8\sigma$ and $\sim7.38\sigma$ in HST and SDSS, respectively. The most likely dipole axis identified in the HST galaxies is at $(\alpha=78^{\circ},\delta=47^{\circ})$ and is well within the $1\sigma$ error range of the most likely dipole axis in the SDSS galaxies with $z>0.15$, identified at $(\alpha=71^{\circ},\delta=61^{\circ})$.
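Fitting a cosine (dipole) dependence means modelling each asymmetry measurement as proportional to the cosine of the angle between that sky position and a candidate axis. A minimal sketch with synthetic asymmetry values built around a hypothetical axis (the closed-form least-squares amplitude is used; the paper's actual fitting procedure may differ):

```python
import math

def cos_angle(ra1, dec1, ra2, dec2):
    """Cosine of the angular separation between two sky directions (degrees)."""
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    return (math.sin(d1) * math.sin(d2)
            + math.cos(d1) * math.cos(d2) * math.cos(r1 - r2))

def dipole_amplitude(points, axis_ra, axis_dec):
    """Least-squares amplitude a for the model d_i = a * cos(phi_i),
    where phi_i is the angle between sky position i and the candidate
    axis. points: list of (ra, dec, asymmetry)."""
    num = den = 0.0
    for ra, dec, d in points:
        c = cos_angle(ra, dec, axis_ra, axis_dec)
        num += d * c
        den += c * c
    return num / den

# Synthetic asymmetry following a pure dipole of amplitude 0.05 around a
# hypothetical axis; the fit recovers the amplitude.
axis = (78.0, 47.0)
pts = [(ra, dec, 0.05 * cos_angle(ra, dec, *axis))
       for ra in (0, 60, 120, 180, 240, 300) for dec in (-45, 0, 45)]
print(round(dipole_amplitude(pts, *axis), 3))
```

Scanning candidate axes over the sky and keeping the one maximizing the fit quality yields the "most likely dipole axis" reported in such analyses.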


2020 ◽  
Vol 6 ◽  
Author(s):  
Jaime de Miguel Rodríguez ◽  
Maria Eugenia Villafañe ◽  
Luka Piškorec ◽  
Fernando Sancho Caparrini

Abstract This work presents a methodology for the generation of novel 3D objects resembling wireframes of building types. These result from the reconstruction of interpolated locations within the learnt distribution of variational autoencoders (VAEs), a deep generative machine learning model based on neural networks. The data set used features a scheme for geometry representation based on a ‘connectivity map’ that is especially suited to express the wireframe objects that compose it. Additionally, the input samples are generated through ‘parametric augmentation’, a strategy proposed in this study that creates coherent variations among data by enabling a set of parameters to alter representative features on a given building type. In the experiments that are described in this paper, more than 150 k input samples belonging to two building types have been processed during the training of a VAE model. The main contribution of this paper has been to explore parametric augmentation for the generation of large data sets of 3D geometries, showcasing its problems and limitations in the context of neural networks and VAEs. Results show that the generation of interpolated hybrid geometries is a challenging task. Despite the difficulty of the endeavour, promising advances are presented.
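The reconstruction of "interpolated locations within the learnt distribution" reduces to interpolating between latent codes and decoding the result. A minimal sketch of the interpolation step, with hypothetical 4-dimensional latent codes (a real VAE decoder would then map the interpolated code back to a wireframe geometry):

```python
def lerp(z1, z2, t):
    """Linear interpolation between two latent vectors; decoding the
    result with a trained VAE decoder yields the interpolated geometry."""
    return [(1.0 - t) * a + t * b for a, b in zip(z1, z2)]

# Hypothetical latent codes of two building-type samples.
z_tower, z_hall = [0.0, 1.0, -0.5, 2.0], [1.0, -1.0, 0.5, 0.0]
midpoint = lerp(z_tower, z_hall, 0.5)
print(midpoint)  # [0.5, 0.0, 0.0, 1.0]
```

Sweeping t from 0 to 1 produces a sequence of hybrid geometries between the two training types, which is the kind of interpolation the paper evaluates.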


2006 ◽  
Vol 39 (2) ◽  
pp. 262-266 ◽  
Author(s):  
R. J. Davies

Synchrotron sources offer high-brilliance X-ray beams which are ideal for spatially and time-resolved studies. Large amounts of wide- and small-angle X-ray scattering data can now be generated rapidly, for example, during routine scanning experiments. Consequently, the analysis of the large data sets produced has become a complex and pressing issue. Even relatively simple analyses become difficult when a single data set can contain many thousands of individual diffraction patterns. This article reports on a new software application for the automated analysis of scattering intensity profiles. It is capable of batch-processing thousands of individual data files without user intervention. Diffraction data can be fitted using a combination of background functions and non-linear peak functions. To complement the batch-wise operation mode, the software includes several specialist algorithms to ensure that the results obtained are reliable. These include peak-tracking, artefact removal, function elimination and spread-estimate fitting. Furthermore, as well as non-linear fitting, the software can calculate integrated intensities and selected orientation parameters.
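The "background function plus non-linear peak function" combination can be illustrated with the simplest common case: a linear background plus a Gaussian peak. A sketch with invented parameters (the software described here fits such models per pattern, with the parameter values optimized rather than fixed):

```python
import math

def model(x, bg_slope, bg_intercept, amp, center, width):
    """Linear background plus a Gaussian peak, a common model for
    fitting 1-D scattering intensity profiles."""
    background = bg_slope * x + bg_intercept
    peak = amp * math.exp(-0.5 * ((x - center) / width) ** 2)
    return background + peak

# Illustrative profile: a peak of amplitude 50 centered at x = 10 on a
# gently sloping background.
intensity = [model(x, 0.1, 2.0, 50.0, 10.0, 1.5) for x in range(21)]
print(max(intensity))  # maximum at the peak center, x = 10
```

Batch processing then amounts to applying such a fit to thousands of profiles, with the peak-tracking mentioned above seeding each fit from the previous pattern's result.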


1997 ◽  
Vol 1997 ◽  
pp. 143-143
Author(s):  
B.L. Nielsen ◽  
R.F. Veerkamp ◽  
J.E. Pryce ◽  
G. Simm ◽  
J.D. Oldham

High producing dairy cows have been found to be more susceptible to disease (Jones et al., 1994; Göhn et al., 1995) raising concerns about the welfare of the modern dairy cow. Genotype and number of lactations may affect various health problems differently, and their relative importance may vary. The categorical nature and low incidence of health events necessitates large data sets, but the use of data collected across herds may introduce unwanted variation. Analysis of a comprehensive data set from a single herd was carried out to investigate the effects of genetic line and lactation number on the incidence of various health and reproductive problems.


2019 ◽  
Vol 8 (2S11) ◽  
pp. 3523-3526

This paper describes an efficient algorithm for classification in large data sets. While many classification algorithms exist, they are not well suited to larger data volumes and heterogeneous data sets. Various extreme learning machine (ELM) algorithms for large data sets are available in the literature. However, the existing algorithms use a fixed activation function, which may lead to deficiencies when working with large data. In this paper, we propose a novel ELM that employs a sigmoid activation function. The experimental evaluations demonstrate that our ELM-S algorithm performs better than ELM, SVM and other state-of-the-art algorithms on large data sets.
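The distinguishing feature of an ELM is that the hidden layer uses random, fixed input weights, so only the output weights need to be solved (by least squares); the sigmoid is the activation applied at the hidden layer. A minimal sketch of that hidden-layer mapping, with invented dimensions (the output-weight solve is omitted):

```python
import math
import random

def sigmoid(x):
    """Sigmoid activation, the activation function used by the
    proposed ELM-S variant."""
    return 1.0 / (1.0 + math.exp(-x))

def hidden_layer(sample, weights, biases):
    """ELM hidden layer: input weights and biases are random and stay
    fixed; only the output weights are later fitted by least squares."""
    return [sigmoid(sum(w * x for w, x in zip(ws, sample)) + b)
            for ws, b in zip(weights, biases)]

random.seed(0)
n_features, n_hidden = 3, 5
weights = [[random.uniform(-1, 1) for _ in range(n_features)]
           for _ in range(n_hidden)]
biases = [random.uniform(-1, 1) for _ in range(n_hidden)]
print(hidden_layer([0.2, -0.4, 0.9], weights, biases))
```

Because the hidden weights are never trained, the whole fit reduces to one linear solve over the hidden-layer outputs, which is what makes ELMs attractive for large data sets.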


2021 ◽  
Vol 14 (11) ◽  
pp. 2369-2382
Author(s):  
Monica Chiosa ◽  
Thomas B. Preußer ◽  
Gustavo Alonso

Data analysts often need to characterize a data stream as a first step to its further processing. Some of the initial insights to be gained include, e.g., the cardinality of the data set and its frequency distribution. Such information is typically extracted by using sketch algorithms, now widely employed to process very large data sets in manageable space and in a single pass over the data. Often, analysts need more than one parameter to characterize the stream. However, computing multiple sketches becomes expensive even when using high-end CPUs. Exploiting the increasing adoption of hardware accelerators, this paper proposes SKT, an FPGA-based accelerator that can compute several sketches along with basic statistics (average, max, min, etc.) in a single pass over the data. SKT has been designed to characterize a data set by calculating its cardinality, its second frequency moment, and its frequency distribution. The design processes data streams coming either from PCIe or TCP/IP, and it is built to fit emerging cloud service architectures, such as Microsoft's Catapult or Amazon's AQUA. The paper explores the trade-offs of designing sketch algorithms on a spatial architecture and how to combine several sketch algorithms into a single design. The empirical evaluation shows how SKT on an FPGA offers a significant performance gain over high-end, server-class CPUs.
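The frequency-distribution side of such stream characterization is classically handled by a count-min sketch: a fixed-size table updated in a single pass, whose estimates may overcount (due to hash collisions) but never undercount. A software sketch for illustration only, not SKT's hardware design:

```python
import hashlib

class CountMinSketch:
    """Classic count-min sketch: estimates item frequencies in one pass
    over the stream using sub-linear space."""
    def __init__(self, width=256, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, item, row):
        # One deterministic hash per row, derived from a seeded digest.
        h = hashlib.sha256(f"{row}:{item}".encode()).hexdigest()
        return int(h, 16) % self.width

    def add(self, item):
        for row in range(self.depth):
            self.table[row][self._index(item, row)] += 1

    def estimate(self, item):
        # Taking the minimum over rows bounds the collision error.
        return min(self.table[row][self._index(item, row)]
                   for row in range(self.depth))

cms = CountMinSketch()
for token in ["a", "b", "a", "c", "a"]:
    cms.add(token)
print(cms.estimate("a"))  # at least 3; exact unless all rows collide
```

On an FPGA, the per-row updates are independent and fixed-size, which is what makes this family of algorithms a good match for the spatial architecture the paper explores.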

