Stochastic dynamics of snow avalanche occurrence by superposition of Poisson processes

Author(s):  
Paolo Perona ◽  
Edoardo Daly ◽  
Benoît Crouzy ◽  
Amilcare Porporato

We study the dynamics of systems with deterministic trajectories randomly forced by instantaneous discontinuous jumps occurring according to two different compound Poisson processes. One process, with constant frequency, causes instantaneous positive random increments, whereas the second process has a state-dependent frequency and describes negative jumps that force the system to restart from zero (renewal jumps). We obtain the probability distributions of the state variable and the magnitude and intertimes of the jumps to zero. This modelling framework is used to describe snow-depth dynamics on mountain hillsides, where the positive jumps represent snowfall events, whereas the jumps to zero describe avalanches. The probability distributions of snow depth, together with the statistics of avalanche magnitude and occurrence, are used to explain the correlation between avalanche occurrence and snowfall as a function of hydrologic, terrain slope and aspect parameters. This information is synthesized into a ‘prediction entropy’ function that gives the level of confidence of avalanche occurrence prediction in relation to terrain properties.
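The superposition of the two jump processes can be sketched with a small Monte Carlo simulation. The parameter values and the linear state dependence of the avalanche frequency (rate `k * depth`) are illustrative assumptions, not the paper's calibration:

```python
import random

def simulate_snow_depth(t_max=1000.0, lam=0.5, mean_fall=1.0, k=0.05, seed=1):
    """Two superposed Poisson processes: snowfalls at constant rate `lam`
    with exponential random depths, and avalanches (renewal jumps to zero)
    at the state-dependent rate k * depth."""
    rng = random.Random(seed)
    t, depth = 0.0, 0.0
    avalanche_sizes = []
    while t < t_max:
        total_rate = lam + k * depth          # superposition of the two processes
        t += rng.expovariate(total_rate)      # waiting time to the next event
        if rng.random() < lam / total_rate:   # snowfall: positive random increment
            depth += rng.expovariate(1.0 / mean_fall)
        else:                                 # avalanche: record magnitude, restart at zero
            avalanche_sizes.append(depth)
            depth = 0.0
    return depth, avalanche_sizes

final_depth, sizes = simulate_snow_depth()
```

Histograms of `sizes` and of the intertimes between resets would approximate the avalanche-magnitude and occurrence statistics derived analytically in the paper.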

Author(s):  
A. Lenardic ◽  
J. Seales

The term habitable is used to describe planets that can harbour life. Debate exists as to the specific conditions that allow for habitability, but the use of the term as a planetary variable has become ubiquitous. This paper poses a meta-level question: What type of variable is habitability? Is it akin to temperature, in that it is something that characterizes a planet, or is it something that flows through a planet, akin to heat? That is, is habitability a state or a process variable? Forthcoming observations can be used to discriminate between these end-member hypotheses. Each has different implications for the factors that lead to differences between planets (e.g. the differences between Earth and Venus). Observational tests can proceed independent of any new modelling of planetary habitability. However, the viability of habitability as a process can influence future modelling. We discuss a specific modelling framework based on anticipating observations that can discriminate between different views of habitability.


Author(s):  
Marius Ötting ◽  
Roland Langrock ◽  
Antonello Maruotti

Abstract We investigate the potential occurrence of change points—commonly referred to as “momentum shifts”—in the dynamics of football matches. For that purpose, we model minute-by-minute in-game statistics of Bundesliga matches using hidden Markov models (HMMs). To allow for within-state dependence of the variables, we formulate multivariate state-dependent distributions using copulas. For the Bundesliga data considered, we find that the fitted HMMs comprise states which can be interpreted as a team showing different levels of control over a match. Our modelling framework enables inference related to causes of momentum shifts and team tactics, which is of much interest to managers, bookmakers, and sports fans.
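As an illustration of the state-decoding idea only (not the authors' copula-based multivariate model), a minimal univariate Gaussian-emission HMM with log-space Viterbi decoding might look like this; the toy "shots per minute" trace and all parameter values are invented:

```python
import math

def viterbi(obs, pi, A, means, sd):
    """Most likely state sequence for a univariate Gaussian-emission HMM,
    computed in log space. pi: initial probabilities, A: transition matrix."""
    def logpdf(x, m):
        return -0.5 * math.log(2 * math.pi * sd * sd) - (x - m) ** 2 / (2 * sd * sd)
    n = len(pi)
    delta = [math.log(pi[s]) + logpdf(obs[0], means[s]) for s in range(n)]
    back = []
    for x in obs[1:]:
        prev, delta, ptr = delta, [], []
        for s in range(n):
            best = max(range(n), key=lambda r: prev[r] + math.log(A[r][s]))
            delta.append(prev[best] + math.log(A[best][s]) + logpdf(x, means[s]))
            ptr.append(best)
        back.append(ptr)
    path = [max(range(n), key=lambda s: delta[s])]
    for ptr in reversed(back):          # backtrack through the stored pointers
        path.append(ptr[path[-1]])
    return path[::-1]

# Hypothetical in-game statistic: a low-control phase, then a momentum shift
obs = [0.1, 0.2, 0.1, 0.3, 1.9, 2.1, 1.8, 2.0]
states = viterbi(obs, pi=[0.5, 0.5], A=[[0.9, 0.1], [0.1, 0.9]],
                 means=[0.2, 2.0], sd=0.3)
print(states)  # → [0, 0, 0, 0, 1, 1, 1, 1]
```

The decoded state sequence marks the change point; the paper's model extends this with copula-linked multivariate emissions.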


2018 ◽  
Vol 146 (12) ◽  
pp. 4079-4098 ◽  
Author(s):  
Thomas M. Hamill ◽  
Michael Scheuerer

Abstract Hamill et al. described a multimodel ensemble precipitation postprocessing algorithm that is used operationally by the U.S. National Weather Service (NWS). This article describes further changes that produce improved, reliable, and skillful probabilistic quantitative precipitation forecasts (PQPFs) for single or multimodel prediction systems. For multimodel systems, final probabilities are produced through the linear combination of PQPFs from the constituent models. The new methodology is applied to each prediction system. Prior to adjustment of the forecasts, parametric cumulative distribution functions (CDFs) of model and analyzed climatologies are generated using the previous 60 days’ forecasts and analyses and supplemental locations. The CDFs, which can be stored with minimal disk space, are then used for quantile mapping to correct state-dependent bias for each member. In this stage, the ensemble is also enlarged using a stencil of forecast values from the 5 × 5 surrounding grid points. Different weights and dressing distributions are assigned to the sorted, quantile-mapped members, with generally larger weights for outlying members and broader dressing distributions for members with heavier precipitation. Probability distributions are generated from the weighted sum of the dressing distributions. The NWS Global Ensemble Forecast System (GEFS), the Canadian Meteorological Centre (CMC) global ensemble, and the European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble forecast data are postprocessed for April–June 2016. Single prediction system postprocessed forecasts are generally reliable and skillful. Multimodel PQPFs are roughly as skillful as the ECMWF system alone. Postprocessed guidance was generally more skillful than guidance using the Gamma distribution approach of Scheuerer and Hamill, with coefficients generated from data pooled across the United States.
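A much-simplified, empirical-CDF version of the quantile-mapping step might look like the sketch below. The article fits parametric CDFs to the previous 60 days of forecasts and analyses; the five-value climatologies here are hypothetical and assume equal sample sizes:

```python
import bisect

def quantile_map(forecast, model_climo, analyzed_climo):
    """Replace a forecast by the analyzed-climatology value sitting at the
    same quantile that the forecast occupies in the model climatology."""
    model = sorted(model_climo)
    analyzed = sorted(analyzed_climo)
    rank = bisect.bisect_right(model, forecast) - 1  # 0-based rank in model CDF
    rank = max(0, min(rank, len(analyzed) - 1))      # assumes equal sample sizes
    return analyzed[rank]

model_climo = [2.0, 4.0, 6.0, 8.0, 10.0]    # hypothetical past forecasts (wet bias)
analyzed_climo = [1.0, 2.0, 3.0, 4.0, 5.0]  # corresponding analyses
print(quantile_map(8.0, model_climo, analyzed_climo))  # → 4.0
```

The mapping removes the state-dependent wet bias: a forecast of 8.0, which sits at the 80th percentile of the model climatology, is replaced by the analyzed value at that same percentile.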


Author(s):  
M. Vidyasagar

This chapter provides an introduction to some elementary aspects of information theory, including entropy in its various forms. Entropy refers to the level of uncertainty associated with a random variable (or more precisely, the probability distribution of the random variable). When there are two or more random variables, it is worthwhile to study the conditional entropy of one random variable with respect to another. The last concept is relative entropy, also known as the Kullback–Leibler divergence, which measures the “disparity” between two probability distributions. The chapter first considers convex and concave functions before discussing the properties of the entropy function, conditional entropy, uniqueness of the entropy function, and the Kullback–Leibler divergence.
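The three quantities discussed in the chapter can be illustrated in a few lines; the distributions below are arbitrary examples:

```python
import math

def entropy(p):
    """Shannon entropy H(p) in bits: the uncertainty of a distribution."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def kl_divergence(p, q):
    """Relative entropy (Kullback-Leibler divergence) D(p || q) in bits:
    the disparity between two probability distributions."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

uniform = [0.25] * 4
skewed = [0.7, 0.1, 0.1, 0.1]
print(entropy(uniform))                     # → 2.0, the maximum for 4 outcomes
print(entropy(skewed) < entropy(uniform))   # → True: less uncertainty
print(kl_divergence(skewed, uniform) >= 0)  # → True (Gibbs' inequality)
```

The non-negativity of the divergence, and the fact that the uniform distribution maximizes entropy, both follow from the concavity arguments developed in the chapter.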


2019 ◽  
Vol 16 (157) ◽  
pp. 20190162 ◽  
Author(s):  
Roland J. Baddeley ◽  
Nigel R. Franks ◽  
Edmund R. Hunt

At a macroscopic level, part of the ant colony life cycle is simple: a colony collects resources; these resources are converted into more ants, and these ants in turn collect more resources. Because more ants collect more resources, this is a multiplicative process, and the expected logarithm of the amount of resources determines how successful the colony will be in the long run. Over 60 years ago, Kelly showed, using information-theoretic techniques, that the rate of growth of resources in such a situation is optimized by a strategy of betting in proportion to the probability of pay-off, a result widely applied in the mathematics of gambling. Thus, in the case of ants, the fraction of the colony foraging at a given location should be proportional to the probability that resources will be found there. This theoretical optimum leads to predictions as to which collective ant movement strategies might have evolved. Here, we show how colony-level optimal foraging behaviour can be achieved by mapping movement to Markov chain Monte Carlo (MCMC) methods, specifically Hamiltonian Monte Carlo (HMC). This can be done by the ants following a (noisy) local measurement of the (logarithm of) resource probability gradient (possibly supplemented with momentum, i.e. a propensity to move in the same direction). This maps the problem of foraging (via the information theory of gambling, stochastic dynamics and techniques employed within Bayesian statistics to efficiently sample from probability distributions) to simple models of ant foraging behaviour. This identification has broad applicability, facilitates the application of information theory approaches to understand movement ecology and unifies insights from existing biomechanical, cognitive, random and optimality movement paradigms. At the cost of requiring ants to obtain (noisy) resource gradient information, we show that this model is both efficient and matches a number of characteristics of real ant exploration.
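A minimal sketch of the Kelly argument, assuming a single winning site per foraging bout with a fixed payoff per forager (a deliberate simplification of the paper's setting, with invented numbers):

```python
import math

def expected_log_growth(allocation, probs, payoff=2.0):
    """Expected log growth rate when a fraction allocation[i] of the colony
    forages at site i, resources appear at site i with probability probs[i],
    and the winning site returns `payoff` per forager sent there."""
    return sum(p * math.log(payoff * f)
               for p, f in zip(probs, allocation) if p > 0)

probs = [0.6, 0.3, 0.1]       # chance that resources are at each site
kelly = probs                 # Kelly: allocate in proportion to probability
even = [1 / 3, 1 / 3, 1 / 3]  # spread foragers evenly
greedy = [0.8, 0.1, 0.1]      # over-commit to the most likely site
best = expected_log_growth(kelly, probs)
print(best > expected_log_growth(even, probs))    # → True
print(best > expected_log_growth(greedy, probs))  # → True
```

Proportional allocation beats both the uniform and the over-committed strategy in expected log growth, which is the quantity that matters for a multiplicative process.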


1978 ◽  
Vol 10 (2) ◽  
pp. 431-451 ◽  
Author(s):  
Bo Bergman

Replacement policies based on measurements of some increasing state variable, e.g. wear, accumulated damage or accumulated stress, are studied in this paper. It is assumed that the state measurements may be regarded as realizations of some stochastic process and that the proneness to failure of an active unit may be described by an increasing state-dependent failure rate function. Average long-run cost per unit time is considered. The optimal replacement rule is shown to be a control limit rule, i.e. it is optimal to replace either at failure or when the state variable has reached some threshold value, whichever occurs first. The optimal rule is determined. Some generalizations and special cases are given.
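A Monte Carlo sketch of a control limit rule, under assumed (entirely hypothetical) cost and wear parameters, with a failure probability that increases linearly in the state:

```python
import random

def long_run_cost(threshold, c_replace=1.0, c_failure=5.0,
                  wear_mean=0.1, hazard=0.02, cycles=5000, seed=7):
    """Average cost per unit time for a control limit rule: replace at
    failure (cost c_failure) or when accumulated wear first reaches
    `threshold` (cost c_replace), whichever occurs first."""
    rng = random.Random(seed)
    total_cost, total_time = 0.0, 0
    for _ in range(cycles):
        wear = 0.0
        while True:
            total_time += 1
            wear += rng.expovariate(1.0 / wear_mean)    # random wear increment
            if rng.random() < min(1.0, hazard * wear):  # state-dependent failure
                total_cost += c_failure
                break
            if wear >= threshold:                       # preventive replacement
                total_cost += c_replace
                break
    return total_cost / total_time

# replace too early: frequent replacement costs; too late: frequent failures
costs = {t: long_run_cost(t) for t in (0.5, 2.0, 8.0)}
```

The simulated cost rate is high at both extremes and lowest at an intermediate control limit, consistent with the paper's result that the optimal rule is a threshold rule.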


1982 ◽  
Vol 14 (03) ◽  
pp. 654-671 ◽  
Author(s):  
T. C. Brown ◽  
P. K. Pollett

We consider single-class Markovian queueing networks with state-dependent service rates (the immigration processes of Whittle (1968)). The distance of customer flows from Poisson processes is estimated in both the open and closed cases. The bounds on distances lead to simple criteria for good Poisson approximations. Using the bounds, we give an asymptotic, closed-network version of the ‘loop criterion’ of Melamed (1979) for an open network. Approximation of two or more flows by independent Poisson processes is also studied.
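For the simplest such network, a single stationary M/M/1 queue, the departure flow is exactly Poisson (Burke's theorem), which can be checked by simulation; the parameters below are illustrative:

```python
import random

def mm1_interdepartures(lam=0.5, mu=1.0, n_events=200000, seed=3):
    """Simulate an M/M/1 queue (the service rate is state-dependent in the
    sense that it is mu when customers are present and 0 when the queue is
    empty) and return the observed interdeparture times."""
    rng = random.Random(seed)
    t, n, last_dep = 0.0, 0, None
    inter = []
    for _ in range(n_events):
        rate = lam + (mu if n > 0 else 0.0)     # total event rate in this state
        t += rng.expovariate(rate)
        if n > 0 and rng.random() < mu / rate:  # a departure occurs
            n -= 1
            if last_dep is not None:
                inter.append(t - last_dep)
            last_dep = t
        else:                                   # an arrival occurs
            n += 1
    return inter

inter = mm1_interdepartures()
mean = sum(inter) / len(inter)
# Burke's theorem: in equilibrium the departures form a Poisson(lam) process,
# so the mean interdeparture time should be close to 1/lam = 2.0
```

The paper's bounds quantify how far such flows can drift from Poisson in general open and closed networks, where exactness fails.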

