Spatial conditional extremes via the Gibbs sampler.

Author(s):  
Adrian Casey ◽  
Ioannis Papastathopoulos

Adrian Casey, University of Edinburgh. January 14, 2020.

Conditional extreme value theory has been successfully applied to spatial extremes problems. In this statistical method, data from observation sites are modelled via appropriate asymptotic characterisations of random vectors X, conditioned on one of their components being extreme. The method is generic and applies to a broad range of dependence structures, including asymptotic dependence and asymptotic independence. However, one issue that affects the conditional extremes method is the need to model and fit a multi-dimensional residual distribution; this can be challenging in spatial problems with a large number of sites.

We describe early-stage work that takes a local approach to spatial extremes, exploring lower-dimensional structures based on asymptotic representations of Markov random fields. The main element of this new method is a model for the behaviour of a random component X_i given that its nearest neighbours exceed a sufficiently large threshold. When combined with a model for the case where the neighbours are below this threshold, a Gibbs sampling scheme induces a model for the full conditional extremes distribution by taking repeated samples from these local (univariate) distributions.

The new method is demonstrated on a data set of significant wave heights from the North Sea basin. Markov chain Monte Carlo diagnostics and goodness-of-fit tests illustrate the performance of the method. The potential for extrapolation into the outer reaches of the conditional extreme tails is then examined.

Joint work with Ioannis Papastathopoulos.
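
To make the sampling scheme concrete, here is a minimal Python sketch of the idea: each site is resampled in turn from a local univariate conditional that switches between an assumed tail model (neighbour mean above the threshold) and an assumed body model (below it). The one-dimensional site layout, the coefficients 0.8 and 0.5, and the Gaussian residuals are illustrative assumptions, not the fitted models from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_conditional(nbr_mean, u, rng):
    # Illustrative placeholder models, not the authors' fitted forms: a
    # conditional-extremes-style tail model when the neighbour mean exceeds
    # the threshold u, and a simple autoregressive body model otherwise.
    if nbr_mean > u:
        return 0.8 * nbr_mean + rng.standard_normal()   # assumed tail slope 0.8
    return 0.5 * nbr_mean + rng.standard_normal()       # assumed body coefficient 0.5

# Sites on a line; site 0 is the conditioning site, held at an extreme level.
n, u, n_sweeps = 20, 3.0, 500
x = np.full(n, u)
x[0] = 6.0
for _ in range(n_sweeps):
    for i in range(1, n):                               # Gibbs sweep, site 0 fixed
        nbr_mean = 0.5 * (x[i - 1] + x[min(i + 1, n - 1)])
        x[i] = local_conditional(nbr_mean, u, rng)
print(np.round(x, 2))                                   # one draw from the induced field
```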

2020 ◽  
Author(s):  
Silius Mortensønn Vandeskog ◽  
Sara Martino

Extreme precipitation can lead to great floods and landslides and cause severe damage and economic losses. It is therefore of great importance that we manage to assess the risk of future extremes. Furthermore, natural hazards are spatiotemporal phenomena that require extensive modelling in both space and time. Extreme value theory (EVT) can be used for statistical modelling of spatial extremes, such as extreme precipitation over a catchment. An important concept when modelling a natural hazard is the degree of extremal dependence for the given phenomenon. Extremal dependence describes the possibility of multiple extremes occurring at the same time. For the stochastic variables X and Y, with distribution functions F_X and F_Y, the measure

$$\chi = \lim_{u \to 1} P\left(F_X(X) > u \mid F_Y(Y) > u\right)$$

describes the pairwise extremal dependence between X and Y. If χ = 0, the variables are asymptotically independent; if χ > 0, they are asymptotically dependent. Thus, extremes tend to occur simultaneously in space for processes that are asymptotically dependent, while this seldom occurs for asymptotically independent processes. It is generally believed that extreme precipitation tends to be asymptotically independent. However, to our knowledge, little work has gone into analysing the extremal dependence structure of precipitation. Different statistical models have been developed that can be applied for modelling spatial extremes. The most popular is the max-stable process. Unfortunately, this model does not provide a good fit to asymptotically independent processes. Other models have been developed to better incorporate asymptotic independence, but most have not yet been applied extensively. We aim to examine the extremal dependence structure of precipitation in Norway, with the ultimate goal of modelling and simulating extreme precipitation. This is achieved by examining multiple popular statistics for extremal dependence, as well as comparing different spatial EVT models. The analysis is performed on hourly, gridded precipitation data from the MetCoOp Ensemble Prediction System (MEPS), publicly available at http://thredds.met.no/thredds/catalog/meps25epsarchive/catalog.html.
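
The coefficient χ is straightforward to estimate empirically by replacing F_X and F_Y with ranks. The sketch below uses synthetic Gaussian data, which is asymptotically independent, so the estimate should decay towards 0 as u → 1; the sample size and threshold sequence are assumptions for illustration.

```python
import numpy as np

def chi_hat(x, y, u):
    """Empirical estimate of chi(u) = P(F_X(X) > u | F_Y(Y) > u),
    using ranks as a nonparametric proxy for the marginal CDFs."""
    n = len(x)
    fx = np.argsort(np.argsort(x)) / n       # approximate F_X(X)
    fy = np.argsort(np.argsort(y)) / n
    both = np.sum((fx > u) & (fy > u))
    return both / max(np.sum(fy > u), 1)

rng = np.random.default_rng(1)
# Gaussian dependence is asymptotically independent, so chi_hat should
# decay towards 0 as u approaches 1.
z = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=100_000)
for u in (0.9, 0.99, 0.999):
    print(u, round(chi_hat(z[:, 0], z[:, 1], u), 3))
```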


1992 ◽  
Vol 26 (9-11) ◽  
pp. 2345-2348 ◽  
Author(s):  
C. N. Haas

A new method for the quantitative analysis of multiple toxicity data is described and illustrated using a data set on metal exposure to copepods. Positive interactions are observed for Ni-Pb and Pb-Cr, with weak negative interactions observed for Ni-Cr.


Author(s):  
Raul E. Avelar ◽  
Karen Dixon ◽  
Boniphace Kutela ◽  
Sam Klump ◽  
Beth Wemple ◽  
...  

The calibration of safety performance functions (SPFs) is a mechanism included in the Highway Safety Manual (HSM) to adjust SPFs in the HSM for use in intended jurisdictions. Critically, the quality of the calibration procedure must be assessed before using the calibrated SPFs. Multiple resources to aid practitioners in calibrating SPFs have been developed in the years following the publication of the HSM 1st edition. Similarly, the literature suggests multiple ways to assess the goodness-of-fit (GOF) of a calibrated SPF to a data set from a given jurisdiction. This paper uses the calibration results of multiple intersection SPFs to a large Mississippi safety database to examine the relations between multiple GOF metrics. The goal is to develop a sensible single index that leverages the joint information from multiple GOF metrics to assess overall quality of calibration. A factor analysis applied to the calibration results revealed three underlying factors explaining 76% of the variability in the data. From these results, the authors developed an index and performed a sensitivity analysis. The key metrics were found to be, in descending order: the deviation of the cumulative residual (CURE) plot from the 95% confidence area, the mean absolute deviation, the modified R-squared, and the value of the calibration factor. This paper also presents comparisons between the index and alternative scoring strategies, as well as an effort to verify the results using synthetic data. The developed index is recommended to comprehensively assess the quality of the calibrated intersection SPFs.
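
As a rough illustration of the CURE-plot metric that dominates the index, the sketch below computes the share of a cumulative-residual curve lying outside the usual 95% band (Hauer-style construction). The synthetic exposure variable and residuals are placeholders; the paper's exact computation may differ.

```python
import numpy as np

def cure_deviation(residuals, covariate):
    """Cumulative residual (CURE) check: sort residuals by the covariate,
    accumulate them, and measure how far the cumulative curve strays
    outside an approximate 95% confidence band. A sketch of the standard
    CURE construction, not the paper's exact code."""
    order = np.argsort(covariate)
    cum = np.cumsum(residuals[order])
    # Band: +/- 1.96 times the running std. dev. of the cumulative sum.
    var = np.cumsum(residuals[order] ** 2)
    band = 1.96 * np.sqrt(var * (1 - var / var[-1]))
    outside = np.abs(cum) > band
    return outside.mean()                     # share of the plot outside the band

rng = np.random.default_rng(2)
aadt = rng.uniform(1_000, 30_000, size=200)   # synthetic traffic exposure
resid = rng.normal(0, 1, size=200)            # synthetic calibration residuals
print(f"{cure_deviation(resid, aadt):.1%} of CURE plot outside 95% band")
```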


Author(s):  
Fred L. Bookstein

A matrix manipulation new to the quantitative study of developmental stability reveals unexpected morphometric patterns in a classic data set of landmark-based calvarial growth. There are implications for evolutionary studies. Among organismal biology's fundamental postulates is the assumption that most aspects of any higher animal's growth trajectories are dynamically stable, resilient against the types of small but functionally pertinent transient perturbations that may have originated in genotype, morphogenesis, or ecophenotypy. We need an operationalization of this axiom for landmark data sets arising from longitudinal data designs. The present paper introduces a multivariate approach toward that goal: a method for identification and interpretation of patterns of dynamical stability in longitudinally collected landmark data. The new method is based on an application of eigenanalysis unfamiliar to most organismal biologists: analysis of a covariance matrix of Boas coordinates (Procrustes coordinates without the size standardization) against their changes over time. These eigenanalyses may yield complex eigenvalues and eigenvectors (terms involving $i=\sqrt{-1}$); the paper carefully explains how these are to be scattered, gridded, and interpreted by their real and imaginary canonical vectors. For the Vilmann neurocranial octagons, the classic morphometric data set used as the running example here, the analysis yields new empirical findings that offer a pattern analysis of the ways perturbations of growth are attenuated or otherwise modified over the course of developmental time. The main finding, dominance of a generalized version of dynamical stability (negative autoregressions, as announced by the negative real parts of their eigenvalues, often combined with shearing and rotation in a helpful canonical plane), is surprising in its strength and consistency. A closing discussion explores some implications of this novel pattern analysis of growth regulation. It differs in many respects from the usual way covariance matrices are wielded in geometric morphometrics, differences relevant to a variety of study designs for comparisons of development across species.
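
The core computation is an eigenanalysis of a non-symmetric cross-covariance matrix, which is why complex eigenvalues can arise. The sketch below builds synthetic "coordinates versus changes" data with an assumed damped-plus-rotational dynamic (a stand-in for the Vilmann data, not the paper's analysis) and shows the negative real parts and nonzero imaginary parts such a matrix yields.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic stand-in for Boas coordinates: n specimens x p coordinates at one
# age, plus their changes to the next age. The dynamics (damping plus one
# rotational plane) are assumed for illustration.
n, p = 200, 6
X = rng.normal(size=(n, p))
A = -0.3 * np.eye(p)                     # negative autoregression = damping
A[0, 1], A[1, 0] = 0.4, -0.4             # rotation/shearing in one plane
dX = X @ A + 0.05 * rng.normal(size=(n, p))

X -= X.mean(axis=0)
dX -= dX.mean(axis=0)
C = X.T @ dX / (n - 1)                   # covariance of coordinates vs their changes
eigvals = np.linalg.eigvals(C)           # non-symmetric, so possibly complex

# Negative real parts signal damped (stable) perturbation modes; nonzero
# imaginary parts signal rotation in a canonical plane.
for lam in sorted(eigvals, key=lambda z: z.real):
    print(f"{lam.real:+.3f} {lam.imag:+.3f}i")
```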


2021 ◽  
Vol 503 (2) ◽  
pp. 2688-2705
Author(s):  
C Doux ◽  
E Baxter ◽  
P Lemos ◽  
C Chang ◽  
A Alarcon ◽  
...  

Beyond-ΛCDM physics or systematic errors may cause subsets of a cosmological data set to appear inconsistent when analysed assuming ΛCDM. We present an application of internal consistency tests to measurements from the Dark Energy Survey Year 1 (DES Y1) joint probes analysis. Our analysis relies on computing the posterior predictive distribution (PPD) for these data under the assumption of ΛCDM. We find that the DES Y1 data have an acceptable goodness of fit to ΛCDM, with a probability of finding a worse fit by random chance of p = 0.046. Using numerical PPD tests, supplemented by graphical checks, we show that most of the data vector appears completely consistent with expectations, although we observe a small tension between large- and small-scale measurements. A small part (roughly 1.5 per cent) of the data vector shows an unusually large departure from expectations; excluding this part of the data has negligible impact on cosmological constraints, but does significantly improve the p-value to 0.10. The methodology developed here will be applied to test the consistency of the DES Year 3 joint probes data sets.
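
A generic version of the PPD test can be sketched in a few lines: for each posterior draw, simulate a replicated data set and count how often its discrepancy exceeds that of the observed data. The toy Gaussian model and chi-squared-style discrepancy below are assumptions for illustration, not the DES Y1 pipeline.

```python
import numpy as np

rng = np.random.default_rng(4)

def ppd_p_value(data, posterior_draws, simulate, discrepancy, rng):
    """Posterior predictive goodness-of-fit: for each posterior draw,
    simulate a replicated data set and compare its discrepancy with the
    observed one. A generic sketch, not the DES pipeline."""
    worse = 0
    for theta in posterior_draws:
        rep = simulate(theta, rng)
        worse += discrepancy(rep, theta) >= discrepancy(data, theta)
    return worse / len(posterior_draws)

# Toy model: data are N(mu, 1); discrepancy is a chi-squared-style misfit.
true_mu, n = 0.3, 50
data = rng.normal(true_mu, 1, n)
post_mu = rng.normal(data.mean(), 1 / np.sqrt(n), size=2000)  # approximate posterior
simulate = lambda mu, rng: rng.normal(mu, 1, n)
discrepancy = lambda d, mu: np.sum((d - mu) ** 2)
print("p =", ppd_p_value(data, post_mu, simulate, discrepancy, rng))
```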


2013 ◽  
Vol 321-324 ◽  
pp. 1947-1950
Author(s):  
Lei Gu ◽  
Xian Ling Lu

In the initialization of traditional k-harmonic means clustering, the initial centers are generated randomly and their number equals the number of clusters. Although k-harmonic means clustering is insensitive to the initial centers, this initialization method cannot improve clustering performance. In this paper, a novel k-harmonic means clustering based on multiple initial centers is proposed, in which the number of initial centers exceeds the number of clusters. The new method divides the whole data set into multiple groups and combines these groups into the final solution. Experiments show that the proposed algorithm achieves better clustering accuracy than the traditional k-means and k-harmonic means methods.
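
A minimal sketch of the idea, assuming Zhang's standard k-harmonic means updates: start from more centers than clusters, refine them with KHM, then combine the refined centers into k final groups. The merge step here (hierarchical clustering of the centers) is an assumed simplification; the paper's combination rule may differ.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def khm(X, centers, n_iter=50, p=2, eps=1e-9):
    """Standard k-harmonic means update loop (Zhang's memberships and
    point weights), started from more centers than clusters."""
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
        dp2 = d ** (-p - 2)
        m = dp2 / dp2.sum(axis=1, keepdims=True)              # memberships
        w = dp2.sum(axis=1) / (d ** (-p)).sum(axis=1) ** 2    # point weights
        mw = m * w[:, None]
        centers = (mw.T @ X) / mw.sum(axis=0)[:, None]
    return centers

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(c, 0.3, (100, 2)) for c in ((0, 0), (3, 0), (0, 3))])
k, extra = 3, 9                                  # more initial centers than clusters
centers = khm(X, X[rng.choice(len(X), extra, replace=False)])

# Assumed merge step: group the refined centers down to k by clustering
# the centers themselves, then average within each group.
labels = fcluster(linkage(centers, method="average"), t=k, criterion="maxclust")
final = np.array([centers[labels == j].mean(axis=0) for j in range(1, k + 1)])
print(final.round(2))
```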


2008 ◽  
Vol 57 (4) ◽  
pp. 633-642 ◽  
Author(s):  
Chien-Tai Lin ◽  
Yen-Lung Huang ◽  
N. Balakrishnan

2010 ◽  
Vol 2 (2) ◽  
pp. 38-51 ◽  
Author(s):  
Marc Halbrügge

Keep it simple - A case study of model development in the context of the Dynamic Stocks and Flows (DSF) task

This paper describes the creation of a cognitive model submitted to the 'Dynamic Stocks and Flows' (DSF) modeling challenge. This challenge aims at comparing computational cognitive models of human behavior during an open-ended control task. Participants in the modeling competition were provided with a simulation environment and training data for benchmarking their models, while the actual specification of the competition task was withheld. To meet this challenge, the cognitive model described here was designed and optimized for generalizability. Only two simple assumptions about human problem solving were used to explain the empirical findings of the training data. In-depth analysis of the data set prior to the development of the model led to the dismissal of correlations and other parametric statistics as goodness-of-fit indicators. A new statistical measurement based on rank orders and sequence matching techniques is proposed instead. When applied to the human sample, this measurement also identifies clusters of subjects that use different strategies for the task. The acceptability of the fits achieved by the model is verified using permutation tests.
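
One possible form of such a rank-order/sequence-matching measure is sketched below under assumptions: the binning, the matcher, and the permutation scheme are illustrative, not the paper's exact statistic. Both series are discretised into rank bins, the resulting symbol sequences are scored with a sequence matcher, and the fit is assessed against permuted series.

```python
import numpy as np
from difflib import SequenceMatcher

def rank_sequence_fit(model_series, human_series, n_bins=5):
    """Discretise both time series into rank bins, then score the
    similarity of the resulting symbol sequences. Illustrative only."""
    def to_symbols(x):
        ranks = np.argsort(np.argsort(x))
        return tuple(ranks * n_bins // len(x))
    return SequenceMatcher(None, to_symbols(model_series),
                           to_symbols(human_series)).ratio()

def permutation_p(model_series, human_series, n_perm=999, seed=6):
    """Permutation test: is the observed fit better than chance?"""
    rng = np.random.default_rng(seed)
    obs = rank_sequence_fit(model_series, human_series)
    null = [rank_sequence_fit(rng.permutation(model_series), human_series)
            for _ in range(n_perm)]
    return (1 + sum(s >= obs for s in null)) / (n_perm + 1)

t = np.linspace(0, 4 * np.pi, 60)
human = np.sin(t) + 0.2 * np.random.default_rng(6).normal(size=60)
model = np.sin(t)
print(rank_sequence_fit(model, human), permutation_p(model, human))
```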


2017 ◽  
Author(s):  
Berit Lindum Waltoft ◽  
Asger Hobolth

The variability in population size is a key quantity for understanding the evolutionary history of a species. We present a new method, CubSFS, for estimating changes in the population size of a panmictic population from the site frequency spectrum (SFS). First, we provide a straightforward proof of the expression for the expected site frequency spectrum, which depends only on the population size. Our derivation is based on an eigenvalue decomposition of the instantaneous coalescent rate matrix. Second, we solve the inverse problem of determining the variability in population size from an observed SFS. Our solution is based on a cubic spline for the population size. The cubic spline is determined by minimizing the weighted average of two terms: (i) the goodness of fit to the SFS, and (ii) a penalty term based on the smoothness of the changes. The weight is determined by cross-validation. The new method is validated on simulated demographic histories and applied to data from nine different human populations.
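
The estimation principle (penalised goodness of fit, with the penalty weight chosen by cross-validation in the paper) can be sketched generically. The expected-SFS map below is a crude constant-population stand-in for the paper's eigendecomposition-based formula, and the coarse grid stands in for the cubic spline.

```python
import numpy as np
from scipy.optimize import minimize

def expected_sfs(log_n, n_bins):
    """Placeholder mapping a log population-size curve to an expected SFS;
    the paper derives this via an eigendecomposition of the coalescent
    rate matrix. Here a crude constant-size stand-in is used."""
    scale = np.exp(log_n).mean()
    k = np.arange(1, n_bins + 1)
    return scale / k                      # constant-size SFS is proportional to 1/k

def objective(log_n, obs, lam):
    fit = np.sum((expected_sfs(log_n, len(obs)) - obs) ** 2)   # goodness of fit
    rough = np.sum(np.diff(log_n, 2) ** 2)                     # smoothness penalty
    return fit + lam * rough

rng = np.random.default_rng(7)
obs = 100 / np.arange(1, 20) + rng.normal(0, 0.5, 19)          # noisy 1/k SFS
for lam in (0.1, 1.0, 10.0):              # the paper picks the weight by cross-validation
    res = minimize(objective, x0=np.zeros(10), args=(obs, lam))
    print(lam, round(res.fun, 2))
```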


2014 ◽  
Vol 2 (1) ◽  
Author(s):  
Anne Dutfoy ◽  
Sylvie Parey ◽  
Nicolas Roche

In this paper, we provide a tutorial on multivariate extreme value methods for estimating the risk associated with rare events that occur jointly. We draw particular attention to issues related to extremal dependence, with emphasis on the asymptotic independence case. We apply multivariate extreme value theory to two data sets related to hydrology and meteorology: first, the joint flooding of two rivers, which puts at risk the facilities lying downstream of the confluence; second, the joint occurrence of high wind speeds and low air temperatures, which might affect overhead lines.
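
As a small illustration of the kind of joint-risk question the tutorial addresses, the sketch below fits GEV margins to two synthetic annual-maximum series and compares the empirical rate of jointly exceeding the marginal 100-year levels with the 1% marginal rate. The data-generating mechanism and return level are assumptions, not the paper's case studies.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(8)
# Synthetic annual maxima for two rivers (placeholders for the paper's data);
# the shared Gumbel component induces dependence between the two series.
common = rng.gumbel(size=1000)
q1 = common + 0.5 * rng.gumbel(size=1000)
q2 = common + 0.5 * rng.gumbel(size=1000)

# Fit GEV margins, then compare the empirical rate of jointly exceeding the
# marginal 100-year levels with the 1% marginal rate: the gap between the
# two reflects the strength of extremal dependence.
p = 1 - 1 / 100
levels = [genextreme.ppf(p, *genextreme.fit(q)) for q in (q1, q2)]
joint = np.mean((q1 > levels[0]) & (q2 > levels[1]))
print(f"marginal rate {1 - p:.3f}, joint exceedance rate {joint:.4f}")
```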

