Estimating the reduced moments of a random measure

1999 ◽  
Vol 31 (1) ◽  
pp. 48-62 ◽  
Author(s):  
Kiên Kiêu ◽  
Marianne Mora

We consider a random measure whose distribution is invariant under the action of a standard transformation group. The reduced moments are defined by applying classical theorems on the decomposition of invariant measures. We present a general method for constructing unbiased estimators of reduced moments. Several asymptotic results are established under an extension of the Brillinger mixing condition. Examples related to stochastic geometry are given.
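
The abstract points to examples from stochastic geometry; a standard concrete instance (supplied here for illustration, not taken from the paper) is Ripley's K function, which estimates the reduced second moment measure of a stationary point process. A minimal sketch without edge correction (edge-corrected weights would be needed for exact unbiasedness near the boundary):

```python
import numpy as np

def ripley_k(points, r, window=(1.0, 1.0)):
    """Naive estimator of Ripley's K at radius r for a point pattern
    observed in a rectangular window (no edge correction, so values
    near the boundary are biased downward)."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    area = window[0] * window[1]
    lam = n / area                                   # intensity estimate
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    close = (d > 0) & (d <= r)                       # ordered pairs, self-pairs excluded
    return close.sum() / (lam * n)
```

For a homogeneous Poisson process, K(r) is close to πr² (up to edge effects), which gives a quick sanity check for the estimator.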


1996 ◽  
Vol 28 (2) ◽  
pp. 335-336 ◽  
Author(s):  
Kiên Kiêu ◽  
Marianne Mora

Random measures are commonly used to describe geometrical properties of random sets. Examples are given by the counting measure associated with a point process, and the curvature measures associated with a random set with a smooth boundary. We consider a random measure whose distribution is invariant under the action of a standard transformation group (translations, rigid motions, translations along a given direction, and so on). In the framework of the theory of invariant measure decomposition, the reduced moments of the random measure are obtained by decomposing the related moment measures.


1995 ◽  
Vol 32 (1) ◽  
pp. 105-122 ◽  
Author(s):  
Masakiyo Miyazawa

Mecke's formula concerns a stationary random measure and a shift-invariant measure on a locally compact Abelian group, and relates integrals with respect to one to integrals with respect to the other. We generalize this to a pair of random measures which are jointly stationary. The resulting formula extends the so-called Swiss Army formula, which was recently obtained as a generalization of Little's formula. The generalized Mecke formula, called the GMF, can also be viewed as a generalization of the stationary version of H = λG. Under stationarity and ergodicity assumptions, we apply it to derive many sample path formulas which have been known as extensions of H = λG. This makes clear what kinds of probabilistic conditions are sufficient to obtain them. We also mention a further generalization of Mecke's formula.
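
For orientation, the stationary Mecke (refined Campbell) identity referred to above can be stated in its standard textbook form, supplied here rather than quoted from the paper: for a stationary random measure M on ℝ^d with intensity λ_M, Palm expectation 𝔼⁰_M, and shift operators θ_x,

```latex
\mathbb{E}\!\left[\int_{\mathbb{R}^d} f(x,\theta_x\omega)\,M(\mathrm{d}x)\right]
  = \lambda_M \int_{\mathbb{R}^d} \mathbb{E}^0_M\!\left[f(x,\cdot)\right]\mathrm{d}x .
```

In this notation the stationary H = λG reads H = λ 𝔼⁰[G]: the time average of a functional equals the arrival rate times its Palm (per-point) average.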


Author(s):  
André Mas ◽  
Besnik Pumo

This article provides an overview of the basic theory and applications of linear processes for functional data, with particular emphasis on results published from 2000 to 2008. It first considers centered processes with values in a Hilbert space of functions before proposing some statistical models that mimic or adapt the scalar or finite-dimensional approaches for time series. It then discusses general linear processes, focusing on the invertibility and convergence of the estimated moments and a general method for proving asymptotic results for linear processes. It also describes autoregressive processes as well as two issues related to the general estimation problem, namely: identifiability and the inverse problem. Finally, it examines convergence results for the autocorrelation operator and the predictor, extensions for the autoregressive Hilbertian (ARH) model, and some numerical aspects of prediction when the data are curves observed at discrete points.
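
To make the autoregressive Hilbertian (ARH) model concrete, here is a small simulation sketch: curves X_n on [0, 1], discretized on a grid, follow X_{n+1} = ρ(X_n) + ε_{n+1}, where ρ is an integral operator with norm below 1. The grid size, kernel, and noise level are all illustrative choices, not taken from the text:

```python
import numpy as np

# Illustrative ARH(1) simulation: X_{n+1}(t) = (rho X_n)(t) + eps_{n+1}(t),
# with (rho x)(t) = \int_0^1 k(s, t) x(s) ds approximated on a grid.
m, n_steps = 50, 200                      # grid points per curve, time steps
grid = np.linspace(0.0, 1.0, m)
h = grid[1] - grid[0]
kernel = 0.5 * np.exp(-np.abs(grid[:, None] - grid[None, :]))  # operator norm < 1
rng = np.random.default_rng(1)

x = np.zeros(m)
curves = []
for _ in range(n_steps):
    x = kernel @ x * h + 0.1 * rng.standard_normal(m)  # rho(x) + white noise
    curves.append(x.copy())
curves = np.array(curves)                 # shape (n_steps, m): one curve per row
```

Because the kernel's operator norm is below 1, the recursion is a contraction plus noise, so the simulated curves settle into a stationary regime, which is the setting in which the estimation and prediction results surveyed above apply.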


Author(s):  
Marius Kroll

We give two asymptotic results for the empirical distance covariance on separable metric spaces without any i.i.d. assumption on the samples. In particular, we show the almost sure convergence of the empirical distance covariance for any measure with finite first moments, provided that the samples form a strictly stationary and ergodic process. We further give a result concerning the asymptotic distribution of the empirical distance covariance under the assumption of absolute regularity of the samples and extend these results to certain types of pseudometric spaces. In the process, we derive a general theorem concerning the asymptotic distribution of degenerate V-statistics of order 2 under a strong mixing condition.
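
As a reference point, the empirical distance covariance in its V-statistic form can be sketched as below (standard definition for scalar samples; the metric-space version studied in the paper replaces the Euclidean distances with the space's metric):

```python
import numpy as np

def _double_centered(z):
    """Double-centered pairwise distance matrix of a 1-D sample."""
    d = np.abs(z[:, None] - z[None, :])
    return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()

def dcov2(x, y):
    """Squared empirical distance covariance (a degenerate V-statistic
    of order 2 in the double-centered distance matrices)."""
    a = _double_centered(np.asarray(x, dtype=float))
    b = _double_centered(np.asarray(y, dtype=float))
    return (a * b).mean()
```

The almost sure convergence result above says that, for strictly stationary ergodic samples with finite first moments, this empirical quantity converges to its population counterpart, which vanishes exactly under independence.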


1990 ◽  
Vol 4 (4) ◽  
pp. 493-521 ◽  
Author(s):  
Albert G. Greenberg ◽  
Robert J. Vanderbei

Gauss-Seidel is a general method for solving a system of equations (possibly nonlinear). It makes repeated sweeps through the variables; within a sweep, as each new estimate for a variable is computed, the current estimate for that variable is replaced with the new estimate immediately, instead of on completion of the sweep. The idea is to use new data as soon as they are computed. Gauss-Seidel is often efficient for computing the invariant measure of a Markov chain (especially if the transition matrix is sparse), and for computing the value function in optimal control problems. In many applications the computation can be significantly improved by appropriately ordering the variables within each sweep. A simple heuristic is presented here for computing an ordering that quickens convergence. In parallel processing, several variables must be computed simultaneously, which appears to work against Gauss-Seidel. Simple asynchronous parallel Gauss-Seidel methods are presented here. Experiments indicate that the methods retain the benefit of a good ordering, while further speeding up convergence by a factor of P if P processors participate.

In this paper, we focus on the optimal stopping problem. A probabilistic interpretation of the Gauss-Seidel (and the Jacobi) method for computing the value function is given, which motivates our ordering heuristic. However, the ordering heuristic and parallel processing methods apply in a broader context, in particular, to the important problem of computing the invariant measure of a Markov chain.
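
A minimal serial sketch of the Gauss-Seidel sweep for the invariant-measure application (the ordering heuristic and the asynchronous parallel variants discussed in the paper are omitted; the natural index ordering used here is just a placeholder):

```python
import numpy as np

def invariant_measure_gs(P, sweeps=100):
    """Gauss-Seidel iteration for the stationary distribution pi = pi P of an
    irreducible Markov chain with transition matrix P (P[i, i] < 1 assumed).
    Within a sweep, each newly computed pi[i] is used immediately."""
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)
    for _ in range(sweeps):
        for i in range(n):
            # Balance equation: pi[i] * (1 - P[i, i]) = sum_{j != i} pi[j] * P[j, i]
            s = sum(pi[j] * P[j, i] for j in range(n) if j != i)
            pi[i] = s / (1.0 - P[i, i])
        pi /= pi.sum()                    # renormalize to a probability vector
    return pi
```

Reordering the `for i in range(n)` loop is exactly where the paper's heuristic would plug in: sweeping the states in a well-chosen order lets fresh estimates propagate further within each sweep.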


Author(s):  
J. R. Fields

The energy analysis of electrons scattered by a specimen in a scanning transmission electron microscope can improve contrast as well as aid in chemical identification. Insofar as energy analysis is useful, one would like to be able to design a spectrometer tailored to one's particular needs. In our own case, we require a spectrometer which will accept a parallel incident beam and which will focus the electrons in both the median and perpendicular planes. In addition, since we intend to follow the spectrometer by a detector array rather than a single energy-selecting slit, we need as great a dispersion as possible. Therefore, we would like to follow our spectrometer by a magnifying lens. Consequently, the line along which electrons of varying energy are dispersed must be normal to the direction of the central ray at the spectrometer exit.

