Quantum Discrepancy: A Non-Commutative Version of Combinatorial Discrepancy

10.37236/7587 ◽  
2020 ◽  
Vol 27 (2) ◽  
Author(s):  
Kasra Alishahi ◽  
Mohaddeseh Rajaee ◽  
Ali Rajaei

In this paper, we introduce a notion of quantum discrepancy, a non-commutative version of combinatorial discrepancy defined for projection systems, i.e., finite sets of orthogonal projections, as non-commutative counterparts of set systems. We show that, besides its natural algebraic formulation, quantum discrepancy, when restricted to set systems, has a probabilistic interpretation in terms of determinantal processes. Determinantal processes are a family of point processes with a rich algebraic structure; a common feature of this family is the locally repulsive behavior of points. Alishahi and Zamani (2015) exploit this repelling property to construct low-discrepancy point configurations on the sphere. We give an upper bound for quantum discrepancy in terms of $N$, the dimension of the space, and $M$, the size of the projection system, which is tight for a wide range of the parameters $N$ and $M$. We then investigate the relation between these two kinds of discrepancy, combinatorial and quantum, when restricted to set systems, and bound each in terms of the other.
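For orientation, recall the classical notion that the quantum version generalizes: the combinatorial discrepancy of a set system $S_1, \dots, S_M \subseteq \{1, \dots, N\}$ is
$$\operatorname{disc}(\mathcal{S}) \;=\; \min_{\varepsilon \in \{-1,+1\}^N} \; \max_{1 \le j \le M} \Bigl|\sum_{i \in S_j} \varepsilon_i\Bigr|.$$
Identifying each set $S_j$ with the diagonal projection onto the coordinates it contains shows how set systems sit inside projection systems; the paper's precise quantum definition may differ in normalization from this sketch.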

1980 ◽  
Vol 12 (3) ◽  
pp. 727-745 ◽  
Author(s):  
D. P. Gaver ◽  
P. A. W. Lewis

It is shown that there is an innovation process $\{\epsilon_n\}$ such that the sequence of random variables $\{X_n\}$ generated by the linear, additive first-order autoregressive scheme $X_n = \rho X_{n-1} + \epsilon_n$ are marginally distributed as gamma$(\lambda, k)$ variables if $0 \le \rho \le 1$. This first-order autoregressive gamma sequence is useful for modelling a wide range of observed phenomena. Properties of sums of random variables from this process are studied, as well as Laplace-Stieltjes transforms of adjacent variables and joint moments of variables with different separations. The process is not time-reversible and has a zero-defect which makes parameter estimation straightforward. Other positive-valued variables generated by the first-order autoregressive scheme are studied, as well as extensions of the scheme for generating sequences with given marginal distributions and negative serial correlations.
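As a concrete illustration, here is a minimal simulation sketch of the scheme, using a compound-Poisson representation of the innovation whose Laplace transform matches the required $((\lambda + \rho s)/(\lambda + s))^k$; the function name and parameter defaults are illustrative assumptions, not taken from the paper.

import numpy as np

def gar1_sample(n, k=2.0, lam=1.0, rho=0.7, seed=None):
    # Simulate X_n = rho * X_{n-1} + eps_n with gamma(lambda, k) marginals,
    # for 0 < rho < 1, using the compound-Poisson form of the innovation:
    # eps = sum_{i=1}^{N} rho**U_i * E_i, with N ~ Poisson(k * log(1/rho)),
    # U_i ~ Uniform(0, 1) and E_i ~ Exponential(rate = lambda).
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = rng.gamma(shape=k, scale=1.0 / lam)  # start in the stationary law
    for t in range(1, n):
        m = rng.poisson(k * np.log(1.0 / rho))
        eps = np.sum(rho ** rng.random(m) * rng.exponential(1.0 / lam, m))
        x[t] = rho * x[t - 1] + eps
    return x

xs = gar1_sample(100_000, seed=0)
print(xs.mean(), xs.var())  # should be near k/lam = 2.0 and k/lam**2 = 2.0

Note the zero-defect mentioned in the abstract: with probability $\rho^k$ the Poisson count is zero, so $\epsilon_n = 0$ and $X_n = \rho X_{n-1}$ exactly.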


2015 ◽  
Vol 8 (10) ◽  
pp. 4155-4170 ◽  
Author(s):  
L. Klüser ◽  
N. Killius ◽  
G. Gesell

Abstract. The cloud processing scheme APOLLO (AVHRR Processing scheme Over cLouds, Land and Ocean) has been in use for cloud detection and cloud property retrieval since the late 1980s. The physics of the APOLLO scheme still forms the backbone of a range of cloud detection algorithms for AVHRR (Advanced Very High Resolution Radiometer) heritage instruments. The APOLLO_NG (APOLLO_NextGeneration) cloud processing scheme is a probabilistic reinterpretation of the original APOLLO method. It builds upon the physical principles that have served well in the original APOLLO scheme, although a couple of additional variables have been introduced in APOLLO_NG. Cloud detection is no longer performed as a binary yes/no decision based on these physical principles; instead, it is expressed as a cloud probability for each satellite pixel. Consequently, the outcome of the algorithm can be tuned, depending on the purpose, between reliably identifying clear pixels and reliably identifying definitely cloudy pixels. The probabilistic approach allows retrieving not only the cloud properties (optical depth, effective radius, cloud top temperature and cloud water path) but also their uncertainties. APOLLO_NG is designed as a standalone cloud retrieval method robust enough for operational near-real-time use and for application to large amounts of historical satellite data. The radiative transfer solution is approximated by the same two-stream approach that was used in the original APOLLO. This allows the algorithm to be applied to a wide range of sensors without sensor-specific tuning. Moreover, it allows the radiative transfer to be calculated online (i.e., within the retrieval algorithm), giving rise to a detailed probabilistic treatment of cloud variables. This study presents the algorithm for cloud detection and cloud property retrieval together with the physical principles from the APOLLO legacy on which it is based. Furthermore, a couple of example results from NOAA-18 are presented.
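To make the tunable probabilistic masking concrete, here is a toy sketch of how per-pixel cloud probabilities support both clear-conservative and cloud-conservative masks. The tests, thresholds, and combination rule below are generic illustrative assumptions, not APOLLO_NG's actual formulas, which the paper derives from its two-stream radiative transfer.

import numpy as np

def cloud_probability(reflectance, brightness_temp,
                      refl_clear=0.08, refl_cloud=0.45,
                      bt_clear=288.0, bt_cloud=255.0,
                      width_r=0.08, width_bt=10.0):
    # Toy per-pixel cloud probability from two tests (high visible
    # reflectance => cloudy; low thermal brightness temperature => cloudy),
    # combined as independent evidence. All thresholds are illustrative.
    z_r = (reflectance - 0.5 * (refl_clear + refl_cloud)) / width_r
    z_bt = (0.5 * (bt_clear + bt_cloud) - brightness_temp) / width_bt
    log_odds = z_r + z_bt  # naive-Bayes-style sum of log-odds
    return 1.0 / (1.0 + np.exp(-log_odds))

# Tunable masking: a low threshold reliably flags clear pixels,
# a high threshold reliably flags definitely cloudy pixels.
p = cloud_probability(np.array([0.05, 0.30, 0.60]),
                      np.array([290.0, 270.0, 250.0]))
confident_clear = p < 0.05
confident_cloudy = p > 0.95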


Author(s):  
Christopher K. Wikle

The climate system consists of interactions between physical, biological, chemical, and human processes across a wide range of spatial and temporal scales. Characterizing the behavior of components of this system is crucial for scientists and decision makers. There is substantial uncertainty associated with observations of this system, as well as with our understanding of the various system components and their interactions. Thus, inference and prediction in climate science should accommodate uncertainty in order to facilitate the decision-making process. Statistical science is designed to provide the tools to perform inference and prediction in the presence of uncertainty. In particular, the field of spatial statistics considers inference and prediction for uncertain processes that exhibit dependence in space and/or time. Traditionally, this is done descriptively through the characterization of the first two moments of the process: one expressing the mean structure and one accounting for dependence through covariability.

Historically, there are three primary areas of methodological development in spatial statistics: geostatistics, which considers processes that vary continuously over space; areal or lattice processes, which are defined on a countable discrete domain (e.g., political units); and spatial point patterns (or point processes), which treat the locations of events in space as a random process. All of these methods have been used in the climate sciences, but the most prominent has been the geostatistical methodology. This methodology was discovered simultaneously in geology and in meteorology; it provides a way to perform optimal prediction (interpolation) in space, illustrated in the sketch after this passage, and can facilitate parameter inference for spatial data. These methods rely strongly on Gaussian process theory, which is increasingly of interest in machine learning. They are common in the spatial statistics literature, but much development is still being done to accommodate more complex processes and "big data" applications. Newer approaches are based on restricting models to neighbor-based representations or on reformulating the random spatial process in terms of a basis expansion. These approaches offer many computational and flexibility advantages, depending on the specific implementation. Complexity is also increasingly being accommodated through the hierarchical modeling paradigm, which provides a probabilistically consistent way to decompose the data, process, and parameters corresponding to the spatial or spatio-temporal process.

Perhaps the biggest challenge in modern applications of spatial and spatio-temporal statistics is to develop methods that are flexible yet can account for the complex dependencies between and across processes, account for uncertainty in all aspects of the problem, and still be computationally tractable. These are daunting challenges, yet it is a very active area of research, and new solutions are constantly being developed. New methods are also being rapidly developed in the machine learning community, and these methods are increasingly applicable to dependent processes. The interaction and cross-fertilization between the machine learning and spatial statistics communities is growing, which will likely lead to a new generation of spatial statistical methods that are applicable to climate science.
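As a pointer to what geostatistical optimal prediction looks like in practice, here is a minimal zero-mean Gaussian-process (simple kriging) sketch with a squared-exponential covariance; the covariance choice and all parameter values are illustrative assumptions, not prescriptions from the article.

import numpy as np

def sq_exp_cov(a, b, sigma2=1.0, ell=0.5):
    # Squared-exponential covariance between 1-D location arrays a and b.
    d = a[:, None] - b[None, :]
    return sigma2 * np.exp(-0.5 * (d / ell) ** 2)

def simple_krige(x_obs, y_obs, x_new, jitter=1e-6):
    # Zero-mean simple kriging: posterior mean and variance at x_new.
    K = sq_exp_cov(x_obs, x_obs) + jitter * np.eye(len(x_obs))
    k_star = sq_exp_cov(x_obs, x_new)
    w = np.linalg.solve(K, k_star)  # kriging weights
    mean = w.T @ y_obs
    var = sq_exp_cov(x_new, x_new).diagonal() - np.sum(k_star * w, axis=0)
    return mean, var

x = np.array([0.0, 0.3, 0.7, 1.0])
y = np.sin(2 * np.pi * x)
mu, v = simple_krige(x, y, np.linspace(0, 1, 5))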


1999 ◽  
Vol 31 (2) ◽  
pp. 279-282 ◽  
Author(s):  
Y. C. Chin ◽  
A. J. Baddeley

We note some interesting properties of the class of point processes which are Markov with respect to the ‘connected component’ relation. Results in the literature imply that this class is closed under random translation and independent cluster generation with almost surely non-empty clusters. We further prove that it is closed under superposition. A wide range of examples is also given.


2013 ◽  
Vol 110 (11) ◽  
pp. 2592-2606 ◽  
Author(s):  
Renato N. Watanabe ◽  
Fernando H. Magalhães ◽  
Leonardo A. Elias ◽  
Vitor M. Chaud ◽  
Emanuele M. Mello ◽  
...  

This study focuses on the neuromuscular mechanisms behind ankle torque and EMG variability during a maintained isometric plantar flexion contraction. Experimentally obtained torque standard deviation (SD) and the means and SDs of the soleus, medial gastrocnemius, and lateral gastrocnemius EMG envelopes increased with mean torque over a wide range of torque levels. Computer simulations were performed on a biophysically based neuromuscular model of the triceps surae consisting of premotoneuronal spike trains (the global input, GI) driving the motoneuron pools of the soleus, medial gastrocnemius, and lateral gastrocnemius muscles, which activate their respective muscle units. Two types of point processes were adopted to represent the statistics of the GI: Poisson and Gamma. Simulations showed better agreement with experimental results when the GI was modeled by Gamma point processes having lower orders (higher variability) for higher target torques. At the same time, the simulations reproduced well the experimental data on EMG envelope mean and SD as a function of mean plantar flexion torque for the three muscles. These results suggest that the experimentally found dependence of torque and EMG variability on mean plantar flexion torque level rests not only on the intrinsic properties of the motoneuron pools and the muscle units they innervate, but also on the increasing variability of the premotoneuronal GI spike trains as their mean rates increase to command a higher plantar flexion torque level. The simulations also provided information on the spike train statistics of the several hundred motoneurons that compose the triceps surae, giving a broad picture of the mechanisms behind torque and EMG variability.
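To illustrate the two GI statistics being compared, here is a minimal sketch of gamma renewal spike trains (order 1 recovers a Poisson process; higher orders are more regular at the same mean rate); the rate and duration are arbitrary example values, not the study's settings.

import numpy as np

def renewal_spike_train(rate_hz, duration_s, order=1, seed=None):
    # Spike times from a gamma renewal process of a given integer order.
    # Gamma(shape=order, scale=1/(order*rate)) keeps the mean ISI at 1/rate,
    # so changing the order changes only the interspike-interval variability.
    rng = np.random.default_rng(seed)
    n_max = int(rate_hz * duration_s * 3 + 100)
    isis = rng.gamma(order, 1.0 / (order * rate_hz), n_max)
    t = np.cumsum(isis)
    return t[t < duration_s]

for order in (1, 4):  # Poisson input vs. a more regular gamma input
    isi = np.diff(renewal_spike_train(100.0, 60.0, order, seed=0))
    print(order, isi.std() / isi.mean())  # CV is about 1/sqrt(order)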


2013 ◽  
Vol 50 (4) ◽  
pp. 1006-1024 ◽  
Author(s):  
Feng Chen ◽  
Peter Hall

Self-exciting point processes (SEPPs), or Hawkes processes, have found applications in a wide range of fields, such as epidemiology, seismology, neuroscience, engineering, and, more recently, financial econometrics and social interactions. In traditional SEPP models, the baseline intensity is assumed to be constant. This has restricted the application of SEPPs to situations where there is clearly a self-exciting phenomenon but a constant baseline intensity is inappropriate. In this paper, to model point processes with varying baseline intensity, we introduce SEPP models with time-varying background intensities (SEPPVB, for short). We show that SEPPVB models are competitive with autoregressive conditional SEPP models (Engle and Russell 1998) for modeling ultra-high-frequency data. We also develop asymptotic theory for inference in parametric SEPP models, including SEPPVB, based on maximum likelihood estimation. We illustrate applications to ultra-high-frequency financial data analysis and compare performance with autoregressive conditional duration models.
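A minimal simulation sketch of such a SEPPVB via Ogata-style thinning, assuming an exponential excitation kernel and a sinusoidal baseline purely for illustration (the paper's models and estimators are more general):

import numpy as np

def simulate_seppvb(mu, mu_max, alpha, beta, horizon, seed=None):
    # Ogata thinning for a self-exciting process with intensity
    # lambda(t) = mu(t) + sum_{t_i < t} alpha * beta * exp(-beta * (t - t_i)).
    # mu: callable baseline; mu_max: an upper bound for mu on [0, horizon].
    rng = np.random.default_rng(seed)
    events, t, excite = [], 0.0, 0.0  # `excite` is the decaying sum at time t
    while True:
        lam_bar = mu_max + excite  # valid dominating rate: the excitation
                                   # term only decays between events
        t_new = t + rng.exponential(1.0 / lam_bar)
        if t_new > horizon:
            return np.array(events)
        excite *= np.exp(-beta * (t_new - t))
        t = t_new
        if rng.random() * lam_bar <= mu(t) + excite:  # accept w.p. lambda/lam_bar
            events.append(t)
            excite += alpha * beta  # intensity jump at an accepted event

# Example: sinusoidal baseline, branching ratio alpha = 0.5
ts = simulate_seppvb(lambda t: 1.0 + 0.5 * np.sin(t), 1.5, 0.5, 2.0, 100.0, seed=1)
print(len(ts))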


2006 ◽  
Vol 38 (4) ◽  
pp. 873-888 ◽  
Author(s):  
Peter McCullagh ◽  
Jesper Møller

We extend the boson process first to a large class of Cox processes and second to an even larger class of infinitely divisible point processes. Density and moment results are studied in detail. These results are obtained in closed form as weighted permanents, so the extension is called a permanental process. Temporal extensions and a particularly tractable case of the permanental process are also studied. Extensions of the fermion process along similar lines, leading to so-called determinantal processes, are discussed.
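For orientation, the standard contrast this extension builds on: a determinantal (fermion) process has joint intensities given by determinants of a kernel $K$, while a boson process has them given by permanents,
$$\rho_n(x_1,\dots,x_n) = \det\bigl[K(x_i,x_j)\bigr]_{i,j=1}^{n} \quad\text{versus}\quad \rho_n(x_1,\dots,x_n) = \operatorname{per}\bigl[K(x_i,x_j)\bigr]_{i,j=1}^{n}.$$
The permanental processes of the paper replace the plain permanent by weighted permanents, whose exact form is developed there.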

