unit variance
Recently Published Documents

TOTAL DOCUMENTS: 26 (five years: 7)
H-INDEX: 6 (five years: 0)

2021 ◽  
Vol 58 (4) ◽  
pp. 1114-1130
Author(s):  
Martin Singull ◽  
Denise Uwamariya ◽  
Xiangfeng Yang

Abstract Let $\mathbf{X}$ be a $p\times n$ random matrix whose entries are independent and identically distributed real random variables with zero mean and unit variance. We study the limiting behaviors of the 2-norm condition number $\kappa(p,n)$ of $\mathbf{X}$ in terms of large deviations for large n, with p being fixed or $p=p(n)\rightarrow\infty$ with $p(n)=o(n)$. We propose two main ingredients: (i) to relate the large-deviation probabilities of $\kappa(p,n)$ to those involving n independent and identically distributed random variables, which enables us to consider a quite general distribution of the entries (namely the sub-Gaussian distribution), and (ii) to control, for standard normal entries, the upper tail of $\kappa(p,n)$ using the upper tails of ratios of two independent $\chi^2$ random variables, which enables us to establish an application in statistical inference.
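A minimal numerical illustration of the object studied above: the 2-norm condition number of a random matrix with i.i.d. zero-mean, unit-variance entries is the ratio of its extreme singular values, and for fixed p it concentrates near 1 as n grows. The sizes and seed below are arbitrary choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def condition_number(p, n):
    # 2-norm condition number: largest over smallest singular value
    X = rng.standard_normal((p, n))
    s = np.linalg.svd(X, compute_uv=False)
    return s[0] / s[-1]

# With p fixed, X X^T / n concentrates around the identity, so the
# condition number shrinks toward 1 as n grows (illustration, not a proof).
k_small = condition_number(5, 50)
k_large = condition_number(5, 50_000)
```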


2021 ◽  
Vol 507 (4) ◽  
pp. 4852-4863
Author(s):  
Íñigo Zubeldia ◽  
Aditya Rotti ◽  
Jens Chluba ◽  
Richard Battye

Abstract Matched filters are routinely used in cosmology in order to detect galaxy clusters from mm observations through their thermal Sunyaev–Zeldovich (tSZ) signature. In addition, they naturally provide an observable, the detection signal-to-noise or significance, which can be used as a mass proxy in number counts analyses of tSZ-selected cluster samples. In this work, we show that this observable is, in general, non-Gaussian, and that it suffers from a positive bias, which we refer to as optimization bias. Both aspects arise from the fact that the signal-to-noise is constructed through an optimization operation on noisy data, and hold even if the cluster signal is modelled perfectly well, no foregrounds are present, and the noise is Gaussian. After reviewing the general mathematical formalism underlying matched filters, we study the statistics of the signal-to-noise with a set of Monte Carlo mock observations, finding it to be well-described by a unit-variance Gaussian for signal-to-noise values of 6 and above, and quantify the magnitude of the optimization bias, for which we give an approximate expression that may be used in practice. We also consider the impact of the bias on the cluster number counts of Planck and the Simons Observatory (SO), finding it to be negligible for the former and potentially significant for the latter.
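The optimization bias described above can be sketched in a toy setting: even for pure Gaussian noise with no cluster signal, maximizing the matched-filter significance over trial positions yields a positively biased value, while the significance at any single fixed position is an unbiased unit-variance Gaussian. All sizes here are illustrative, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_trials, n_mock = 256, 64, 2000

fixed_snr, max_snr = [], []
for _ in range(n_mock):
    noise = rng.standard_normal(n_pix)       # white noise, unit variance
    # For a delta-function template in white noise, the filtered field is
    # the noise itself, so the per-position SNR is just the pixel value.
    fixed_snr.append(noise[0])               # SNR at one fixed position
    max_snr.append(noise[:n_trials].max())   # SNR maximized over positions

bias_fixed = np.mean(fixed_snr)   # ~ 0: no optimization, no bias
bias_max = np.mean(max_snr)       # clearly positive: optimization bias
```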


Foods ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 435
Author(s):  
Yaoyao Zhou ◽  
Seok-Young Kim ◽  
Jae-Soung Lee ◽  
Byeung-Kon Shin ◽  
Jeong-Ah Seo ◽  
...  

With the increase in soybean trade between countries, the intentional mislabeling of the origin of soybeans has become a serious problem worldwide. In this study, metabolic profiling of soybeans from the Republic of Korea and China was performed by nuclear magnetic resonance (NMR) spectroscopy coupled with multivariate statistical analysis to predict the geographical origin of soybeans. The optimal orthogonal partial least squares-discriminant analysis (OPLS-DA) model was obtained using total area normalization and unit variance (UV) scaling, without applying the variable influence on projection (VIP) cut-off value, resulting in 96.9% sensitivity, 94.4% specificity, and 95.6% accuracy in the leave-one-out cross validation (LOO-CV) test for discriminating between Korean and Chinese soybeans. Soybeans from the northeastern, middle, and southern regions of China were successfully differentiated by standardized area normalization and UV scaling with a VIP cut-off value of 1.0, resulting in 100% sensitivity, 91.7%–100% specificity, and 94.4%–100% accuracy in a LOO-CV test. The methods employed in this study can be used to obtain essential information for the authentication of soybean samples from diverse geographical locations in future studies.
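The unit variance (UV) scaling step used above is the standard autoscaling of metabolomics: each variable is centred and divided by its standard deviation so every NMR variable contributes equal variance before the OPLS-DA modelling. A minimal sketch with synthetic stand-in data (not the soybean spectra):

```python
import numpy as np

rng = np.random.default_rng(2)
# Three synthetic "variables" with very different means and spreads,
# standing in for integrated NMR bucket intensities.
X = rng.normal(loc=[10.0, -3.0, 0.5], scale=[5.0, 0.1, 1.0], size=(200, 3))

# UV scaling: centre each column and divide by its standard deviation,
# so all variables end up with zero mean and unit variance.
X_uv = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
```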


2020 ◽  
Vol 496 (1) ◽  
pp. 328-338
Author(s):  
Adam Moss

ABSTRACT We present a novel Bayesian inference tool that uses a neural network (NN) to parametrize efficient Markov Chain Monte Carlo (MCMC) proposals. The target distribution is first transformed into a diagonal, unit-variance Gaussian by a series of non-linear, invertible, and non-volume-preserving flows. NNs are extremely expressive, and can transform complex targets to a simple latent representation. Efficient proposals can then be made in this space, and we demonstrate a high degree of mixing on several challenging distributions. Parameter space can naturally be split into a block diagonal speed hierarchy, allowing for fast exploration of subspaces where it is inexpensive to evaluate the likelihood. Using this method, we develop a nested MCMC sampler to perform Bayesian inference and model comparison, finding excellent performance on highly curved and multimodal analytic likelihoods. We also test it on Planck 2015 data, showing accurate parameter constraints, and calculate the evidence for simple one-parameter extensions to the standard cosmological model in ∼20D parameter space. Our method has wide applicability to a range of problems in astronomy and cosmology and is available for download from https://github.com/adammoss/nnest.
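The core idea of proposing in a whitened latent space can be sketched with a linear stand-in for the learned flow: a Cholesky factor maps a correlated Gaussian target to a diagonal, unit-variance latent space, where simple isotropic Metropolis proposals mix well. The paper uses learned non-volume-preserving flows for general targets; this is only the linear special case.

```python
import numpy as np

rng = np.random.default_rng(3)
cov = np.array([[2.0, 1.9], [1.9, 2.0]])      # highly correlated 2D target
L = np.linalg.cholesky(cov)                   # x = L @ z whitens the target

z = np.zeros(2)
samples = []
for _ in range(20_000):
    z_prop = z + 0.8 * rng.standard_normal(2)  # isotropic latent proposal
    # In latent space the target is a standard Gaussian, so the Metropolis
    # acceptance ratio reduces to a difference of squared norms.
    if np.log(rng.random()) < 0.5 * (z @ z - z_prop @ z_prop):
        z = z_prop
    samples.append(L @ z)                      # map back to parameter space

samples = np.array(samples)
emp_cov = np.cov(samples.T)                    # should approach `cov`
```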


2020 ◽  
Vol 57 (1) ◽  
pp. 78-96
Author(s):  
Michael Falk ◽  
Amir Khorrami Chokami ◽  
Simone A. Padoan

Abstract For a zero-mean, unit-variance stationary univariate Gaussian process we derive the probability that a record, say $X_n$, takes place at time n, and derive its distribution function. We study the joint distribution of the arrival-time process of records and the distribution of the increments between records. We compute the expected number of records. We also consider two consecutive and non-consecutive records, one at time j and one at time n, and we derive the probability that the joint records $(X_j,X_n)$ occur, as well as their distribution function. The probability that the records $X_n$ and $(X_j,X_n)$ take place and the arrival time of the nth record are independent of the marginal distribution function, provided that it is continuous. These results actually hold for a strictly stationary process with Gaussian copulas.
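The distribution-free character of record probabilities noted above can be checked numerically in the simplest special case: for an i.i.d. continuous sequence (zero correlation), the probability that $X_n$ is a record is exactly 1/n, whatever the marginal. A Monte Carlo sketch with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(4)
n, n_mock = 10, 100_000

X = rng.standard_normal((n_mock, n))
# X_n is a record iff it exceeds every earlier observation.
is_record = X[:, -1] > X[:, :-1].max(axis=1)
p_record = is_record.mean()                  # should be close to 1/n = 0.1
```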


2019 ◽  
Vol 22 (07) ◽  
pp. 1950059
Author(s):  
Hendrik Flasche ◽  
Zakhar Kabluchko

Let [Formula: see text] be i.i.d. random variables with zero mean and unit variance. Consider a random Taylor series of the form [Formula: see text] where [Formula: see text] is a real sequence such that [Formula: see text] is regularly varying with index [Formula: see text], where [Formula: see text]. We prove that [Formula: see text] where [Formula: see text] denotes the number of real zeroes of [Formula: see text] in the interval [Formula: see text].
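A hedged numerical companion to the abstract above: count the real zeroes of a truncated random Taylor series with i.i.d. standard normal coefficients. The constant weight sequence used here (the Kac-like case) is an illustrative choice, not necessarily one covered by the paper's regularly varying assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def real_zeros_in_interval(coeffs, lo=-0.99, hi=0.99):
    # coeffs are a_0, ..., a_deg; np.roots wants highest degree first
    roots = np.roots(coeffs[::-1])
    real = roots[np.abs(roots.imag) < 1e-6].real
    return int(np.sum((real > lo) & (real < hi)))

deg = 50
counts = [real_zeros_in_interval(rng.standard_normal(deg + 1))
          for _ in range(50)]
mean_zeros = np.mean(counts)   # real zeros are few and cluster near +-1
```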


2019 ◽  
Vol 484 (3) ◽  
pp. 265-268
Author(s):  
F. Götze ◽  
A. A. Naumov ◽  
A. N. Tikhomirov

We consider symmetric random matrices with independent mean zero and unit variance entries in the upper triangular part. Assuming that the distributions of matrix entries have finite moment of order four, we prove optimal bounds for the distance between the Stieltjes transforms of the empirical spectral distribution function and the semicircle law. An application concerning the convergence rate in probability of the empirical spectral distribution to the semicircle law is discussed as well.
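The semicircle law referenced above is easy to probe numerically: the eigenvalues of a symmetric matrix with independent mean-zero, unit-variance entries, scaled by $1/\sqrt{n}$, fill the interval $[-2, 2]$ with semicircle density, whose second moment is exactly 1. A Gaussian-entry sketch:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1000

A = rng.standard_normal((n, n))
W = (A + A.T) / np.sqrt(2 * n)   # symmetric; off-diagonal variance 1/n
eig = np.linalg.eigvalsh(W)

second_moment = np.mean(eig ** 2)  # semicircle value: 1
edge = eig.max()                   # semicircle support edge: 2
```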


2017 ◽  
Vol 06 (03) ◽  
pp. 1750012 ◽  
Author(s):  
Nicholas Cook

We consider random [Formula: see text] matrices of the form [Formula: see text], where [Formula: see text] is the adjacency matrix of a uniform random [Formula: see text]-regular directed graph on [Formula: see text] vertices, with [Formula: see text] for some fixed [Formula: see text], and [Formula: see text] is an [Formula: see text] matrix of i.i.d. centered random variables with unit variance and finite [Formula: see text]th moment (here ∘ denotes the matrix Hadamard product). We show that as [Formula: see text], the empirical spectral distribution of [Formula: see text] converges weakly in probability to the normalized Lebesgue measure on the unit disk.
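A rough numerical companion, with a deliberate simplification: instead of a uniform random d-regular digraph we use the fixed circulant d-regular digraph in which vertex i points to i+1, ..., i+d (mod N), a stand-in that is not the paper's random graph model. Hadamard-multiplying its adjacency matrix by i.i.d. standard normals and rescaling by $1/\sqrt{d}$ still produces a spectrum that fills out roughly the unit disk.

```python
import numpy as np

rng = np.random.default_rng(7)
N, d = 400, 100

# Circulant d-regular digraph: vertex i -> i+1, ..., i+d (mod N).
A = np.zeros((N, N))
for shift in range(1, d + 1):
    A += np.eye(N, k=shift) + np.eye(N, k=shift - N)  # wrap-around band

X = rng.standard_normal((N, N))
M = (A * X) / np.sqrt(d)          # Hadamard product, rescaled

eig = np.linalg.eigvals(M)
# Nearly all eigenvalues should lie inside (a slight inflation of) the
# unit disk, consistent with a circular-law-type limit.
frac_in_disk = np.mean(np.abs(eig) <= 1.1)
```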


2017 ◽  
Vol 30 (3) ◽  
pp. 417-427
Author(s):  
Nikola Simic ◽  
Zoran Peric ◽  
Milan Savic

This paper describes an algorithm for grayscale image compression based on non-uniform quantizers designed for discrete input samples. Non-uniform quantization is performed in two steps for unit variance, while the design introduces a discrete variance. The best theoretical and experimental results are obtained for those discrete values of the variance which place the operating range of the quantizer in the vicinity of the maximal signal value that can appear at the input. The experiment is performed by applying the proposed quantizers to the compression of standard test grayscale images as a classic example of a discrete input source. The proposed fixed non-uniform quantizers, designed for discrete input samples, provide up to 4.93 dB higher PSQNR compared to fixed piecewise uniform quantizers designed for discrete input samples.
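As background for why non-uniform quantizers outperform uniform ones at a fixed number of levels, here is a generic sketch contrasting a fixed uniform quantizer with a mu-law companding quantizer on Gaussian inputs. This is standard textbook companding, not the paper's design for discrete input samples; it only illustrates how non-uniform cells widen the usable operating range.

```python
import numpy as np

rng = np.random.default_rng(8)
levels, x_max, mu = 32, 4.0, 255.0

def uniform_q(x):
    # Fixed mid-rise uniform quantizer on [-x_max, x_max]
    step = 2 * x_max / levels
    idx = np.clip(np.floor(x / step), -levels // 2, levels // 2 - 1)
    return (idx + 0.5) * step

def mulaw_q(x):
    # Compress -> uniform quantization on [-1, 1] -> expand
    xc = np.clip(x, -x_max, x_max)
    c = np.sign(xc) * np.log1p(mu * np.abs(xc) / x_max) / np.log1p(mu)
    step = 2.0 / levels
    idx = np.clip(np.floor(c / step), -levels // 2, levels // 2 - 1)
    cq = (idx + 0.5) * step
    return np.sign(cq) * x_max * np.expm1(np.abs(cq) * np.log1p(mu)) / mu

def sqnr_db(x, xq):
    return 10 * np.log10(np.mean(x ** 2) / np.mean((x - xq) ** 2))

x_unit = rng.standard_normal(100_000)            # unit-variance source
x_small = 0.05 * rng.standard_normal(100_000)    # weak source, same quantizers

sqnr_uniform_unit = sqnr_db(x_unit, uniform_q(x_unit))
sqnr_uniform_small = sqnr_db(x_small, uniform_q(x_small))  # collapses
sqnr_mulaw_small = sqnr_db(x_small, mulaw_q(x_small))      # stays usable
```

The uniform quantizer performs well only near the variance it was loaded for, while the companding quantizer keeps a serviceable SQNR over a much wider input range, which is the motivation for non-uniform cell placement.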

