statistical estimator
Recently Published Documents

TOTAL DOCUMENTS: 26 (five years: 7)
H-INDEX: 8 (five years: 2)

Author(s): Michele Nardin, Jozsef Csicsvari, Gašper Tkačik, Cristina Savin

Although much is known about how single neurons in the hippocampus represent an animal's position, how cell-cell interactions contribute to spatial coding remains poorly understood. Using a novel statistical estimator and theoretical modeling, both developed in the framework of maximum entropy models, we reveal highly structured cell-to-cell interactions whose statistics depend on whether the environment is familiar or novel. In both conditions the circuit interactions optimize the encoding of spatial information, but for regimes that differ in the signal-to-noise ratio of their spatial inputs. Moreover, the topology of the interactions facilitates linear decodability, making the information easy to read out by downstream circuits. These findings suggest that the efficient coding hypothesis applies not only to the properties of individual neurons in the sensory periphery, but also to neural interactions in the central brain.
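The pairwise maximum entropy framework mentioned in this abstract can be illustrated with a toy sketch (ours, not the authors' estimator): fit the fields h and couplings J of a model P(s) ∝ exp(h·s + s·J·s/2) over binary activity patterns by gradient ascent, until the model's means and pairwise moments match those measured from data. The function name and the exact-enumeration fit are illustrative assumptions.

```python
import itertools
import numpy as np

def fit_pairwise_maxent(mean, corr, n, lr=0.2, steps=8000):
    """Fit P(s) ~ exp(h.s + s.J.s/2) over binary patterns s in {0,1}^n so the
    model's means and pairwise second moments match the targets.
    Uses exact enumeration of the 2^n states, so only suitable for small n."""
    states = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)
    h, J = np.zeros(n), np.zeros((n, n))
    for _ in range(steps):
        logp = states @ h + 0.5 * np.einsum('ki,ij,kj->k', states, J, states)
        p = np.exp(logp - logp.max())
        p /= p.sum()                              # model distribution
        m = p @ states                            # model means <s_i>
        c = states.T @ (states * p[:, None])      # model moments <s_i s_j>
        h += lr * (mean - m)                      # moment-matching ascent
        dJ = lr * (corr - c)
        np.fill_diagonal(dJ, 0.0)                 # diagonal is handled by h
        J += dJ
    return h, J
```

The learned couplings J play the role of the cell-to-cell interactions whose structure the paper analyses.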


2021
Author(s): Louis Martí, Shengyi Wu, Steven T. Piantadosi, Celeste Kidd

Many social and legal conflicts come down to differences in semantics. Yet semantic variation between individuals, and people's awareness of this variation, have been relatively neglected by experimental psychology. Here, across two experiments, we quantify the amount of agreement and disagreement between ordinary semantic concepts in the population, as well as people's meta-cognitive awareness of these differences. We collect similarity ratings and feature judgements, and analyze them using a non-parametric clustering scheme with an ecological statistical estimator to infer the number of different meanings for the same word that are present in the population. We find that typically at least ten to twenty variants of meanings exist for even common nouns, but that people are unaware of this variation. Instead, people exhibit a strong bias to erroneously believe that other people share their particular semantics, pointing to one factor that likely interferes with political and social discourse.
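The abstract does not name the ecological estimator used; a standard tool for this kind of "unseen species" problem is the Chao1 richness estimator, which lower-bounds the number of distinct classes (here, word meanings) from how many were observed exactly once or twice. A sketch assuming Chao1 purely for illustration:

```python
def chao1(counts):
    """Chao1 lower-bound estimate of the total number of classes in a
    population, given the observed count of each class in a sample."""
    observed = sum(1 for c in counts if c > 0)
    f1 = sum(1 for c in counts if c == 1)    # classes seen exactly once
    f2 = sum(1 for c in counts if c == 2)    # classes seen exactly twice
    if f2 > 0:
        return observed + f1 * f1 / (2 * f2)
    return observed + f1 * (f1 - 1) / 2      # bias-corrected form when f2 = 0
```

Applied to clusters of meanings for one word, many singleton clusters push the estimate well above the observed count, signalling additional unobserved variants.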


2021, Vol 3 (2), pp. 357-373
Author(s): Umberto Michelucci, Francesca Venturini

The results of neural networks depend strongly on the training data, the weight initialisation, and the chosen hyperparameters. Determining the distribution of a statistical estimator, such as the Mean Squared Error (MSE) or the accuracy, is therefore fundamental for evaluating the performance of a neural network model (NNM). For many machine learning models, such as linear regression, quantities such as the variance or confidence intervals of the results can be obtained analytically. Neural networks, however, are not analytically tractable due to their complexity, so the distributions of such statistical estimators cannot easily be estimated. When estimating the global performance of an NNM by estimating the MSE in a regression problem, for example, it is important to know the variance of the MSE. Bootstrap is one of the most important resampling techniques for estimating the averages and variances, among other properties, of statistical estimators. In this tutorial, the application of resampling techniques (including bootstrap) to the evaluation of neural networks' performance is explained from both a theoretical and practical point of view. The pseudo-code of the algorithms is provided to facilitate their implementation. Computational aspects, such as the training time, are discussed, since resampling techniques always require simulations to be run many thousands of times and are therefore computationally intensive. A specific version of the bootstrap algorithm is presented that allows the distribution of a statistical estimator to be estimated for an NNM in a computationally efficient way. Finally, the algorithms are compared on both synthetically generated and real data to demonstrate their performance.
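As a concrete illustration of the core idea (a sketch, not the specific algorithm of the paper): once a network has been trained, the distribution of its test-set MSE can be bootstrapped by resampling prediction-target pairs with replacement, which avoids retraining for every replicate.

```python
import numpy as np

def bootstrap_mse(y_true, y_pred, n_boot=2000, seed=0):
    """Bootstrap the test-set MSE: resample the squared errors with
    replacement and return the mean, standard deviation, and a 95%
    percentile interval of the resulting MSE distribution."""
    rng = np.random.default_rng(seed)
    err2 = (np.asarray(y_true) - np.asarray(y_pred)) ** 2
    idx = rng.integers(0, err2.size, size=(n_boot, err2.size))
    mses = err2[idx].mean(axis=1)        # one MSE per bootstrap replicate
    return mses.mean(), mses.std(), np.percentile(mses, [2.5, 97.5])
```

Resampling the errors of a fixed model captures test-set variability only; the weight-initialisation and training-data variability discussed in the tutorial require retraining and are correspondingly more expensive.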


2020, Vol 35 (6), pp. 341-353
Author(s): Gennady A. Mikhailov, Natalya V. Tracheva, Sergey A. Ukhinov

Abstract. In the present paper, we propose a new combined kernel-projective statistical estimator of the two-dimensional distribution density, in which the first ("main") variable is processed with a kernel estimator and the second with a projective estimator of the conditional distribution density. In this case, statistically estimated coefficients of an orthogonal expansion of the conditional distribution density are used for each "kernel" interval defined by a micro-sample. The root-mean-square optimization of the estimator is performed under assumptions concerning the convergence rate of the orthogonal expansion used. A numerical study of the constructed estimator is carried out for the angular distributions of the radiation flux forward-scattered and backscattered by a layer of matter. A comparative analysis of the results is performed for molecular and aerosol scattering.
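A minimal sketch of the idea (illustrative only, with our own choices of a Gaussian kernel and a Legendre basis): weight the sample with a kernel in the first variable, then estimate the coefficients of an orthogonal expansion of the conditional density of the second variable from the kernel-weighted "micro-sample".

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def kernel_projective_density(x0, xs, ys, h=0.2, order=3):
    """Estimate f(y | x near x0) for y in [-1, 1]: Gaussian kernel weights
    in x select a micro-sample around x0, and the conditional density is
    projected onto Legendre polynomials P_0..P_order."""
    w = np.exp(-0.5 * ((xs - x0) / h) ** 2)      # kernel weights in x
    w /= w.sum()
    # coefficient c_k = E[P_k(y) | x near x0] / <P_k, P_k>, with <P_k, P_k> = 2/(2k+1)
    coeffs = [(2 * k + 1) / 2 * np.sum(w * Legendre.basis(k)(ys))
              for k in range(order + 1)]
    def density(y):
        return sum(c * Legendre.basis(k)(y) for k, c in enumerate(coeffs))
    return density
```

By construction the estimate integrates to one over [-1, 1]; the paper optimises the kernel width and the expansion length in the root-mean-square sense rather than fixing them as done here.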


Author(s): Mohammad R. Khosravi, Sadegh Samadi

Abstract. Nowadays, industrial video synthetic aperture radars (ViSARs) are widely used for aerial remote sensing and surveillance systems in smart cities. A main challenge for a group of networked ViSAR sensors in an IoT-based environment is the low bandwidth of the wireless links used to communicate big video data. In this research, we propose a non-linear statistical estimator for the adaptive reconstruction of compressed ViSAR data. The proposed reconstruction filter is based on an adaptively generated non-linear weight mask of spatial observations, and it strongly outperforms several conventional and well-known reconstruction filters on three different video samples.
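The flavour of such an adaptively generated non-linear weight mask can be conveyed with a generic sketch (not the authors' filter): each spatial observation in a local window is weighted by a non-linear function of its deviation from a robust local reference, so outliers contribute little to the reconstructed value.

```python
import numpy as np

def adaptive_reconstruct(neigh):
    """Reconstruct a missing pixel from neighbouring observations using a
    non-linear weight mask: weights fall off with squared deviation from
    the local median, suppressing outliers and impulsive noise."""
    neigh = np.asarray(neigh, dtype=float)
    ref = np.median(neigh)                  # robust local reference
    w = 1.0 / (1.0 + (neigh - ref) ** 2)    # non-linear, data-adaptive mask
    return float(np.sum(w * neigh) / np.sum(w))
```

On the window [10, 10, 10, 10, 200] the plain mean is 48, while the masked estimate stays near 10; this selectivity is what lets adaptive non-linear filters outperform linear reconstruction on impulsive artefacts.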


2019, Vol 489 (1), pp. 1321-1337
Author(s): Adélie Gorce, Jonathan R Pritchard

ABSTRACT We present a new statistical tool, called the triangle correlation function (TCF), inspired by the earlier work of Obreschkow et al. It is derived from the three-point correlation function and aims to probe the characteristic scale of ionized regions during the epoch of reionization from 21 cm interferometric observations. Unlike most works, which focus on the power spectrum, i.e. amplitude information, our statistic is based on the information that can be extracted from the phases of the Fourier transform of the ionization field. In this perspective, it may benefit from the well-known interferometric concept of closure phases. We find that this statistical estimator performs very well on simple ionization fields: for example, with well-defined fully ionized discs, the TCF peaks at a scale that can be related to the radius of the ionized bubbles. We explore the robustness of the TCF when observational effects such as angular resolution and noise are considered. We also obtain interesting results on fields generated by more elaborate simulations such as 21CMFAST. Although the variety of sources and ionized morphologies in the early stages of the process makes their interpretation more challenging, the nature of the signal can tell us about the stage of reionization. Finally, and in contrast to other bubble size distribution algorithms, we show that the TCF can resolve two different characteristic scales in a given map.
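A toy one-dimensional cousin of such phase-based statistics (a didactic sketch, not the TCF itself) is the phase-only triple product eps(k1)·eps(k2)·conj(eps(k1+k2)), where eps(k) = F(k)/|F(k)|. Like a closure phase it discards all amplitude information, averages to a small value for random-phase fields, and picks up the phase coherence produced by localised structure.

```python
import numpy as np

def phase_triple_statistic(field, kmax=16):
    """Average the phase-only triple product eps(k1)*eps(k2)*conj(eps(k1+k2))
    over low wavenumbers; the modulus is near 0 for random phases and near 1
    for strongly phase-coherent (localised) structure."""
    F = np.fft.fft(field)
    eps = F / np.maximum(np.abs(F), 1e-12)   # keep phases, drop amplitudes
    total = 0.0 + 0.0j
    for k1 in range(1, kmax):
        for k2 in range(1, kmax):
            total += eps[k1] * eps[k2] * np.conj(eps[k1 + k2])
    return abs(total) / (kmax - 1) ** 2
```

For a field containing a single sharp spike the triple product equals 1 for every (k1, k2) pair, whereas Gaussian noise yields a value near zero, which is the contrast such phase statistics exploit.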


2019, Vol 3 (3), pp. 499-499
Author(s): William D. Pearse, Charles C. Davis, David W. Inouye, Richard B. Primack, T. Jonathan Davies

2017, Vol 1 (12), pp. 1876-1882
Author(s): William D. Pearse, Charles C. Davis, David W. Inouye, Richard B. Primack, T. Jonathan Davies

2017, Vol 321, pp. 132-143
Author(s): Andrej Prošek, Boštjan Končar, Matjaž Leskovar

2017, Vol 605, pp. A27
Author(s): C. Gouin, R. Gavazzi, S. Codis, C. Pichon, S. Peirani, ...

Context. Upcoming weak lensing surveys such as Euclid will provide an unprecedented opportunity to quantify the geometry and topology of the cosmic web, in particular in the vicinity of lensing clusters. Aims. Understanding the connectivity of the cosmic web with unbiased mass tracers, such as weak lensing, is of prime importance for probing the underlying cosmology, seeking dynamical signatures of dark matter, and quantifying environmental effects on galaxy formation. Methods. Mock catalogues of galaxy clusters are extracted from the N-body PLUS simulation. For each cluster, the aperture multipolar moments of the convergence are calculated in two annuli (inside and outside the virial radius). By stacking their moduli, a statistical estimator is built to characterise the angular mass distribution around clusters. The moments are compared to predictions from perturbation theory and spherical collapse. Results. The main weakly chromatic excess of multipolar power on large scales is understood as arising from the contraction of the primordial cosmic web driven by the growing potential well of the cluster. Besides this boost, the quadrupole prevails in the (ellipsoidal) cluster core, while at the outskirts harmonic distortions are spread over small angular modes and trace the non-linear sharpening of the filamentary structures. Predictions for the signal amplitude as a function of the cluster-centric distance, mass, and redshift are presented. The prospects of measuring this signal are estimated for current and future lensing data sets. Conclusions. The Euclid mission should provide all the information necessary for studying the cosmic evolution of the connectivity of the cosmic web around lensing clusters using multipolar moments, and for probing unique signatures of, for example, baryons and warm dark matter.
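The aperture multipolar moments of the Methods section can be sketched on a pixelised convergence map as follows (a schematic discretisation with our own grid conventions; the paper's normalisation and stacking are not reproduced):

```python
import numpy as np

def aperture_multipole(kappa, m, r_in, r_out):
    """Aperture multipolar moment Q_m: sum of kappa * exp(i*m*phi) over the
    annulus r_in <= r < r_out of a square convergence map, with the origin
    at the map centre (pixel units)."""
    n = kappa.shape[0]
    y, x = np.mgrid[0:n, 0:n] - (n - 1) / 2.0
    r = np.hypot(x, y)
    phi = np.arctan2(y, x)                  # azimuthal angle of each pixel
    mask = (r >= r_in) & (r < r_out)
    return np.sum(kappa[mask] * np.exp(1j * m * phi[mask]))
```

For a circularly symmetric map every Q_m with m > 0 vanishes, while an ellipsoidal core feeds the quadrupole Q_2, mirroring the behaviour described in the Results.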

