Empirical Distributions
Recently Published Documents

TOTAL DOCUMENTS: 276 (five years: 59)
H-INDEX: 26 (five years: 2)

2021 ◽  
Vol 94, 2021 (94) ◽  
pp. 5-12
Author(s):  
Petro Dvulit ◽  
Stepan Savchuk ◽  
Iryna Sosonka ◽  
...  

The aim of the research is to diagnose the metrological characteristics of high-precision GNSS observations at Ukrainian reference stations using methods of the non-classical error theory of measurements (NETM). Methodology. We selected 72 GNSS reference stations, downloaded daily observation files from the LPI analysis center server, and created time series in the topocentric coordinate system. The time series span almost two years (March 24, 2019 - January 2, 2021). Using a specialized software package, the time series were cleaned of offsets, breaks and seasonal effects, and the trend component was removed. The empirical error distributions were verified by the NETM procedure, following the recommendations of G. Jeffries and the principles of hypothesis testing based on Pearson's criterion. The main result of the research. The coordinate time series of the reference GNSS stations do not support the hypothesis that they follow the normal (Gaussian) distribution law. NETM diagnostics of the accuracy of high-precision GNSS measurements, based on confidence intervals for the skewness and kurtosis of a large sample followed by the Pearson test, confirm the presence of weak sources of systematic error that were not removed during GNSS processing. Scientific novelty. The authors show how NETM can improve the processing of high-precision GNSS measurements and why sources of systematic error must be taken into account. Neglecting these factors shifts the coordinate time series, which in turn leads to biased estimates of station velocities and hence of their geodynamic interpretation. Practical significance. Investigating why the error distributions deviate from the established norms supports metrologically sound high-precision GNSS measurements on large samples.
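
To make the diagnostic concrete, the sketch below illustrates the general idea of screening a detrended coordinate series for non-normality, combining confidence intervals for skewness and kurtosis with a Pearson chi-square goodness-of-fit test. It is a minimal Python illustration on simulated data, not the authors' NETM software; the bin count and confidence level are arbitrary choices.

    # Minimal sketch (simulated data, not the authors' NETM software): screen a
    # detrended coordinate residual series for non-normality via skewness/kurtosis
    # confidence intervals and a Pearson chi-square goodness-of-fit test.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    residuals = rng.standard_t(df=5, size=700)  # stand-in for a daily N/E/U residual series

    n = len(residuals)
    skew = stats.skew(residuals)
    kurt = stats.kurtosis(residuals)            # excess kurtosis, 0 for a normal law
    se_skew, se_kurt = np.sqrt(6.0 / n), np.sqrt(24.0 / n)  # large-sample standard errors
    print(f"skewness {skew:.3f} +/- {1.96 * se_skew:.3f}, "
          f"excess kurtosis {kurt:.3f} +/- {1.96 * se_kurt:.3f}")

    # Pearson chi-square test against a normal law with estimated mean and sigma,
    # using k equal-probability bins; ddof=2 accounts for the two fitted parameters.
    mu, sigma = residuals.mean(), residuals.std(ddof=1)
    k = 20
    edges = stats.norm.ppf(np.linspace(0, 1, k + 1), loc=mu, scale=sigma)
    edges[0], edges[-1] = residuals.min() - 1, residuals.max() + 1
    observed, _ = np.histogram(residuals, bins=edges)
    chi2, p = stats.chisquare(observed, np.full(k, n / k), ddof=2)
    print(f"Pearson chi-square = {chi2:.1f}, p = {p:.4f}")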


Radiocarbon ◽  
2021 ◽  
pp. 1-22
Author(s):  
Nicholas V Kessler

Age disparities between charcoal samples and their context are a well-known problem in archaeological chronometry, and even small offsets can affect the accuracy of high-precision wiggle-matched dates. When taphonomic or anthropogenic processes have removed the outermost rings, sapwood-based methods for estimating cutting dates are not always applicable, especially with charcoal. In these instances, wiggle-matched terminus post quem (TPQ) dates are often reconciled with subjective or ad hoc approaches. This study examines the distribution of age disparities caused by ring loss and other factors in a large dendroarchaeological dataset. Probability density functions describing the random distribution of age disparities are then fitted to the empirical distributions. These functions are tested on an actual wiggle-matched non-cutting date from the literature to evaluate their accuracy in a single case. Simulations are then presented to demonstrate how an age-offset function can be applied in OxCal outlier models to yield accurate dating in archaeological sequences with short intervals between dated episodes, even if all samples are non-cutting dates.
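
As a rough illustration of the fitting step described above, the Python sketch below fits exponential and gamma density functions to a simulated sample of age disparities and compares the fits. The data and parameter values are placeholders, not the paper's dendroarchaeological dataset or its OxCal outlier model.

    # Minimal sketch (simulated disparities, not the paper's dataset): fit candidate
    # probability density functions to age offsets caused by ring loss and compare them.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    disparities = rng.gamma(shape=1.2, scale=15.0, size=300)  # placeholder offsets in years

    # Fit exponential and gamma models with the location fixed at zero
    _, scale_exp = stats.expon.fit(disparities, floc=0)
    a_gam, _, scale_gam = stats.gamma.fit(disparities, floc=0)

    # Compare by log-likelihood and check the better fit with a Kolmogorov-Smirnov test
    ll_exp = stats.expon.logpdf(disparities, scale=scale_exp).sum()
    ll_gam = stats.gamma.logpdf(disparities, a_gam, scale=scale_gam).sum()
    print(f"exponential logL {ll_exp:.1f}, gamma logL {ll_gam:.1f}")
    print(stats.kstest(disparities, "gamma", args=(a_gam, 0, scale_gam)))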


2021 ◽  
Vol 68 (2) ◽  
pp. 38-52
Author(s):  
Dominik Krężołek

In this paper, we present a modification of the Weibull distribution for Value-at-Risk (VaR) estimation of investment portfolios on the precious metals market. The reason for using the Weibull distribution is the similarity of its shape to that of the empirical distributions of metal returns: these distributions are unimodal, leptokurtic and heavy-tailed. A portfolio analysis is carried out based on daily log-returns of four precious metals quoted on the London Metal Exchange: gold, silver, platinum and palladium. The VaR estimates calculated using GARCH-type models with non-classical error distributions are compared with the empirical estimates. The preliminary analysis shows that using conditional models based on the modified Weibull distribution to forecast VaR is fully justified.
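
The paper's modified Weibull error distribution is not available in standard statistical libraries, so the sketch below only illustrates the comparison it describes: an empirical VaR quantile versus a parametric VaR from a fitted heavy-tailed distribution (a Student-t stand-in), computed on simulated returns.

    # Minimal sketch (simulated returns, Student-t as a stand-in for the modified
    # Weibull errors): compare empirical and parametric Value-at-Risk estimates.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    returns = stats.t.rvs(df=4, scale=0.01, size=2500, random_state=rng)  # placeholder log-returns

    alpha = 0.01           # 1% VaR
    losses = -returns      # express VaR as a high quantile of the loss distribution

    # Empirical VaR: the (1 - alpha) quantile of the observed losses
    var_empirical = np.quantile(losses, 1 - alpha)

    # Parametric VaR from a fitted heavy-tailed distribution
    df_, loc_, scale_ = stats.t.fit(returns)
    var_parametric = -stats.t.ppf(alpha, df_, loc=loc_, scale=scale_)

    print(f"empirical 1% VaR:  {var_empirical:.4f}")
    print(f"parametric 1% VaR: {var_parametric:.4f}")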


Author(s):  
S. Petros'yan ◽  
Z. Ryabikina ◽  
N. Gubanova ◽  
S. Simavoryan

The article presents the results of the standardization of the "Personality’s Agency Activity Profile" methodology. The methodology is the author's multidimensional personality questionnaire, with confirmed validity and reliability, aimed at revealing the system of personality positions that determines the nature and orientation of the agent’s activity. The problems of standardizing personality questionnaires are discussed, in particular the relativity of the norms obtained during standardization and the need for additional indicators to obtain a correct psychodiagnostic interpretation. A step-by-step scheme for standardizing the methodology's scales was implemented on a sample of 958 people, including tests for the normality and homogeneity of the empirical distributions. Using methods of mathematical statistics, the need for differentiated standardization of the methodology depending on the gender and age of the respondents is substantiated. For the middle-aged (25-42 years old) part of the sample, tables for converting raw test scores to stanines were constructed and are presented.
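
As an illustration of the final step, the Python sketch below converts raw scores to stanines using the standard cumulative-percentage boundaries (4, 11, 23, 40, 60, 77, 89, 96 %) on a simulated norm group. It is not the authors' norm tables; in practice the conversion would be computed separately for each gender and age group.

    # Minimal sketch (simulated norm group, not the authors' tables): convert raw
    # questionnaire scores to stanines via standard cumulative-percentage cut-offs.
    import numpy as np

    def raw_to_stanine(raw_scores):
        """Return a stanine (1-9) for each raw score, normed on the sample itself."""
        raw_scores = np.asarray(raw_scores, dtype=float)
        cuts = np.percentile(raw_scores, [4, 11, 23, 40, 60, 77, 89, 96])
        return np.searchsorted(cuts, raw_scores, side="right") + 1

    # Example: a placeholder norm group of 958 respondents (e.g., the 25-42 age band)
    rng = np.random.default_rng(3)
    sample = rng.normal(loc=50, scale=10, size=958)
    stanines = raw_to_stanine(sample)
    print(np.bincount(stanines)[1:])  # counts per stanine, roughly 4/7/12/17/20/17/12/7/4 %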


2021 ◽  
pp. 2250001
Author(s):  
Andrew J. Collins ◽  
Sheida Etemadidavan ◽  
Wael Khallouli

Hedonic games have gained popularity over the last two decades, leading to several research articles that use analytical methods to better understand their properties. In this paper, a numerical approach, the Monte Carlo method, is used instead. Our method includes a technique for representing and generating random hedonic games. We were able to create and solve, using core stability, millions of hedonic games with up to 16 players. Empirical distributions of the hedonic games’ core sizes were generated from our results and analyzed for games of up to 13 players. Results from games of 14–16 players were used to validate our findings. Our results indicate that core partition size might follow the gamma distribution for games with a large number of players.
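
To make the distribution-fitting step concrete, the sketch below fits a gamma distribution to a placeholder sample of core sizes and checks the fit with a Kolmogorov-Smirnov test; the data are simulated, not the paper's Monte Carlo results.

    # Minimal sketch (simulated core sizes, not the paper's Monte Carlo output):
    # fit a gamma distribution to empirical core-partition sizes and test the fit.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    core_sizes = rng.gamma(shape=2.0, scale=3.0, size=10_000)  # placeholder core sizes

    shape, loc, scale = stats.gamma.fit(core_sizes, floc=0)
    ks_stat, p_value = stats.kstest(core_sizes, "gamma", args=(shape, loc, scale))
    print(f"gamma fit: shape={shape:.2f}, scale={scale:.2f}; KS={ks_stat:.4f}, p={p_value:.3f}")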


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Nicholas P. Howard ◽  
Cameron Peace ◽  
Kevin A. T. Silverstein ◽  
Ana Poets ◽  
James J. Luby ◽  
...  

Pedigree information is of fundamental importance in breeding programs and related genetics efforts. However, many individuals have unknown pedigrees. While methods to identify and confirm direct parent–offspring relationships are routine, those for other types of close relationships have yet to be effectively and widely implemented in plants, due to complications such as asexual propagation and extensive inbreeding. The objective of this study was to develop and demonstrate methods that support complex pedigree reconstruction via the total length of identical-by-state haplotypes (referred to in this study as “summed potential lengths of shared haplotypes”, SPLoSH). A custom Python script, HapShared, was developed to generate SPLoSH data in apple and sweet cherry. HapShared was used to establish empirical distributions of SPLoSH data for known relationships in these crops. These distributions were then used to estimate previously unknown relationships. Case studies in each crop demonstrated various pedigree reconstruction scenarios using SPLoSH data. For cherry, a full-sib relationship was deduced for ‘Emperor Francis’ and ‘Schmidt’, a half-sib relationship for ‘Van’ and ‘Windsor’, and the paternal grandparents of ‘Stella’ were confirmed. For apple, 29 cultivars were found to share an unknown parent, the pedigree of the unknown parent of ‘Cox’s Pomona’ was reconstructed, and ‘Fameuse’ was deduced to be a likely grandparent of ‘McIntosh’. Key genetic resources that enabled this empirical study were large genome-wide SNP array datasets, integrated genetic maps, and previously identified pedigree relationships. Crops with similar resources are also expected to benefit from using HapShared for pedigree reconstruction.
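
HapShared itself is not reproduced here, but the sketch below conveys the underlying quantity: summing the genetic-map lengths of identical-by-state runs shared by two haplotypes, keeping only runs above a minimum length. The marker data, map positions and threshold are hypothetical.

    # Minimal sketch of the quantity behind SPLoSH (not the HapShared implementation):
    # sum the lengths, in centimorgans, of identical-by-state runs shared by two
    # phased haplotypes, keeping only runs longer than a minimum length.
    import numpy as np

    def shared_haplotype_length(hap_a, hap_b, positions_cm, min_cm=5.0):
        match = np.asarray(hap_a) == np.asarray(hap_b)
        total, run_start = 0.0, None
        for i, same in enumerate(match):
            if same and run_start is None:
                run_start = i                       # open a matching run
            if run_start is not None and (not same or i == len(match) - 1):
                end = i if same else i - 1          # close the run
                length = positions_cm[end] - positions_cm[run_start]
                if length >= min_cm:
                    total += length
                run_start = None
        return total

    # Toy example: two 10-marker haplotypes on a 0-90 cM map (hypothetical data)
    pos = np.linspace(0, 90, 10)
    hap_a = [0, 1, 1, 0, 0, 1, 1, 1, 0, 1]
    hap_b = [0, 1, 1, 0, 1, 1, 1, 1, 0, 1]
    print(shared_haplotype_length(hap_a, hap_b, pos))  # 70.0 = 30 cM + 40 cM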


2021 ◽  
Vol 8 (8) ◽  
pp. 201844
Author(s):  
Sarah C. Maaß ◽  
Joost de Jong ◽  
Leendert van Maanen ◽  
Hedderik van Rijn

In a world that is uncertain and noisy, perception makes use of optimization procedures that rely on the statistical properties of previous experiences. A well-known example of this phenomenon is the central tendency effect observed in many psychophysical modalities. For example, in interval timing tasks, previous experiences influence the current percept, pulling behavioural responses towards the mean. In Bayesian observer models, these previous experiences are typically modelled by unimodal statistical distributions, referred to as the prior. Here, we critically assess the validity of the assumptions underlying these models and propose a model that allows for more flexible, yet conceptually more plausible, modelling of empirical distributions. By representing previous experiences as a mixture of lognormal distributions, this model can be parametrized to mimic different unimodal distributions and thus extends previous instantiations of Bayesian observer models. We fit the mixture lognormal model to published interval timing data of healthy young adults and a clinical population of aged mild cognitive impairment patients and age-matched controls, and demonstrate that this model better explains behavioural data and provides new insights into the mechanisms that underlie the behaviour of a memory-affected clinical population.
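
The sketch below shows what a mixture-of-lognormals prior looks like in code: a density and a sampler for a two-component mixture. The weights, medians and spreads are hypothetical placeholders, not the parameters fitted in the study.

    # Minimal sketch (hypothetical parameters, not the fitted model): density and
    # sampler for a two-component lognormal mixture used as a prior over intervals.
    import numpy as np
    from scipy import stats

    def mixture_lognormal_pdf(t, weights, medians, sigmas):
        """Mixture density; medians in seconds, sigmas on the log scale."""
        t = np.atleast_1d(t).astype(float)
        comps = [w * stats.lognorm.pdf(t, s=s, scale=m)   # scale = exp(mu) = median
                 for w, m, s in zip(weights, medians, sigmas)]
        return np.sum(comps, axis=0)

    def mixture_lognormal_rvs(n, weights, medians, sigmas, rng=None):
        """Draw n samples: pick a component per draw, then sample from it."""
        if rng is None:
            rng = np.random.default_rng()
        comp = rng.choice(len(weights), size=n, p=weights)
        return stats.lognorm.rvs(s=np.take(sigmas, comp), scale=np.take(medians, comp),
                                 random_state=rng)

    # Example: a prior concentrated near 0.8 s with a broader second component
    w, m, s = [0.7, 0.3], [0.8, 1.6], [0.15, 0.4]
    print(mixture_lognormal_pdf([0.6, 0.8, 1.2], w, m, s))
    print(mixture_lognormal_rvs(5, w, m, s, rng=np.random.default_rng(5)))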


2021 ◽  
Author(s):  
Aziz Fouche ◽  
Andrei Zinovyev

A formulation of the dataset integration problem describes the task of aligning two or more empirical distributions sampled from sources of the same kind, so that records of similar objects end up close to one another. We propose a variant of the optimal-transport- and Gromov-Wasserstein-based dataset integration algorithm introduced in SCOT. We formulate a constrained quadratic program that adjusts sample weights before OT or GW so that the weighted point density is close to uniform over the point cloud, for a given kernel. We test this method on one synthetic and two real-life datasets from single-cell biology. Weight adjustment allows distributions with similar effective supports but different local densities to be reliably integrated, which is not always the case with the original method. The approach is entirely unsupervised, scales well to thousands of samples and does not depend on the dimensionality of the ambient space, which makes it efficient for the analysis of single-cell datasets in biology. We provide an open-source implementation of this method in a Python package, woti.
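
The quadratic program itself lives in the woti package; the sketch below only illustrates the idea on toy data: choose nonnegative weights summing to one so that the weighted Gaussian-kernel density is as flat as possible over the point cloud. The kernel, bandwidth, solver and target are illustrative choices, not the package's implementation.

    # Minimal sketch of the weight-adjustment idea (not the woti/SCOT implementation):
    # pick nonnegative sample weights summing to 1 so that the weighted Gaussian
    # kernel density is approximately uniform over the point cloud.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.spatial.distance import cdist

    def uniformizing_weights(X, bandwidth=0.5):
        n = len(X)
        K = np.exp(-cdist(X, X, "sqeuclidean") / (2 * bandwidth**2))  # Gaussian kernel matrix
        target = np.full(n, K.mean())          # constant density target under uniform weights

        def objective(w):                      # || K w - target ||^2 with w a distribution
            r = K @ w - target
            return r @ r

        w0 = np.full(n, 1.0 / n)
        res = minimize(objective, w0, method="SLSQP",
                       bounds=[(0.0, 1.0)] * n,
                       constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
        return res.x

    # Toy example: a cloud with a dense cluster and a sparse tail
    rng = np.random.default_rng(6)
    X = np.vstack([rng.normal(0, 0.2, size=(80, 2)), rng.normal(3, 1.0, size=(20, 2))])
    w = uniformizing_weights(X)
    print(w[:80].mean(), w[80:].mean())  # sparse-region points should get larger weights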

