Estimation of Basic Quantities for Other Sampling Schemes

Author(s): John P. Klein, Melvin L. Moeschberger


Methodology, 2012, Vol 8 (2), pp. 71-80
Author(s): Juan Botella, Manuel Suero

In Reliability Generalization (RG) meta-analyses, the importance of bearing in mind the problems of range restriction or biased sampling, and their influence on reliability estimation, has often been highlighted. Nevertheless, the presence of heterogeneous variances in the included studies has typically been diagnosed subjectively and has not been taken into account in subsequent analyses. Procedures are proposed to detect the presence of a variety of sampling schemes and to manage them in the analyses. The procedures are illustrated with an example, applying them to 25 estimates of Cronbach's alpha coefficient for the Hamilton Scale for Depression.
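The abstract does not reproduce the proposed detection procedures, but a minimal sketch of one standard screen for heterogeneous variances across studies, Bartlett's test computed from study-level summaries, illustrates the kind of check involved. The function name and all numbers below are hypothetical, not taken from the paper.

```python
import numpy as np
from scipy import stats

def bartlett_from_summaries(variances, ns):
    """Bartlett's test for homogeneity of variances, computed from
    study-level summaries (sample variances and sample sizes), as one
    plausible screen before pooling reliability estimates."""
    variances = np.asarray(variances, dtype=float)
    ns = np.asarray(ns, dtype=int)
    k = len(variances)
    df = ns - 1
    n_total = df.sum()
    sp2 = np.sum(df * variances) / n_total            # pooled variance
    stat = n_total * np.log(sp2) - np.sum(df * np.log(variances))
    correction = 1 + (np.sum(1.0 / df) - 1.0 / n_total) / (3 * (k - 1))
    chi2 = stat / correction
    p = stats.chi2.sf(chi2, k - 1)                    # chi-square, k-1 df
    return chi2, p

# Five hypothetical studies reporting test-score variances and sizes
chi2, p = bartlett_from_summaries([120.0, 95.0, 210.0, 88.0, 150.0],
                                  [60, 45, 80, 52, 70])
print(f"Bartlett chi2 = {chi2:.2f}, p = {p:.4f}")
```

A small p-value here would flag the kind of variance heterogeneity that, per the paper, should feed into the later analyses rather than being judged by eye.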


2014, Vol 22 (2), pp. 217-224
Author(s): Houlong JIANG, Shuduan LIU, Anding XU, Chao YANG

2018, Vol 2018, pp. 1-15
Author(s): Xichuan Liu, Taichang Gao, Yuntao Hu, Xiaojian Shu

To improve the measurements of the precipitation microphysical characteristics sensor (PMCS), the sampling process of raindrops by the PMCS was simulated with a particle-by-particle Monte Carlo model to assess the effect of different bin sizes on drop size distribution (DSD) measurement, and optimum sampling bin sizes for the PMCS are proposed on the basis of the simulation results. Simulations of five bin-size sampling schemes in four rain-rate categories show that the raw captured DSD fluctuates significantly under the influence of the capture probability, whereas an appropriate sampling bin size and width can reduce the impact of variations in raindrop number on the DSD shape. A field comparison of the PMCS, an OTT PARSIVEL disdrometer, and a tipping-bucket rain gauge shows good consistency in rain rate and rainfall accumulation among the three instruments; the DSDs obtained by the PMCS and the OTT agree well; the probability distributions of N0, μ, and Λ show good agreement between the Gamma parameters of the PMCS and the OTT; and the fitted μ-Λ and Z-R relationships measured by the PMCS are close to those measured by the OTT. These results validate the performance of the PMCS for rain rate, rainfall accumulation, and DSD-related parameters.
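As a rough illustration of the particle-by-particle idea (not the authors' model: the gamma-DSD shape, parameter values, and bin edges below are assumptions), one can draw individual drop diameters from a gamma-shaped DSD and histogram them under different bin widths:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_dsd_counts(n_drops, bin_edges, mu=2.0, lam=2.5):
    """Particle-by-particle sketch: draw drop diameters (mm) from a
    gamma-shaped DSD, N(D) ~ D**mu * exp(-lam*D), via rejection
    sampling, then bin them.  mu and lam are illustrative values."""
    d_min, d_max = 0.1, 6.0
    f_max = (mu / lam) ** mu * np.exp(-mu)   # density peak at D = mu/lam
    diameters = []
    while len(diameters) < n_drops:
        d = rng.uniform(d_min, d_max)
        if rng.uniform(0.0, f_max) < d ** mu * np.exp(-lam * d):
            diameters.append(d)
    counts, _ = np.histogram(diameters, bins=bin_edges)
    return counts

# Coarse (0.5 mm) versus fine (0.125 mm) binning of comparable populations
coarse = np.arange(0.0, 6.5, 0.5)
fine = np.arange(0.0, 6.125, 0.125)
print(simulate_dsd_counts(2000, coarse))
print(simulate_dsd_counts(2000, fine))
```

Re-running the fine scheme shows how narrow bins amplify count fluctuations for a fixed drop total, which is the trade-off the simulated sampling schemes quantify.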


Pharmaceutics, 2021, Vol 13 (6), pp. 849
Author(s): Manasa Tatipalli, Vijay Kumar Siripuram, Tao Long, Diana Shuster, Galina Bernstein, ...

Quantitative pharmacology brings important advantages in the design and conduct of pediatric clinical trials. Herein, we demonstrate the application of a model-based approach to select doses and pharmacokinetic sampling scenarios for the clinical evaluation of a novel oral suspension of spironolactone in pediatric patients with edema. A population pharmacokinetic model was developed and qualified for spironolactone and its metabolite, canrenone, using data from adults and bridged to pediatrics (2 to <17 years old) using allometric scaling. The model was then used via simulation to explore different dosing and sampling scenarios. Doses of 0.5 and 1.5 mg/kg led to target exposures (i.e., similar to 25 and 100 mg of the reference product in adults) in all the reference pediatric ages (i.e., 2, 6, 12 and 17 years). Additionally, two different sampling scenarios were delineated to accommodate patients into sparse sampling schemes informative to characterize drug pharmacokinetics while minimizing phlebotomy and burden to participating children.
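The published model's structure and estimates are not given in the abstract, so the following is only a generic sketch of the allometric bridging step, using conventional exponents (0.75 for clearance, 1.0 for volume) and placeholder adult parameters, plugged into a one-compartment oral model:

```python
import numpy as np

def scale_allometric(cl_adult, v_adult, weight_kg, ref_weight=70.0):
    """Bridge adult PK parameters to a child by allometric scaling:
    exponent 0.75 for clearance, 1.0 for volume (conventional values;
    the published spironolactone/canrenone model is not reproduced)."""
    cl = cl_adult * (weight_kg / ref_weight) ** 0.75
    v = v_adult * (weight_kg / ref_weight) ** 1.0
    return cl, v

def conc_1cmt_oral(dose_mg, cl, v, ka, t):
    """One-compartment, first-order-absorption profile (assumes ka != ke)."""
    ke = cl / v
    return dose_mg * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

# Hypothetical 6-year-old (20 kg) given 0.5 mg/kg; adult CL and V are
# placeholders, not the paper's estimates
cl, v = scale_allometric(cl_adult=100.0, v_adult=500.0, weight_kg=20.0)
t = np.linspace(0.25, 24.0, 6)       # candidate sparse sampling times (h)
print(np.round(conc_1cmt_oral(dose_mg=0.5 * 20, cl=cl, v=v, ka=1.2, t=t), 4))
```

Simulating profiles like this across ages and weights is the kind of exercise by which doses and sparse sampling times can be screened before burdening pediatric participants with blood draws.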


Author(s): Nils Damaschke, Volker Kühn, Holger Nobach

The prediction and correction of systematic errors in direct spectral estimation from irregularly sampled data taken from a stochastic process is investigated. Two kinds of sampling schemes leading to such irregular sampling are considered: stochastic sampling with non-equidistant sampling intervals drawn from a continuous distribution, and nominally equidistant sampling with missing individual samples, which yields a discrete distribution of sampling intervals. For both distributions of sampling intervals, continuous and discrete, different sampling rules are investigated. On the one hand, purely random and independent sampling times are considered; this holds only when the occurrence of one sample at a certain time has no influence on other samples in the sequence, excluding any preferred delay intervals or external selection processes that introduce correlations between the sampling instances. On the other hand, sampling schemes with interdependency, and thus correlation, between the individual sampling instances are investigated; this is the case whenever the occurrence of one sample influences further sampling instances, e.g., through recovery times after an instance, preferences for certain sampling intervals (including sampling jitter), or an external correlated source affecting the validity of samples. The goal of this investigation is a bias-free estimation of the spectral content of the observed random process from such irregularly sampled data.
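For context, the naive direct spectral estimator from irregular samples, the quantity whose systematic errors such an analysis predicts and corrects, fits in a few lines. The Poisson sampling scheme and test signal below are illustrative assumptions, and no bias correction is applied.

```python
import numpy as np

def direct_spectrum(t, x, freqs):
    """Naive direct spectral estimate from irregularly sampled data:
    S(f) = |sum_k x_k * exp(-2*pi*i*f*t_k)|^2 / N.  This estimator is
    biased for correlated sampling schemes; no correction is applied."""
    x = np.asarray(x, dtype=float) - np.mean(x)     # remove the mean
    t = np.asarray(t, dtype=float)
    phases = np.exp(-2j * np.pi * np.outer(freqs, t))
    return np.abs(phases @ x) ** 2 / len(t)

rng = np.random.default_rng(1)
# Purely random, independent (Poisson) sampling times, mean rate 1 Hz
t = np.cumsum(rng.exponential(scale=1.0, size=200))
x = np.sin(2 * np.pi * 0.05 * t) + 0.5 * rng.standard_normal(t.size)
freqs = np.linspace(0.01, 0.5, 50)
S = direct_spectrum(t, x, freqs)
print(f"peak near f = {freqs[np.argmax(S)]:.3f} Hz")
```

With independent Poisson sampling this estimator already resolves the 0.05 Hz line; the correlated schemes described above (recovery times, jitter, external selection) are what introduce the systematic errors the paper addresses.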


2021, Vol 1043 (3), pp. 032072
Author(s): Zhewen Li, Guixiang Shen, Yingzhi Zhang, Liming Mu, Jun Zheng

Author(s): Jack Poulson

Determinantal point processes (DPPs) were introduced by Macchi (Macchi 1975 Adv. Appl. Probab. 7, 83–122) as a model for repulsive (fermionic) particle distributions, but their recent popularization is largely due to their usefulness for encouraging diversity in the final stage of a recommender system (Kulesza & Taskar 2012 Found. Trends Mach. Learn. 5, 123–286). The standard sampling scheme for finite DPPs is a spectral decomposition followed by an equivalent of a randomly diagonally pivoted Cholesky factorization of an orthogonal projection, which is only applicable to Hermitian kernels and has an expensive set-up cost. Researchers have begun to connect DPP sampling to LDL^H factorizations as a means of avoiding the initial spectral decomposition (Launay et al. 2018, http://arxiv.org/abs/1802.08429; Chen & Zhang 2018 NeurIPS, https://papers.nips.cc/paper/7805-fast-greedy-map-inference-for-determinantal-point-process-to-improve-recommendation-diversity.pdf), but existing approaches have only outperformed the spectral-decomposition approach in special circumstances, where the number of kept modes is a small percentage of the ground-set size. This article proves that trivial modifications of LU and LDL^H factorizations yield efficient direct sampling schemes for non-Hermitian and Hermitian DPP kernels, respectively. Furthermore, it is experimentally shown that even dynamically scheduled, shared-memory parallelizations of high-performance dense and sparse-direct factorizations can be trivially modified to yield DPP sampling schemes with essentially identical performance. The software developed as part of this research, Catamari (hodgestar.com/catamari), is released under the Mozilla Public License v.2.0. It contains header-only, C++14 plus OpenMP 4.0 implementations of dense and sparse-direct, Hermitian and non-Hermitian DPP samplers. This article is part of a discussion meeting issue 'Numerical algorithms for high-performance computational science'.
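The central observation, that a factorization with probabilistically flipped pivots yields a DPP sample, can be sketched for a real symmetric marginal kernel as follows. This is a minimal, unoptimized rendering, not Catamari's implementation, and the test kernel is invented.

```python
import numpy as np

def sample_hermitian_dpp(K, rng):
    """LDL^H-style direct DPP sampler: run a right-looking factorization
    of the marginal kernel K (symmetric, eigenvalues in [0, 1]).  Keep
    index j with probability equal to the current pivot; on rejection,
    subtract 1 from the pivot.  Either way, apply the usual rank-1
    Schur-complement update, which makes the trailing block the
    conditional kernel given the decisions so far."""
    A = np.array(K, dtype=float, copy=True)
    n = A.shape[0]
    sample = []
    for j in range(n):
        if rng.uniform() < A[j, j]:
            sample.append(j)            # include j; pivot stays d
        else:
            A[j, j] -= 1.0              # exclude j; pivot becomes d - 1
        A[j + 1:, j] /= A[j, j]         # form the L column
        A[j + 1:, j + 1:] -= np.outer(A[j + 1:, j], A[j, j + 1:])
    return sample

rng = np.random.default_rng(2)
# Small invented kernel with eigenvalues strictly inside (0, 1)
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
K = Q @ np.diag(rng.uniform(0.1, 0.9, size=6)) @ Q.T
print(sample_hermitian_dpp(K, rng))
```

No spectral decomposition is needed because the Schur complement with the adjusted pivot is exactly the DPP's conditional marginal kernel, so each Bernoulli draw uses the correct inclusion probability.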


1989, Vol 46 (12), pp. 2157-2165
Author(s): Steven P. Ferraro, Faith A. Cole, Waldemar A. DeBen, Richard C. Swartz

Power-cost efficiency, PCE_i = (n × c)_min / (n_i × c_i), where i = sampling scheme, n_i = minimum number of replicate samples needed to detect a difference between locations with an acceptable probability of Type I (α) and Type II (β) error (e.g., α = β = 0.05), c_i = mean "cost," in time or money, per replicate sample, and (n × c)_min = minimum value of (n_i × c_i) among the i sampling schemes, is the appropriate expression for comparing the cost efficiency of alternative sampling schemes of equivalent statistical rigor when the statistical model is a comparison of two means. PCEs were determined for eight macrobenthic sampling schemes (four sample-unit sizes and two sieve mesh sizes) in a comparison of a reference site versus a putative polluted site in Puget Sound, Washington. Laboratory processing times were, on average, about 2.5 times greater for the [Formula: see text] than the [Formula: see text] samples. The 0.06-m², 0- to 8-cm-deep sample unit with a 1.0-mm sieve mesh was the overall optimum sampling scheme in this study; it ranked first in PCE on 8 and second on 3 of 11 measures of community structure. The rank order, by statistical power, of the 11 measures for this scheme was: Infaunal Index > log10(mollusc biomass + 1) > number of species > log10(numerical abundance) > log10(polychaete biomass + 1) > log10(total biomass + 1) > log10(crustacean biomass + 1) > McIntosh's Index > 1 - Simpson's Index > Shannon's Index > Dominance Index.
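The PCE computation itself is elementary; a short sketch with invented replicate counts and per-replicate costs (not the study's data) makes the normalization explicit:

```python
def power_cost_efficiency(n, c):
    """PCE_i = (n * c)_min / (n_i * c_i): the replicates n_i needed to
    reach the target power, times mean per-replicate cost c_i,
    normalized so the cheapest adequate scheme scores 1.0."""
    costs = [ni * ci for ni, ci in zip(n, c)]
    best = min(costs)
    return [best / cost for cost in costs]

# Four hypothetical schemes: (replicates needed, cost per replicate)
n = [8, 5, 12, 6]
c = [1.0, 2.5, 0.6, 1.5]
print([round(p, 2) for p in power_cost_efficiency(n, c)])
```

Schemes that need many cheap replicates and schemes that need few expensive ones are thereby placed on a single scale, provided all meet the same α and β targets.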

