Sample Allocation
Recently Published Documents

TOTAL DOCUMENTS: 71 (FIVE YEARS: 19)
H-INDEX: 9 (FIVE YEARS: 1)

2021, Vol. 37 (3), pp. 655-671
Author(s): Paolo Righi, Piero Demetrio Falorsi, Stefano Daddi, Epifania Fiorello, Pierpaolo Massoli, et al.

Abstract: In 2018, the Italian Institute of Statistics (Istat) implemented for the first time the annual Permanent Population Census, which relies on the Population Base Register (PBR) and the Population Coverage Survey (PCS). This article provides a general overview of the PCS sampling design, which uses the PBR to correct population counts with the extended dual system estimator (Nirel and Glickman 2009). The sample allocation, proven optimal under a set of precision constraints, is based on preliminary estimates of individual probabilities of over-coverage and under-coverage; it defines the expected sample size in terms of individuals and oversamples the sub-populations at risk of under- or over-coverage. Finally, the article introduces a sample selection method that satisfies, to the greatest extent possible, the planned allocation of persons in terms of socio-demographic characteristics. Under acceptable assumptions, the article also shows that the sampling strategy enhances the precision of the estimates.
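
The allocation idea described above can be illustrated with a simple Neyman-type allocation that oversamples strata with higher preliminary coverage-error rates. The sketch below is only a schematic Python illustration under assumed strata, population sizes, and error rates; it is not Istat's actual optimization under precision constraints, nor the extended dual system estimator itself.

```python
# Illustrative sketch only: a Neyman-type allocation that oversamples strata with
# higher preliminary coverage-error rates. The strata names, population sizes,
# error rates, and total sample size are invented for the example.
import math

def neyman_allocation(strata, total_sample):
    """Allocate total_sample units proportionally to N_h * S_h, where S_h is the
    standard deviation of a Bernoulli coverage-error indicator in stratum h."""
    weights = {}
    for name, (pop_size, error_rate) in strata.items():
        s_h = math.sqrt(error_rate * (1.0 - error_rate))  # Bernoulli std. dev.
        weights[name] = pop_size * s_h
    total_weight = sum(weights.values())
    return {name: round(total_sample * w / total_weight) for name, w in weights.items()}

# Hypothetical strata: (population size, preliminary under/over-coverage rate)
strata = {
    "resident nationals": (500_000, 0.01),
    "recent movers":      (60_000,  0.10),
    "foreign residents":  (40_000,  0.15),
}
print(neyman_allocation(strata, total_sample=5_000))
# Strata with higher coverage-error rates receive proportionally larger samples.
```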


2021, pp. 1250-1258
Author(s): Yilin Wu, Huei-Chung Huang, Li-Xuan Qin

PURPOSE: Accurate assessment of a molecular classifier that guides patient care is of paramount importance in precision oncology. Recent years have seen an increasing use of external validation for such assessment. However, little is known about how it is affected by ubiquitous unwanted variations in test data because of disparate experimental handling and by the use of data normalization for alleviating such variations.
METHODS: In this paper, we studied these issues using two microarray data sets for the same set of tumor samples and additional data simulated by resampling under various levels of signal-to-noise ratio and different designs for array-to-sample allocation.
RESULTS: We showed that (1) unwanted variations can lead to biased classifier assessment and (2) data normalization mitigates the bias to varying extents depending on the specific method used. In particular, frozen normalization methods for test data outperform their conventional forms in terms of both reducing the bias in accuracy estimation and increasing robustness to handling effects. We make our benchmarking tool available as an R package on GitHub for performing such evaluation on additional methods for normalization and classification.
CONCLUSION: Our findings highlight the importance of proper test-data normalization for valid assessment by external validation and call for caution in the choice of normalization method for molecular classifier development.
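
As a rough illustration of the "frozen" normalization idea contrasted above with conventional normalization, the sketch below maps each test array onto reference quantiles learned from the training data, so handling shifts in the test set do not alter the target distribution. It is a generic NumPy sketch on simulated data, not the authors' R package or the specific normalization methods they benchmarked.

```python
# Minimal sketch of frozen quantile normalization: test samples are mapped onto
# reference quantiles estimated from training data, instead of being normalized
# against their own pooled distribution. Data below are simulated for illustration.
import numpy as np

def fit_reference_quantiles(train):                       # train: genes x samples
    """Mean of the sorted expression values across training samples."""
    return np.sort(train, axis=0).mean(axis=1)

def frozen_quantile_normalize(test, reference):           # test: genes x samples
    """Replace each test sample's values by the frozen reference quantiles,
    preserving each gene's within-sample rank."""
    normalized = np.empty_like(test, dtype=float)
    for j in range(test.shape[1]):
        ranks = np.argsort(np.argsort(test[:, j]))         # 0-based ranks per sample
        normalized[:, j] = reference[ranks]
    return normalized

rng = np.random.default_rng(0)
train = rng.lognormal(size=(1000, 40))                     # hypothetical training arrays
test = rng.lognormal(mean=0.3, size=(1000, 10))            # test arrays with a handling shift
reference = fit_reference_quantiles(train)
test_frozen = frozen_quantile_normalize(test, reference)   # shift removed relative to training scale
```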


2020, pp. 0193841X2097733
Author(s): Larry L. Orr, Daniel Gubits

In this article, we explore the reasons why multiarm trials have been conducted and the design and analysis issues they involve. We point to three fundamental reasons for such designs: (1) Multiarm designs allow the estimation of “response surfaces”—that is, the variation in response to an intervention across a range of one or more continuous policy parameters. (2) Multiarm designs are an efficient way to test multiple policy approaches to the same social problem simultaneously, either to compare the effects of the different approaches or to estimate the effect of each separately. (3) Multiarm designs may allow for the estimation of the separate and combined effects of discrete program components. We illustrate each of these objectives with examples from the history of public policy experimentation over the past 50 years and discuss some design and analysis issues raised by each, including sample allocation, statistical power, multiple comparisons, and alignment of analysis with goals of the evaluation.
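
To make the link between sample allocation and statistical power concrete, the sketch below compares the power of each treatment-versus-control contrast under an equal split and under a larger control arm (the common square-root rule when several arms share one control group). The effect size, total sample, and significance level are invented for illustration; this is a standard normal-approximation power formula, not the authors' analysis.

```python
# Hedged illustration of how sample allocation across arms affects the power of
# each treatment-vs-control contrast. All numbers below are hypothetical.
from scipy.stats import norm

def contrast_power(n_control, n_treat, effect_size, alpha=0.05):
    """Power of a two-sided z-test for a standardized mean difference."""
    se = (1.0 / n_control + 1.0 / n_treat) ** 0.5
    z_crit = norm.ppf(1.0 - alpha / 2.0)
    return 1.0 - norm.cdf(z_crit - effect_size / se) + norm.cdf(-z_crit - effect_size / se)

total_n, k_arms, effect = 3000, 3, 0.15            # hypothetical trial: 3 treatments + control

# Equal allocation across the 4 groups:
n_equal = total_n // (k_arms + 1)
print("equal split:", round(contrast_power(n_equal, n_equal, effect), 3))

# Larger control group (control share proportional to sqrt(k), a common rule
# when every treatment arm is compared with the same control):
n_control = int(total_n * (k_arms ** 0.5) / (k_arms + k_arms ** 0.5))
n_treat = (total_n - n_control) // k_arms
print("sqrt-rule split:", round(contrast_power(n_control, n_treat, effect), 3))
```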


2020
Author(s): Mohammad Saadati, Abbasali Dorosti, Mahin Dahim, Elnaz Hemmati, Monireheh Edalatzadeh, et al.

Abstract:
Background: Pedestrian bridges are safe facilities for street crossing. A bridge's structural characteristics and location, as well as pedestrians' perceptions and attitudes, may affect bridge-use behavior. The aim of this study was to identify the factors influencing use or non-use of pedestrian bridges in Tabriz, Iran.
Methods: A cross-sectional study was conducted in Tabriz in 2019. Using pilot-study data and Cochran's formula, the sample size was estimated at 360. Sampling followed a simple random sampling method. Pedestrians around two types of bridges, with and without an escalator, were included, and the sample was allocated equally between the bridges. A validated questionnaire (CVI = 0.78, α = 0.75) was used for data collection, and data were analyzed using SPSS 21.
Results: In total, 358 people participated, with an average age of 29 ± 11.6 years. More than 72% of the participants held a driving license, and about a quarter had a crash history. Nearly 10% stated that they use a pedestrian bridge only sometimes or never, and about 43% believed that using a bridge is necessary only on crowded streets. Location issues, including the bridge's distance from zebra crossings and from the pedestrian's destination, the lack of an escalator, darkness on the bridge at night, and pedestrians' perception of bridge safety, were the main barriers to bridge use. Holding a driving license and education level were significantly associated with pedestrian bridge-use behavior (p < 0.05).
Conclusions: Designing pedestrian bridges with artistic principles will create a greater sense of safety and a more positive perception, which will facilitate bridge use. Future pedestrian safety initiatives should consider effective countermeasures that influence safe pedestrian behavior, such as bridge use.
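
For reference, Cochran's formula cited in the Methods is n = z²·p(1−p)/e² for estimating a proportion. The sketch below shows the calculation with assumed inputs (a hypothetical pilot proportion and a 5-point margin of error) chosen only so the result lands near the reported 360; the abstract does not state the exact values used.

```python
# Small sketch of Cochran's sample-size formula for a proportion.
# The pilot proportion and margin of error below are assumptions, not the
# study's reported inputs.
from scipy.stats import norm

def cochran_sample_size(p, margin_of_error, confidence=0.95):
    """n0 = z^2 * p * (1 - p) / e^2 for estimating a proportion."""
    z = norm.ppf(1.0 - (1.0 - confidence) / 2.0)
    return (z ** 2) * p * (1.0 - p) / margin_of_error ** 2

# Hypothetical pilot estimate: 62% of pedestrians report using the bridge,
# estimated to within +/- 5 percentage points at 95% confidence.
print(round(cochran_sample_size(p=0.62, margin_of_error=0.05)))   # about 362
```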


Author(s): Chun-Chih Chiu, James T. Lin

Simulation has been applied to evaluate system performance even when the target system does not yet exist in practice. Dealing with model fidelity is required when applying simulation in practice: a high-fidelity (HF) simulation model is generally more accurate but requires more computational resources than a low-fidelity (LF) one, whereas an LF model is less accurate but can rapidly evaluate a design alternative. Consequently, there is a tradeoff between the accuracy of the constructed simulation model and its computational cost. This research studies the simulation optimization problem under a large design space, where even an LF model may not be able to evaluate all design alternatives within the limited computational budget. We extended multifidelity (MF) optimization with ordinal transformation and optimal sampling (MO2TOS), which uses LF models to guide an efficient search with the HF model, and proposed a combination of the genetic algorithm and MO2TOS. A novel optimal sample allocation strategy, called MO2TOSAS, was proposed to improve search efficiency. We applied the proposed methods to two experiments on MF function optimization and to a simultaneous scheduling problem of machines and vehicles (SSPMV) in flexible manufacturing systems. For SSPMV, we developed three simulation models of different fidelity that capture important characteristics, including deadlock prevention for vehicles and alternative machines. Simulation results show that combining simulation models of more than one fidelity level can improve search efficiency and reduce computational costs.
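
The ordinal-transformation idea behind MO2TOS can be sketched as follows: rank all designs with the cheap LF model, partition the ranked list into groups, and spend the scarce HF budget mostly on the top-ranked groups. The toy objective functions, group count, and decreasing budget in the sketch below are invented for illustration; this is not the authors' MO2TOSAS allocation or their GA hybrid.

```python
# Schematic sketch of ordinal transformation with uneven high-fidelity sampling.
# The LF/HF functions, group count, and allocation rule are toy assumptions.
import random

def hf_model(x):                     # expensive "true" objective (toy)
    return (x - 2.0) ** 2

def lf_model(x):                     # cheap, biased approximation of hf_model
    return (x - 2.3) ** 2 + 0.5

random.seed(1)
designs = [random.uniform(-5, 5) for _ in range(1000)]

# Ordinal transformation: rank every design by its LF value (best first).
ranked = sorted(designs, key=lf_model)

# Split the ranked designs into 5 equal groups and spend a 50-evaluation HF
# budget unevenly, favoring the groups the LF model considers most promising.
groups = [ranked[i * 200:(i + 1) * 200] for i in range(5)]
budget_per_group = [25, 12, 7, 4, 2]          # simple decreasing allocation

best = None
for group, budget in zip(groups, budget_per_group):
    for x in random.sample(group, budget):    # HF evaluations inside the group
        if best is None or hf_model(x) < hf_model(best):
            best = x
print("best design found:", round(best, 3), "HF value:", round(hf_model(best), 4))
```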

