Nonsampling Errors, Sampling Errors, and Design Effects for the Eight-Nation Survey

Author(s):  
John A. Booth ◽  
Mitchell A. Seligson
1978 ◽  
Vol 15 (4) ◽  
pp. 622-631 ◽  
Author(s):  
Robert M. Groves

The clustered telephone sample design described by Waksberg is compared with a design randomly generating four digit numbers within working prefixes. The clustered sample is found to increase the proportion of working household numbers selected from about 22% to over 55%, but sampling errors and design effects of the two sample designs show some loss of precision in the clustered design. A cost-variance model is constructed which provides estimates of desirable cluster sizes given varying amounts of intracluster homogeneity.
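The cost-variance trade-off described above can be illustrated with the textbook cluster-sampling formulas: the design effect 1 + (m − 1)ρ and the optimal cluster size m* = sqrt((c₁/c₂)(1 − ρ)/ρ). This is a minimal sketch using standard survey-sampling results, not the paper's actual model; the cost ratio and ρ below are illustrative numbers, not values from the study.

```python
import math

def design_effect(m, rho):
    """Design effect for clusters of average size m with
    intracluster homogeneity (correlation) rho."""
    return 1 + (m - 1) * rho

def optimal_cluster_size(cluster_cost, element_cost, rho):
    """Cluster size minimizing variance for a fixed budget in the
    classic cost-variance model: m* = sqrt((c1/c2) * (1 - rho) / rho)."""
    return math.sqrt((cluster_cost / element_cost) * (1 - rho) / rho)

# Illustrative numbers: locating a working cluster costs 20x one
# additional number within it, with intracluster homogeneity rho = 0.05.
m_star = optimal_cluster_size(20.0, 1.0, 0.05)
print(round(m_star, 1))                       # optimal numbers per cluster
print(round(design_effect(m_star, 0.05), 2))  # variance inflation vs. SRS
```

As ρ grows, the optimal cluster shrinks: the screening savings of the clustered design are progressively offset by the loss of precision the abstract reports.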


Author(s):  
Petra Jahn ◽  
Johannes Engelkamp

There is ample evidence that memory for action phrases such as “open the bottle” is better in subject-performed tasks (SPTs), i.e., when participants actually perform the actions, than in verbal tasks (VTs), in which they only read the phrases or listen to them. It is less clear whether the mere intention to perform the actions later, i.e., a prospective memory task (PT), also improves memory relative to VTs; inconsistent findings have been reported for within-subjects and between-subjects designs. The present study attempts to clarify the situation. In three experiments, better recall for SPTs than for PTs, and for PTs than for VTs, was observed when mixed lists were used. When pure lists were used, there was a PT effect but no SPT-over-PT advantage. The findings are discussed from the perspective of item-specific and relational information.


2020 ◽  
Vol 2020 (1) ◽  
pp. 91-95
Author(s):  
Philipp Backes ◽  
Jan Fröhlich

Non-regular sampling is a well-known method to avoid aliasing in digital images. However, the vast majority of single-sensor cameras use regularly organized color filter arrays (CFAs), which require an optical low-pass filter (OLPF) and sophisticated demosaicing algorithms to suppress sampling errors. In this paper a variety of non-regular sampling patterns are evaluated, and a new universal demosaicing algorithm based on frequency-selective reconstruction is presented. Sensor simulations show that images acquired with non-regular CFAs and no OLPF can achieve image quality similar to that of their filtered, regularly sampled counterparts. The MATLAB source code and results are available at: http://github.com/PhilippBackes/dFSR
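The idea of mosaicing an image through a non-regular CFA can be sketched as follows. This is purely illustrative and not the paper's method (the paper evaluates specific non-regular patterns in MATLAB); here a random per-pixel channel assignment with the Bayer pattern's 2:1:1 green bias stands in for a non-regular CFA.

```python
import numpy as np

rng = np.random.default_rng(42)

def random_cfa(height, width):
    """Assign each pixel one channel index (0=R, 1=G, 2=B) at random,
    with green twice as likely, mirroring the Bayer 2:1:1 ratio.
    Illustrative stand-in for a non-regular CFA pattern."""
    return rng.choice(3, size=(height, width), p=[0.25, 0.5, 0.25])

def sample_with_cfa(rgb_image, cfa):
    """Keep only the one color channel selected by the CFA at each pixel,
    producing a single-channel mosaic as a real sensor would."""
    h, w, _ = rgb_image.shape
    rows, cols = np.indices((h, w))
    return rgb_image[rows, cols, cfa]

img = rng.random((4, 6, 3))   # toy RGB image
cfa = random_cfa(4, 6)
mosaic = sample_with_cfa(img, cfa)
print(mosaic.shape)           # one retained sample per pixel
```

Demosaicing then has to reconstruct the two discarded channels per pixel; with a non-regular pattern, residual reconstruction error appears as unstructured noise rather than the periodic aliasing artifacts of a regular grid.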


2012 ◽  
Vol 22 (1) ◽  
pp. 69-80 ◽  
Author(s):  
Tingyong Fang ◽  
Jufen Yu ◽  
Jing Wang

2021 ◽  
Vol 11 (10) ◽  
pp. 4344
Author(s):  
Kuen-Suan Chen ◽  
Shui-Chuan Chen ◽  
Ting-Hsin Hsu ◽  
Min-Yi Lin ◽  
Chih-Feng Wu

The Taguchi capability index, which reflects both the expected loss and the yield of a process, is a useful index for evaluating process quality. Several scholars have proposed a process improvement capability index based on the expected value of the Taguchi loss function and the corresponding cost of process improvement. A number of studies have used the Taguchi capability index to develop evaluation models for suppliers’ process quality, whereas models for evaluating suppliers’ process improvement potential have been relatively lacking. This study therefore applies the process improvement capability index to develop an evaluation model of a supplier’s process improvement capability that industry can apply. Moreover, because of the need to respond quickly, coupled with cost considerations and the limits of technical capability, the sample size for sampling tests is usually not large. Consequently, the process improvement capability evaluation model developed in this study adopts a fuzzy testing method based on the confidence interval. This method reduces the risk of misjudgment due to sampling errors and improves testing accuracy because it can incorporate experts’ accumulated experience.
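For readers unfamiliar with the loss-based index family, the standard Taguchi capability index Cpm penalizes both process spread and deviation of the mean from the target. This is a minimal sketch of that textbook index only, with illustrative specification limits; it is not the paper's process improvement index or its fuzzy test.

```python
import math

def taguchi_index(mean, sigma, lsl, usl, target):
    """Taguchi capability index Cpm:
    Cpm = (USL - LSL) / (6 * sqrt(sigma^2 + (mean - target)^2)).
    The (mean - target)^2 term carries the expected-loss penalty."""
    return (usl - lsl) / (6 * math.sqrt(sigma**2 + (mean - target)**2))

# On-target process: Cpm reduces to the plain Cp
print(round(taguchi_index(10.0, 0.5, 8.5, 11.5, 10.0), 2))  # 1.0
# Same spread, mean shifted 1 sigma off target -> index drops
print(round(taguchi_index(10.5, 0.5, 8.5, 11.5, 10.0), 2))  # 0.71
```

With small samples, the point estimate of such an index is unreliable, which motivates the confidence-interval-based fuzzy testing the abstract describes.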


Author(s):  
Xiao Dai ◽  
Mark J Ducey ◽  
Haozhou Wang ◽  
Ting-Ru Yang ◽  
Yung-Han Hsu ◽  
...  

Efficient subsampling designs reduce forest inventory costs by focusing sampling effort on the more variable forest attributes. Sector subsampling is an efficient and accurate alternative to big basal area factor (big BAF) sampling for estimating the mean basal-area-to-biomass ratio. In this study, we apply sector subsampling of spherical images to estimate aboveground biomass and compare our image-based estimates with field data collected from three early spacing trials on western Newfoundland Island in eastern Canada. Sector subsampling of spherical images increased sampling errors by only 0.3–3.4 per cent while requiring only about 60 measured trees across 30 spherical images, compared with about 4000 trees measured in the field. Photo-derived basal area was underestimated because of occluded trees; however, we implemented an additional level of subsampling, collecting field-based basal area counts, to correct for this occlusion bias. We applied Bruce’s formula for standard error estimation to our three-level hierarchical subsampling scheme and showed that Bruce’s formula generalizes to any number of hierarchical subsampling levels. Spherical images are easily and quickly captured in the field with a consumer-grade 360° camera, and all sector subsampling measurements, including the individual tree measurements, were obtained with a custom-developed Python software package. The system is an efficient and accurate photo-based alternative to field-based big BAF subsampling.
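Bruce's formula combines the relative standard errors of independent sampling levels in quadrature, which is what makes it extend naturally to the three-level scheme above. A minimal sketch, with illustrative (not the study's) per-level error percentages:

```python
import math

def bruce_combined_se(relative_ses):
    """Bruce's formula: relative standard errors (in per cent) from
    independent hierarchical subsampling levels combine in quadrature."""
    return math.sqrt(sum(se**2 for se in relative_ses))

# Hypothetical per-level errors: point-level basal-area counts,
# sector subsampling of images, and the occlusion-correction ratio.
print(round(bruce_combined_se([5.0, 3.0, 1.5]), 2))  # combined SE, per cent
```

Because the terms add as squares, the smallest per-level errors contribute almost nothing, which is why concentrating effort on the most variable level is the efficient allocation.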


1996 ◽  
Vol 16 (4) ◽  
pp. 650-658 ◽  
Author(s):  
Carolyn Cidis Meltzer ◽  
Jon Kar Zubieta ◽  
Jonathan M. Links ◽  
Paul Brakeman ◽  
Martin J. Stumpf ◽  
...  

Partial volume and mixed tissue sampling errors can cause significant inaccuracy in quantitative positron emission tomography (PET) measurements. We previously described a method of correcting PET data for the effects of partial volume averaging on gray matter (GM) quantitation; however, this method may incompletely correct GM structures when local tissue concentrations are highly heterogeneous. We have extended this three-compartment algorithm to include a fourth compartment: a GM volume of interest (VOI) that can be delineated on magnetic resonance (MR) imaging. Computer simulations of PET images created from human MR data demonstrated errors of up to 120% in the activity values assigned to small brain structures in uncorrected data. Four-compartment correction achieved full recovery over a wide range of coded activity in GM VOIs such as the amygdala, caudate, and thalamus. Further validation was performed with actual PET acquisitions of an agarose brain phantom. The approach was also implemented on [18F]fluorodeoxyglucose and [11C]carfentanil PET data acquired in a healthy elderly human subject. By accounting for the heterogeneity of GM radioactivity, this newly developed MR-based partial volume correction algorithm permits accurate determination of the true radioactivity concentration in specific MR-defined structures.
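The core algebra of compartment-based partial volume correction can be sketched voxel-wise: the observed PET value is modeled as a mixture of tissue activities weighted by MR-derived, PSF-smoothed tissue fractions, then solved for the GM activity. This is a simplified three-compartment-style illustration (CSF assumed to contribute no activity), not the authors' four-compartment implementation; all arrays and the `wm_activity` value are hypothetical.

```python
import numpy as np

def correct_gm(observed, gm_frac, wm_frac, wm_activity):
    """Solve observed = GM*gm_frac + WM*wm_frac + CSF*0 for GM, voxel-wise.
    gm_frac/wm_frac are MR-derived tissue-fraction maps smoothed by the
    scanner point-spread function; wm_activity is an estimated WM level."""
    gm_frac = np.asarray(gm_frac, dtype=float)
    # avoid dividing by near-zero GM fractions at tissue boundaries
    safe_gm = np.where(gm_frac > 0.1, gm_frac, np.nan)
    return (np.asarray(observed, dtype=float)
            - wm_activity * np.asarray(wm_frac, dtype=float)) / safe_gm

obs = np.array([2.0, 1.5])   # observed PET values (toy numbers)
gm  = np.array([0.6, 0.4])   # smoothed GM fraction per voxel
wm  = np.array([0.3, 0.5])   # smoothed WM fraction per voxel
print(correct_gm(obs, gm, wm, wm_activity=1.0))  # recovered GM activity
```

The fourth compartment in the paper refines exactly this step: a heterogeneous GM structure gets its own VOI term so that one uniform GM activity is no longer assumed.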

