Evaluating Sampling Designs for Assessing the Accuracy of Cropland Extent Maps in Different Cropland Proportion Regions

Author(s):  
Kamini Yadav ◽  
Russell G. Congalton

The GFSAD30m cropland extent map has recently been produced at a spatial resolution of 30 m as part of the NASA MEaSUREs Program's Global Food Security-support Analysis Data (GFSAD) project. Accuracy assessment of this GFSAD30m cropland extent map was initially performed using an assessment strategy involving a simple random sampling (SRS) design and an optimum sample size of 250 for each of 72 different regions around the world. However, while statistically valid, this sampling design was not effective in low cropland proportion (LCP) regions, defined as having less than 15% cropland area proportion (CAP). SRS resulted in an insufficient number of samples for the rare cropland class because of the low cropland distribution, proportion, and pattern. Therefore, given our objective of effectively assessing the cropland extent map in these LCP regions, an alternate sampling design was necessary. A stratified random sampling design was applied using a predetermined minimum number of samples followed by a proportional distribution (i.e., SMPS) for different cropland proportion regions to achieve a sufficient sample size for the rare cropland map class and appropriate accuracy measures. The SRS and SMPS designs were compared at a common optimum sample size of 250, which was determined using a sample simulation analysis in ten different cropland proportion regions. The results demonstrate that the two sampling designs performed differently in the various cropland proportion regions and, therefore, must be selected according to the cropland extent maps to be assessed.
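The contrast between the two designs can be illustrated with a short simulation. The sketch below is not the GFSAD assessment code: the map is a synthetic array of pixel labels, and the function names and the minimum-per-stratum value of 50 are illustrative assumptions. It shows how, in a low cropland proportion region, SRS yields only a handful of cropland samples out of 250, while a stratified design with a guaranteed per-class minimum followed by proportional allocation does not.

```python
import numpy as np

def srs_sample(labels, n_total, rng):
    """Simple random sample of n_total pixel indices, ignoring the map class."""
    return rng.choice(labels.size, size=n_total, replace=False)

def smps_sample(labels, n_total, n_min, rng):
    """Stratified sample: guarantee n_min samples per map class, then
    spread the remaining samples proportionally to class area."""
    classes, counts = np.unique(labels, return_counts=True)
    remaining = n_total - n_min * classes.size
    extra = np.floor(remaining * counts / counts.sum()).astype(int)
    picks = []
    for c, n_extra in zip(classes, extra):
        idx = np.flatnonzero(labels == c)
        n_c = min(n_min + n_extra, idx.size)
        picks.append(rng.choice(idx, size=n_c, replace=False))
    return np.concatenate(picks)

rng = np.random.default_rng(0)
# Hypothetical LCP region: ~5% cropland (class 1), ~95% non-cropland (class 0).
labels = (rng.random(100_000) < 0.05).astype(int)

srs = srs_sample(labels, 250, rng)
smps = smps_sample(labels, 250, n_min=50, rng=rng)
print("cropland samples under SRS :", int(labels[srs].sum()))   # typically ~12 of 250
print("cropland samples under SMPS:", int(labels[smps].sum()))  # at least 50 of 250
```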

2016 ◽  
Vol 48 (1) ◽  
pp. 23
Author(s):  
A. Arbab ◽  
F. Mirphakhar

The distribution of adults and larvae of Bactrocera oleae (Diptera: Tephritidae), a key pest of olive, was studied in olive orchards. The first objective was to analyze the dispersion of this insect on olive, and the second was to develop sampling plans based on fixed levels of precision for estimating B. oleae populations. Taylor's power law and Iwao's patchiness regression models were used to analyze the data. Our results show that Iwao's patchiness regression provided a better description of the relationship between variance and mean density. Taylor's b and Iwao's β were both significantly greater than 1, indicating that adults and larvae had an aggregated spatial distribution. This result was further supported by the calculated common k of 2.17 and 4.76 for adults and larvae, respectively. Iwao's α for larvae was significantly less than 0, indicating that the basic distribution component of B. oleae is the individual insect. Optimal sample sizes for fixed precision levels of 0.10 and 0.25 were estimated with Iwao's patchiness coefficients. The optimum sample size for adults and larvae fluctuated throughout the season and depended on fly density and the desired level of precision. For adults, it generally ranged from 2 to 11 traps and 7 to 15 traps at precision levels of 0.25 and 0.10, respectively. With respect to optimum sample size, the developed fixed-precision sequential sampling plans were suitable for estimating fly density at a precision level of D = 0.25. The sampling plans presented here should be a useful tool for research on pest management decisions for B. oleae.
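As a worked illustration of the fixed-precision calculation, the sketch below fits Iwao's patchiness regression (m* = α + βm) and evaluates the resulting optimum sample size n = (α + 1)/(D²m) + (β − 1)/D², where D is the desired ratio of standard error to mean. The density and mean-crowding values are hypothetical placeholders, not data or coefficients from this study.

```python
import numpy as np

def iwao_regression(means, mean_crowding):
    """Fit Iwao's patchiness regression  m* = alpha + beta * m  by least squares."""
    beta, alpha = np.polyfit(means, mean_crowding, 1)
    return alpha, beta

def optimum_sample_size(mean_density, alpha, beta, precision):
    """Fixed-precision sample size from Iwao's coefficients:
    n = (alpha + 1) / (D^2 * m) + (beta - 1) / D^2,
    where D is the desired ratio of standard error to mean."""
    return (alpha + 1.0) / (precision**2 * mean_density) + (beta - 1.0) / precision**2

# Illustrative mean density / mean crowding pairs (not data from the study).
m = np.array([0.4, 1.1, 2.3, 4.0, 6.5])
m_star = np.array([0.3, 1.4, 3.3, 6.1, 10.2])
alpha, beta = iwao_regression(m, m_star)

for density in (0.5, 2.0, 5.0):
    for d in (0.25, 0.10):
        n = optimum_sample_size(density, alpha, beta, d)
        print(f"mean density={density:4.1f}  D={d:.2f}  n={n:6.1f}")
```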


2018 ◽  
Vol 4 (2) ◽  
pp. 205630511877283 ◽  
Author(s):  
Hwalbin Kim ◽  
S. Mo Jang ◽  
Sei-Hill Kim ◽  
Anan Wan

Although the sampling options for periodical media content have been evaluated, few empirical studies have examined whether probability sampling methods other than simple random sampling are applicable to social media content. This article tests the efficiency of simple random sampling and constructed week sampling by varying the sample size of Twitter content related to the 2014 South Carolina gubernatorial election. We examine how many constructed weeks were needed to adequately represent 5 months of tweets. Our findings show that simple random sampling is more efficient than constructed week sampling for obtaining a representative sample of Twitter data. This study also suggests that it is necessary to produce a sufficient sample size when analyzing social media content.
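For readers unfamiliar with the two designs, the sketch below contrasts them on a calendar of days: constructed week sampling stratifies by day of week and draws one Monday, one Tuesday, and so on, whereas simple random sampling draws the same number of days without stratification. The date range and counts are illustrative assumptions, not the exact design of this study.

```python
import random
from datetime import date, timedelta

def date_range(start, end):
    """All calendar days from start to end, inclusive."""
    return [start + timedelta(days=i) for i in range((end - start).days + 1)]

def constructed_weeks(days, n_weeks, rng):
    """Draw n_weeks constructed weeks: for each week, pick one Monday,
    one Tuesday, ..., one Sunday at random from the study period."""
    by_weekday = {wd: [d for d in days if d.weekday() == wd] for wd in range(7)}
    sample = []
    for _ in range(n_weeks):
        sample.extend(rng.choice(pool) for pool in by_weekday.values())
    return sample

def simple_random_days(days, n_days, rng):
    """Simple random sample of n_days calendar days."""
    return rng.sample(days, n_days)

rng = random.Random(42)
# Hypothetical 5-month study window around the 2014 election.
days = date_range(date(2014, 6, 1), date(2014, 10, 31))
cw = constructed_weeks(days, n_weeks=2, rng=rng)    # 2 constructed weeks = 14 days
sr = simple_random_days(days, n_days=14, rng=rng)   # same number of sampled days
print(sorted(cw))
print(sorted(sr))
```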


Crop Science ◽  
1977 ◽  
Vol 17 (6) ◽  
pp. 973-975
Author(s):  
G. Atashi‐Rang ◽  
K. A. Lucken

Biometrics ◽  
1961 ◽  
Vol 17 (4) ◽  
pp. 617
Author(s):  
A. W. Nordskog ◽  
H. T. David ◽  
H. B. Eisenberg

Author(s):  
Augusto Hernandez-Solis ◽  
Christian Ekberg ◽  
Arvid Ødegård Jensen ◽ 
Christophe Demaziere ◽  
Ulf Bredolt

In recent years, more realistic safety analysis of nuclear reactors has been based on best estimate (BE) computer codes. Because their predictions are unavoidably affected by conceptual, aleatory, and experimental sources of uncertainty, an uncertainty analysis is needed if useful conclusions are to be obtained from BE codes. In this paper, statistical uncertainty analyses of cross-section averaged void fraction calculations using the POLCA-T system code, based on the BWR Full-Size Fine-Mesh Bundle Test (BFBT) benchmark, are presented by means of two different sampling strategies: Latin Hypercube Sampling (LHS) and Simple Random Sampling (SRS). LHS has the property of densely stratifying across the range of each input probability distribution, allowing much better coverage of the input uncertainties than SRS. The aim here is to compare the two uncertainty analyses on the BWR assembly void axial profile prediction in steady state, and on the transient void fraction prediction at a given axial level during a simulated recirculation pump trip scenario. It is shown that the replicated void fraction mean (in either steady-state or transient conditions) has less variability when using LHS than SRS for the same number of calculations (i.e., the same input-space sample size), even when the resulting void fraction axial profiles are non-monotonic. It is also shown that the void fraction uncertainty limits obtained with SRS from 458 calculations (the sample size required to cover 95% of 8 uncertain input parameters with 95% confidence) match the uncertainty limits obtained by LHS with only 100 calculations. These are clear indications of the advantages of using LHS.
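The LHS-versus-SRS comparison can be reproduced in miniature with the sketch below, which uses a cheap analytic stand-in for the code output rather than POLCA-T; the response function, seeds, and replicate counts are illustrative assumptions. It estimates the spread of the replicated mean response over repeated designs of the same size, which is the quantity reported above as being smaller under LHS.

```python
import numpy as np
from scipy.stats import qmc

D = 8  # number of uncertain input parameters, as in the benchmark study

def toy_response(x):
    """Stand-in response surface (NOT POLCA-T): a smooth function of the
    normalized inputs, playing the role of a computed void fraction."""
    return 0.4 + 0.05 * x.sum(axis=1) + 0.02 * np.sin(6.0 * x[:, 0])

def srs(n, seed):
    """Simple random sampling of n points in the unit hypercube."""
    return np.random.default_rng(seed).random((n, D))

def lhs(n, seed):
    """Latin hypercube sampling: one stratum per run in each input dimension."""
    return qmc.LatinHypercube(d=D, seed=seed).random(n)

def replicated_means(sample_fn, n_runs, n_reps):
    """Estimated mean response from n_runs code runs, replicated n_reps times."""
    return np.array([toy_response(sample_fn(n_runs, rep)).mean()
                     for rep in range(n_reps)])

for n in (100, 458):
    m_srs = replicated_means(srs, n, n_reps=200)
    m_lhs = replicated_means(lhs, n, n_reps=200)
    print(f"n={n:3d}  std of replicated mean: SRS={m_srs.std():.5f}  LHS={m_lhs.std():.5f}")
```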

