Observations on Spatial Distribution and the Relative Precision of Systematic Sampling

1971 ◽  
Vol 1 (4) ◽  
pp. 216-222 ◽  
Author(s):  
Bijan Payandeh ◽  
Alan R. Ek

In this paper, several observations are presented to illustrate the behavior of systematic sampling in forest inventory. A brief literature review and the results of empirical studies indicate that the relative precision of two-dimensional systematic sampling for a given forest stand varies with the variable of interest, the spatial distribution of trees, plot size, and sampling intensity. More specifically, the results of this study indicate that, for tree frequency estimation, systematic sampling performs as well as or better than simple random procedures in clustered or near-randomly distributed forest populations. For uniformly spaced populations such as plantations, however, simple random sampling should be the more precise for tree frequency estimation. With basal area, results were less clear, but considering this study together with others reported in the literature, systematic sampling should usually perform as well as or better than simple random procedures for most tree populations.
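The frequency-estimation comparison above can be sketched with a small simulation; the stand generators, plot size, and number of plots below are illustrative assumptions, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

def clustered_stand(n_clusters=30, trees_per=20, spread=3.0, size=100.0):
    """Parent-offspring (Thomas-style) clustered tree pattern on a size x size stand."""
    parents = rng.uniform(0, size, (n_clusters, 2))
    pts = parents[:, None, :] + rng.normal(0, spread, (n_clusters, trees_per, 2))
    return pts.reshape(-1, 2) % size  # wrap to keep all trees inside the stand

def uniform_stand(spacing=4.0, size=100.0):
    """Plantation-like pattern: regular grid with small jitter."""
    g = np.arange(spacing / 2, size, spacing)
    xx, yy = np.meshgrid(g, g)
    pts = np.column_stack([xx.ravel(), yy.ravel()])
    return pts + rng.normal(0, 0.3, pts.shape)

def count_in_plots(trees, centers, half=5.0):
    """Tree count in square plots (side 2*half) around each plot center."""
    return np.array([np.all(np.abs(trees - c) <= half, axis=1).sum() for c in centers])

def sampling_variance(trees, n_plots=25, reps=500, size=100.0):
    """Empirical variance of the mean plot count under SRS vs. a systematic grid."""
    srs_means, sys_means = [], []
    k = int(np.sqrt(n_plots))
    step = size / k
    g = np.arange(0, size, step)
    for _ in range(reps):
        srs = rng.uniform(5, size - 5, (n_plots, 2))
        srs_means.append(count_in_plots(trees, srs).mean())
        start = rng.uniform(0, step, 2)  # systematic grid with a random start
        centers = np.column_stack(
            [v.ravel() for v in np.meshgrid(start[0] + g, start[1] + g)]) % size
        sys_means.append(count_in_plots(trees, centers).mean())
    return np.var(srs_means), np.var(sys_means)

for name, stand in [("clustered", clustered_stand()), ("uniform", uniform_stand())]:
    v_srs, v_sys = sampling_variance(stand)
    print(f"{name:9s}  var(SRS) = {v_srs:.3f}   var(systematic) = {v_sys:.3f}")
```

On the plantation-like pattern the systematic grid can resonate with the planting spacing, which is the mechanism behind the abstract's finding that simple random sampling is preferable there.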

2021 ◽  
Author(s):  
Rafael Boluda ◽  
Luis Roca Pérez ◽  
Joaquín Ramos Miras ◽  
José A. Rodríguez Martín ◽  
Jaume Bech Borras

Mercury (Hg) is a potentially dangerous metal that can accumulate in soils, transfer to plants and pose significant ecotoxicological risks. The province of Valencia is the third most populous in Spain and has a strong agricultural, industrial and tourist vocation; it covers 10,763 km², of which 272,978 ha are devoted to cultivation, mostly irrigated soils. South of the city of Valencia lies the Albufera Natural Park (a ZEPA area and Ramsar wetland), with 14,806 ha dedicated to rice cultivation. Pollution and the burning of rice straw in the paddies are serious problems there. Therefore, the concentration of Hg in agricultural soils of the province of Valencia was determined by land use, with an emphasis on rice paddy soils, along with its spatial distribution; the effect of rice straw burning on Hg accumulation in rice paddy soils was also assessed. Systematic sampling was carried out throughout the agricultural area on an 8 × 8 km grid, collecting composite soil samples from 0 to 20 cm depth in a total of 98 plots; in addition, simple random sampling was applied to rice paddies at 35 sites, distinguishing between plots where rice straw was incinerated and plots where it was not. The Hg concentration was determined with a DMA-80 Milestone direct analyzer on previously pulverized samples. The detection limit was 1.0 μg kg⁻¹, and recovery ranged from 95.1% to 101.0% ± 4.0%. Analyses were performed in triplicate. Basic descriptive statistics (means, medians, standard deviations and ANOVA) were computed, and samples were grouped according to land use. For the geostatistical analysis, and to map the spatial distribution of the Hg concentration in soils, the classical geostatistical technique of ordinary kriging was used. The Hg concentration in the soils of the province of Valencia showed great variability.
The soils of the rice paddies, together with those dedicated to citrus and horticultural crops on the coastal plain, showed the highest Hg levels, in contrast to the soils of the interior areas dedicated to dry farming (vineyards, olives, almonds and fodder). Spatial analysis revealed a west-to-east concentration gradient, suggesting that the Hg in the interior soils has a geochemical origin, while in the coastal soils it is of anthropic origin. Moreover, the burning of rice straw was observed to increase the Hg concentration in rice paddy soils. This research provides the first information on the distribution of Hg in the soils of the province of Valencia and a contribution that can help weigh the effects of the open burning of rice straw on Valencian rice paddies.
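The burned-versus-unburned comparison can be illustrated with a hand-rolled one-way ANOVA; the Hg values, group sizes, and log-transformation below are hypothetical stand-ins, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical Hg concentrations (ug kg^-1) in rice paddy topsoil (0-20 cm);
# the values and group sizes are illustrative, not the study's data.
hg_burned   = rng.lognormal(mean=4.6, sigma=0.4, size=18)  # straw-burning plots
hg_unburned = rng.lognormal(mean=4.2, sigma=0.4, size=17)  # no-burning plots

for name, x in [("burned", hg_burned), ("unburned", hg_unburned)]:
    print(f"{name:9s} mean={x.mean():7.1f}  median={np.median(x):7.1f}  sd={x.std(ddof=1):7.1f}")

# One-way ANOVA by hand on log-transformed values (soil Hg is typically right-skewed)
groups = [np.log(hg_burned), np.log(hg_unburned)]
grand = np.concatenate(groups).mean()
ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
df_b, df_w = len(groups) - 1, sum(len(g) for g in groups) - len(groups)
F = (ss_between / df_b) / (ss_within / df_w)
print(f"one-way ANOVA on log(Hg): F({df_b},{df_w}) = {F:.2f}")
```

With two groups this F-test is equivalent to a two-sample t-test (F = t²), so either framing supports the burned-versus-unburned comparison.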


Author(s):  
Joseph Hitimana ◽  
James Legilisho Ole Kiyiapi ◽  
Balozi Kirongo Bekuta

Forest measurements, especially in natural forests, are cumbersome and complex, and 100% enumeration is costly and inefficient. This study sought to identify reliable, efficient and cost-effective sampling schemes for use in tropical rain forest (TRF), moist montane forest (MMF) and dry woodland forest (DWF) in Kenya. Forty-eight sampling schemes, each combining a sampling intensity (5, 10, 20 or 30%), a plot size (25, 50, 100 or 400 m2) and a sampling technique (simple random sampling, or systematic sampling along a North-South or East-West orientation), were generated to test estimates of forest attributes such as regeneration through simulation using R software. Sampling error and effort were used to measure the efficiency of each sampling scheme relative to actual values. Although the forest sites differed in biophysical characteristics, the cost of sampling increased with decreasing plot size regardless of forest type and attribute, while inventory accuracy also increased with decreasing plot size. Plot sizes that captured the inherent variability were 5mx5m for regeneration and trees ha-1 across forest types, but varied between forest types for basal area. Different sampling schemes were ranked for relative efficiency through simulation, using regeneration as an example. In many instances, systematic sampling-based schemes were the most effective. Sub-sampling within a one-hectare forest unit gave reliable results in TRF (e.g. SSV-5mx5m-30%) and DWF (e.g. SSV-10mx10m-30%) but not in MMF (5mx5m-100%). A complete one-hectare inventory was found unavoidable for regeneration assessment in montane forest.
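A scheme-ranking simulation of this kind can be sketched as follows (in Python rather than the R software the study used); the stem pattern, scheme labels, and error metric are illustrative assumptions:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)

# Illustrative 1-ha stand (100 m x 100 m) of regeneration stem locations;
# the pattern and scheme labels below are assumptions, not the study's data.
stems = rng.uniform(0, 100, (1500, 2))

def scheme_error(plot_side, intensity, technique, reps=100):
    """Mean absolute relative error of stems/ha under one sampling scheme."""
    n_cells = int(100 // plot_side)                  # grid of candidate plots
    n_sample = max(1, round(intensity * n_cells ** 2))
    true_density = len(stems)                        # stems per ha
    cells = np.array(list(product(range(n_cells), repeat=2)))
    errs = []
    for _ in range(reps):
        if technique == "SRS":
            chosen = cells[rng.choice(len(cells), n_sample, replace=False)]
        else:  # systematic: every k-th cell in row-major (E-W) order, random start
            k = max(1, len(cells) // n_sample)
            chosen = cells[rng.integers(k)::k][:n_sample]
        counts = []
        for cx, cy in chosen:
            x0, y0 = cx * plot_side, cy * plot_side
            inside = ((stems[:, 0] >= x0) & (stems[:, 0] < x0 + plot_side) &
                      (stems[:, 1] >= y0) & (stems[:, 1] < y0 + plot_side))
            counts.append(inside.sum())
        est = np.mean(counts) * (100 / plot_side) ** 2   # expand to per-ha
        errs.append(abs(est - true_density) / true_density)
    return np.mean(errs)

for side, inten, tech in product([5, 10], [0.10, 0.30], ["SRS", "SYS-EW"]):
    print(f"{side}x{side} m  {inten:.0%}  {tech:7s}  rel.err = {scheme_error(side, inten, tech):.3f}")
```

The study's full design would extend the three loops to all four intensities, four plot sizes, and three techniques, and would replace the synthetic stem pattern with the measured stand maps for each forest type.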


1992 ◽  
Vol 9 (1) ◽  
pp. 3-6 ◽  
Author(s):  
Robin M. Reich ◽  
Loukas G. Arvanitis

Abstract The relationship between plot size and sample variance as affected by the spatial patterns of trees, volume, and basal area is reported. This information is useful for practicing foresters in determining the best combination of sample size, plot area, or basal area factor in forest surveys. North. J. Appl. For. 9(1):3-6.


2000 ◽  
Vol 5 (1) ◽  
pp. 44-51 ◽  
Author(s):  
Peter Greasley

It has been estimated that graphology is used by over 80% of European companies as part of their personnel recruitment process. And yet, after over three decades of research into the validity of graphology as a means of assessing personality, we are left with a legacy of equivocal results. For every experiment that has provided evidence to show that graphologists are able to identify personality traits from features of handwriting, there are just as many to show that, under rigorously controlled conditions, graphologists perform no better than chance expectations. In light of this confusion, this paper takes a different approach to the subject by focusing on the rationale and modus operandi of graphology. When we take a closer look at the academic literature, we note that there is no discussion of the actual rules by which graphologists make their assessments of personality from handwriting samples. Examination of these rules reveals a practice founded upon analogy, symbolism, and metaphor in the absence of empirical studies that have established the associations between particular features of handwriting and personality traits proposed by graphologists. These rules guide both popular graphology and that practiced by professional graphologists in personnel selection.


Forests ◽  
2021 ◽  
Vol 12 (6) ◽  
pp. 772
Author(s):  
Bryce Frank ◽  
Vicente J. Monleon

The estimation of the sampling variance of point estimators under two-dimensional systematic sampling designs remains a challenge, and several alternative variance estimators have been proposed in the past few decades. In this work, we compared six alternative variance estimators under Horvitz-Thompson (HT) and post-stratification (PS) point estimation regimes. We subsampled a multitude of species-specific forest attributes from a large, spatially balanced national forest inventory to compare the variance estimators. A variance estimator that assumes a simple random sampling design exhibited positive relative bias under both point estimation regimes, ranging from 1.23 to 1.88 for HT and from 1.11 to 1.78 for PS. Alternative estimators reduced this positive bias, with relative biases ranging from 1.01 to 1.66 for HT and from 0.90 to 1.64 for PS. The alternative estimators also generally improved efficiency under both HT and PS, with relative efficiency values ranging from 0.68 to 1.28 and from 0.68 to 1.39, respectively. We identified two estimators as promising alternatives that provide clear improvements over the simple random sampling estimator for a wide variety of attributes and under both HT and PS estimation regimes.
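The contrast between the SRS-assumption estimator and an alternative can be sketched on a one-dimensional systematic sample with a successive-difference estimator; the autocorrelated attribute below is synthetic, and the estimator is a generic alternative of this family, not necessarily one of the six compared in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic spatially structured attribute along a systematic transect
# (e.g., per-plot volume): a smooth trend plus noise stands in for the
# spatial structure that biases the SRS variance estimator upward.
n = 200
trend = np.sin(np.linspace(0, 6 * np.pi, n)) * 2 + np.linspace(0, 5, n)
y = trend + rng.normal(0, 0.5, n)

# (1) Estimator assuming simple random sampling: s^2 / n
v_srs = y.var(ddof=1) / n

# (2) Successive-difference estimator: local differences remove the smooth
# spatial trend that the systematic sample tracks, reducing the bias.
d = np.diff(y)
v_sdc = (d ** 2).sum() / (2 * (n - 1) * n)

print(f"mean = {y.mean():.3f}")
print(f"SRS-assumption variance estimate        : {v_srs:.4f}")
print(f"successive-difference variance estimate : {v_sdc:.4f}")
```

Because the SRS estimator charges the whole spatial trend to sampling error while the systematic sample actually tracks that trend, it overestimates the variance, which is the positive bias the paper quantifies.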


2020 ◽  
Vol 499 (4) ◽  
pp. 4905-4917
Author(s):  
S Contreras ◽  
R E Angulo ◽  
M Zennaro ◽  
G Aricò ◽  
M Pellejero-Ibañez

ABSTRACT Predicting the spatial distribution of objects as a function of cosmology is an essential ingredient for the exploitation of future galaxy surveys. In this paper, we show that a specially designed suite of gravity-only simulations together with cosmology-rescaling algorithms can provide the clustering of dark matter, haloes, and subhaloes with high precision. Specifically, with only three N-body simulations, we obtain the power spectrum of dark matter at z = 0 and 1 to better than 3 per cent precision for essentially all currently viable values of eight cosmological parameters, including massive neutrinos and dynamical dark energy, and over the whole range of scales explored, $0.03 < k/(h\,\mathrm{Mpc}^{-1}) < 5$. This precision holds at the same level for mass-selected haloes and for subhaloes selected according to their peak maximum circular velocity. As an initial application of these predictions, we successfully constrain Ωm, σ8, and the scatter in subhalo abundance matching by employing the projected correlation function of mock SDSS galaxies.


Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3729 ◽  
Author(s):  
Shuai Wang ◽  
Hua-Yan Sun ◽  
Hui-Chao Guo ◽  
Lin Du ◽  
Tian-Jian Liu

Global registration is an important step in the three-dimensional reconstruction of multi-view laser point clouds of moving objects, but severe noise, density variation, and the limited overlap between multi-view laser point clouds pose significant challenges to global registration. In this paper, a multi-view laser point cloud global registration method based on low-rank sparse decomposition is proposed. Firstly, the spatial distribution features of the point clouds were extracted by spatial rasterization to realize loop-closure detection, and a corresponding weight matrix was established according to the similarities of the spatial distribution features; this allowed the accuracy of the adjacent registration transformations to be evaluated and enhanced the robustness of the low-rank sparse matrix decomposition. Then, an objective function satisfying the global optimization condition was constructed, which prevents the compression of the solution space caused by the column-orthogonality hypothesis on the matrix. The objective function was solved by the augmented Lagrange method, and the iteration termination condition was designed according to the prior conditions of single-object global registration. Simulation analysis shows that the proposed method is robust over a wide range of parameters, with a loop-closure detection accuracy above 90%. When the pairwise registration error was below 0.1 rad, the proposed method outperformed the three compared methods, achieving a global registration accuracy better than 0.05 rad. Finally, global registration results on real point clouds further proved the validity and stability of the proposed method.
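The low-rank sparse decomposition at the core of such a method can be sketched with a minimal principal component pursuit; this is a generic robust-PCA iteration on synthetic data, not the paper's weighted, globally constrained formulation:

```python
import numpy as np

def rpca(M, lam=None, mu=None, n_iter=200):
    """Minimal low-rank + sparse split M ~ L + S via singular-value
    thresholding (simplified principal component pursuit; illustrative only)."""
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or (m * n) / (4.0 * np.abs(M).sum())
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    shrink = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
    for _ in range(n_iter):
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(shrink(sig, 1.0 / mu)) @ Vt   # low-rank update
        S = shrink(M - L + Y / mu, lam / mu)          # sparse update
        Y = Y + mu * (M - L - S)                      # dual ascent
    return L, S

rng = np.random.default_rng(4)
# Synthetic stand-in: a rank-2 "consistent" matrix plus sparse gross errors,
# playing the role of pairwise registration data corrupted by outliers.
base = rng.normal(size=(40, 2)) @ rng.normal(size=(2, 40))
outliers = np.zeros((40, 40))
idx = rng.choice(1600, 60, replace=False)
outliers.flat[idx] = rng.normal(0, 10, 60)
L, S = rpca(base + outliers)
err = np.linalg.norm(L - base) / np.linalg.norm(base)
print("recovery error:", round(err, 4))
```

The sparse component S absorbs the gross errors so that the low-rank component L recovers the consistent structure, which is the intuition behind using this decomposition to suppress bad pairwise registrations.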


2018 ◽  
Vol 22 (8) ◽  
pp. 4425-4447 ◽  
Author(s):  
Manuel Antonetti ◽  
Massimiliano Zappa

Abstract. Both modellers and experimentalists agree that using expert knowledge can improve the realism of conceptual hydrological models. However, their use of expert knowledge differs at each step of the modelling procedure, which involves mapping the dominant runoff processes (DRPs) occurring in a given catchment, parameterising these processes within a model, and allocating the model's parameters. Modellers generally use very simplified mapping approaches and apply their knowledge to constraining the model by defining parameter and process relational rules. In contrast, experimentalists usually prefer to invest all their detailed, qualitative knowledge about processes in obtaining as realistic a spatial distribution of DRPs as possible, and in defining narrow value ranges for each model parameter.

Runoff simulations are affected by equifinality and numerous other sources of uncertainty, which challenge the assumption that the more expert knowledge is used, the better the results will be. To test the extent to which expert knowledge can improve simulation results under uncertainty, we therefore applied a total of 60 modelling chain combinations, forced by five rainfall datasets of increasing accuracy, to four nested catchments in the Swiss Pre-Alps. These datasets include hourly precipitation data from automatic stations interpolated with Thiessen polygons and with the inverse distance weighting (IDW) method, as well as different spatial aggregations of CombiPrecip, a combination of ground measurements and quantitative radar estimates of precipitation. To map the spatial distribution of the DRPs, three mapping approaches involving different levels of expert knowledge were used to derive so-called process maps.
Finally, both a typical modellers' top-down set-up relying on parameter and process constraints and an experimentalists' set-up based on bottom-up thinking and field expertise were implemented using a newly developed process-based runoff generation module (RGM-PRO). To quantify the uncertainty originating from the forcing data, the process maps, the model parameterisation, and the parameter allocation strategy, an analysis of variance (ANOVA) was performed.

The simulation results showed that (i) the modelling chains based on the most complex process maps performed slightly better than those based on less expert knowledge; (ii) the bottom-up set-up performed better than the top-down one when simulating short-duration events, but similarly to the top-down set-up for long-duration events; (iii) the differences in performance arising from the different forcing data were due to compensation effects; and (iv) the bottom-up set-up can help identify uncertainty sources but is prone to overconfidence, whereas the top-down set-up seems to accommodate uncertainties in the input data best. Overall, modellers' and experimentalists' concepts of model realism differ, which means that the level of detail a model needs to reproduce the expected DRPs accurately must be agreed in advance.
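The ANOVA-based attribution of variance to forcing data, process maps, and set-up can be sketched as a main-effects sum-of-squares decomposition; the factor levels echo the study's design, but the effect sizes and scores are entirely synthetic:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(5)

# Hypothetical factorial "modelling chain" experiment: every combination of
# forcing dataset, process map, and set-up receives a simulation score.
forcings = ["thiessen", "idw", "combiprecip-1", "combiprecip-2", "combiprecip-3"]
maps_    = ["simple", "intermediate", "detailed"]
setups   = ["top-down", "bottom-up"]

effect = {"forcing": dict(zip(forcings, rng.normal(0, 0.10, len(forcings)))),
          "map":     dict(zip(maps_,    rng.normal(0, 0.05, len(maps_)))),
          "setup":   dict(zip(setups,   rng.normal(0, 0.15, len(setups))))}

runs, scores = [], []
for f, m, s in product(forcings, maps_, setups):
    runs.append((f, m, s))
    scores.append(0.7 + effect["forcing"][f] + effect["map"][m]
                  + effect["setup"][s] + rng.normal(0, 0.02))
scores = np.array(scores)

# Share of the total sum of squares explained by each factor (main effects only)
ss_total = ((scores - scores.mean()) ** 2).sum()
for pos, name in [(0, "forcing"), (1, "map"), (2, "setup")]:
    levels = {r[pos] for r in runs}
    ss = sum(((scores[[r[pos] == lv for r in runs]].mean() - scores.mean()) ** 2)
             * sum(r[pos] == lv for r in runs) for lv in levels)
    print(f"{name:8s} explains {ss / ss_total:6.1%} of score variance")
```

A full ANOVA would add interaction terms; the main-effects shares alone already show which step of the modelling chain dominates the spread in performance.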


1973 ◽  
Vol 3 (4) ◽  
pp. 495-500 ◽  
Author(s):  
James A. Moore ◽  
Carl A. Budelsky ◽  
Richard C. Schlesinger

A new competition index, a modified Area Potentially Available (APA), was tested in a complex uneven-aged stand composed of 19 different hardwood species. APA accounts for tree size, spatial distribution, and distance relationships in quantifying intertree competition, and it exhibits a strong correlation with individual tree basal area growth. The most important characteristic of APA is its potential for evaluating silvicultural practices.
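A size-weighted, grid-discretised stand-in for APA can be sketched as follows; the tree positions, sizes, and the distance-over-size weighting are illustrative assumptions, not the authors' exact polygon construction:

```python
import numpy as np

rng = np.random.default_rng(6)

# Illustrative stand: tree positions (m) and diameters (cm) as the size weight.
xy = rng.uniform(0, 50, (25, 2))
dbh = rng.uniform(10, 60, 25)

# Assign each 0.5 m grid cell to the tree with the smallest size-weighted
# distance; a tree's APA is the total area of the cells it wins, so larger
# trees capture more growing space than a plain Voronoi tessellation allows.
g = np.arange(0.25, 50, 0.5)
gx, gy = np.meshgrid(g, g)
cells = np.column_stack([gx.ravel(), gy.ravel()])
d2 = ((cells[:, None, :] - xy[None, :, :]) ** 2).sum(axis=2)  # squared distances
winner = np.argmin(d2 / dbh[None, :], axis=1)                 # weight by size
apa = np.bincount(winner, minlength=len(xy)) * 0.5 ** 2       # m^2 per tree

print("largest tree index:", dbh.argmax(), " APA =", round(apa[dbh.argmax()], 1), "m^2")
print("total APA =", round(apa.sum(), 1), "m^2 (tiles the 2500 m^2 stand)")
```

Because the cells tile the stand exactly, the APA values partition the growing space; correlating them with basal area increments is the kind of analysis the abstract describes.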

