Two-stage sector sampling for estimating small woodlot attributes

2011 ◽  
Vol 41 (9) ◽  
pp. 1819-1826 ◽  
Author(s):  
Piermaria Corona ◽  
Lorenzo Fattorini ◽  
Sara Franceschi

A two-stage sampling strategy is proposed to assess small woodlots outside forests scattered over extensive territories. The first stage selects a sample of small woodlots using fixed-size sampling schemes, and the second stage samples trees within the woodlots selected at the first stage. Usually, fixed- or variable-area plots are adopted to sample trees. However, the use of plot sampling in small patches such as woodlots is likely to induce a substantial amount of bias owing to edge effects. In this framework, sector sampling proves particularly effective. The present paper investigates the statistical properties of two-stage sampling strategies for estimating forest attributes of woodlot populations when sector sampling is adopted at the second stage. A two-stage estimator of population totals is derived, together with a conservative estimator of its sampling variance. By means of a simulation study, the performance of the proposed estimator is checked and compared with that achieved using traditional plot sampling with edge corrections. Simulation results prove the adequacy of sector sampling and provide some guidelines for the effective planning of the strategy. In some countries, the proposed strategy can be performed, with few modifications, within the framework of large-scale forest inventories.

2019 ◽  
pp. 173-199
Author(s):  
David G. Hankin ◽  
Michael S. Mohr ◽  
Ken B. Newman

In multi-stage sampling, there are two or more stages of sampling; the simplest version, which the chapter emphasizes, is called two-stage sampling. In two-stage sampling, an initial first-stage sample of n primary units (or clusters) is selected. Then, at the second stage of sampling, m_i subunits are selected from the M_i subunits in each selected primary unit. First- and second-stage units may be selected with equal or unequal probabilities, and a wide variety of estimators may be used to estimate totals within selected primary units and to estimate the total of the target variable in the finite population. Illustrative sample spaces are provided for equal-sized two-stage cluster sampling with SRS selection at both stages, and for two-stage unequal-sized cluster sampling, with clusters selected by PPSWOR and units within clusters selected by SRS. Sampling variance is shown to originate from two sources: variation between primary unit totals or means (first-stage variance), and errors of estimation of primary unit totals (second-stage variance). Topics of optimal allocation and net relative efficiency are addressed in the two-stage context with equal and unequal size clusters. General expressions for sampling variance are presented for three or more stages of sampling. The multi-stage framework can take powerful advantage of all of the concepts and sampling designs considered in previous chapters, and the ecologist or natural resource scientist can apply everything he/she knows about an ecological or natural resource setting to guide development of an intelligent multi-stage sampling strategy.
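The expansion logic behind the two-stage estimator of a population total can be illustrated with a minimal sketch under SRS at both stages (the function name and the toy data layout are illustrative assumptions, not from the chapter):

```python
import random

def two_stage_total(clusters, n, m, seed=0):
    """Estimate a population total by two-stage sampling with SRS at both
    stages: n of the N primary units are sampled, then up to m of the M_i
    subunits within each sampled unit. Illustrative sketch only."""
    rng = random.Random(seed)
    N = len(clusters)
    sampled = rng.sample(clusters, n)
    total_hat = 0.0
    for unit in sampled:
        M_i = len(unit)
        sub = rng.sample(unit, min(m, M_i))
        # Expand the subunit mean to an estimated primary-unit total...
        total_hat += M_i * (sum(sub) / len(sub))
    # ...then expand the sampled primary units to the whole population.
    return (N / n) * total_hat
```

When n = N and m covers every subunit, the estimate reduces to the true total; the two expansion steps correspond to the two variance sources identified above.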


Author(s):  
Lu Chen ◽  
Handing Wang ◽  
Wenping Ma

Abstract: Real-world optimization applications in complex systems always contain multiple factors to be optimized, which can be formulated as multi-objective optimization problems. These problems have been solved by many evolutionary algorithms, such as MOEA/D, NSGA-III, and KnEA. However, when the numbers of decision variables and objectives increase, the computation costs of those algorithms become unaffordable. To reduce this high computation cost on large-scale many-objective optimization problems, we propose a two-stage framework. The first stage of the proposed algorithm combines a multi-tasking optimization strategy with a bi-directional search strategy, where the original problem is reformulated as a multi-tasking optimization problem in the decision space to enhance convergence. To improve diversity, in the second stage the proposed algorithm applies multi-tasking optimization to a number of sub-problems based on reference points in the objective space. To show the effectiveness of the proposed algorithm, we test it on the DTLZ and LSMOP problems and compare it with existing algorithms; it outperforms the compared algorithms in most cases, showing advantages in both convergence and diversity.
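For readers unfamiliar with the Pareto machinery that MOEA/D, NSGA-III, and KnEA all build on, the dominance relation and a nondominated filter can be sketched as follows (a generic illustration, not the proposed two-stage algorithm):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def nondominated(points):
    """Return the nondominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

Note that the filter is quadratic in the population size, which is one reason large-scale many-objective settings become expensive.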


Author(s):  
Rui Qiu ◽  
Yongtu Liang

Abstract: Currently, unmanned aerial vehicles (UAVs) offer the possibility of comprehensive coverage and multi-dimensional visualization in pipeline monitoring. Encouraged by industry policy, research on UAV path planning for pipeline network inspection has emerged. The difficulties of this issue lie in strict operational requirements, variable flight missions, and the unified optimization of UAV deployment and real-time path planning. Meanwhile, the intricate structure and large scale of the pipeline network further complicate the issue. At present, there is still room to improve the practicality and applicability of the mathematical model and solution strategy. Aiming at this problem, this paper proposes a novel two-stage optimization approach for UAV path planning in pipeline network inspection. The first stage is conventional pre-flight planning, where the requirement for optimality is higher than that for calculation time. Therefore, a mixed-integer linear programming (MILP) model is established and solved by a commercial solver to obtain the optimal UAV number, take-off locations, and detailed flight paths. The second stage is re-planning during the flight, taking into account frequent pipeline accidents (e.g., leaks and cracks). In this stage, the flight path must be rescheduled in a timely manner to identify specific hazardous locations. Thus, the requirement for calculation time is higher than that for optimality, and a genetic algorithm is used to satisfy the timeliness of decision-making. Finally, the proposed method is applied to the UAV inspection of a branched oil and gas transmission pipeline network with 36 nodes, and the results are analyzed in detail in terms of computational performance. In the first stage, compared with manpower inspection, the total cost and time of UAV inspection are decreased by 54% and 56%, respectively. In the second stage, it takes less than 1 minute to obtain a suboptimal solution, verifying the applicability and superiority of the method.
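The second-stage trade of optimality for speed can be sketched with a toy permutation-based genetic algorithm that reschedules the visiting order over inspection nodes (the operators, parameters, and function names are illustrative assumptions, not the paper's exact GA):

```python
import random

def route_length(route, dist):
    """Total length of a closed inspection route over a distance matrix."""
    return sum(dist[route[i]][route[(i + 1) % len(route)]]
               for i in range(len(route)))

def ga_replan(dist, pop_size=30, gens=200, seed=0):
    """Toy GA for in-flight re-planning: permutation individuals,
    size-2 tournament selection, swap mutation, elitist survival."""
    rng = random.Random(seed)
    n = len(dist)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        offspring = []
        for _ in range(pop_size):
            a, b = rng.sample(pop, 2)  # tournament of two
            parent = min(a, b, key=lambda r: route_length(r, dist))
            child = parent[:]
            i, j = rng.sample(range(n), 2)  # swap mutation
            child[i], child[j] = child[j], child[i]
            offspring.append(child)
        # Elitism: keep the best pop_size routes from parents + offspring.
        pop = sorted(pop + offspring,
                     key=lambda r: route_length(r, dist))[:pop_size]
    return pop[0]
```

Such a heuristic returns a feasible, usually suboptimal route in well under a second on small instances, which matches the timeliness requirement of in-flight rescheduling.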


2020 ◽  
Author(s):  
Bramka Arga Jafino ◽  
Jan Kwakkel

Climate-related inequality can arise from the implementation of adaptation policies. As an example, the dike expansion policy protecting rice farmers in the Vietnam Mekong Delta backfires on small-scale farmers in the long run. The prevention of annual flooding reduces the supply of natural sediments, forcing farmers to apply more and more fertilizer to achieve the same yield. While large-scale farmers can afford this, small-scale farmers do not possess the required economies of scale and are eventually harmed. Together with climatic and socioeconomic uncertainties, the implementation of new policies can not only exacerbate existing inequalities but also induce new ones. Hence, distributional impacts on affected stakeholders should be assessed in climate change adaptation planning.

In this study, we propose a two-stage approach to assess the distributional impacts of policies in model-based support for adaptation planning. The first stage is intended to explore potential inequality patterns that may emerge from the combination of new policies and the realization of exogenous scenarios. This stage comprises four steps: (i) disaggregation of performance indicators in the model in order to observe distributional impacts, (ii) large-scale simulation experimentation to account for deep uncertainties, (iii) clustering of simulation results to identify distinctive inequality patterns, and (iv) application of scenario discovery tools, in particular classification and regression trees, to identify combinations of policies and uncertainties that lead to a specific inequality pattern.

In the second stage, we assess which policies are morally preferable with respect to the inequality patterns they generate, rather than only descriptively exploring the patterns as in the first stage. To perform a normative evaluation of the distributional impacts, we operationalize five alternative principles of justice: improvement of total welfare (utilitarianism), prioritization of worse-off actors (prioritarianism), reduction of welfare differences across actors (two derivations: absolute inequality and an envy measure), and improvement of the worst-off actor (Rawlsian difference). The operationalization of each of these principles forms a so-called social welfare function with which the distributional impacts can be aggregated.

To test this approach, we use an agricultural planning case study in the upper Vietnam Mekong Delta. Specifically, we assess the distributional impacts of alternative adaptation policies there by using an integrated assessment model. We consider six alternative policies as well as uncertainties related to upstream discharge, sediment supply, and land-use change. Through the first stage, we identify six potential inequality patterns among the 23 districts in the study area, as well as the combinations of policies and uncertainties that result in these patterns. From the second stage we obtain complete rankings of the alternative policies, based on their performance with respect to distributional impacts, under different realizations of scenarios. The explorative stage allows policy-makers to identify potential actions to compensate worse-off actors, while the normative stage helps them rank alternative policies according to a preferred moral principle.
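The five principles of justice named above can be operationalized as simple social welfare functions over a vector of actor-level welfare outcomes (a minimal sketch; the study's exact operationalizations may differ, and the prioritarian curvature parameter is an assumption):

```python
def utilitarian(w):
    """Utilitarianism: total welfare across actors (higher is better)."""
    return sum(w)

def prioritarian(w, gamma=0.5):
    """Prioritarianism: a concave transform gives extra weight to gains
    for worse-off actors (gamma is an assumed curvature parameter)."""
    return sum(x ** gamma for x in w)

def abs_inequality(w):
    """Absolute inequality: mean absolute welfare difference over all
    actor pairs (lower is better)."""
    n = len(w)
    return sum(abs(a - b) for a in w for b in w) / (n * n)

def envy(w):
    """Envy measure: total shortfall relative to the best-off actor
    (lower is better)."""
    best = max(w)
    return sum(best - x for x in w)

def rawlsian(w):
    """Rawlsian difference principle: welfare of the worst-off actor."""
    return min(w)
```

Ranking the same set of district-level outcomes under each function is what yields the principle-dependent policy rankings described above.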


2012 ◽  
Vol 42 (10) ◽  
pp. 1865-1871 ◽  
Author(s):  
Daniel Mandallaz ◽  
Alexander Massey

In the context of Poisson sampling, numerous adjustments to classical estimators have been proposed that are intended to compensate for the inflated variance due to random sample size. However, such adjustments have never been applied to extensive forest inventories. This work investigates the performance of four estimators of timber volume in one-phase, two-stage forest inventories, where trees in the first stage are selected, at the plot level, by concentric circles or angle-count methods, and a subset thereof is selected by Poisson sampling for further measurements to obtain a better estimate. The original two-stage estimator is the sum of two components: the first is the mean of Horvitz–Thompson estimators using simple volume approximations, based on diameter and species alone, of all first-stage trees in each inventory plot; the second is the mean of Horvitz–Thompson estimators based on the differences between the simple volume approximations and refined volume determinations based on further diameter and height measurements on the second-stage trees within each inventory plot. This two-stage estimator is particularly useful because it provides unbiased estimates even if the simple prediction model is not correct, which is particularly important for small-area estimation. The other three estimators rely on adjustments of the second component of the original estimator, adapted from estimators proposed in the literature by L.R. Grosenbaugh and C.-E. Särndal. It turns out that these adjustments introduce a negligible bias and that the original simple estimator performs just as well as or even better than the new estimators with respect to variance.
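The two-component structure described above, a cheap model-based Horvitz–Thompson term plus a Horvitz–Thompson correction from the refined second-stage measurements, can be sketched per inventory plot (the data layout and field names are illustrative assumptions):

```python
def two_stage_volume(plots):
    """Two-stage volume estimate, averaged over inventory plots.
    Each plot is a list of first-stage trees; each tree is a dict with
    first-stage inclusion probability 'pi1' and model-based volume
    'v_approx', plus, for trees kept by Poisson sampling at the second
    stage, conditional probability 'pi2' and refined volume 'v_exact'."""
    n = len(plots)
    first = second = 0.0
    for plot in plots:
        # HT expansion of the cheap model over all first-stage trees.
        first += sum(t['v_approx'] / t['pi1'] for t in plot)
        # HT expansion of the model error over second-stage trees only.
        second += sum((t['v_exact'] - t['v_approx']) / (t['pi1'] * t['pi2'])
                      for t in plot if 'v_exact' in t)
    return (first + second) / n
```

Because the second component expands the model's errors rather than the volumes themselves, the estimate stays unbiased even when the simple prediction model is wrong, which is the property the abstract highlights.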


Author(s):  
Chengyu Peng ◽  
Hong Cheng ◽  
Manchor Ko

There are a large number of methods for solving under-determined linear inverse problems, but for large-scale optimization problems many of them have very high time complexity. We propose a new method called two-stage sparse representation (TSSR) to tackle this. We decompose the representing space of signals into two parts: the measurement dictionary and the sparsifying basis. The dictionary is designed to obey, or nearly obey, a sub-Gaussian distribution. In the first stage, the signals are encoded on the dictionary to obtain the training and testing coefficients, respectively. In the second stage, we design the basis based on the training coefficients to approach an identity matrix, and we apply sparse coding to the testing coefficients over the basis. We verify that the projection of the testing coefficients onto the basis is a good approximation of the original signals onto the representing space. Since the projection is conducted on a much sparser space, the runtime is greatly reduced. As a concrete realization, we provide an instance of the proposed TSSR. Experiments on four biometric databases show that TSSR is effective compared with several classical methods for solving linear inverse problems.


2021 ◽  
Author(s):  
Andrea Contina ◽  
Sarah Magozzi ◽  
Hannah B. Vander Zanden ◽  
Gabriel Bowen ◽  
Michael B. Wunder

The recognition of adequate sampling designs is an interdisciplinary topic that has gained popularity over the last decades. In ecology, many research questions involve sampling across extensive and complex environmental gradients. This is the case of stable isotope analyses, which are widely used to characterize large-scale movement patterns and dietary preferences of organisms across taxa. Because natural-abundance stable isotope variation in the environment is incorporated into inert animal tissues, such as feathers or hair, it is possible to draw inferences about the type of food and water resources that individuals consumed and the locations where tissues were synthesized. However, modern stable isotope research can benefit from the implementation of robust statistical analyses and well-designed sampling approaches to improve geographic assignment interpretation. We employed hydrogen stable isotope simulations to study inferences regarding the probability of origin of migratory individuals and reveal gaps in sampling efforts while highlighting uncertainties of assignment model extrapolations. We present an integrative approach that explores multiple sampling strategies across species with different geographic ranges to understand advantages and limitations of animal movement inferences based on stable isotope data. We show the characteristics of different sampling strategies through geographic and isotopic gradients and establish a set of diagnostic tools that uncover the attributes of these gradients and evaluate uncertainties of model results. Our analysis demonstrates that sampling regimes should be evaluated in relation to specific research questions and study constraints, and that adopting a single method across species ranges can lead to a costly but less effective sampling strategy.


Author(s):  
Yukihiro Hamasuna ◽  
Ryo Ozaki ◽  
Yasunori Endo ◽  
...

To handle large-scale data, a two-stage clustering method has been previously proposed: it generates a large number of clusters during the first stage and merges clusters during the second stage. In this paper, a novel two-stage clustering method is proposed that introduces cluster validity measures as the merging criterion in the second stage. Cluster validity measures, which are used to evaluate cluster partitions and determine a suitable number of clusters, act as the criteria for merging clusters. The performance of the proposed method with six typical indices is compared on eight artificial datasets. The experiments show that the trace of the fuzzy covariance matrix (Wtr) and its kernelization (KWtr) are quite effective in the proposed method and obtain better results than the other indices.
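As an illustration of the kind of validity measure involved, a trace-of-fuzzy-covariance index can be computed as below (a common textbook definition is assumed; the paper's exact Wtr, and its kernelized form KWtr, may differ in detail):

```python
def fuzzy_cov_trace(X, U, V, m=2.0):
    """Sum over clusters of the trace of the fuzzy covariance matrix.
    X: data points; U[i][k]: membership of point k in cluster i;
    V: cluster centres; m: fuzzifier (assumed 2.0). Smaller values
    indicate more compact clusters."""
    total = 0.0
    for i, v in enumerate(V):
        num = den = 0.0
        for k, x in enumerate(X):
            w = U[i][k] ** m
            # Trace of the outer product reduces to squared distance.
            num += w * sum((a - b) ** 2 for a, b in zip(x, v))
            den += w
        total += num / den
    return total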


2017 ◽  
Vol 8 (2) ◽  
pp. 471-476 ◽  
Author(s):  
J. Arnó ◽  
J.A. Martínez-Casasnovas ◽  
A. Uribeetxebarria ◽  
A. Escolà ◽  
J.R. Rosell-Polo

Different sampling schemes were tested to estimate yield (kg/tree), fruit firmness (kg) and the refractometric index (°Baumé) in a peach orchard. In contrast to simple random sampling (SRS), the use of auxiliary information (NDVI and apparent electrical conductivity, ECa) allowed sampling points to be stratified according to two or three classes (strata) within the plot. Sampling schemes were compared in terms of accuracy and efficiency. Stratification of samples improved efficiency compared to SRS. However, yield and quality parameters may require different sampling strategies. While yield was better estimated using stratified samples based on the ECa, fruit quality (firmness and °Baumé) showed better results when stratifying by NDVI.
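The gain from stratification reported above comes from estimating each stratum (e.g. an NDVI or ECa class) separately and weighting by its relative area; a minimal sketch with toy data and illustrative names:

```python
def srs_mean(sample):
    """Simple random sampling estimate of the mean."""
    return sum(sample) / len(sample)

def stratified_mean(strata_samples, strata_weights):
    """Stratified estimate of the plot-level mean: one SRS estimate per
    stratum, weighted by the stratum's relative area (weights are
    assumed to sum to 1)."""
    return sum(w * srs_mean(s)
               for s, w in zip(strata_samples, strata_weights))
```

Stratification pays off when within-stratum variation is small relative to between-stratum variation, which is exactly what a good auxiliary variable (ECa for yield, NDVI for fruit quality) provides.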


2021 ◽  
Vol 11 (23) ◽  
pp. 11240
Author(s):  
Jun-Hee Han ◽  
Ju-Yong Lee

This study investigates a two-stage assembly-type flow shop with limited waiting time constraints, with the objective of minimizing the makespan. The first stage consists of m machines fabricating m types of components, whereas the second stage has a single machine that assembles the components into the final product. In this flow shop, the assembly operation for a job in the second stage must start within a limited waiting time after its components are completed in the first stage. For this problem, a mixed-integer programming formulation is provided and used to find optimal solutions with the commercial optimization solver CPLEX. As the problem is proved to be NP-hard, various heuristic algorithms (priority rule-based list scheduling, a constructive heuristic, and metaheuristics) are proposed to solve large-scale problems within a short computation time. To evaluate the proposed algorithms, a series of computational experiments, including calibration of the metaheuristics, was performed on randomly generated problem instances; the results showed that the proposed iterated greedy algorithm and simulated annealing algorithm perform best on small- and large-sized problems, respectively.
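For a fixed job sequence, the makespan of such a two-stage assembly flow shop can be computed as below (a sketch that omits the limited-waiting-time constraint the study enforces; names and the data layout are illustrative):

```python
def assembly_makespan(seq, fab, asm):
    """Makespan of a job sequence in a two-stage assembly flow shop.
    fab[j][k]: fabrication time of job j's component on first-stage
    machine k; asm[j]: job j's assembly time on the single second-stage
    machine. Jobs are fabricated in sequence order on every machine."""
    m = len(fab[0])
    machine = [0.0] * m          # first-stage machine clocks
    assembler = 0.0              # second-stage machine clock
    for j in seq:
        for k in range(m):
            machine[k] += fab[j][k]
        ready = max(machine)     # all components of job j finished
        # Assembly starts once the assembler is free and parts are ready.
        assembler = max(assembler, ready) + asm[j]
    return assembler
```

A priority rule-based list scheduler of the kind mentioned above would simply evaluate this function over sequences generated by the chosen priority rule; the limited-waiting-time version additionally requires `assembler - ready` to stay within each job's allowed wait.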

