Materials Properties: Heterogeneity and Appropriate Sampling Modes

2015 ◽  
Vol 98 (2) ◽  
pp. 269-274 ◽  
Author(s):  
Kim H Esbensen

Abstract The target audience for this Special Section comprises parties related to the food and feed sectors, e.g., field samplers, academic and industrial scientists, laboratory personnel, companies, organizations, regulatory bodies, and agencies who are responsible for sampling, as well as project leaders, project managers, quality managers, supervisors, and directors. All these entities face heterogeneous materials, and the characteristics of heterogeneous materials need to be competently understood by all of them. Before analytical results can be delivered for decision-making, some form of primary sampling is always necessary, and it must counteract the effects of the sampling target's heterogeneity. Up to five types of sampling error may arise as a specific sampling process interacts with a heterogeneous material: two sampling errors arise because of the heterogeneity of the sampling target, and three additional sampling errors are produced by the sampling process itself if it is not properly understood, reduced, and/or eliminated, which is the role of the Theory of Sampling. This paper discusses the phenomena and concepts involved in understanding, describing, and managing the adverse effects of heterogeneity in sampling.

1988 ◽  
Vol 78 (1) ◽  
pp. 155-161 ◽  
Author(s):  
J. Van Sickle

Abstract Several published reports have presented estimates of the rate of increase, r, based on sampled ovarian age distributions from Glossina populations throughout Africa. These estimates are invalid, because an age distribution sampled at one point in time can be equated to a survivorship curve only if r = 0. When such a survivorship curve and a corresponding fecundity schedule are then used to estimate r via the Euler-Lotka equation, the result is a value of r near zero, regardless of the population's true rate of increase. Synthetic sampling from a hypothetical tsetse population confirmed that estimates computed in this fashion are entirely the products of sampling error. Valid estimates of r can sometimes be obtained from an age distribution, using an alternative method, but such estimates are highly sensitive to sampling errors in the distribution.
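The logic of that invalidity can be made explicit with standard stable population theory (a sketch added here for clarity, not taken from the paper): write l(a) for survivorship, m(a) for fecundity, and c(a) for the stable age distribution of a population growing exponentially at rate r. Treating the sampled age distribution as a survivorship curve and feeding it into the Euler-Lotka equation pins the estimate at zero:

```latex
\[
c(a) \propto e^{-ra}\, l(a)
\qquad\Longrightarrow\qquad
\hat{l}(a) \equiv \frac{c(a)}{c(0)} = e^{-ra}\, l(a),
\]
\[
1 = \int_0^{\infty} e^{-\hat{r}a}\, \hat{l}(a)\, m(a)\, da
  = \int_0^{\infty} e^{-(\hat{r}+r)a}\, l(a)\, m(a)\, da
\quad\Longrightarrow\quad \hat{r} = 0,
\]
```

since the true Euler-Lotka equation already states that \(\int_0^{\infty} e^{-ra}\, l(a)\, m(a)\, da = 1\); the estimate \(\hat{r}\) therefore lands near zero whatever the population's true rate of increase, exactly as the abstract describes.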


2019 ◽  
Vol 148 (3) ◽  
pp. 1229-1249 ◽  
Author(s):  
Tobias Necker ◽  
Martin Weissmann ◽  
Yvonne Ruckstuhl ◽  
Jeffrey Anderson ◽  
Takemasa Miyoshi

Abstract State-of-the-art ensemble prediction systems usually provide ensembles with only 20–250 members for estimating the uncertainty of the forecast and its spatial and spatiotemporal covariance. Given that the degrees of freedom of atmospheric models are several orders of magnitude higher, the estimates are substantially affected by sampling errors. For error covariances, spurious correlations lead not only to random sampling errors but also to a systematic overestimation of the correlation. A common approach to mitigate the impact of sampling errors in data assimilation is to localize correlations. However, this is a challenging task given that physical correlations in the atmosphere can extend over long distances. Besides data assimilation, sampling errors pose an issue for the investigation of spatiotemporal correlations using ensemble sensitivity analysis. Our study evaluates a statistical approach for correcting sampling errors. The applied sampling error correction is a lookup table–based approach and is therefore computationally very efficient. We show that this approach substantially improves both the estimates of spatial correlations for data assimilation and the estimates of spatiotemporal correlations for ensemble sensitivity analysis. The evaluation is performed using the first convective-scale 1000-member ensemble simulation for central Europe. Correlations of the 1000-member ensemble forecast serve as truth to assess the performance of the sampling error correction for smaller subsets of the full ensemble. The sampling error correction strongly reduced both random and systematic errors for all evaluated variables, ensemble sizes, and lead times.
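A minimal sketch of a lookup-table sampling error correction in this spirit (an illustration assuming a flat prior over true correlations and bivariate-normal statistics, not the authors' implementation): an offline Monte Carlo step tabulates the expected true correlation given a sampled correlation for a fixed ensemble size; online, every sampled correlation is simply mapped through the table, which is why the approach is computationally cheap.

```python
import numpy as np

def build_correction_table(n_ens, n_bins=200, n_trials=20000, seed=0):
    """Tabulate E[true correlation | sampled correlation] for ensembles of size n_ens."""
    rng = np.random.default_rng(seed)
    true_corrs = rng.uniform(-1.0, 1.0, size=n_trials)   # assumed flat prior over true correlations
    sampled = np.empty(n_trials)
    for i, rho in enumerate(true_corrs):
        cov = [[1.0, rho], [rho, 1.0]]
        x = rng.multivariate_normal([0.0, 0.0], cov, size=n_ens)
        sampled[i] = np.corrcoef(x[:, 0], x[:, 1])[0, 1]
    edges = np.linspace(-1.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(sampled, edges) - 1, 0, n_bins - 1)
    table = np.array([true_corrs[idx == b].mean() if np.any(idx == b) else 0.0
                      for b in range(n_bins)])
    return edges, table

def correct_correlation(sample_corr, edges, table):
    """Map sampled correlations through the precomputed lookup table."""
    b = np.clip(np.digitize(sample_corr, edges) - 1, 0, table.size - 1)
    return table[b]

# usage: build the table once offline for a 40-member ensemble, then apply it cheaply online
edges, table = build_correction_table(n_ens=40)
print(correct_correlation(np.array([0.05, 0.6, -0.3]), edges, table))
```

Small sampled correlations are pulled strongly toward zero (they are mostly noise for small ensembles), while large sampled correlations are only mildly damped, which is the qualitative behaviour the correction is meant to deliver.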


Minerals ◽  
2019 ◽  
Vol 9 (4) ◽  
pp. 238 ◽
Author(s):  
Dominy ◽  
Glass ◽  
O’Connor ◽  
Lam ◽  
Purevgerel

Grade control aims to deliver adequately defined tonnes of ore to the process plant. The foundation of any grade control programme is the collection of high-quality samples within a geological context. The requirement for quality samples has long been recognised: they should be representative and fit-for-purpose. Correct application of the Theory of Sampling reduces sampling errors across the grade control process, in which errors can propagate from sample collection through sample preparation to assay results. This contribution presents three case studies based on coarse-gold-dominated orebodies. These illustrate the challenges of, and potential solutions for, achieving representative sampling, and build on the content of a previous publication. Solutions ranging from bulk samples processed through a plant to whole-core sampling and assaying using bulk leaching are discussed. These approaches account for the nature of the mineralisation, where extreme gold particle-clustering effects render the analysis of small-scale samples highly unrepresentative. Furthermore, the analysis of chip samples, which generally yield a positive bias due to over-sampling of quartz vein material, is discussed.


2013 ◽  
Vol 6 (2) ◽  
pp. 3545-3579 ◽  
Author(s):  
S. Dohe ◽  
V. Sherlock ◽  
F. Hase ◽  
M. Gisi ◽  
J. Robinson ◽  
...  

Abstract. The Total Carbon Column Observing Network (TCCON) has been established to provide ground-based remote sensing measurements of the column-average dry air mole fractions of key greenhouse gases. To ensure network-wide consistency, biases between Fourier Transform spectrometers at different sites have to be well controlled. In this study we investigate a fundamental correction scheme for errors in the sampling of the interferogram. This is a two-step procedure in which the laser sampling error (LSE) is quantified using a subset of suitable interferograms and then used to resample all the interferograms in the time series. Time series of measurements acquired at the TCCON sites Izaña and Lauder are used to demonstrate the method. At both sites the sampling error histories show changes in LSE due to instrument interventions. Estimated LSE are in good agreement with sampling errors inferred from lamp measurements of the ghost-to-parent ratio (Lauder). The LSE introduce retrieval biases which are minimised when the interferograms are resampled. The original time series of Xair and XCO2 at both sites show discrepancies of 0.2–0.5% due to changes in the LSE associated with instrument interventions or changes in the measurement sample rate. After resampling, discrepancies are reduced to 0.1% at Lauder and 0.2% at Izaña. In the latter case, coincident changes in interferometer alignment may also contribute to the residual difference.
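To make the two-step idea concrete, here is a toy sketch (the alternating-sample error model, the function name, and the synthetic data are assumptions for illustration, not the TCCON processing chain): once the LSE has been quantified, it defines where the interferogram was actually sampled, and the record is interpolated back onto the nominal equidistant grid.

```python
import numpy as np

def resample_interferogram(ifg, lse):
    """Resample an interferogram given a laser sampling error `lse`
    (assumed here to displace every second sample, in units of the sampling interval)."""
    nominal = np.arange(ifg.size, dtype=float)
    actual = nominal + lse * (nominal % 2)        # assumed error model for the perturbed grid
    # interpolate the measured values from the perturbed grid back onto the nominal grid
    return np.interp(nominal, actual, ifg)

# toy usage: a decaying fringe pattern resampled with a 2% sampling error
x = np.linspace(0.0, 1.0, 4096)
ifg = np.cos(2 * np.pi * 180.0 * x) * np.exp(-5.0 * x)
ifg_corrected = resample_interferogram(ifg, lse=0.02)
```

The point of the second step is that the same estimated LSE is applied to the full time series, so retrieval biases introduced by the sampling error are removed consistently across instrument interventions.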


2020 ◽  
Vol 142 (9) ◽  
Author(s):  
M. Chilla ◽  
G. Pullan ◽  
S. Gallimore

Abstract The effects of blade row interactions on stator-mounted instrumentation in axial compressors are investigated using unsteady numerical calculations. The test compressor is an eight-stage machine representative of an aero-engine core compressor. For the unsteady calculations, a 180-deg sector (half-annulus) model of the compressor is used. It is shown that the time-mean flow field in the stator leading edge planes is circumferentially nonuniform. The circumferential variations in stagnation pressure and stagnation temperature, respectively, reach 4.2% and 1.1% of the local mean. Using spatial wave number analysis, the incoming wakes from the upstream stator rows are identified as the dominant source of the circumferential variations in the front and middle of the compressor, while toward the rear of the compressor, the upstream influence of the eight struts in the exit duct becomes dominant. Based on three circumferential probes, the sampling errors for stagnation pressure and stagnation temperature are calculated as a function of the probe locations. Optimization of the probe locations shows that the sampling error can be reduced by up to 77% by circumferentially redistributing the individual probes. The reductions in the sampling errors translate to reductions in the uncertainties of the overall compressor efficiency and inlet flow capacity by up to 50%. Recognizing that data from large-scale unsteady calculations are rarely available in the instrumentation phase for a new test rig or engine, a method for approximating the circumferential variations with single harmonics is presented. The construction of the harmonics is based solely on the knowledge of the number of stators in each row and a small number of equispaced probes. It is shown how excursions in the sampling error are reduced by increasing the number of circumferential probes.
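A hedged illustration of the single-harmonic idea (the wavenumbers, amplitudes, probe count, and the random-search optimiser below are assumptions for demonstration, not the paper's compressor geometry or method): each upstream vane or strut count defines a harmonic of the circumferential variation, the worst-case probe-averaging error over phase has a closed form, and shifting individual probes reduces it.

```python
import numpy as np

WAVENUMBERS = [42, 8]       # hypothetical upstream vane count and strut count
AMPLITUDES = [1.0, 0.5]     # hypothetical relative harmonic amplitudes

def worst_case_error(theta):
    """Worst-case (over phase) probe-average error for a sum of single harmonics."""
    theta = np.asarray(theta, dtype=float)
    # the probe average of a*cos(k*theta + phi), maximised over phi, is a*|mean(exp(i*k*theta))|
    return sum(a * abs(np.exp(1j * k * theta).mean())
               for k, a in zip(WAVENUMBERS, AMPLITUDES))

equispaced = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
best, best_err = equispaced, worst_case_error(equispaced)

# crude random search over small circumferential shifts of the individual probes
rng = np.random.default_rng(0)
for _ in range(20000):
    trial = equispaced + rng.uniform(-0.15, 0.15, size=3)
    err = worst_case_error(trial)
    if err < best_err:
        best, best_err = trial, err

print(f"equispaced probes:    worst-case error = {worst_case_error(equispaced):.3f}")
print(f"redistributed probes: worst-case error = {best_err:.3f}")
```

In this toy setup the 42-lobed harmonic is a multiple of the probe count, so three equispaced probes alias it completely; redistributing the probes breaks the aliasing, which mirrors the kind of reduction in sampling error reported in the abstract.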


Author(s):  
M. D. Pandey ◽  
H. J. Sutherland

Robust estimation of wind turbine design loads for service lifetimes of 30 to 50 years, based on field measurements spanning only a few days, is a challenging problem. Estimating the long-term load distribution involves the integration of conditional distributions of extreme loads over the mean wind speed and turbulence intensity distributions. However, the accuracy of the statistical extrapolation is fairly sensitive to both model and sampling errors. Using measured inflow and structural data from the LIST program, this paper presents a comparative assessment of extreme loads using three distributions: namely, the Gumbel, Weibull, and Generalized Extreme Value distributions. The paper uses L-moments, in place of traditional product moments, to reduce the sampling error. The paper discusses the application of extreme value theory and highlights its practical limitations. The proposed technique has the potential to improve estimates of the design loads for wind turbines.


2003 ◽  
Vol 125 (4) ◽  
pp. 531-540 ◽  
Author(s):  
M. D. Pandey ◽  
H. J. Sutherland

The robust estimation of wind turbine design loads for service lifetimes of 30 to 50 years, based on limited field measurements, is a challenging problem. Estimating the long-term load distribution involves the integration of conditional distributions of extreme loads over the mean wind speed and turbulence intensity distributions. However, the accuracy of the statistical extrapolation can be sensitive to both model and sampling errors. Using measured inflow and structural data from the Long Term Inflow and Structural Test (LIST) program, this paper presents a comparative assessment of extreme loads using three distributions: namely, the Gumbel, Weibull, and Generalized Extreme Value distributions. The paper uses L-moments, in place of traditional product moments, with the purpose of reducing the sampling error. The paper discusses the effects of modeling and sampling errors and highlights the practical limitations of extreme value theory.
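For readers unfamiliar with L-moments, a brief sketch of the idea (standard Hosking-style estimators applied to synthetic data, not the LIST measurements or the authors' code): the first two sample L-moments are linear combinations of the order statistics, and the Gumbel location and scale follow from them in closed form, which is why they are less sensitive to sampling error than product moments.

```python
import numpy as np

EULER_GAMMA = 0.5772156649015329

def sample_l_moments(x):
    """First two sample L-moments via unbiased probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    b0 = x.mean()
    b1 = np.sum(np.arange(n) / (n - 1) * x) / n
    return b0, 2.0 * b1 - b0                      # lambda_1, lambda_2

def fit_gumbel_lmom(x):
    """Gumbel location and scale from the first two L-moments."""
    lam1, lam2 = sample_l_moments(x)
    scale = lam2 / np.log(2.0)
    loc = lam1 - EULER_GAMMA * scale
    return loc, scale

# usage with synthetic block maxima (illustrative only)
rng = np.random.default_rng(1)
maxima = rng.gumbel(loc=50.0, scale=8.0, size=200)
print(fit_gumbel_lmom(maxima))
```

Because every L-moment is a linear function of the ordered data, no observation is raised to a power, so a single extreme value in a short record distorts the fitted tail far less than it would with variance- or skewness-based fitting.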


2015 ◽  
Vol 98 (2) ◽  
pp. 259-263 ◽  
Author(s):  
Nancy Thiex ◽  
Claudia Paoletti ◽  
Kim H Esbensen

Abstract International acceptance of data is widely desired in many sectors to ensure equal standards for valid information and data exchange, facilitate trade, support food safety regulation, and promote reliable communication among all parties involved. However, this cannot be accomplished without a harmonized approach to sampling and a joint approach to assessing the practical sampling protocols used. Harmonization based on a nonrepresentative protocol, or on a restricted terminology tradition forced upon other sectors, would negate any constructive outcome. An international discussion on a harmonized approach to sampling is severely hampered by a plethora of divergent sampling definitions and terms. Different meanings for the same term are frequently used by different sectors, and even within one specific sector. In other cases, different terms are used for the same concept. Before harmonization can be attempted, it is essential that all stakeholders can at least communicate effectively in this context. Therefore, a clear understanding of the main vocabularies becomes an essential prerequisite. As a first step, commonalities and dichotomies in terminology are here brought to attention by providing a comparative summary of the terminology as defined by the Theory of Sampling (TOS) and that in current use by the International Organization for Standardization, the World Health Organization, the Food and Agriculture Organization Codex Alimentarius, and the U.S. Food and Drug Administration. Terms whose meanings contradict the TOS are emphasized. To the degree possible, we present a successful resolution of some of the most important issues outlined, sufficient to support the objectives of the present Special Section.


2012 ◽  
Vol 140 (9) ◽  
pp. 3078-3089 ◽  
Author(s):  
Jeffrey S. Whitaker ◽  
Thomas M. Hamill

Abstract Inflation of ensemble perturbations is employed in ensemble Kalman filters to account for unrepresented error sources. The authors propose a multiplicative inflation algorithm that inflates the posterior ensemble in proportion to the amount that observations reduce the ensemble spread, resulting in more inflation in regions of dense observations. This is justified since the posterior ensemble variance is more affected by sampling errors in these regions. The algorithm is similar to the “relaxation to prior” algorithm proposed by Zhang et al., but it relaxes the posterior ensemble spread, rather than the posterior ensemble perturbations, back toward the prior. The new inflation algorithm is compared to the method of Zhang et al. and simple constant covariance inflation using a two-level primitive equation model in an environment that includes model error. The new method performs somewhat better, although the method of Zhang et al. produces more balanced analyses whose ensemble spread grows faster. Combining the new multiplicative inflation algorithm with additive inflation is found to be superior to either of the methods used separately. Tests with large and small ensembles, with and without model error, suggest that multiplicative inflation is better suited to account for unrepresented observation-network-dependent assimilation errors such as sampling error, while model errors, which do not depend on the observing network, are better treated by additive inflation. A combination of additive and multiplicative inflation can provide a baseline for evaluating more sophisticated stochastic treatments of unrepresented background errors. This is demonstrated by comparing the performance of a stochastic kinetic energy backscatter scheme with additive inflation as a parameterization of model error.
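A compact sketch of the two relaxation variants contrasted above (array shapes, coefficient values, and function names are illustrative assumptions, not the authors' code): the spread-relaxation scheme rescales posterior perturbations so the ensemble spread moves back toward the prior spread, while the Zhang et al. variant blends posterior and prior perturbations directly.

```python
import numpy as np

def rtps_inflation(prior, posterior, alpha=0.95):
    """Relax the posterior ensemble spread back toward the prior spread."""
    prior_spread = prior.std(axis=0, ddof=1)
    post_spread = posterior.std(axis=0, ddof=1)
    post_mean = posterior.mean(axis=0)
    # the inflation factor is largest where observations reduced the spread the most
    factor = alpha * (prior_spread - post_spread) / post_spread + 1.0
    return post_mean + (posterior - post_mean) * factor

def rtpp_inflation(prior, posterior, alpha=0.5):
    """Relax posterior perturbations back toward the prior perturbations (Zhang et al. style)."""
    prior_pert = prior - prior.mean(axis=0)
    post_mean = posterior.mean(axis=0)
    post_pert = posterior - post_mean
    return post_mean + (1.0 - alpha) * post_pert + alpha * prior_pert

# toy usage: a 20-member ensemble of a 100-variable state
rng = np.random.default_rng(0)
prior = rng.normal(size=(20, 100))
posterior = 0.6 * prior            # stand-in for an analysis that shrank the spread
inflated = rtps_inflation(prior, posterior)
```

Because the spread-relaxation factor depends on how much the analysis reduced the spread at each grid point, densely observed regions receive more inflation, which is the behaviour the abstract motivates from sampling error.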


2020 ◽  
Vol 4 (4) ◽  
pp. 98 ◽  
Author(s):  
Seong-Woong Choi ◽  
Yong-Seok Kim ◽  
Young-Jin Yum ◽  
Soon-Yong Yang

The post-processing (punching or trimming) of high-strength parts reinforced by hot stamping requires punch molds with improved hardness, wear resistance, and toughness. In this study, a semi-additive manufacturing (semi-AM) method for heterogeneous materials was proposed to strengthen these properties using high wear resistance steel (HWS) powder and directed energy deposition (DED) technology. To verify these mechanical properties as a material for the cutting punch mold, specimens were prepared by the semi-AM method for heterogeneous materials and tested. The test results for the HWS additive material produced by the proposed semi-AM method are as follows: the hardness was 60.59–62.0 HRc, similar to that of the bulk D2 specimen; the wear resistance was about 4.2 times that of the D2 specimen; the toughness was about 4.0 times that of the bulk D2 specimen; the compressive strength was about 1.45 times that of the bulk D2 specimen; and the true density was 100%, with no porosity. Moreover, the absorption energy was 59.0 J in a multi-semi-AM specimen of heterogeneous materials having an intermediate buffer layer (P21 powder material). The semi-AM method for heterogeneous materials presented in this study could be applied as a method to strengthen the cutting punch mold. In addition, the multi-semi-AM method for heterogeneous materials will make it possible to control the mechanical properties of the additive material.

