Mineral resource modelling using an unequal sampling pattern: An improved practice based on factorization techniques

Author(s):  
D. Orynbassar ◽  
N. Madani

This work addresses the problem of geostatistical simulation of cross-correlated variables by factorization approaches in the case when the sampling pattern is unequal. A solution based on a Co-Gibbs sampler algorithm is presented, by which the missing values can be imputed. In this algorithm, a heterotopic simple cokriging approach is introduced to take into account the cross-dependency of the undersampled variable with a secondary variable that is more available over the entire region. A real gold deposit is employed to test the algorithm. The imputation results are compared with those of other Gibbs sampler techniques in which simple cokriging and simple kriging are used; the results show that heterotopic simple cokriging outperforms the other two techniques. The imputed values are then employed for resource estimation using principal component analysis (PCA) as a factorization technique, and the output is compared with traditional factorization approaches in which the heterotopic part of the data is removed. Comparison of these two techniques shows that the latter leads to substantial losses of important information in the case of an unequal sampling pattern, while the former reproduces the recovery functions better.
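The heterotopic setup can be illustrated with a toy imputation loop. This is not the authors' Co-Gibbs sampler with heterotopic simple cokriging — it is a minimal Python sketch assuming a bivariate standard Gaussian pair with known cross-correlation `rho`, and it ignores spatial correlation between sites, which the full cokriging-based sampler would exploit:

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.7                      # assumed cross-correlation between the two variables
n = 200

# Synthetic heterotopic data set: z2 is known everywhere, z1 only at ~40% of sites.
z2 = rng.standard_normal(n)
z1 = rho * z2 + np.sqrt(1 - rho**2) * rng.standard_normal(n)
missing = rng.random(n) > 0.4  # boolean mask of unsampled z1 locations

# Gibbs-style imputation: repeatedly draw each missing z1 from its
# conditional N(rho * z2, 1 - rho^2), then average the draws.
z1_work = z1.copy()
draws = []
for _ in range(500):
    z1_work[missing] = (rho * z2[missing]
                        + np.sqrt(1 - rho**2) * rng.standard_normal(missing.sum()))
    draws.append(z1_work[missing].copy())

z1_imputed = z1.copy()
z1_imputed[missing] = np.mean(draws, axis=0)
```

Averaging the draws recovers the conditional expectation `rho * z2`, which is the collocated-cokriging estimate in this simplified setting; keeping the individual draws instead would preserve the conditional variance for simulation.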

Talanta ◽  
2007 ◽  
Vol 72 (1) ◽  
pp. 172-178 ◽  
Author(s):  
I. Stanimirova ◽  
M. Daszykowski ◽  
B. Walczak

Water ◽  
2019 ◽  
Vol 11 (9) ◽  
pp. 1920 ◽  
Author(s):  
Sharma ◽  
Kannan ◽  
Cook ◽  
Pokhrel ◽  
McKenzie

Most recent studies on the consequences of extreme weather events for crop yields focus on droughts and a warming climate; knowledge of the consequences of excess precipitation for crop yield is lacking. We attempted to fill this gap by estimating reductions in rainfed grain sorghum yields under excess precipitation. Historical grain sorghum yields and corresponding historical precipitation data were collected by county, sorted based on record length and missing values, and arranged for the period 1973–2003. Grain sorghum growing periods in different parts of Texas were estimated based on the east-west precipitation gradient, the north-south temperature gradient, and typical planting and harvesting dates in Texas. We estimated the growing-season total precipitation and the maximum 4-day total precipitation for each county growing rainfed grain sorghum. These two parameters were used as independent variables, and sorghum crop yield was used as the dependent variable. We sought relationships between excess precipitation and decreases in crop yields using both graphical and mathematical approaches. The results were analyzed at four levels: (1) storm-by-storm consequences for crop yield; (2) growing-season total precipitation and crop yield; (3) maximum 4-day precipitation and crop yield; and (4) multiple linear regression of the independent variables, with and without principal component analysis (to remove the correlations between the independent variables), against the dependent variable. The graphical and mathematical results show that decreases in rainfed sorghum yields in Texas due to excess precipitation could be between 18% and 38%.
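Level 4 of the analysis (regression on decorrelated predictors) is commonly called principal component regression. A minimal numpy sketch with hypothetical, synthetic precipitation and yield values — none of the study's actual data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 31  # e.g. one value per year, 1973-2003

# Hypothetical county-level predictors: growing-season total precipitation
# and maximum 4-day precipitation (correlated with each other by design).
season_total = rng.normal(500, 100, n)
max_4day = 0.3 * season_total + rng.normal(0, 20, n)
yield_t = 4.0 - 0.004 * season_total - 0.01 * max_4day + rng.normal(0, 0.2, n)

X = np.column_stack([season_total, max_4day])
Xc = (X - X.mean(axis=0)) / X.std(axis=0)

# PCA via SVD of the standardized predictors removes their correlation...
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T            # uncorrelated principal-component scores

# ...then ordinary least squares on the scores gives the regression.
A = np.column_stack([np.ones(n), scores])
coef, *_ = np.linalg.lstsq(A, yield_t, rcond=None)
pred = A @ coef
```

Because the PC scores are orthogonal, the regression coefficients on them are stable even when the original precipitation variables are strongly collinear.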


1994 ◽  
Vol 77 (5) ◽  
pp. 1318-1325 ◽  
Author(s):  
Christa Hartmann ◽  
Desiré L Massart

Abstract The use of the plot originally proposed by Bland and Altman (1986, Lancet i, 307–310) for the comparison of 2 clinical measurement methods was investigated and compared with a new visual display based on principal component analysis. The characteristics of both methods are demonstrated for several computer-simulated situations. For visual comparison of 2 measurement methods, it is recommended to use the 2 displays simultaneously, together with a plot of the results of method 2 against method 1.
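The Bland-Altman display itself is simple to construct: plot the per-sample mean of the two methods against their difference, with the mean difference (bias) and 1.96-standard-deviation limits of agreement as reference lines. A short numpy sketch with simulated readings (the values are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
m1 = rng.normal(100, 15, 50)            # hypothetical method-1 readings
m2 = m1 + rng.normal(2, 5, 50)          # method 2: small bias plus noise

# Bland-Altman plot coordinates: per-sample mean vs difference.
mean_pair = (m1 + m2) / 2
diff = m2 - m1
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)           # 95% limits of agreement: bias +/- loa
```

Scattering `diff` against `mean_pair` then reveals whether disagreement between the methods depends on the magnitude of the measurement, which a simple correlation of method 2 against method 1 can hide.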


2020 ◽  
Vol 82 (12) ◽  
pp. 2711-2724 ◽  
Author(s):  
Pezhman Kazemi ◽  
Jaume Giralt ◽  
Christophe Bengoa ◽  
Armin Masoumian ◽  
Jean-Philippe Steyer

Abstract Because of the static nature of conventional principal component analysis (PCA), natural process variations may be interpreted as faults when it is applied to processes with time-varying behavior. In this paper, therefore, we propose a complete adaptive process monitoring framework based on incremental principal component analysis (IPCA). This framework updates the eigenspace by incrementing new data to the PCA at a low computational cost. Moreover, the contribution of variables is recursively provided using complete decomposition contribution (CDC). To impute missing values, the empirical best linear unbiased prediction (EBLUP) method is incorporated into this framework. The effectiveness of this framework is evaluated using benchmark simulation model No. 2 (BSM2). Our simulation results show the ability of the proposed approach to distinguish between time-varying behavior and faulty events while correctly isolating the sensor faults even when these faults are relatively small.
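The eigenspace update at low computational cost can be illustrated with a streaming-moment sketch. This is not the paper's exact IPCA recursion or the EBLUP imputation; it simply maintains a running mean and covariance (Welford-style) and re-extracts the eigenspace after each batch:

```python
import numpy as np

rng = np.random.default_rng(3)

# Streaming "incremental PCA": keep a running mean and covariance,
# fold in each new batch, and re-extract the eigenspace cheaply.
d, n_seen = 4, 0
mean = np.zeros(d)
cov = np.zeros((d, d))

def update(batch):
    """Fold a new batch of process data into the streaming moments."""
    global mean, cov, n_seen
    for x in batch:
        n_seen += 1
        delta = x - mean
        mean += delta / n_seen
        cov += np.outer(delta, x - mean)   # Welford-style covariance update

for _ in range(10):                         # ten batches of 50 samples each
    update(rng.standard_normal((50, d)) @ np.diag([3.0, 1.0, 0.5, 0.1]))

eigvals, eigvecs = np.linalg.eigh(cov / (n_seen - 1))  # current eigenspace
```

Because only the moments are stored, old samples never need to be revisited; a time-varying process can additionally be tracked by down-weighting the accumulated moments before each update.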


2017 ◽  
Author(s):  
Paul Bastide ◽  
Cécile Ané ◽  
Stéphane Robin ◽  
Mahendra Mariadassou

Abstract To study the evolution of several quantitative traits, the classical phylogenetic comparative framework consists of a multivariate random process running along the branches of a phylogenetic tree. The Ornstein-Uhlenbeck (OU) process is sometimes preferred to the simple Brownian Motion (BM) as it models stabilizing selection toward an optimum. The optimum for each trait is likely to change over the long periods of time spanned by large modern phylogenies. Our goal is to automatically detect the position of these shifts on a phylogenetic tree, while accounting for correlations between traits, which might exist because of structural or evolutionary constraints. We show that, in the presence of shifts, phylogenetic Principal Component Analysis (pPCA) fails to decorrelate traits efficiently, so that any method aiming at finding shifts needs to deal with correlations simultaneously. We introduce here a simplification of the full multivariate OU model, named scalar OU (scOU), which allows for noncausal correlations and is still computationally tractable. We extend the equivalence between the OU and a BM on a re-scaled tree to our multivariate framework. We describe an Expectation-Maximization algorithm that allows for maximum likelihood estimation of the shift positions, associated with a new model selection criterion that accounts for identifiability issues in shift localization on the tree. The method, freely available as an R package (PhylogeneticEM), is fast and can deal with missing values. We demonstrate its efficiency and accuracy compared to another state-of-the-art method (ℓ1ou) on a wide range of simulated scenarios, and use this new framework to re-analyze recently gathered datasets on New World monkeys and Anolis lizards.
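The contrast between BM and OU dynamics that motivates the model can be seen in a one-branch simulation: under OU, the trait is pulled back toward the optimum with strength `alpha`, while BM drifts freely. A minimal Euler-Maruyama sketch with illustrative parameters (not fitted values):

```python
import numpy as np

rng = np.random.default_rng(4)

# Euler-Maruyama simulation of a single trait along one branch:
# BM drifts freely, while OU is pulled toward the optimum theta.
alpha, sigma, theta = 2.0, 1.0, 3.0   # selection strength, noise, optimum
dt, steps = 0.01, 2000

bm = np.zeros(steps)
ou = np.zeros(steps)
for t in range(1, steps):
    noise = sigma * np.sqrt(dt) * rng.standard_normal(2)
    bm[t] = bm[t - 1] + noise[0]
    ou[t] = ou[t - 1] + alpha * (theta - ou[t - 1]) * dt + noise[1]
```

The OU path settles around `theta` with stationary variance `sigma**2 / (2 * alpha)`, whereas the BM variance grows linearly with time — a shift in the optimum under OU therefore leaves a detectable, persistent signature in the descendant clade.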


Author(s):  
MohammadHossein GhojehBeyglou

Abstract Porosity is one of the main variables needed for reservoir characterization. For this volumetric variable, there are many methods to simulate the spatial distribution. In this article, porosity was analyzed and modeled in terms of its local and global distribution. For simulation, Sequential Gaussian Simulation (SGS) and Gaussian Random Function Simulation (GRFS) were applied; kriging was also used to estimate porosity at specific locations. The main purpose of this work was to compare geostatistical simulation and estimation methods for porosity in a sandstone reservoir as a real case study. First, the data sets were normalized by the Normal Scores Transformation (NST) and a stratigraphic coordinate. Models of the experimental variograms were fitted in the vertical and horizontal directions. For the simulation methods, 10 realizations were generated by each method. Q-Q plots were calculated, and both sets of quantiles (target porosity distribution versus porosity realization) came from normal distributions with correlation coefficients of 0.93, 0.94 and 0.97 for GRFS, SGS and kriging, respectively. The variograms extracted from the realizations showed that kriging could not reproduce the variograms of the global distribution. For local validation, cross-validation was performed with three wells omitted. The re-estimation of porosity was assessed at the located well logs through the well-section window, where kriging had the better performance, with minimum error in estimating porosity locally. Finally, cross-sectional models were generated by each algorithm; these showed that simple kriging tends to produce a smoother distribution, whereas the conditional simulations (SGS and GRFS) represent more globally detailed sections.
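Simple kriging, the estimation method compared here, combines nearby samples with weights derived from an assumed covariance model and a known stationary mean. A one-dimensional numpy sketch with hypothetical sample locations and porosity values (not the study's data):

```python
import numpy as np

# Simple kriging of porosity at one target location from three nearby
# samples, using an assumed exponential covariance model.
def cov(h, sill=1.0, rng_a=50.0):
    """Exponential covariance with practical range rng_a (metres)."""
    return sill * np.exp(-3.0 * np.abs(h) / rng_a)

x_data = np.array([10.0, 30.0, 70.0])    # sample locations (m)
z_data = np.array([0.12, 0.15, 0.09])    # porosity values (fraction)
mean = 0.12                              # known stationary mean (simple kriging)
x0 = 40.0                                # estimation location

C = cov(x_data[:, None] - x_data[None, :])   # data-to-data covariances
c0 = cov(x_data - x0)                        # data-to-target covariances
w = np.linalg.solve(C, c0)                   # simple-kriging weights
z_hat = mean + w @ (z_data - mean)           # kriging estimate
kvar = cov(0.0) - w @ c0                     # kriging variance
```

Because the estimate is a variance-minimizing weighted average, it is smoother than any single realization — exactly the smoothing effect that prevents kriging from reproducing the global variogram, which conditional simulation is designed to restore.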


Sensors ◽  
2020 ◽  
Vol 20 (9) ◽  
pp. 2565 ◽  
Author(s):  
Daniela Pauliuc ◽  
Florina Dranca ◽  
Mircea Oroian

The aim of this study was to authenticate five types of Romanian honey (raspberry, rape, thyme, sunflower and mint) using a voltammetric e-tongue (VE-tongue) technique. For the electronic tongue system, six electrodes (silver, gold, platinum, glass, zinc oxide and titanium dioxide) were used. The results of the melissopalynological analysis were supplemented by the data obtained with the voltammetric e-tongue system. The results were interpreted by means of principal component analysis (PCA) and linear discriminant analysis (LDA), and in this way the usefulness of the working electrodes for determining the botanical origin of the honey samples was compared. The titanium dioxide, zinc oxide and silver electrodes were the most useful, as a better classification of honey according to its botanical origin was achieved with them. Comparison of the results of the voltammetric e-tongue technique with those obtained by melissopalynological analysis showed that the technique accurately classified 92.7% of the original grouped cases. The similarity of the results confirms the ability of the voltammetric e-tongue technique to perform a rapid characterization of honey samples, complementing its advantages as an easy-to-use and cheap method of analysis.
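The PCA step of this chemometric workflow can be sketched as follows. The toy example below substitutes nearest-centroid classification in PC space for the paper's LDA, and uses synthetic 6-electrode feature vectors for two hypothetical honey types (no real measurements):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic 6-electrode "voltammetric" feature vectors for two honey types,
# projected onto the first two PCs and classified by nearest class centroid.
n = 40
honey_a = rng.normal(0.0, 1.0, (n, 6)) + np.array([2, 0, 1, 0, 0, 0])
honey_b = rng.normal(0.0, 1.0, (n, 6)) + np.array([-2, 0, -1, 0, 0, 0])
X = np.vstack([honey_a, honey_b])
labels = np.array([0] * n + [1] * n)

Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                       # first two principal components

centroids = np.array([scores[labels == k].mean(axis=0) for k in (0, 1)])
d = np.linalg.norm(scores[:, None, :] - centroids[None, :, :], axis=2)
pred = d.argmin(axis=1)
accuracy = (pred == labels).mean()
```

PCA here serves as an unsupervised overview of sample grouping; LDA, being supervised, then maximizes between-class separation and yields the classification rates reported in the study.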


2021 ◽  
pp. geochem2021-013
Author(s):  
Erkan Yılmazer ◽  
Murat Kavurmaci ◽  
Sercan Bozan

In this study, a gold exploration index (GEI) that reduces financial expenditure and time losses during exploration studies has been developed using the Analytical Hierarchy Process (AHP) in a region hosting a high-sulfidation epithermal Au deposit. The GEI can be used to predict the location of the target element by evaluating the maps obtained from related element distributions together with a GEI-based prediction map. The hierarchical structure of the index was established based on the geochemistry of the rock samples. The elements used in the design of the hierarchical structure are arsenic (As), silver (Ag), antimony (Sb), copper (Cu), manganese (Mn), lead (Pb) and zinc (Zn), which were determined by correlation analysis and expert opinion. The efficiency scores of the alternatives were converted into prediction maps called GEI-based anomaly distribution maps and compared with maps derived from both GIS-based overlay analysis of the rock samples and the spatial gold distribution. The efficiency scores of the alternatives in these maps were categorized into three groups, "high," "medium," and "weak," in terms of gold potential. Comparison of the results with those derived using Principal Component Analysis (PCA), Weighted Sum (WS) and Weighted Product Model (WPM) methods showed that the produced index yields reliable information that can be used to determine where gold enrichment occurs, especially in high-sulfidation epithermal environments. Supplementary material: https://doi.org/10.6084/m9.figshare.c.5443218
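The AHP core is the extraction of priority weights from a pairwise-comparison matrix via its principal eigenvector, with a consistency ratio to check the expert judgments. A sketch with a hypothetical 3×3 matrix over three pathfinder elements (not the study's actual comparisons):

```python
import numpy as np

# AHP priority weights from a hypothetical pairwise-comparison matrix over
# three pathfinder elements (e.g. As vs Ag vs Sb). The principal eigenvector
# gives the weights; the consistency ratio checks the expert judgments.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                               # normalized priority weights

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)       # consistency index
cr = ci / 0.58                             # random index RI = 0.58 for n = 3
```

A consistency ratio below 0.1 is the conventional threshold for accepting the pairwise judgments; above it, the comparison matrix should be revised before the weights are used to score alternatives.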

