Predicting gold targets using cokriging in SURFER 17

2019 ◽  
Author(s):  
Ricardo A. Valls

Golden Software Inc. included cokriging in the newest version of SURFER 17, opening a new tool for interpreting geochemical data. We can use cokriging in SURFER 17 to improve the quality of maps and to predict similar targets in nearby areas. Cokriging is used when we want to process two related datasets, one of which is always smaller than the other. Here, I first tested the method on a hypothetical geochemical model combining a smaller dataset of fire-assay (FA) gold results with a larger dataset of ICP-MS multi-element results. I then applied the method to real data from a soil sampling project in Mozambique, testing a known mineralized target as well as an extended area in order to predict gold targets. Because I also had the gold results for the extended area, I could confirm the effectiveness of cokriging in predicting the new targets. There are many situations where cokriging can be applied as a prediction tool; one is when an initial sampling campaign returns a group of interesting but isolated gold results. We can then use a cheaper method, such as ICP-MS, to better understand the gold distribution in the area.
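At its core, a cokriging estimate is the solution of a linear system built from direct and cross-covariances of the two datasets. A minimal 1-D simple-cokriging sketch in Python (NumPy) follows; all sample locations, grades, sills, ranges and the cross-correlation are invented for illustration, and this is not SURFER's implementation:

```python
import numpy as np

# Hypothetical 1-D layout: 3 sparse fire-assay (FA) gold samples and 5 dense
# ICP-MS pathfinder (e.g. arsenic) samples.  All locations, values, sills,
# ranges and the cross-correlation below are invented for illustration.
x_au = np.array([0.0, 4.0, 9.0])                 # FA gold sample locations
z_au = np.array([1.2, 3.1, 0.8])                 # gold grades (g/t)
x_as = np.array([0.0, 2.0, 4.0, 6.0, 9.0])       # ICP-MS sample locations
z_as = np.array([15.0, 40.0, 55.0, 30.0, 12.0])  # arsenic (ppm)

def cov(h, sill, rng):
    """Exponential covariance model."""
    return sill * np.exp(-np.abs(h) / rng)

s_au, s_as, rho, rang = 1.0, 1.0, 0.8, 3.0       # assumed variogram parameters
res = np.concatenate([z_au - z_au.mean(), z_as - z_as.mean()])

# Assemble the simple-cokriging system C w = c0 for a target location x0.
x_all = np.concatenate([x_au, x_as])
kind = np.array([0] * len(x_au) + [1] * len(x_as))   # 0 = Au, 1 = As
B = np.array([[s_au, rho * np.sqrt(s_au * s_as)],
              [rho * np.sqrt(s_au * s_as), s_as]])   # coregionalization sills

C = cov(np.abs(x_all[:, None] - x_all[None, :]), B[kind][:, kind], rang)
x0 = 5.0                                             # unsampled point
c0 = cov(np.abs(x_all - x0), B[kind, 0], rang)       # covariances with Au at x0

w = np.linalg.solve(C, c0)                           # cokriging weights
z0 = z_au.mean() + w @ res
print(f"cokriged Au estimate at x = {x0}: {z0:.2f} g/t")
```

The secondary (ICP-MS) samples enter the estimate through the cross-covariance terms, which is precisely what lets a dense, cheap dataset sharpen predictions of the sparse gold variable.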

10.14311/816 ◽  
2006 ◽  
Vol 46 (2) ◽  
Author(s):  
P. Pecherková ◽  
I. Nagy

The success or failure of adaptive control algorithms, especially those designed using the Linear Quadratic Gaussian criterion, depends on the quality of the process data used for model identification. One of the most harmful types of process data corruption is outliers, i.e. 'wrong data' lying far outside the range of the real data. The presence of outliers negatively affects the estimation of the system dynamics, and this effect is magnified when the outliers are grouped into blocks. In this paper, we propose an algorithm for outlier detection and removal. It is based on modelling the corrupted data by a two-component probabilistic mixture: the first component models uncorrupted process data, while the second models outliers. When the outlier component is detected to be active, a prediction from the uncorrupted-data component is computed and used as a reconstruction of the observed data. The resulting reconstruction filter is compared to standard methods on simulated and real data. The filter exhibits excellent properties, especially in the case of blocks of outliers.
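The detect-then-reconstruct idea can be illustrated with a static two-component Gaussian mixture standing in for the paper's dynamic mixture; the process signal, outlier blocks, and the interpolation-based reconstruction below are all invented for illustration (the paper instead predicts from its uncorrupted-data component):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy 1-D process data with two blocks of one-sided outliers.
rng = np.random.default_rng(1)
n = 300
clean = np.sin(np.linspace(0, 6 * np.pi, n)) + 0.1 * rng.normal(size=n)
data = clean.copy()
data[100:110] += 8.0          # a block of outliers
data[200:205] += 7.0          # another block

# Two-component mixture: one component for regular data, one for outliers.
gmm = GaussianMixture(n_components=2, random_state=0).fit(data.reshape(-1, 1))
labels = gmm.predict(data.reshape(-1, 1))

# The component with more support is taken as the "uncorrupted data" model.
clean_comp = np.bincount(labels).argmax()
is_outlier = labels != clean_comp

# Reconstruction: replace each flagged sample with a value predicted from
# neighbouring clean samples (here simple linear interpolation).
idx = np.arange(n)
filtered = data.copy()
filtered[is_outlier] = np.interp(idx[is_outlier],
                                 idx[~is_outlier], data[~is_outlier])

print(f"{is_outlier.sum()} samples flagged, "
      f"max error after filtering: {np.max(np.abs(filtered - clean)):.2f}")
```

Even this crude static version handles the blocks of outliers gracefully, because the reconstruction never uses the corrupted samples themselves.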


1996 ◽  
Vol 33 (9) ◽  
pp. 101-108 ◽  
Author(s):  
Agnès Saget ◽  
Ghassan Chebbo ◽  
Jean-Luc Bertrand-Krajewski

The first flush phenomenon of urban wet weather discharges is presently a controversial subject. Scientists disagree both about its reality and about its influence on the sizing of treatment works, and those disagreements mainly result from the unclear definition of the phenomenon. The objective of this article is first to provide a simple and clear definition of the first flush, and then to apply it to real data to obtain results about its frequency of occurrence. The data originate from the French database on the quality of urban wet weather discharges. We use 80 events from 7 separately sewered basins and 117 events from 7 combined sewered basins. The main result is that the first flush phenomenon is very scarce, in any case too scarce to be used as the basis of a treatment strategy against pollution generated by urban wet weather discharges.
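One common way to make the notion precise is through the dimensionless mass-volume curve of an event: the cumulative fraction of pollutant mass plotted against the cumulative fraction of runoff volume. The hydrograph and pollutograph below are invented, and the 30/80 threshold is only one published criterion, not necessarily the definition adopted in this article:

```python
import numpy as np

# Invented storm event: runoff rates and pollutant concentrations per step.
flow = np.array([2.0, 5.0, 9.0, 7.0, 4.0, 2.0, 1.0])   # runoff (arbitrary units)
conc = np.array([80.0, 60.0, 30.0, 15.0, 10.0, 8.0, 6.0])  # concentration (mg/L)

mass = flow * conc
V = np.cumsum(flow) / flow.sum()    # cumulative volume fraction
M = np.cumsum(mass) / mass.sum()    # cumulative mass fraction

# One frequently used criterion declares a first flush when at least 80% of
# the pollutant mass is carried by the first 30% of the volume ("30/80").
m_at_30 = np.interp(0.30, V, M)
print(f"mass fraction in first 30% of volume: {m_at_30:.2f}")
print("first flush (30/80 rule):", m_at_30 >= 0.80)
```

In this invented event the early concentrations are high, yet only about 58% of the mass arrives in the first 30% of the volume, so the strict 30/80 criterion is not met, which mirrors the article's finding that strong first flushes are rare.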


2021 ◽  
Vol 15 (4) ◽  
pp. 1-20
Author(s):  
Georg Steinbuss ◽  
Klemens Böhm

Benchmarking unsupervised outlier detection is difficult. Outliers are rare, and existing benchmark data contain outliers with varied and unknown characteristics. Fully synthetic data usually consist of outliers and regular instances with clear characteristics and thus, in principle, allow for a more meaningful evaluation of detection methods. Nonetheless, there have been only a few attempts to include synthetic data in benchmarks for outlier detection. This might be due to the imprecise notion of outliers or to the difficulty of arriving at a good coverage of different domains with synthetic data. In this work, we propose a generic process for generating datasets for such benchmarking. The core idea is to reconstruct regular instances from existing real-world benchmark data while generating outliers so that they exhibit insightful characteristics. We describe three instantiations of this generic process that generate outliers with specific characteristics, such as local outliers. To validate the process, we perform a benchmark with state-of-the-art detection methods and carry out experiments to study the quality of the data reconstructed in this way. Next to showcasing the workflow, this confirms the usefulness of the proposed process; in particular, it yields regular instances close to those from real data. Summing up, we propose and validate a new and practical process for the benchmarking of unsupervised outlier detection.
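As a toy instance of such a generate-then-benchmark process, one can reconstruct regular instances with a density model fitted to real benchmark data and then plant synthetic outliers with a chosen characteristic. The sketch below uses a Gaussian mixture and uniform "global" outliers on the Iris data; these are assumptions for illustration, not the authors' exact instantiations:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import LocalOutlierFactor

# Step 1: reconstruct regular instances from real benchmark data.
X = load_iris().data
gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
regular, _ = gmm.sample(200)                 # reconstructed regular instances

# Step 2: plant outliers with a known characteristic ("global" outliers,
# sampled uniformly from a bounding box inflated around the data).
rng = np.random.default_rng(0)
lo, hi = X.min(axis=0), X.max(axis=0)
span = hi - lo
outliers = rng.uniform(lo - span, hi + span, size=(10, X.shape[1]))

# Step 3: benchmark a detector against the now-known ground truth.
bench = np.vstack([regular, outliers])
truth = np.r_[np.zeros(len(regular)), np.ones(len(outliers))]  # 1 = outlier

scores = -LocalOutlierFactor(n_neighbors=20).fit(bench).negative_outlier_factor_
hits = truth[np.argsort(scores)[-10:]].sum()
print(f"top-10 LOF scores contain {int(hits)} of the 10 planted outliers")
```

The appeal of the construction is exactly what the abstract argues: the ground truth and the outlier characteristics are known by design, so the evaluation of the detector is unambiguous.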


2021 ◽  
Vol 13 (3) ◽  
pp. 1-19
Author(s):  
Sreelakshmy I. J. ◽  
Binsu C. Kovoor

Image inpainting is a technique in image editing where missing portions of an image are estimated and filled in using available or external information. The proposed model implements a novel hybrid inpainting algorithm that adds the benefits of diffusion-based inpainting to an enhanced exemplar-based algorithm. The structure part of the image is handled with a diffusion-based method, followed by an adaptive patch size-based exemplar inpainting. Due to its hybrid nature, the proposed model exceeds the output quality obtained by applying either conventional method individually. A new term, the coefficient of smoothness, is introduced and used to compute the adaptive patch size for the enhanced exemplar method. An automatic mask generation module relieves the user from the burden of creating an additional mask input. Quantitative and qualitative evaluations are performed on images from various datasets. The results confirm that the proposed model is faster on smooth images, and that it produces good quality results when inpainting natural images containing both texture and structure regions.
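The diffusion stage of such a hybrid can be sketched as iterative neighbourhood averaging restricted to the hole, keeping known pixels fixed; the tiny image, mask and iteration count below are illustrative choices, not the proposed model:

```python
import numpy as np

# Tiny smooth toy image (a linear ramp) with a square "missing" region.
img = np.linspace(0, 1, 9 * 9).reshape(9, 9)
mask = np.zeros_like(img, dtype=bool)
mask[3:6, 3:6] = True                          # hole to be inpainted
corrupted = img.copy()
corrupted[mask] = 0.0

def diffuse_inpaint(im, mask, n_iter=500):
    """Heat-equation style inpainting: average each masked pixel over its
    4-neighbourhood repeatedly, leaving known pixels untouched."""
    out = im.copy()
    for _ in range(n_iter):
        avg = 0.25 * (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
                      np.roll(out, 1, 1) + np.roll(out, -1, 1))
        out[mask] = avg[mask]                  # diffuse only into the hole
    return out

restored = diffuse_inpaint(corrupted, mask)
err = np.abs(restored - img)[mask].max()
print(f"max reconstruction error inside the hole: {err:.4f}")
```

Diffusion propagates smooth structure perfectly here because the toy image is linear; it is precisely on textured regions that it blurs, which is why the hybrid model hands those regions to the exemplar stage.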


Toxics ◽  
2021 ◽  
Vol 9 (6) ◽  
pp. 133
Author(s):  
Ana Macías-Montes ◽  
Manuel Zumbado ◽  
Octavio P. Luzardo ◽  
Ángel Rodríguez-Hernández ◽  
Andrea Acosta-Dacal ◽  
...  

Dry feed for pets lacks specific legislation regarding maximum residue limits for inorganic elements. The aim of the present study was to determine the content of 43 inorganic elements in dog and cat feed, examining whether there were differences according to the supposed quality of the food and performing a health risk assessment. Thirty-one and thirty packages of pelleted dry food for cats and dogs, respectively, were analyzed. After acidic microwave-assisted digestion, elements were detected and quantified by Inductively Coupled Plasma-Mass Spectrometry (ICP-MS). In general, we did not observe important differences in element content according to the supposed quality of the brand. Among the trace elements, selenium and manganese were above the dietary reference value. Arsenic and mercury showed the highest acute hazard indexes, making them risk factors for the health of dogs and cats. Aluminum, uranium, antimony and vanadium contents were above the toxic reference value and also showed high acute hazard indexes. It is necessary to improve the legislation on pet food safety, both for the animals' health and to protect the rights of consumers.
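The hazard-index reasoning behind such statements reduces to simple arithmetic: an estimated daily intake (EDI) of an element divided by a toxic reference value. All numbers in this sketch are assumed placeholders, not values from the study:

```python
# Illustrative hazard-index calculation (all values are assumptions, not
# measurements from the study).
conc_mg_per_kg = 0.5      # element concentration in the feed (mg/kg)
feed_g_per_day = 60.0     # daily feed intake of the animal (g/day)
body_weight_kg = 4.0      # e.g. an adult cat
ref_dose = 0.003          # toxic reference value (mg/kg bw/day), assumed

# Estimated daily intake per kg of body weight.
edi = conc_mg_per_kg * (feed_g_per_day / 1000.0) / body_weight_kg
hazard_index = edi / ref_dose
print(f"EDI = {edi:.4f} mg/kg bw/day, hazard index = {hazard_index:.2f}")
```

A hazard index above 1 means the estimated intake exceeds the reference value, which is the sense in which elements such as arsenic and mercury are flagged as risk factors.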


2021 ◽  
Vol 15 (4) ◽  
pp. 10-23
Author(s):  
Eka Chandra Ramdhani ◽  
Juniarti Eka Safitri ◽  
Selamat Abdurrahman Fahmi ◽  
Asep Asep

The inventory system plays a very important role in a company, and such systems have been widely developed in many settings with various technologies. The problems at PT. Sanghiang Perkasa arise because data have not been stored in well-organized files, and inventory data are still managed and processed in a conventional way, which significantly affects the quality of the data and information produced. The main objective of this research is to produce a robust inventory system that meets the needs of its users. The system was developed with the waterfall method, which consists of six stages: systems analysis and design, software requirements analysis, system design, coding, system testing, and maintenance. The system was built using the PHP programming language and a MySQL database. It is hoped that implementing this inventory system at PT. Sanghiang Perkasa will make it easier to store and process data and information such as stock-taking data, incoming and outgoing goods transactions, purchase and sales return data, customer and supplier data, and product stock and assembly reports. Keywords: Information System; Inventory; Web


Minerals ◽  
2020 ◽  
Vol 10 (2) ◽  
pp. 106
Author(s):  
Xing-Yuan Li ◽  
Jing-Ru Zhang ◽  
Chun-Kit Lai

Jiangxi Province (South China) is one of the world's top tungsten (W) mineral provinces. In this paper, we present new LA-ICP-MS zircon U-Pb age and Hf isotope data on the W ore-related Xianglushan granite in northern Jiangxi Province. The magmatic zircon grains (with high Th/U values) yielded an early Cretaceous weighted mean U-Pb age of 125 ± 1 Ma (MSWD = 2.5, 2σ). Zircon εHf(t) values of the Xianglushan granite are higher (−6.9 to −4.1, avg. −5.4 ± 0.7) than those of the W ore-related Xihuashan granite in southern Jiangxi Province (−14.9 to −11.2, avg. −12.5 ± 0.9), implying different sources for the W ore-forming magmas in northern and southern Jiangxi Province. Compiling published zircon geochemical data, we calculated the oxygen fugacity (fO2) of the late Yanshanian granitic magmas in Jiangxi Province (the Xianglushan, Ehu, Dahutang, and Xihuashan plutons) using different interpolation methods. In contrast to the W ore-barren Ehu granitic magma, the low fO2 of the Xianglushan granitic magma may have caused W enrichment and mineralization, whilst high fO2 may have led to the coexistence of Cu and W mineralization in the Dahutang pluton. Additionally, our study suggests that the absence of late Mesozoic Cu-Mo mineralization in the Zhejiang, Jiangxi, and Anhui Provinces (Zhe-Gan-Wan region) was probably related to low-fO2 magmatism in the Cretaceous.
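A weighted mean age with its MSWD, as quoted above, is computed from individual spot analyses and their uncertainties; the ages and 1-sigma errors in this sketch are invented, not the Xianglushan analyses:

```python
import numpy as np

# Invented zircon U-Pb spot dates (Ma) with 1-sigma errors (Ma).
ages = np.array([124.1, 125.6, 126.0, 124.8, 125.3])
errs = np.array([0.9, 1.1, 1.0, 0.8, 1.2])

# Inverse-variance weighted mean, its 1-sigma error, and the MSWD
# (mean square of weighted deviates) as commonly reported for U-Pb data.
w = 1.0 / errs**2
mean = np.sum(w * ages) / np.sum(w)
mean_err = np.sqrt(1.0 / np.sum(w))
mswd = np.sum(w * (ages - mean) ** 2) / (len(ages) - 1)
print(f"weighted mean = {mean:.2f} ± {mean_err:.2f} Ma (1σ), MSWD = {mswd:.2f}")
```

An MSWD near 1 indicates scatter consistent with the stated analytical errors; values well above 1 (such as the 2.5 reported here) suggest some excess geological or analytical scatter.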


BMJ Open ◽  
2021 ◽  
Vol 11 (10) ◽  
pp. e047524
Author(s):  
Claire M Nolan ◽  
Jessica A Walsh ◽  
Suhani Patel ◽  
Ruth E Barker ◽  
Oliver Polgar ◽  
...  

Introduction: Pulmonary rehabilitation (PR), an exercise and education programme for people with chronic lung disease, aims to improve exercise capacity, breathlessness and quality of life. Most evidence to support PR is from trials that use specialist exercise equipment, for example, treadmills (PR-gym). However, a significant proportion of programmes do not have access to specialist equipment, with training completed using minimal exercise equipment (PR-min). There is a paucity of robust literature examining the efficacy of supervised, centre-based PR-min. We aim to determine whether an 8-week supervised, centre-based PR-min programme is non-inferior to a standard 8-week supervised, centre-based PR-gym programme in terms of exercise capacity and health outcomes for patients with chronic lung disease.

Methods and analysis: Parallel, two-group, assessor-blinded and statistician-blinded, non-inferiority randomised trial. 436 participants will be randomised using minimisation at the individual level with a 1:1 allocation to PR-min (intervention) or PR-gym (control). Assessment will take place pre-PR (visit 1), post-PR (visit 2) and 12 months following visit 1 (visit 3). Exercise capacity (incremental shuttle walk test), dyspnoea (Chronic Respiratory Questionnaire (CRQ)-Dyspnoea), health-related quality of life (CRQ), frailty (Short Physical Performance Battery), muscle strength (isometric quadriceps maximum voluntary contraction), patient satisfaction (Global Rating of Change Questionnaire), health economic data, and safety and trial process data will be measured. The primary outcome is change in exercise capacity between visit 1 and visit 2. Two-sample t-tests on an intention-to-treat basis will be used to estimate the difference in mean primary and secondary outcomes between patients randomised to PR-gym and PR-min.

Ethics and dissemination: London-Camden and Kings Cross Research Ethics Committee and Health Research Authority have approved the study (18/LO/0315). Results will be submitted for publication in peer-reviewed journals, presented at international conferences, disseminated through social media and patient and public routes, and shared directly with stakeholders.

Trial registration number: ISRCTN16196765.
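The planned primary analysis, a two-sample t-test on change in exercise capacity, can be sketched as follows; the simulated walking-distance changes, sample sizes and the confidence interval construction are illustrative, and the trial's actual non-inferiority margin is not stated in the abstract:

```python
import numpy as np
from scipy import stats

# Simulated changes in incremental shuttle walk test distance (metres);
# these are placeholders, not trial results.
rng = np.random.default_rng(42)
delta_gym = rng.normal(50, 60, size=100)   # control arm (PR-gym)
delta_min = rng.normal(45, 60, size=100)   # intervention arm (PR-min)

# Two-sample t-test and a 95% CI for the difference in means.
t, p = stats.ttest_ind(delta_min, delta_gym)
diff = delta_min.mean() - delta_gym.mean()
se = np.sqrt(delta_min.var(ddof=1) / 100 + delta_gym.var(ddof=1) / 100)
ci = stats.t.interval(0.95, df=198, loc=diff, scale=se)
print(f"mean difference = {diff:.1f} m, "
      f"95% CI = ({ci[0]:.1f}, {ci[1]:.1f}), p = {p:.3f}")
```

In a non-inferiority design, the conclusion rests on whether the confidence interval for the difference excludes the prespecified margin, rather than on the p-value alone.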


2014 ◽  
Vol 37 (1) ◽  
pp. 141-157 ◽  
Author(s):  
Mariusz Łapczyński ◽  
Bartłomiej Jefmański

Making more accurate marketing decisions requires building effective predictive models. Typically, these models specify the probability of a customer belonging to a particular category, group or segment. In analytical CRM, these categories refer to customers interested in starting cooperation with the company (acquisition models), customers who purchase additional products (cross- and up-sell models), or customers intending to end the cooperation (churn models). When building predictive models, researchers use analytical tools from various disciplines, with an emphasis on best performance. This article attempts to build a hybrid predictive model combining decision trees (the C&RT algorithm) and cluster analysis (k-means). In the experiments, five different cluster validity indices and eight datasets were used. The performance of the models was evaluated using popular measures such as accuracy, precision, recall, G-mean, F-measure, and lift in the first and second deciles. The authors also tried to find a connection between the number of clusters and model quality.
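One concrete instantiation of such a hybrid, clustering first and then fitting a tree per cluster, can be sketched with scikit-learn, whose DecisionTreeClassifier is a CART-style stand-in for the C&RT algorithm; the dataset and the choice of three clusters are illustrative assumptions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic "customer" data standing in for a CRM dataset.
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: segment the customers with k-means.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_tr)

# Stage 2: fit a separate decision tree inside each cluster.
trees = {c: DecisionTreeClassifier(max_depth=4, random_state=0)
            .fit(X_tr[km.labels_ == c], y_tr[km.labels_ == c])
         for c in range(3)}

# Prediction: route each test customer to its cluster's tree.
test_clusters = km.predict(X_te)
pred = np.array([trees[c].predict(x.reshape(1, -1))[0]
                 for c, x in zip(test_clusters, X_te)])
print(f"hybrid accuracy: {accuracy_score(y_te, pred):.2f}")
```

Varying `n_clusters` (and scoring each choice with a cluster validity index) is the natural way to probe the connection between cluster count and model quality that the article investigates.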


Author(s):  
Júlio Hoffimann ◽  
Maciel Zortea ◽  
Breno de Carvalho ◽  
Bianca Zadrozny

Statistical learning theory provides the foundation for applied machine learning and its many successful applications in computer vision, natural language processing and other scientific domains. The theory, however, does not take into account the unique challenges of performing statistical learning in geospatial settings. For instance, it is well known that model errors cannot be assumed to be independent and identically distributed in geospatial (a.k.a. regionalized) variables due to spatial correlation, and trends caused by geophysical processes lead to covariate shifts between the domain where the model was trained and the domain where it will be applied, which in turn harms classical learning methodologies that rely on random samples of the data. In this work, we introduce the geostatistical (transfer) learning problem and illustrate the challenges of learning from geospatial data by assessing widely used methods for estimating the generalization error of learning models under covariate shift and spatial correlation. Experiments with synthetic Gaussian process data, as well as with real data from geophysical surveys in New Zealand, indicate that none of the methods are adequate for model selection in a geospatial context. We provide general guidelines regarding the choice of these methods in practice while new methods are actively researched.
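The core pitfall can be reproduced in a few lines: on spatially smooth synthetic data (made up here, not the paper's Gaussian-process simulations or the New Zealand surveys), random k-fold cross-validation places training points right next to test points and so looks far more optimistic than holding out a spatial block:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score

# Synthetic regionalized variable: a smooth function of 2-D coordinates plus
# noise, so nearby samples are strongly correlated.
rng = np.random.default_rng(0)
n = 400
coords = rng.uniform(0, 10, size=(n, 2))
y = np.sin(coords[:, 0]) * np.cos(coords[:, 1]) + 0.1 * rng.normal(size=n)
X = coords

model = RandomForestRegressor(n_estimators=50, random_state=0)

# Random k-fold CV: optimistic, because train and test points interleave.
mse_random = -cross_val_score(
    model, X, y, cv=KFold(5, shuffle=True, random_state=0),
    scoring="neg_mean_squared_error").mean()

# Crude spatial block split: hold out an entire strip of the map, which
# mimics predicting in a region not covered by training samples.
strip = X[:, 0] > 8.0
model.fit(X[~strip], y[~strip])
mse_block = np.mean((model.predict(X[strip]) - y[strip]) ** 2)

print(f"random-CV MSE = {mse_random:.3f}, held-out-strip MSE = {mse_block:.3f}")
```

The gap between the two error estimates is exactly the kind of optimism the paper documents for classical error-estimation methods under spatial correlation and covariate shift.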

