Estimating the Number of Undiscovered Deposits

Author(s):  
Donald Singer
W. David Menzie

The third part of three-part assessments is the estimate of some fixed but unknown number of deposits of each type that exist in the delineated tracts. Until the area being considered is thoroughly and extensively drilled, this fixed number of undiscovered deposits, which could be any number including zero, will not be known with certainty. This number of deposits has meaning only in terms of a grade-and-tonnage model. If this requirement did not exist, any wisp of minerals could be considered worthy of estimation, and even in small regions, we would need to estimate millions of “deposits.” For example, it is not difficult to imagine tens of thousands of fist-sized skarn copper “deposits” in parts of the western United States—even in this example, we have used “deposit” size to provide important information. In another example, Wilson et al. (1996) estimated five or more epithermal gold vein deposits at the 90 percent level but provided no grade-and-tonnage model, so these estimated deposits could be any size. To provide critical information to decision-makers, the grade-and-tonnage model is key, and the estimated number of deposits that might exist must be stated in terms of the grade-and-tonnage frequency distributions. In three-part assessments, the parts and estimates are internally consistent in that delineated tracts are consistent with descriptive models, grade-and-tonnage models are consistent with descriptive models and with known deposits in the area, and estimates of number of deposits are consistent with grade-and-tonnage models. Considerable care must be exercised in quantitative resource assessments to prevent the introduction of biased estimates of undiscovered resources. Biases can be introduced into these estimates either by a flawed grade-and-tonnage model or by a lack of consistency between the grade-and-tonnage model and the number-of-deposits estimates. For this reason, consistency of the number-of-deposits estimates with the grade-and-tonnage models is the most important guideline. Issues about consistency of mineral deposit models are discussed in chapters 3 through 6. Grade-and-tonnage models (chapter 6), which are the first part of three-part assessments, are of particular concern. In this chapter, the focus is on making unbiased estimates of the number of undiscovered deposits.
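As an illustration of how such estimates are used downstream, the sketch below converts hypothetical elicited estimates of the number of undiscovered deposits at the 90-, 50-, and 10-percent levels (the style of the Wilson et al. estimate cited above) into an expected number by interpolating the implied complementary cumulative distribution. This is only one reasonable interpretation of such estimates, not the authors' estimator; the probability levels and counts used here are assumptions for illustration.

```python
import numpy as np

def expected_number_of_deposits(elicited):
    """Expected number of undiscovered deposits from elicited quantiles.

    `elicited` maps probability levels to counts, e.g. {0.9: 1, 0.5: 2, 0.1: 5}
    meaning "at least 1 deposit with 90 percent probability", and so on.
    Counts are assumed to increase as the probability level falls.
    """
    levels = sorted(elicited.keys(), reverse=True)   # e.g. [0.9, 0.5, 0.1]
    probs = [1.0] + levels                           # P(N >= 0) = 1
    counts = [0] + [elicited[p] for p in levels]
    # E[N] = sum over k >= 1 of P(N >= k); interpolate P(N >= k) linearly
    # between the elicited points and ignore the tail beyond the last count.
    ks = np.arange(1, counts[-1] + 1)
    return float(np.interp(ks, counts, probs).sum())

# Hypothetical elicitation: at least 1 deposit at 90%, 2 at 50%, 5 at 10%.
print(expected_number_of_deposits({0.9: 1, 0.5: 2, 0.1: 5}))   # about 2.1
```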

Author(s):  
Donald Singer
W. David Menzie

Mineral deposit models are important in quantitative resource assessments for two reasons: (1) grades and tonnages of most deposit types are significantly different (Singer, Cox, and Drew, 1975; Singer and Kouda, 2003), and (2) deposit types occur in different geologic settings that can be identified from geologic maps. If assessments were conducted only to estimate amounts of undiscovered metals, we would need contained metal models, but determining whether the metals might be economic to recover is an important quality of most assessments, and grades and tonnages are necessary to estimate the economic viability of mineral deposits (see chapter 5). In this chapter, we focus on the first part of three-part assessments: grade-and-tonnage models. Too few thoroughly explored mineral deposits are available in most areas being assessed for reliable identification of the important geoscience variables or for robust estimation of undiscovered deposits, so we need generalized mineral deposit models. Well-designed and well-constructed grade-and-tonnage models allow mineral economists to determine the possible economic viability of the resources in the region and provide the foundation for planning. Thus, mineral deposit models play the central role in transforming geoscience information into a form useful to policy-makers. Grade-and-tonnage models are fundamental in the development of other kinds of models such as deposit-density models and economic filters. Frequency distributions of tonnages and average grades of well-explored deposits of each type are employed as models for grades and tonnages of undiscovered deposits of the same type in geologically similar settings. Grade-and-tonnage models (Cox and Singer, 1986; Mosier and Page, 1988; Bliss, 1992a, 1992b; Cox et al., 2003; Singer, Berger, and Moring, 2008) combined with estimates of the number of undiscovered deposits are the fundamental means of translating geologists’ resource assessments into a language that decision-makers can use. For example, creation of a grade-and-tonnage model for rhyolite-hosted Sn deposits in 1986 demonstrated for the first time that 90 percent of such deposits contain less than 4,200 tons of ore. This made it clear that an ongoing research project by the U.S. Geological Survey on this deposit type could have no effect on domestic supplies of tin, and the project was cancelled.
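The following sketch illustrates, with entirely hypothetical deposit data, how a grade-and-tonnage model summarizes the frequency distributions of tonnage and average grade as percentiles, the same form used in statements such as "90 percent of deposits contain less than 4,200 tons of ore."

```python
import numpy as np

# A minimal sketch of how a grade-and-tonnage model is summarized.
# The deposit data below are hypothetical, for illustration only.
tonnage_t = np.array([8.0e5, 9.4e5, 1.2e6, 1.8e6, 3.5e6, 4.3e6, 5.6e6, 2.1e7])   # tons of ore
cu_grade_pct = np.array([0.45, 0.55, 0.62, 0.70, 0.80, 0.85, 0.95, 1.10])        # average Cu grade

# Tonnages of most deposit types are roughly lognormal, so models are usually
# reported as selected percentiles of the frequency distribution.
for name, values in (("tonnage, t", tonnage_t), ("Cu grade, %", cu_grade_pct)):
    p10, p50, p90 = np.percentile(values, [10, 50, 90])
    print(f"{name}: 90% of deposits are below {p90:.3g} "
          f"(median {p50:.3g}, 10th percentile {p10:.3g})")
```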


Author(s):  
Donald Singer
W. David Menzie

Now that all of the fundamental parts of a quantitative mineral resource assessment have been discussed, it is useful to reflect on why all of the work has been done. As mentioned in chapter 1, it is quite easy to generate an assessment of the “potential” for undiscovered mineral resources. Aside from the question of what, if anything, “potential” means, there is the more serious question of whether a decision-maker has any use for it. The three-part form of assessment is part of a system designed to respond to the needs of decision-makers. Although many challenging ideas are presented in this book, it has a different purpose than most academic reports. This book has the same goal as Allais (1957)—to provide information useful to decision-makers. Unfortunately, handing a decision-maker a map with some tracts outlined and frequency distributions of some tonnages and grades, together with estimates of the number of deposits that might exist and their associated probabilities, is not really helpful—these need to be converted into a language understandable to others. This chapter summarizes how these various estimates can be combined and put in more useful forms. If assessments were conducted only to estimate amounts of undiscovered metals, we would need contained metal models and estimates of the number of undiscovered deposits. Grades are simply the ratio of contained metal to tons of ore (chapter 6), so contained metal estimates are available for each deposit. In the simplest of all cases, one could estimate the expected number of deposits with equation 8.1 (see chapter 8) and multiply it by the expected amount of metal per deposit, such as the 27,770 tons of copper in table 9.1, to make an estimate of the expected amount of undiscovered metal. As pointed out in chapter 1, expected amounts of resources or their values can be very misleading because they provide no information about how unrepresentative the expected value can be for the skewed frequency distributions that are common in mineral resources; that is, uncertainty is ignored.
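The point about skewed distributions can be illustrated with a small Monte Carlo sketch. It draws the deposit count from a Poisson distribution and the contained copper per deposit from a lognormal distribution; both distributional choices and all parameter values are assumptions for illustration rather than the book's simulation method, although the 27,770-ton mean echoes the copper example above.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed inputs for illustration only.
expected_n = 2.1                       # expected number of undiscovered deposits
mean_cu_per_deposit = 27_770.0         # mean tons of contained copper per deposit
sigma = 1.5                            # lognormal shape: strong right skew
mu = np.log(mean_cu_per_deposit) - 0.5 * sigma**2   # makes the lognormal mean match

n_trials = 100_000
totals = np.empty(n_trials)
for i in range(n_trials):
    n = rng.poisson(expected_n)                          # simulated deposit count
    totals[i] = rng.lognormal(mu, sigma, size=n).sum()   # total contained Cu, tons

expected_total = expected_n * mean_cu_per_deposit
print(f"expected total Cu:      {expected_total:12,.0f} t")
print(f"median simulated total: {np.median(totals):12,.0f} t")
print(f"P(total >= expected):   {np.mean(totals >= expected_total):12.2f}")
# Because the distribution is skewed, the median falls well below the expected
# value, so reporting only the expected amount hides how often less metal exists.
```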


2016
Vol 5 (1)
Author(s):
Dean Eckles
Brian Karrer
Johan Ugander

Estimating the effects of interventions in networks is complicated due to interference, such that the outcomes for one experimental unit may depend on the treatment assignments of other units. Familiar statistical formalism, experimental designs, and analysis methods assume the absence of this interference, and result in biased estimates of causal effects when it exists. While some assumptions can lead to unbiased estimates, these assumptions are generally unrealistic in the context of a network and often amount to assuming away the interference. In this work, we evaluate methods for designing and analyzing randomized experiments under minimal, realistic assumptions compatible with broad interference, where the aim is to reduce bias and possibly overall error in estimates of average effects of a global treatment. In design, we consider the ability to perform random assignment to treatments that is correlated in the network, such as through graph cluster randomization. In analysis, we consider incorporating information about the treatment assignment of network neighbors. We prove sufficient conditions for bias reduction through both design and analysis in the presence of potentially global interference; these conditions also give lower bounds on treatment effects. Through simulations of the entire process of experimentation in networks, we measure the performance of these methods under varied network structure and varied social behaviors, finding substantial bias reductions and, despite a bias–variance tradeoff, error reductions. These improvements are largest for networks with more clustering and data generating processes with both stronger direct effects of the treatment and stronger interactions between units.
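A minimal sketch of the design side of this approach, graph cluster randomization, is shown below using a stand-in synthetic network and an off-the-shelf community detection routine; it is not the authors' exact procedure, and the exposure rule at the end is only one simple example of using neighbors' assignments in analysis.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)

# Stand-in network; any graph with local clustering would do for illustration.
G = nx.connected_watts_strogatz_graph(n=1000, k=10, p=0.1, seed=0)

# Design: partition the graph into clusters, then randomize treatment at the
# cluster level so that neighbors tend to share an assignment.
clusters = list(nx.algorithms.community.label_propagation_communities(G))
z = {}                                   # node -> treatment indicator
for cluster in clusters:
    treat = int(rng.random() < 0.5)      # one coin flip per cluster
    for node in cluster:
        z[node] = treat

# Analysis: one simple use of neighbors' assignments is to keep only nodes
# whose neighbors all share the node's own condition ("full exposure").
def frac_neighbors_treated(node):
    nbrs = list(G[node])
    return float(np.mean([z[v] for v in nbrs])) if nbrs else float(z[node])

fully_exposed = [v for v in G if frac_neighbors_treated(v) == z[v]]
print(f"{len(fully_exposed)} of {G.number_of_nodes()} nodes are fully exposed "
      f"to their own condition")
```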


2016
Vol 9 (2)
pp. 547
Author(s):
Marta Beatriz García-Moreno
Susana García-Moreno
Juan Jose Nájera-Sánchez
Carmen De Pablos-Heredero

Purpose: To describe the factors that facilitate the adoption of e-business in firms, and to examine in depth the factors, resources, and capabilities that need to be present in firms seeking to improve their levels of e-business adoption.
Design/methodology/approach: Analysis of the literature on the main theories of business administration, and more specifically on those related to technology innovation (TI) and information systems (IS), as applied to the organizational factors that explain the adoption of e-business.
Findings: The study identifies three main sources of influence. A first group covers the characteristics of the firm itself, which refer to the organisation’s specific features: firm size, the backing of top management, expected benefit, age, the level of human capital, and international projection. A second group of factors includes technology-related characteristics. The third group contains those aspects of the environment that may affect the firm’s attitude to e-business.
Research limitations/implications: The chosen variables play a significant role according to a review of the studies on the subject, but not all potential variables have been included. The variables were chosen in view of the large number of studies that have reported conclusive results.
Practical implications: The model presented is designed to enable both scholars in this field and decision-makers in strategic matters to reflect upon those aspects that may drive the adoption of e-business, and thereby help them to make more informed decisions on the matter.
Social implications: In highly competitive industries, firms need to keep themselves permanently up to speed with technological advances and strategic innovations.
Originality/value: This is the first study that considers three different perspectives: the organizational, the technological, and the environmental.


2019
Vol 27 (4)
pp. 556-571
Author(s):  
Laurence Brandenberger

Relational event models are becoming increasingly popular in modeling the temporal dynamics of social networks. Because they combine survival analysis with network model terms, standard methods of assessing model fit are not suitable for determining whether the models are specified well enough to prevent biased estimates. This paper tackles this problem by presenting a simple procedure for model-based simulations of relational events. Predictions are made based on survival probabilities and can be used to simulate new event sequences. Comparing these simulated event sequences to the original event sequence allows for in-depth model comparisons (covering parameter as well as model specifications) and for testing whether the model can replicate network characteristics sufficiently well to allow for unbiased estimates.
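A minimal sketch of such model-based simulation is given below. It assumes a fitted relational event model reduces to a linear predictor over candidate dyads (here only two hypothetical statistics, inertia and reciprocity, with made-up coefficients) and draws each next event with probability proportional to its hazard; this mirrors the idea of simulating from survival probabilities but is not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical fitted coefficients; in a real application these come from the
# estimated relational event model.
n_actors = 20
beta_inertia, beta_recip = 0.8, 0.5

senders, receivers = np.where(~np.eye(n_actors, dtype=bool))   # all directed dyads

def simulate_sequence(n_events):
    past = np.zeros((n_actors, n_actors))    # counts of past i -> j events
    events = []
    for _ in range(n_events):
        inertia = past[senders, receivers]   # repetition of the same dyad
        recip = past[receivers, senders]     # reciprocation of earlier events
        hazard = np.exp(beta_inertia * np.log1p(inertia) + beta_recip * np.log1p(recip))
        p = hazard / hazard.sum()            # conditional probability of each dyad
        k = rng.choice(len(p), p=p)          # draw the next event
        i, j = senders[k], receivers[k]
        past[i, j] += 1
        events.append((int(i), int(j)))
    return events

# Fifty replicate sequences; comparing their network statistics with the
# observed sequence gives the kind of model check described above.
simulated = [simulate_sequence(200) for _ in range(50)]
print(len(simulated), "simulated sequences of", len(simulated[0]), "events")
```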


2015
Vol 4 (1)
Author(s):
Johan Zetterqvist
Arvid Sjölander

A common goal of epidemiologic research is to study the association between a certain exposure and a certain outcome, while controlling for important covariates. This is often done by fitting a restricted mean model for the outcome, as in generalized linear models (GLMs) and in generalized estimating equations (GEEs). If the covariates are high-dimensional, then it may be difficult to specify the model well. This is an important concern, since model misspecification may lead to biased estimates. Doubly robust estimation is an estimation technique that offers some protection against model misspecification. It utilizes two models, one for the outcome and one for the exposure, and produces unbiased estimates of the exposure-outcome association if either model is correct, not necessarily both. Despite its obvious appeal, doubly robust estimation is not used on a regular basis in applied epidemiologic research. One reason for this could be the lack of up-to-date software. In this paper, we describe a new R package for doubly robust estimation.
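As a minimal illustration of the idea (not the specific estimator or software described in the paper), the sketch below computes an augmented inverse-probability-weighted (AIPW) estimate on simulated data, combining an outcome model and an exposure model so that the estimate remains consistent if either one is correctly specified.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)

# Simulated data; all names and values here are hypothetical.
n = 5000
x = rng.normal(size=(n, 3))                                   # covariates
a = rng.binomial(1, 1.0 / (1.0 + np.exp(-x[:, 0])))           # exposure
y = 2.0 * a + x @ np.array([1.0, -0.5, 0.3]) + rng.normal(size=n)   # outcome

ps = LogisticRegression().fit(x, a).predict_proba(x)[:, 1]    # exposure model
m1 = LinearRegression().fit(x[a == 1], y[a == 1]).predict(x)  # outcome model, A = 1
m0 = LinearRegression().fit(x[a == 0], y[a == 0]).predict(x)  # outcome model, A = 0

# Augmented inverse-probability weighting: consistent if either the outcome
# model or the exposure model is correct.
aipw1 = m1 + a * (y - m1) / ps
aipw0 = m0 + (1 - a) * (y - m0) / (1 - ps)
print(f"doubly robust estimate of the exposure effect: {np.mean(aipw1 - aipw0):.3f}")
```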


2016
Vol 30 (4)
pp. 31-56
Author(s):
Christian Dustmann
Uta Schönberg
Jan Stuhler

We classify the empirical literature on the wage impact of immigration into three groups, where studies in the first two groups estimate different relative effects, and studies in the third group estimate the total effect of immigration on wages. We interpret the estimates obtained from the different approaches through the lens of the canonical model to demonstrate that they are not comparable. We then relax two key assumptions in this literature, allowing for inelastic and heterogeneous labor supply elasticities of natives and the "downgrading" of immigrants. “Downgrading” occurs when the position of immigrants in the labor market is systematically lower than the position of natives with the same observed education and experience levels. Downgrading means that immigrants receive lower returns to the same measured skills than natives when these skills are acquired in their country of origin. We show that heterogeneous labor supply elasticities, if ignored, may complicate the interpretation of wage estimates, and particularly the interpretation of relative wage effects. Moreover, downgrading may lead to biased estimates in those approaches that estimate relative effects of immigration, but not in approaches that estimate total effects. We conclude that empirical models that estimate total effects not only answer important policy questions, but are also more robust to alternative assumptions than models that estimate relative effects.
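To make the relative-versus-total distinction concrete, the sketch below evaluates a textbook two-skill CES version of the canonical model with made-up parameter values; it is an illustration of the general logic only, not the authors' specification.

```python
import numpy as np

# Textbook two-skill CES "canonical model"; parameter values are illustrative.
sigma, theta, alpha = 2.0, 0.5, 0.3      # substitution elasticity, share, capital share
rho = 1.0 - 1.0 / sigma

def wages(H, L, K):
    """Marginal-product wages of high- and low-education labor."""
    agg = (theta * H**rho + (1.0 - theta) * L**rho) ** (1.0 / rho)   # labor aggregate
    Y = K**alpha * agg ** (1.0 - alpha)                              # output
    w_H = (1.0 - alpha) * (Y / agg) * theta * (agg / H) ** (1.0 - rho)
    w_L = (1.0 - alpha) * (Y / agg) * (1.0 - theta) * (agg / L) ** (1.0 - rho)
    return w_H, w_L

H, L, K = 100.0, 100.0, 100.0
wH0, wL0 = wages(H, L, K)
wH1, wL1 = wages(1.1 * H, L, K)          # 10% immigration into the high group, K fixed

print("relative effect, d log(w_H / w_L):", round(np.log(wH1 / wL1) - np.log(wH0 / wL0), 4))
print("total effect on low wages, d log(w_L):", round(np.log(wL1 / wL0), 4))
# Relative-effect approaches identify only the first number; total-effect
# approaches also capture the second, which depends on capital adjustment.
```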


Botany
2013
Vol 91 (5)
pp. v-x
Author(s):  
Edward O. Guerrant

Three recent reviews of reintroduction for conservation purposes, which draw on substantial and largely nonoverlapping data sets, have come to strikingly different conclusions about its value. One concludes that “reintroduction is generally unlikely to be a successful conservation strategy as currently conducted”. Another concludes that “…this review cannot conclusively comment on the effectiveness of reintroductions…” The third concludes that there is “strong evidence in support of the notion that reintroduction, especially in combination with ex situ conservation, is a tool that can go a long way toward meeting the needs it was intended to address”. The argument over the conservation value of reintroduction is of more than academic interest. It illustrates a challenge facing land managers and decision-makers who may be tempted to act on stated conclusions without thoroughly understanding their underlying assumptions, methodology, and terminology. The differing conclusions can be partially explained by differing criteria for what constitutes success, how it is measured, and the time scales considered. The propriety of reintroduction is briefly discussed, with a focus on two issues: translocation of naturally occurring individuals to new locations, and introduction outside a species' naturally occurring range. Both have appropriate uses but can be used in ways that detract from the survival prospects of taxa.


2021
Vol 33 (5)
pp. 249-258
Author(s):
Konstantin Borisovich Koshelev
Andrei Vladimirovich Osipov
Sergei Vladimirovich Strijhak

The paper considers the capabilities of the ICELIB library, developed at ISP RAS, for modeling ice formation processes on aircraft surfaces. As a test case for assessing the accuracy of modeling the physical processes that arise during aircraft operation, the surface of a swept wing with a GLC-305 profile was studied. The possibilities of an efficient parallelization algorithm using a liquid film model, a dynamic mesh, and the geometric method of bisectors are discussed. The ICELIB library is a collection of three solvers. The first solver, iceFoam1, is intended for preliminary estimation of the icing zones on the fuselage surface and the swept wing of an aircraft. The change in the geometric shape of the investigated body is neglected because the thickness of the ice is assumed to be negligible. This version of the solver has no restrictions on the number of cores used for parallelization. The second solver, iceDyMFoam2, is designed to simulate the formation of two types of ice, smooth (“glaze ice”) and loose (“rime ice”), for which the ice shape often takes on a complex and bizarre appearance. The effect of the changing body shape on the icing process is taken into account. Its limitations are related to the peculiarities of mesh construction near the boundary layer of the streamlined body. Different algorithms, each optimized for its own case, are used to move the front and back edges of the film. The performance gain is limited and is achieved with a fixed number of cores. The third solver, iceDyMFoam3, also accounts for the effect that changes in the solid surface during ice formation have on the icing process itself. For smooth ice formation with complex ice surface shapes, this latest version of the solver is still inferior in capability to the second one. In the third version, a somewhat simplified and more uniform approach is used to calculate the motion of both boundaries of the ice film. The calculation results are compared with experimental data from M. Papadakis for various airfoils and a swept wing for the “rime ice” case. Good agreement with the experimental results was obtained.

