Conductivity models for Archie rocks

Geophysics ◽  
2012 ◽  
Vol 77 (3) ◽  
pp. WA109-WA128 ◽  
Author(s):  
W. David Kennedy ◽  
David C. Herrick

The petroleum industry’s standard porosity-resistivity model (i.e., Archie’s law), although it is fit for its purpose, remains poorly understood after seven decades of use. This results from the choice of the graphical display and trend formula used to analyze Archie’s seminal porosity-resistivity data, taken in the Nacatoch sandstone, a petroliferous clastic formation in the Gulf of Mexico coastal area. Archie’s model accurately predicts the conductivity-brine volume trend for this sandstone. Not all rocks follow the same porosity-resistivity trends observed in the Nacatoch sandstone, but those that do are defined as Archie rocks. Archie’s Nacatoch sandstone data set has significant irreducible scatter, or noise. Data with significant scatter cannot be used to uniquely define a trend. Alternative graphical analyses of Archie’s Nacatoch sandstone data indicate that Archie could have analyzed these data differently had it occurred to him to do so. A physics-based porosity-conductivity model, a “geometrical factor theory” (GFT), is preferred as an alternative to the Archie model because it has a physical interpretation. In this model, the bulk conductivity of an Archie rock is the product of three factors: brine conductivity, fractional brine volume, and an explicit geometrical factor. The model is offered in the form of a theorem, proved in three steps, to make our arguments as explicit and transparent as possible. The model is developed through its culmination as a saturation equation to illustrate that it is a complete theory for Archie rocks. The predictive power of the Archie model and GFT are similar, but unlike the adjustable parameters of the Archie model (a, m, and n), all of the parameters of GFT have a priori physical interpretations. Through a connection to site percolation theory, GFT has promise to connect porosity-conductivity interpretation to circuit theory first principles.
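The contrast between the two parameterizations described above can be sketched in code. This is a minimal illustration, not the paper's calibration: the function names, the default exponents, and the particular value of the geometrical factor `E` are assumptions for demonstration only.

```python
def archie_conductivity(c_w, phi, s_w, a=1.0, m=2.0, n=2.0):
    """Archie's empirical model: C_t = (1/a) * C_w * phi**m * s_w**n.
    a, m, n are adjustable fitting parameters with no a priori
    physical interpretation (illustrative default values shown)."""
    return c_w * phi**m * s_w**n / a

def gft_conductivity(c_w, phi, s_w, E):
    """Geometrical factor theory (sketch): bulk conductivity as the
    product of three factors -- brine conductivity, fractional brine
    volume (phi * s_w), and an explicit geometrical factor E."""
    f_w = phi * s_w
    return c_w * f_w * E
```

Note that the two forms coincide whenever `E` happens to equal `phi**(m-1) * s_w**(n-1) / a`; the difference is that in GFT the geometrical factor carries a physical meaning rather than being an empirical exponent.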

2018 ◽  
Vol 21 (2) ◽  
pp. 117-124 ◽  
Author(s):  
Bakhtyar Sepehri ◽  
Nematollah Omidikia ◽  
Mohsen Kompany-Zareh ◽  
Raouf Ghavami

Aims & Scope: In this research, eight variable selection approaches were used to investigate the effect of variable selection on the predictive power and stability of CoMFA models. Materials & Methods: Three data sets, comprising 36 EPAC antagonists, 79 CD38 inhibitors and 57 ATAD2 bromodomain inhibitors, were modelled by CoMFA. First, a CoMFA model with all CoMFA descriptors was created for each data set; a new CoMFA model was then developed by applying each variable selection method, so that nine CoMFA models were built per data set. The results show that noisy and uninformative variables affect CoMFA results. Based on the created models, applying five of the variable selection approaches, namely FFD, SRD-FFD, IVE-PLS, SRD-UVE-PLS and SPA-jackknife, increases the predictive power and stability of CoMFA models significantly. Results & Conclusion: Among them, SPA-jackknife removes most of the variables, while FFD retains most of them. FFD and IVE-PLS are time-consuming, whereas SRD-FFD and SRD-UVE-PLS run in a few seconds. In addition, applying FFD, SRD-FFD, IVE-PLS and SRD-UVE-PLS preserves CoMFA contour map information for both fields.
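The jackknife idea behind stability-based variable selection can be sketched as follows. This is a generic leave-one-out stability screen, not the paper's SPA-jackknife procedure; the threshold `t` and the least-squares model are illustrative assumptions.

```python
import numpy as np

def jackknife_variable_selection(X, y, t=2.0):
    """Generic jackknife stability screen (illustrative, not SPA-jackknife):
    refit a least-squares model leaving out one sample at a time and keep
    only variables whose coefficient is stable across refits, i.e. whose
    mean magnitude clearly exceeds its jackknife standard deviation."""
    n, p = X.shape
    coefs = np.empty((n, p))
    for i in range(n):
        mask = np.arange(n) != i              # leave sample i out
        coefs[i], *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
    mean = coefs.mean(axis=0)
    std = coefs.std(axis=0, ddof=1)
    return np.abs(mean) > t * (std + 1e-12)   # boolean keep-mask
```

Applied to CoMFA descriptors, a screen of this kind discards noisy and uninformative field variables whose coefficients fluctuate from refit to refit.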


2021 ◽  
Vol 4 (1) ◽  
pp. 251524592095492
Author(s):  
Marco Del Giudice ◽  
Steven W. Gangestad

Decisions made by researchers while analyzing data (e.g., how to measure variables, how to handle outliers) are sometimes arbitrary, without an objective justification for choosing one alternative over another. Multiverse-style methods (e.g., specification curve, vibration of effects) estimate an effect across an entire set of possible specifications to expose the impact of hidden degrees of freedom and/or obtain robust, less biased estimates of the effect of interest. However, if specifications are not truly arbitrary, multiverse-style analyses can produce misleading results, potentially hiding meaningful effects within a mass of poorly justified alternatives. So far, a key question has received scant attention: How does one decide whether alternatives are arbitrary? We offer a framework and conceptual tools for doing so. We discuss three kinds of a priori nonequivalence among alternatives—measurement nonequivalence, effect nonequivalence, and power/precision nonequivalence. The criteria we review lead to three decision scenarios: Type E decisions (principled equivalence), Type N decisions (principled nonequivalence), and Type U decisions (uncertainty). In uncertain scenarios, multiverse-style analysis should be conducted in a deliberately exploratory fashion. The framework is discussed with reference to published examples and illustrated with the help of a simulated data set. Our framework will help researchers reap the benefits of multiverse-style methods while avoiding their pitfalls.
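A multiverse-style analysis of the kind discussed above can be sketched in a few lines. The two "arbitrary" choices here (outlier trimming and a log transform of the outcome) are placeholders chosen for illustration, not the specifications used in the article.

```python
import itertools
import numpy as np

def specification_curve(x, y, trims=(0.0, 0.05), logs=(False, True)):
    """Minimal specification-curve sketch: estimate the x-y slope under
    every combination of two analytic choices -- how much to trim extreme
    x values, and whether to log-transform y -- and return all estimates,
    exposing how much the effect depends on those choices."""
    effects = []
    for trim, use_log in itertools.product(trims, logs):
        yy = np.log(y) if use_log else y
        lo, hi = np.quantile(x, [trim, 1 - trim])
        m = (x >= lo) & (x <= hi)             # drop trimmed observations
        slope = np.polyfit(x[m], yy[m], 1)[0]
        effects.append({"trim": trim, "log_y": use_log, "slope": slope})
    return effects
```

In the framework's terms, such a grid is only meaningful for Type E (principled equivalence) or, exploratorily, Type U (uncertainty) decisions; under Type N nonequivalence the specifications should not be pooled in the first place.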


2019 ◽  
Vol 219 (3) ◽  
pp. 2056-2072
Author(s):  
A Carrier ◽  
F Fischanger ◽  
J Gance ◽  
G Cocchiararo ◽  
G Morelli ◽  
...  

SUMMARY The growth of the geothermal industry sector requires innovative methods to reduce exploration costs whilst minimizing uncertainty during subsurface exploration. Until now, geoelectrical prospection has had to choose between logistically complex cabled technologies reaching a few hundred metres deep and shallow-reaching prospecting methods commonly used in hydro-geophysical studies. We present a recent technology for geoelectrical prospection and show how geoelectrical methods may allow the investigation of medium-enthalpy geothermal resources down to about 1 km depth. The new acquisition system, which consists of a distributed set of independent electrical potential recorders, enabled us to tackle the logistical and data-noise issues typical of urbanized areas. We acquired a 4.5-km-long 2-D geoelectrical survey in an industrial area to investigate the subsurface structure of a sedimentary sequence that was the target of a ∼700 m geothermal exploration well (Geo-01, Satigny) in the Greater Geneva Basin, Western Switzerland. To show the reliability of this new method we compared the acquired resistivity data against reflection seismic and gravimetric data and well logs. The processed resistivity model is consistent with the interpretation of the active-seismic data and with density variations computed from the inversion of the residual Bouguer anomaly. The combination of the resistivity and gravity models suggests the presence of a low-resistivity, low-density body crossing Mesozoic geological units up to Palaeogene–Neogene units that can be used for medium-enthalpy geothermal exploitation. Our work points out how new geoelectrical methods may be used to identify thermal groundwater at depth. This new cost-efficient technology may become an effective and reliable exploration method for imaging shallow geothermal resources.


2015 ◽  
Vol 8 (2) ◽  
pp. 941-963 ◽  
Author(s):  
T. Vlemmix ◽  
F. Hendrick ◽  
G. Pinardi ◽  
I. De Smedt ◽  
C. Fayt ◽  
...  

Abstract. A 4-year data set of MAX-DOAS observations in the Beijing area (2008–2012) is analysed with a focus on NO2, HCHO and aerosols. Two very different retrieval methods are applied. Method A describes the tropospheric profile with 13 layers and makes use of the optimal estimation method. Method B uses 2–4 parameters to describe the tropospheric profile and an inversion based on a least-squares fit. For each constituent (NO2, HCHO and aerosols) the retrieval outcomes are compared in terms of tropospheric column densities, surface concentrations and "characteristic profile heights" (i.e. the height below which 75% of the vertically integrated tropospheric column density resides). We find best agreement between the two methods for tropospheric NO2 column densities, with a standard deviation of relative differences below 10%, a correlation of 0.99 and a linear regression with a slope of 1.03. For tropospheric HCHO column densities we find a similar slope, but also a systematic bias of almost 10% which is likely related to differences in profile height. Aerosol optical depths (AODs) retrieved with method B are 20% higher than those retrieved with method A. They agree better with AERONET measurements, which are on average only 5% lower, although with considerable scatter in the relative differences (standard deviation ~25%). With respect to near-surface volume mixing ratios and aerosol extinction we find considerably larger relative differences: 10 ± 30, −23 ± 28 and −8 ± 33% for aerosols, HCHO and NO2 respectively. However, the frequency distributions of these near-surface concentrations agree quite well, which indicates that near-surface concentrations derived from MAX-DOAS are certainly useful in a climatological sense. A major difference between the two methods is the dynamic range of retrieved characteristic profile heights, which is larger for method B than for method A. This effect is most pronounced for HCHO, where profile shapes retrieved with method A are very close to the a priori, and moderate for NO2 and aerosol extinction, which on average show quite good agreement for characteristic profile heights below 1.5 km. One of the main advantages of method A is its stability, even under suboptimal conditions (e.g. in the presence of clouds). Method B is generally more unstable, and this probably explains a substantial part of the quite large relative differences between the two methods. However, despite a relatively low precision for individual profile retrievals, seasonally averaged profile heights retrieved with method B appear to be less biased towards a priori assumptions than those retrieved with method A. This gives confidence in the result obtained with method B, namely that aerosol extinction profiles tend on average to be higher than NO2 profiles in spring and summer, whereas they seem on average to be of the same height in winter, a result which is especially relevant in relation to the validation of satellite retrievals.
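The "characteristic profile height" used for the comparison is defined in the abstract itself: the height below which 75% of the vertically integrated tropospheric column resides. A minimal implementation of that definition, with trapezoidal integration and linear interpolation as our own (assumed) numerical choices:

```python
import numpy as np

def characteristic_profile_height(z, profile, fraction=0.75):
    """Height below which `fraction` of the vertically integrated
    column resides. z: layer heights (km, increasing); profile:
    concentration or extinction values at those heights."""
    # cumulative column via trapezoidal integration, anchored at 0
    layer = np.diff(z) * 0.5 * (profile[1:] + profile[:-1])
    cum = np.concatenate([[0.0], np.cumsum(layer)])
    # invert the cumulative column at the requested fraction
    return float(np.interp(fraction * cum[-1], cum, z))
```

For an exponentially decaying profile with scale height H, this returns approximately H·ln(4) ≈ 1.39·H, which is a useful sanity check.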


Geophysics ◽  
2007 ◽  
Vol 72 (1) ◽  
pp. F25-F34 ◽  
Author(s):  
Benoit Tournerie ◽  
Michel Chouteau ◽  
Denis Marcotte

We present and test a new method to correct for the static shift affecting magnetotelluric (MT) apparent resistivity sounding curves. We use geostatistical analysis of apparent resistivity and phase data for selected periods. For each period, we first estimate and model the experimental variograms and cross variogram between phase and apparent resistivity. We then use the geostatistical model to estimate, by cokriging, the corrected apparent resistivities using the measured phases and apparent resistivities. The static shift factor is obtained as the difference between the logarithm of the corrected and measured apparent resistivities. We retain as final static shift estimates the ones for the period displaying the best correlation with the estimates at all periods. We present a 3D synthetic case study showing that the static shift is retrieved quite precisely when the static shift factors are uniformly distributed around zero. If the static shift distribution has a nonzero mean, we obtained best results when an apparent resistivity data subset can be identified a priori as unaffected by static shift and cokriging is done using only this subset. The method has been successfully tested on the synthetic COPROD-2S2 2D MT data set and on a 3D-survey data set from Las Cañadas Caldera (Tenerife, Canary Islands) severely affected by static shift.
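Two steps of the workflow described above lend themselves to a short sketch: the static shift factor as the log difference between corrected and measured apparent resistivities, and the retention of the best-correlated period. The cokriging step itself is omitted here, and the selection rule is approximated by correlating each period's estimates against the cross-period mean, which is our assumption rather than the authors' exact criterion.

```python
import numpy as np

def static_shift_estimates(rho_measured, rho_corrected):
    """Static shift factor: the difference between the logarithms of
    the (cokriging-)corrected and the measured apparent resistivities."""
    return np.log10(rho_corrected) - np.log10(rho_measured)

def select_final_shift(shifts):
    """Retain the shift estimates of the period that correlates best
    with the estimates at all periods (approximated here by the mean
    across periods). shifts: array of shape (n_periods, n_stations)."""
    mean_all = shifts.mean(axis=0)
    corr = [np.corrcoef(s, mean_all)[0, 1] for s in shifts]
    return shifts[int(np.argmax(corr))]
```

A shift estimate of 1.0, for instance, corresponds to a measured apparent resistivity one decade below its corrected value.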


Paleobiology ◽  
2016 ◽  
Vol 43 (1) ◽  
pp. 68-84 ◽  
Author(s):  
Bradley Deline ◽  
William I. Ausich

Abstract. A priori choices in the detail and breadth of a study are important in addressing scientific hypotheses. In particular, choices in the number and type of characters can greatly influence the results in studies of morphological diversity. A new character suite was constructed to examine trends in the disparity of early Paleozoic crinoids. Character-based rarefaction analysis indicated that a small subset of these characters (~20% of the complete data set) could be used to capture most of the properties of the entire data set in analyses of crinoids as a whole, noncamerate crinoids, and to a lesser extent camerate crinoids. This pattern may be the result of the covariance between characters and the characterization of rare morphologies that are not represented in the primary axes in morphospace. Shifting emphasis on different body regions (oral system, calyx, periproct system, and pelma) also influenced estimates of relative disparity between subclasses of crinoids. Given these results, morphological studies should include a pilot analysis to better examine the amount and type of data needed to address specific scientific hypotheses.
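Character-based rarefaction of the kind described above can be sketched as follows. The disparity metric used here (mean squared pairwise distance per character) is one common choice, not necessarily the one used in the study.

```python
import itertools
import numpy as np

def disparity(M):
    """Mean squared pairwise distance between taxa, averaged per
    character, so values are comparable across character counts.
    M: taxa-by-characters matrix."""
    d = [np.mean((a - b) ** 2) for a, b in itertools.combinations(M, 2)]
    return float(np.mean(d))

def rarefaction_curve(M, sizes, n_reps=100, seed=0):
    """Character-based rarefaction: for each subset size, repeatedly
    draw that many characters at random and average the disparity of
    the subsampled matrix."""
    rng = np.random.default_rng(seed)
    curve = []
    for k in sizes:
        vals = [disparity(M[:, rng.choice(M.shape[1], k, replace=False)])
                for _ in range(n_reps)]
        curve.append(float(np.mean(vals)))
    return curve
```

Plotting such a curve against subset size shows where it plateaus, i.e. roughly how many characters suffice to capture the disparity signal of the full suite.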


1901 ◽  
Vol 47 (197) ◽  
pp. 362-362
Keyword(s):  
A Priori ◽  

Dr. W. Watson (Edinburgh) writes in reference to Dr. Ireland's study of Nietzsche, that Nietzsche's acute sense of smell is very characteristic. He regards it as a reversion to a lower type. But specially Dr. Watson says, “The weakness of his sexual instinct is childlike. He seems to have combined the intellect of a man with the morality of a child. Does such a combination often accompany non-developed sexual instincts? It ought to do so a priori.”


2015 ◽  
Vol 12 (5) ◽  
pp. 1339-1356 ◽  
Author(s):  
N. S. Jones ◽  
A. Ridgwell ◽  
E. J. Hendy

Abstract. Calcification by coral reef communities is estimated to account for half of all carbonate produced in shallow water environments and more than 25% of the total carbonate buried in marine sediments globally. Production of calcium carbonate by coral reefs is therefore an important component of the global carbon cycle; it is also threatened by future global warming and other global change pressures. Numerical models of reefal carbonate production are needed for understanding how carbonate deposition responds to environmental conditions, including atmospheric CO2 concentrations, in the past and into the future. However, before any projections can be made, the basic test is to establish model skill in recreating present-day calcification rates. Here we evaluate four published model descriptions of reef carbonate production in terms of their predictive power, at both local and global scales. We also compile available global data on reef calcification to produce an independent observation-based data set for the model evaluation of carbonate budget outputs. The four calcification models are based on functions sensitive to combinations of light availability, aragonite saturation (Ωa) and temperature, and were implemented within a specifically developed global framework, the Global Reef Accretion Model (GRAM). No model was able to reproduce independent rate estimates of whole-reef calcification, and the temperature-only approach was the only model whose output significantly correlated with coral-calcification rate observations. The absence of any predictive power for whole reef systems, even when consistent at the scale of individual corals, points to the overriding importance of coral cover estimates in the calculations. Our work highlights the need for an ecosystem modelling approach, accounting for population dynamics in terms of mortality and recruitment and hence calcifier abundance, in estimating global reef carbonate budgets. In addition, validation of reef carbonate budgets is severely hampered by limited and inconsistent methodology in reef-scale observations.
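To make the model-comparison setting concrete: saturation-state-driven calcification functions of the kind evaluated above are often written as a rate law in (Ωa − 1). The form below is one widely used shape, and the rate constant and exponent are placeholders, not the paper's calibration.

```python
def calcification_rate(omega_a, k=10.0, n=1.0):
    """One widely used saturation-state rate law for carbonate
    production: G = k * (Omega_a - 1)**n. k and n are placeholder
    parameters for illustration. Returns 0 when the water is
    undersaturated with respect to aragonite (Omega_a <= 1)."""
    return k * max(omega_a - 1.0, 0.0) ** n
```

As the abstract stresses, a per-organism rate law like this must still be scaled by calcifier abundance (coral cover) before it can say anything about whole-reef carbonate budgets.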


2019 ◽  
Vol 26 ◽  
pp. 71-88
Author(s):  
Ana Belén Pérez García

The figure of the tragic mulatta has its origin in antebellum literature and was extensively used in the literature of the nineteenth and twentieth centuries. Much has been written about this literary character at a time when the problem of miscegenation was at its highest point, and when studies claimed that the races were inherently different, with the black race held to be inferior to the white one. Many authors have made use of this trope for different purposes, and Zora Neale Hurston was one of them. In her novel Their Eyes Were Watching God, Hurston creates Janie, a mulatta who a priori follows all the characteristics of this type of female character and yet breaks away from most of them. She overcomes the stereotypes and prejudices imposed on her because of her condition as interracial offspring, and is able to take charge of her own life and challenge these impositions, feeling closer to her blackness and celebrating and empowering her female identity. In this vein, storytelling becomes the liberating force that helps her do so. It becomes the tool that enables her to ignore the need to pass as a white person and provides her with the opportunity to connect with her real identity and so feel free and happy, breaking with the tragic destiny of mulatta characters. Keywords: storytelling, tragic mulatta, blackness, Hurston.


Author(s):  
N. Seube

Abstract. This paper introduces a new method for validating the precision of an airborne or mobile LiDAR data set. The proposed method is based on a Combined Standard Measurement Uncertainty (CSMU) model which describes the LiDAR point covariance matrix and thus the uncertainty ellipsoid. The model we consider includes timing errors and, most importantly, the incidence of the LiDAR beam. After describing the relationship between the beam incidence and other variable uncertainties (especially attitude uncertainty), we show that we can construct a CSMU model giving the covariance of each point as a function of the relative geometry between the LiDAR beam and the point normal. The validation method we propose consists in comparing the CSMU model (the predictive, a priori uncertainty) to the Standard Deviation Along the Surface Normal (SDASN) for all sets of quasi-planar segments of the point cloud. Whenever the a posteriori (i.e., observed by the SDASN) level of uncertainty is greater than the a priori (i.e., expected) level of uncertainty, the point fails the validation test. We illustrate this approach on a data set acquired by a Microdrones mdLiDAR1000 system.
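The SDASN side of the comparison is straightforward to sketch: fit a plane to a quasi-planar segment and measure the spread of the points along the fitted normal. The plane fit via SVD and the pass/fail helper below are our own minimal rendition of the test, not the authors' implementation.

```python
import numpy as np

def sdasn(points):
    """Standard Deviation Along the Surface Normal: fit a plane to a
    quasi-planar point segment (least squares via SVD) and return the
    standard deviation of the point residuals along the plane normal.
    points: (n, 3) array of LiDAR point coordinates."""
    P = points - points.mean(axis=0)
    # right-singular vector of the smallest singular value = plane normal
    normal = np.linalg.svd(P, full_matrices=False)[2][-1]
    return float(np.std(P @ normal, ddof=1))

def passes_validation(points, predicted_sigma):
    """A segment fails when the observed (a posteriori) spread exceeds
    the a priori uncertainty predicted by the CSMU model."""
    return sdasn(points) <= predicted_sigma
```

In the full method, `predicted_sigma` would come from the CSMU model evaluated for the segment's beam geometry; here it is simply an input.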

