Conductivity‐depth imaging of airborne electromagnetic step‐response data

Geophysics ◽  
1991 ◽  
Vol 56 (1) ◽  
pp. 102-114 ◽  
Author(s):  
J. C. Macnae ◽  
Richard Smith ◽  
B. D. Polzer ◽  
Y. Lamontagne ◽  
P. S. Klinkert

An adaptation of the Macnae‐Lamontagne method allows transformation of airborne step‐response electromagnetic (EM) data into a conductivity‐depth image. The algorithm is based on a nonlinear transformation of the amplitude of the measured response at each delay time to an apparent mirror-image depth. Using matrix algebra, the set of mirror-image depth–delay time data pairs can then be used to derive a conductivity section. Data can be processed efficiently on a personal computer at rates comparable to or faster than the rate of collection. Stable fitting of conductivity as a function of depth is obtained by damping the matrix inversion with specified first- and second-derivative smoothness weights for the fitted conductivity-depth sounding. Damping parameters may be either fixed or varied along the profile; their choice can be constrained by geologic control. Stability of the process is enhanced by accounting for transmitter and receiver tilts. The mirror-image depth–delay time data can also be used directly, with simple regression, to obtain the best-fitting thin-sheet and half-space models. With one novel assumption, the thin-sheet model can be converted to a thick-sheet overburden model without prespecification of either its conductivity or thickness. Depending on the geology, these simple models may prove quite useful. The conductivity imaging algorithm has been applied to a test data set collected with the SPECTREM system. The stability and speed of the imaging process were confirmed, demonstrating airborne EM sounding to depths well over 400 m in an area of quite conductive sediments. Comparison with a better-resolved image obtained from ground UTEM data shows that the airborne data can adequately define the geometry of the uppermost conductor encountered in the section. The geophysical results are consistent with geologic control and with resistivity measurements from well logs.
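The damping described above follows the familiar smoothness-regularized least-squares pattern. Below is a minimal sketch of that pattern, assuming a generic linear operator G relating a layered conductivity model to the mirror-image depth–delay time pairs; the operator, weights, and dimensions are illustrative placeholders, not the authors' actual formulation.

```python
import numpy as np

def damped_smooth_inversion(G, d, w1=1.0, w2=1.0):
    """Solve G @ m ~= d for a conductivity-depth model m, damping the
    normal equations with first- and second-derivative smoothness
    penalties. G, d, w1, and w2 are generic placeholders, not the
    paper's actual operators or weights."""
    n = G.shape[1]
    D1 = np.diff(np.eye(n), 1, axis=0)  # first-derivative (roughness) operator
    D2 = np.diff(np.eye(n), 2, axis=0)  # second-derivative (curvature) operator
    # Damped normal equations: larger w1/w2 give smoother soundings
    A = G.T @ G + w1 * (D1.T @ D1) + w2 * (D2.T @ D2)
    return np.linalg.solve(A, G.T @ d)
```

Fixing w1 and w2 along the profile, or varying them under geologic control, corresponds directly to the damping-parameter choices described in the abstract.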

Geophysics ◽  
2000 ◽  
Vol 65 (4) ◽  
pp. 1124-1127 ◽  
Author(s):  
Richard S. Smith

The integral of the step response from zero time to infinite time (the ideal resistive limit) can, in theory, be used to determine the conductance of the ground, because the former is directly proportional to the latter. However, a real time-domain airborne electromagnetic (AEM) system cannot measure either the step response or the ideal resistive limit. This is because (1) the off time is finite, being interrupted by the next transmitter pulse; (2) the net effect of all previous transmitter pulses is to reduce the measured response; and (3) removing the primary field during the on time also removes the component of the secondary response that has the same shape as the primary. What a real time-domain AEM system can estimate is defined here as the realizable resistive limit (RRL). The RRL can also be calculated theoretically for a horizontal thin sheet of known conductance; hence, the measured data can be input to a nonlinear inversion scheme to estimate an apparent conductance. The RRL is calculated using on-time data, which remain above the noise level for conductances between 0.001 S and 100 000 S, so conductances can be mapped across this eight-decade range. Traditional methods for deriving conductance use off-time data only and are restricted to a much smaller range of values (about two decades). A field example illustrates that, within the resistive areas, the RRL map shows many structural features and lithologies that are not evident on the map of conductance derived from off-time data. Within the conductive areas, the RRL image shows greater variation, and a number of geologically meaningful features are apparent. A further advantage of RRL images is that artifacts associated with current migration near the edges of conductive features are less evident than in the off-time-derived conductance images.
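Because the theoretical thin-sheet RRL is a monotonic function of conductance, the nonlinear inversion mentioned above reduces, in the simplest reading, to a one-dimensional root find. A minimal sketch, where rrl_model(S) is a hypothetical placeholder for the system-specific thin-sheet forward calculation:

```python
from scipy.optimize import brentq

def apparent_conductance(rrl_measured, rrl_model, s_lo=1e-3, s_hi=1e5):
    """Invert a monotonic theoretical RRL-versus-conductance curve for an
    apparent conductance in siemens. `rrl_model(S)` is a hypothetical
    stand-in for the system-specific forward calculation; the bracketing
    interval matches the eight-decade range quoted in the abstract."""
    return brentq(lambda s: rrl_model(s) - rrl_measured, s_lo, s_hi)
```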


Materials ◽  
2021 ◽  
Vol 14 (15) ◽  
pp. 4200
Author(s):  
Zhigang Li ◽  
Hao Jiang ◽  
Minghui Wang ◽  
Hongjie Jia ◽  
Hongjiang Han ◽  
...  

As the applications of heterogeneous materials expand, aluminum laminates made from similar alloys have attracted much attention because of their greater bonding strength and easier recycling. In this work, an alloy design strategy based on accumulative roll bonding (ARB) was developed to produce laminates from similar materials. Twin-roll-cast (TRC) sheets of the same composition but produced at different cooling rates were used as the starting materials, and they were roll bonded for up to three cycles at varying temperatures. EBSD showed that the two TRC sheets deformed in distinct ways during ARB at 300 °C: recrystallization was significant after the first cycle in the thin sheet but only after the third cycle in the thick sheet. The sheets were subjected to subsequent aging to improve their mechanical properties. TEM observations showed that the size and distribution of nano-precipitates differed between the two sheet sides. These nano-precipitates were found to significantly promote precipitation strengthening, an effect referred to as hetero-deformation-induced (HDI) strengthening. Our work provides a promising new method for preparing laminated heterogeneous materials from similar-alloy TRC sheets.


2018 ◽  
Vol 34 (3) ◽  
pp. 1247-1266 ◽  
Author(s):  
Hua Kang ◽  
Henry V. Burton ◽  
Haoxiang Miao

Post-earthquake recovery models can be used as decision support tools for pre-event planning. However, due to a lack of available data, there have been very few opportunities to validate and/or calibrate these models. This paper describes the use of building damage, permitting, and repair data from the 2014 South Napa Earthquake to evaluate a stochastic process post-earthquake recovery model. Damage data were obtained for 1,470 buildings, and permitting and repair time data were obtained for a subset (456) of those buildings. A “blind” prediction is shown to adequately capture the shape of the recovery trajectory despite overpredicting the overall pace of the recovery. Using the mean time to permit and repair time from the acquired data set significantly improves the accuracy of the recovery prediction. A generalized model is formulated by establishing statistical relationships between key time parameters and endogenous and exogenous factors that have been shown to influence the pace of recovery.
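As a rough illustration of the stochastic-process idea, the sketch below simulates per-building permitting and repair durations and aggregates them into a recovery trajectory. The lognormal distributions and all parameter values are illustrative assumptions, not the calibrated values from the paper.

```python
import numpy as np

def recovery_trajectory(n_bldg=1470, t_max=600, seed=0):
    """Monte Carlo sketch of a post-earthquake recovery curve: each
    damaged building first waits for a permit, then is repaired.
    Lognormal durations and their parameters are illustrative
    assumptions, not the paper's calibrated values."""
    rng = np.random.default_rng(seed)
    t_permit = rng.lognormal(mean=4.0, sigma=0.8, size=n_bldg)  # days to permit
    t_repair = rng.lognormal(mean=4.5, sigma=0.6, size=n_bldg)  # repair days
    t_done = t_permit + t_repair
    t = np.arange(t_max)
    # Fraction of buildings fully repaired on each day
    return t, (t_done[None, :] <= t[:, None]).mean(axis=1)
```

Replacing the assumed means with the mean time to permit and mean repair time from the acquired data set is the calibration step the abstract reports as significantly improving the prediction.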


2020 ◽  
Vol 21 (18) ◽  
pp. 6947
Author(s):  
Filipe Costa ◽  
Ali Traoré-Dubuis ◽  
Lidia Álvarez ◽  
Ana I. Lozano ◽  
Xueguang Ren ◽  
...  

Electron scattering cross sections for pyridine in the energy range 0–100 eV, which we previously measured or calculated, have been critically compiled and complemented here with new measurements of electron energy loss spectra and double differential ionization cross sections. Experimental techniques employed in this study include a linear transmission apparatus and a reaction microscope system. To fulfill the transport model requirements, theoretical data have been recalculated within our independent atom model with screening corrected additivity rule and interference effects (IAM-SCAR) method for energies above 10 eV. In addition, results from the R-matrix and Schwinger multichannel with pseudopotential methods, for energies below 15 eV and 20 eV, respectively, are presented here. The reliability of this complete data set has been evaluated by comparing the simulated energy distribution of electrons transmitted through pyridine, with that observed in an electron-gas transmission experiment under magnetic confinement conditions. In addition, our representation of the angular distribution of the inelastically scattered electrons is discussed on the basis of the present double differential cross section experimental results.


2019 ◽  
Vol 31 (1) ◽  
pp. 88-94 ◽  
Author(s):  
Xiangbo Kong ◽  
Zelin Meng ◽  
Lin Meng ◽  
Hiroyuki Tomiyama

Currently, the proportion of elderly persons is increasing all over the world, and accidents involving falls have become a serious problem, especially for those who live alone. In this paper, an enhancement to our algorithm for detecting such falls in an elderly person's living room is proposed. Our previous algorithm obtains a binary image using a depth camera and extracts the outline of the binary image by Canny edge detection. It then calculates the tangent vector angle at each outline pixel and divides the angles into 15° groups. If most of the tangent angles are below 45°, a fall is detected. Traditional fall-detection systems cannot detect falls toward the camera, so related works require at least two cameras. To detect falls toward the camera, this study adds a three-states-transition method to distinguish a fall state from a sitting-down state. The proposed algorithm computes the different position states and divides them into three groups to determine the person's current state. Furthermore, the transition speed is calculated in order to differentiate sit states from fall states. This study constructs a data set of over 1500 images, and the experimental evaluation demonstrates that our enhanced algorithm is effective for detecting falls with only a single camera.
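A minimal OpenCV sketch of the outline-angle test described above. The gradient-based tangent estimate and the majority-threshold ratio are our assumptions, and the three-states-transition logic is omitted:

```python
import cv2
import numpy as np

def fall_by_tangent_angles(depth_binary, angle_thresh=45.0, ratio=0.5):
    """Sketch of the outline-angle test: extract edges from the binary
    depth image, estimate the tangent angle at each outline pixel from
    the local gradient, bin angles into 15-degree groups, and flag a
    fall when most angles lie below 45 degrees. `ratio` and the
    gradient-based tangent estimate are illustrative assumptions."""
    edges = cv2.Canny(depth_binary, 50, 150)
    gx = cv2.Sobel(depth_binary, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(depth_binary, cv2.CV_32F, 0, 1)
    ys, xs = np.nonzero(edges)
    # The outline tangent is perpendicular to the gradient direction
    tangent = (np.degrees(np.arctan2(gy[ys, xs], gx[ys, xs])) + 90.0) % 180.0
    tangent = np.minimum(tangent, 180.0 - tangent)  # fold to 0-90 degrees
    hist, _ = np.histogram(tangent, bins=np.arange(0, 105, 15))  # 15-degree bins
    return (tangent < angle_thresh).mean() > ratio, hist
```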


Geophysics ◽  
1985 ◽  
Vol 50 (8) ◽  
pp. 1350-1354 ◽  
Author(s):  
S. S. Rai

The horizontal, conducting thin‐sheet model is of special interest in the interpretation of electromagnetic field data, since it is a suitable interpretation model for the surficial conductive layer common in many terrains. For overburden layers that are thin compared with the transmitter–receiver separation, layer thickness and conductivity cannot be resolved separately, and interpretation must be carried out in terms of the layer conductance. An attractive feature of the thin‐sheet model is the simplicity with which its time‐domain response can be calculated. The step response of an infinitely thin layer was derived by Maxwell (1891). In this paper I derive the Crone pulse electromagnetic (PEM) response of a conducting, infinitely thin horizontal layer. The applicability of the study is demonstrated by means of a field example.
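Maxwell's result referenced here is the classical receding-image solution: after a step shutoff, the secondary field of a thin sheet of conductance S equals the field of an image of the transmitter receding from the sheet at constant velocity. In schematic form (geometry factors depend on the transmitter-receiver configuration):

```latex
% Receding-image step response of a thin sheet of conductance S
% (schematic; h is the transmitter height above the sheet)
\[
  v \;=\; \frac{2}{\mu_0 S},
  \qquad
  H_s(t) \;=\; H_{\mathrm{image}}\!\bigl(z_{\mathrm{image}} = 2h + v\,t\bigr),
  \quad t > 0 .
\]
```

The pulse (PEM) response then follows by convolving this step response with the transmitter current waveform.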


2012 ◽  
Vol 51 (1) ◽  
pp. 1-9 ◽  
Author(s):  
Jorge A. Achcar ◽  
Emílio A. Coelho-Barros ◽  
Josmar Mazucheli

Abstract. We introduce Weibull distributions in the presence of a cure fraction, censored data, and covariates. Two models are explored in this paper: mixture and non-mixture models. Inferences for the proposed models are obtained under the Bayesian approach, using standard MCMC (Markov chain Monte Carlo) methods. An illustration of the proposed methodology is given for a lifetime data set.
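For reference, the textbook forms of the two cure-rate models with a Weibull baseline are as follows (the notation is ours, not necessarily the paper's parameterization):

```latex
% Mixture and non-mixture cure-rate survival functions with a Weibull
% baseline; pi is the cured fraction (standard forms, notation ours)
\[
  S_{\text{mixture}}(t) \;=\; \pi + (1-\pi)\exp\!\bigl[-(t/\lambda)^{\gamma}\bigr],
  \qquad
  S_{\text{non-mixture}}(t) \;=\; \pi^{\,1-\exp\left[-(t/\lambda)^{\gamma}\right]} .
\]
```

Covariates typically enter through the cure fraction or the scale parameter via link functions, and censoring is handled in the usual likelihood contribution for right-censored observations.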


Paleobiology ◽  
2017 ◽  
Vol 43 (4) ◽  
pp. 667-692 ◽  
Author(s):  
Corentin Gibert ◽  
Gilles Escarguel

Abstract. Estimating biodiversity and its variations through geologic time is a notoriously difficult task, due to several taphonomic and methodological effects that make the reconstructed signal potentially distinct from the unknown, original one. Through a simulation approach, we examine the effect of a major, surprisingly still understudied, source of potential disturbance: the effect of time discretization through biochronological construction, which generates spurious coexistences of taxa within discrete time intervals (i.e., biozones), and thus potentially makes continuous- and discrete-time biodiversity curves very different. Focusing on the taxonomic-richness dimension of biodiversity (including estimates of origination and extinction rates), our approach relies on generation of random continuous-time richness curves, which are then time-discretized to estimate the noise generated by this manipulation. A broad spectrum of data-set parameters (including average taxon longevity and biozone duration, total number of taxa, and simulated time interval) is evaluated through sensitivity analysis. We show that the deteriorating effect of time discretization on the richness signal depends highly on such parameters, most particularly on average biozone duration and taxonomic longevity because of their direct relationship with the number of false coexistences generated by time discretization. With several worst-case but realistic parameter combinations (e.g., when relatively short-lived taxa are analyzed in a long-ranging biozone framework), the original and time-discretized richness curves can ultimately show a very weak to zero correlation, making these two time series independent. Based on these simulation results, we propose a simple algorithm allowing the back-transformation of a discrete-time taxonomic-richness data set, as customarily constructed by paleontologists, into a continuous-time data set. We show that the reconstructed richness curve obtained this way fits the original signal much more closely, even when the parameter combination of the original data set is particularly adverse to an effective time-discretized reconstruction.
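The core of the simulation design can be sketched in a few lines: draw random taxon ranges, count richness both on a continuous-time grid and per biozone (where any range overlapping a biozone counts throughout it, creating the spurious coexistences discussed above), then compare the two curves. The distributional choices below are illustrative assumptions, not the paper's exact protocol:

```python
import numpy as np

def discretization_noise(n_taxa=500, t_span=100.0, mean_dur=5.0,
                         zone_dur=10.0, seed=0):
    """Correlation between a continuous-time richness curve and its
    biozone-discretized counterpart. Uniform origination times and
    exponential taxon durations are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    start = rng.uniform(0.0, t_span, n_taxa)
    end = start + rng.exponential(mean_dur, n_taxa)
    zones = np.arange(0.0, t_span, zone_dur)
    mid = zones + zone_dur / 2.0
    # Continuous-time richness sampled at biozone midpoints
    cont = ((start[None, :] <= mid[:, None]) &
            (end[None, :] >= mid[:, None])).sum(axis=1)
    # Discrete richness: a taxon counts in every biozone its range
    # overlaps, generating false coexistences
    disc = ((start[None, :] < zones[:, None] + zone_dur) &
            (end[None, :] > zones[:, None])).sum(axis=1)
    return np.corrcoef(cont, disc)[0, 1]
```

Sweeping mean_dur and zone_dur in such a sketch reproduces the qualitative sensitivity result: short-lived taxa in long biozones drive the correlation toward zero.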


2014 ◽  
Vol 14 (3) ◽  
pp. 1635-1648 ◽  
Author(s):  
A. Redondas ◽  
R. Evans ◽  
R. Stuebi ◽  
U. Köhler ◽  
M. Weber

Abstract. The primary ground-based instruments used to report total column ozone (TOC) are Brewer and Dobson spectrophotometers in separate networks. These instruments make measurements of the UV irradiances, and through a well-defined process, a TOC value is produced. Inherent to the algorithm is the use of a laboratory-determined cross-section data set. We used five ozone cross-section data sets: three based on measurements of Bass and Paur; one derived from Daumont, Brion and Malicet (DBM); and a new set determined by the Institute of Experimental Physics (IUP), University of Bremen. The three Bass and Paur (1985) sets are as follows: quadratic temperature coefficients from the IGACO (a glossary is provided in Appendix A) web page (IGQ4), the Brewer network operational calibration set (BOp), and the set used by Bernhard et al. (2005) in the reanalysis of the Dobson absorption coefficient values (B05). The ozone absorption coefficients for Brewer and Dobson instruments are then calculated using the normal Brewer operative method, which is essentially the same as that used for Dobson instruments. Considering the standard TOC algorithm for the Brewer instruments and comparing to the Brewer standard operational calibration data set, using the slit functions for the individual instruments, we find the IUP data set changes the calculated TOC by −0.5%, the DBM data set changes the calculated TOC by −3.2%, and the IGQ4 data set at −45 °C changes the calculated TOC by +1.3%. Considering the standard algorithm for the Dobson instruments, and comparing to results using the official 1992 ozone absorption coefficient values and the single set of slit functions defined for all Dobson instruments, the calculated TOC changes by +1%, with little variation depending on which data set is used. We applied the changes to the European Dobson and Brewer reference instruments during the Izaña 2012 Absolute Calibration Campaign. With the application of a common Langley calibration and the IUP cross section, the differences between Brewer and Dobson data sets vanish, whereas using those of Bass and Paur and DBM produces differences of 1.5 and 2%, respectively. A study of the temperature dependence of these cross-section data sets is presented using the Arosa, Switzerland, total ozone record of 2003–2006, obtained from two Brewer-type instruments and one Dobson-type instrument, combined with the stratospheric ozone and temperature profiles from the Payerne soundings in the same period. The seasonal dependence of the differences between the results from the various instruments is greatly reduced with the application of temperature-dependent absorption coefficients, with the greatest reduction obtained using the IUP data set.
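The sensitivity of retrieved TOC to the choice of cross sections follows from the Beer-Lambert form of the retrieval, in which the column is inversely proportional to the effective absorption coefficient. Schematically (our simplification of the Brewer/Dobson double-ratio algorithms, not the networks' full processing):

```latex
% Schematic column retrieval: Delta F is the calibrated combination of
% measured irradiances, alpha the effective ozone absorption
% coefficient from the chosen cross-section set, mu the air-mass factor
\[
  X \;=\; \frac{\Delta F}{\alpha\,\mu}
  \quad\Longrightarrow\quad
  \frac{X'}{X} \;=\; \frac{\alpha}{\alpha'} .
\]
```

In this picture, an effective absorption coefficient roughly 3% larger yields a retrieved column roughly 3% smaller, the scale of the −3.2% DBM shift quoted above.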

