A rapid four-dimensional resistivity data inversion method using temporal segmentation

2020 ◽  
Vol 221 (1) ◽  
pp. 586-602 ◽  
Author(s):  
Bin Liu ◽  
Yonghao Pang ◽  
Deqiang Mao ◽  
Jing Wang ◽  
Zhengyu Liu ◽  
...  

SUMMARY 4-D electrical resistivity tomography (ERT), an important geophysical method, is widely used to observe dynamic processes within otherwise static subsurface structures. However, because data acquisition and inversion consume large amounts of time, rapid changes that occur in the medium during a single acquisition cycle are difficult to detect in a timely manner via 4-D inversion. To address this issue, this paper proposes a scheme for restructuring continuously measured data sets and performing GPU-parallelized inversion. In this scheme, multiple reference time points are selected within an acquisition cycle, which allows all of the acquired data to be used sequentially in a 4-D inversion. In addition, the response of the 4-D inversion to changes in the medium is enhanced by increasing the weight of new data as they are dynamically added to the inversion process. To improve the reliability of the inversion, the scheme uses actively varied time-regularization coefficients, adjusted according to the range of the changes in model resistivity; this range is predicted from the ratio between an independent inversion of the current data set and the historical 4-D inversion model. Numerical simulations and experiments show that this new 4-D inversion method locates and depicts rapid changes in medium resistivity with a high level of accuracy.
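The adaptive time-regularization idea can be sketched as follows: cells where an independent inversion of the current data predicts a large change get a small time-lapse penalty, so the 4-D model is free to follow the change there. This is a minimal illustration with invented variable names and a simple linear mapping, not the authors' GPU implementation.

```python
import numpy as np

def time_reg_coefficients(m_independent, m_historical, alpha_max=1.0, alpha_min=0.01):
    """Shrink the time-regularization weight where the medium is predicted
    to change strongly. The predicted change is the log-ratio between an
    independent inversion of the current data set and the historical model."""
    change = np.abs(np.log10(m_independent / m_historical))
    scaled = change / (change.max() + 1e-12)   # map change into [0, 1]
    return alpha_max - (alpha_max - alpha_min) * scaled

def time_lapse_penalty(m_new, m_ref, alpha):
    """Weighted squared difference to the reference (previous) model."""
    return np.sum(alpha * (m_new - m_ref) ** 2)

# Cells whose resistivity doubled get a much smaller coefficient than static cells.
m_hist = np.array([100.0, 100.0, 100.0])   # historical model (ohm-m)
m_ind = np.array([100.0, 200.0, 105.0])    # independent inversion of new data
alpha = time_reg_coefficients(m_ind, m_hist)
pen = time_lapse_penalty(m_ind, m_hist, alpha)
```

The penalty is then added to the data misfit in each 4-D iteration; the static cell keeps the full coefficient while the changing cell is weakly constrained to the past.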

Geophysics ◽  
2011 ◽  
Vol 76 (3) ◽  
pp. F157-F171 ◽  
Author(s):  
Michael Commer ◽  
Gregory A. Newman ◽  
Kenneth H. Williams ◽  
Susan S. Hubbard

The conductive and capacitive material properties of the subsurface can be quantified through the frequency-dependent complex resistivity. However, the routine three-dimensional (3D) interpretation of voluminous induced polarization (IP) data sets still poses a challenge due to large computational demands and solution nonuniqueness. We have developed a flexible methodology for 3D (spectral) IP data inversion. Our inversion algorithm is adapted from a frequency-domain electromagnetic (EM) inversion method primarily developed for large-scale hydrocarbon and geothermal energy exploration purposes. The method has proven to be efficient by implementing the nonlinear conjugate gradient method with hierarchical parallelism and by using an optimal finite-difference forward modeling mesh design scheme. The method allows for a large range of survey scales, providing a tool for both exploration and environmental applications. We experimented with an image focusing technique to improve the poor depth resolution of surface data sets with small survey spreads. The algorithm’s underlying forward modeling operator properly accounts for EM coupling effects; thus, traditionally used EM coupling correction procedures are not needed. The methodology was applied to both synthetic and field data. We tested the benefit of directly inverting EM coupling contaminated data using a synthetic large-scale exploration data set. Afterward, we further tested the monitoring capability of our method by inverting time-lapse data from an environmental remediation experiment near Rifle, Colorado. Similar trends observed in both our solution and another 2D inversion were in accordance with previous findings about the IP effects due to subsurface microbial activity.
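The nonlinear conjugate gradient method named in the abstract can be illustrated on a toy penalty functional phi(m) = ||Gm - d||^2 + lam*||m||^2. This is a generic Polak-Ribiere+ sketch with a simple Armijo backtracking line search, not the authors' hierarchical, parallel implementation.

```python
import numpy as np

def nlcg(f, grad, m0, n_iter=100):
    """Polak-Ribiere+ nonlinear conjugate gradients with backtracking line search."""
    m = m0.copy()
    g = grad(m)
    d = -g
    for _ in range(n_iter):
        if g @ d >= 0:              # safeguard: restart along steepest descent
            d = -g
        t, fm = 1.0, f(m)
        for _ in range(50):         # bounded Armijo backtracking
            if f(m + t * d) <= fm + 1e-4 * t * (g @ d):
                break
            t *= 0.5
        m = m + t * d
        g_new = grad(m)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # PR+ update
        d = -g_new + beta * d
        g = g_new
    return m

# Toy linearized problem: recover a 5-parameter model from 20 data.
rng = np.random.default_rng(0)
G = rng.standard_normal((20, 5))
m_true = np.arange(1.0, 6.0)
d_obs = G @ m_true
lam = 1e-6
f = lambda m: np.sum((G @ m - d_obs) ** 2) + lam * (m @ m)
grad = lambda m: 2 * G.T @ (G @ m - d_obs) + 2 * lam * m
m_est = nlcg(f, grad, np.zeros(5))
```

Because only gradients (and no large matrices) are formed, this style of solver scales to meshes where assembling the full sensitivity matrix would be prohibitive.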


Author(s):  
Agus Wibowo

Abstract: The implementation of guidance and counseling services should be based on the needs and problems of students, so that the services achieve maximum effectiveness. In reality, however, many schools implement guidance and counseling services without taking this into account, so the problems students experience are always addressed in the same way. Starting from this observation, this study examines the effectiveness of guidance and counseling services whose implementation uses instrumentation application activities and data sets as the basis for service delivery. The method used is qualitative research, with the guidance and counseling (BK) teachers and students at SMA Negeri 1 Metro as subjects. Data were collected through interviews, observation, and documentation. The results show that by utilizing instrumentation applications and data sets, the counseling services achieve a high level of effectiveness. In carrying out the services, BK teachers can identify the problems and needs experienced by students, so that the assistance provided is better targeted and students' problems can be resolved optimally. Keywords: Guidance and Counseling, Instrumentation Applications, Data Sets


2019 ◽  
Vol 491 (4) ◽  
pp. 5238-5247 ◽  
Author(s):  
X Saad-Olivera ◽  
C F Martinez ◽  
A Costa de Souza ◽  
F Roig ◽  
D Nesvorný

ABSTRACT We characterize the radii and masses of the star and planets in the Kepler-59 system, as well as their orbital parameters. The stellar parameters are determined through a standard spectroscopic analysis, resulting in a mass of $1.359\pm 0.155\, \mathrm{M}_\odot$ and a radius of $1.367\pm 0.078\, \mathrm{R}_\odot$. The obtained planetary radii are $1.5\pm 0.1\, R_\oplus$ for the inner and $2.2\pm 0.1\, R_\oplus$ for the outer planet. The orbital parameters and the planetary masses are determined by the inversion of Transit Timing Variations (TTV) signals. We consider two different data sets: one provided by Holczer et al. (2016), with TTVs only for Kepler-59c, and the other provided by Rowe et al. (2015), with TTVs for both planets. The inversion method applies an algorithm of Bayesian inference (MultiNest) combined with an efficient N-body integrator (Swift). For each data set, we found two possible solutions, both having the same probability according to their corresponding Bayesian evidence. All four solutions appear to be indistinguishable within their 2-σ uncertainties. However, statistical analyses show that the solutions from the Rowe et al. (2015) data set provide a better characterization. The first solution infers masses of $5.3_{-2.1}^{+4.0}~M_{\mathrm{\oplus }}$ and $4.6_{-2.0}^{+3.6}~M_{\mathrm{\oplus }}$ for the inner and outer planet, respectively, while the second solution gives masses of $3.0^{+0.8}_{-0.8}~M_{\mathrm{\oplus }}$ and $2.6^{+0.9}_{-0.8}~M_{\mathrm{\oplus }}$. These values point to a system with an inner super-Earth and an outer mini-Neptune. A dynamical study shows that the planets have almost co-planar orbits with small eccentricities (e < 0.1), close to the 3:2 mean motion resonance. A stability analysis indicates that this configuration is stable over millions of years of evolution.
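Before any Bayesian inversion, a TTV signal is simply the residual of observed transit mid-times relative to a best-fit linear ephemeris T_n = T0 + n*P. The sketch below shows only this extraction step on synthetic times with illustrative numbers, not the Kepler-59 measurements or the MultiNest/Swift inversion itself.

```python
import numpy as np

def ttv_signal(epochs, times):
    """Least-squares linear ephemeris fit; returns (P, T0, residuals in days)."""
    A = np.vstack([epochs, np.ones_like(epochs)]).T
    (P, T0), *_ = np.linalg.lstsq(A, times, rcond=None)
    return P, T0, times - (T0 + P * epochs)

epochs = np.arange(20, dtype=float)
true_P, true_T0 = 17.98, 133.5   # days (illustrative values)
# Small sinusoidal perturbation mimicking a near-resonant companion:
times = true_T0 + true_P * epochs + 0.01 * np.sin(2 * np.pi * epochs / 7.0)
P, T0, ttv = ttv_signal(epochs, times)
```

The inversion then treats these residuals as data and searches for the planetary masses and orbits whose N-body-integrated transit times reproduce them.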


Author(s):  
Arminée Kazanjian ◽  
Kathryn Friesen

Abstract: In order to explore the diffusion of the selected technologies in one Canadian province (British Columbia), two administrative data sets were analyzed. The data included over 40 million payment records for each fiscal year on medical services provided to British Columbia residents (2,968,769 in 1988) and information on physical facilities, services, and personnel from 138 hospitals in the province. Three specific time periods were examined in each data set, starting with 1979–80 and ending with the most current data available at the time. The detailed retrospective analysis of laboratory and imaging technologies provides historical data in three areas of interest: (a) patterns of diffusion and volume of utilization, (b) institutional profile, and (c) provider profile. The framework for the analysis focused, where possible, on the examination of determinants of diffusion that may be amenable to policy influence.


2007 ◽  
Vol 73 ◽  
pp. 169-190 ◽  
Author(s):  
Mandy Jay ◽  
Michael P. Richards

This paper presents the results of new research into British Iron Age diet. Specifically, it summarises the existing evidence and compares this with new evidence obtained from stable isotope analysis. The isotope data come from both humans and animals from ten British middle Iron Age sites, from four locations in East Yorkshire, East Lothian, Hampshire, and Cornwall. These represent the only significant data-set of comparative humans (n = 138) and animals (n = 212) for this period currently available for the UK. They are discussed here alongside other evidence for diet during the middle Iron Age in Britain. In particular, the question of whether fish, or other aquatic foods, were a major dietary resource during this period is examined. The isotopic data suggest similar dietary protein consumption patterns across the groups, both within local populations and between them, although outliers do exist which may indicate mobile individuals moving into the sites. The diet generally includes a high level of animal protein, with little indication of the use of marine resources at any isotopically distinguishable level, even when the sites are situated directly on the coast. The nitrogen isotopic values also indicate absolute variation across these locations which is indicative of environmental background differences rather than differential consumption patterns, and this is discussed in the context of the difficulty of interpreting isotopic data without a complete understanding of the 'baseline' values for any particular time and place. This reinforces the need for significant numbers of contemporaneous animals to be analysed from the same locations when interpreting human data-sets.


Geophysics ◽  
2000 ◽  
Vol 65 (3) ◽  
pp. 791-803 ◽  
Author(s):  
Weerachai Siripunvaraporn ◽  
Gary Egbert

There are currently three types of algorithms in use for regularized 2-D inversion of magnetotelluric (MT) data. All seek to minimize some functional which penalizes data misfit and model structure. With the most straightforward approach (exemplified by OCCAM), the minimization is accomplished using some variant on a linearized Gauss-Newton approach. A second approach is to use a descent method [e.g., nonlinear conjugate gradients (NLCG)] to avoid the expense of constructing large matrices (e.g., the sensitivity matrix). Finally, approximate methods [e.g., rapid relaxation inversion (RRI)] have been developed which use cheaply computed approximations to the sensitivity matrix to search for a minimum of the penalty functional. Approximate approaches can be very fast, but in practice often fail to converge without significant expert user intervention. On the other hand, the more straightforward methods can be prohibitively expensive to use for even moderate-size data sets. Here, we present a new and much more efficient variant on the OCCAM scheme. By expressing the solution as a linear combination of rows of the sensitivity matrix smoothed by the model covariance (the "representers"), we transform the linearized inverse problem from the M-dimensional model space to the N-dimensional data space. This method is referred to as DASOCC, the data space OCCAM's inversion. Since generally N ≪ M, this transformation by itself can result in significant computational saving. More importantly, the data-space formulation suggests a simple approximate method for constructing the inverse solution. Since MT data are smooth and "redundant," a subset of the representers is typically sufficient to form the model without significant loss of detail. Computations required for constructing sensitivities and the size of matrices to be inverted can be significantly reduced by this approximation. We refer to this inversion as REBOCC, the reduced basis OCCAM's inversion.
Numerical experiments on synthetic and real data sets with REBOCC, DASOCC, NLCG, RRI, and OCCAM show that REBOCC is faster than both DASOCC and NLCG, which are comparable in speed. All of these methods are significantly faster than OCCAM, but are not competitive with RRI. However, even with a simple synthetic data set, we could not always get RRI to converge to a reasonable solution. The basic idea behind REBOCC should be more broadly applicable, in particular to 3-D MT inversion.
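The data-space transformation at the heart of DASOCC can be demonstrated on a toy linearized step: writing the model as a combination of smoothed representers, m = Cm Jᵀ b, turns the M x M normal equations into an N x N system, and the push-through identity guarantees both give the same model. This is a generic sketch with an identity model covariance, not the REBOCC code.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, lam = 15, 400, 0.5           # N data, M model cells, N << M
J = rng.standard_normal((N, M))    # sensitivity matrix
Cm = np.eye(M)                     # model covariance (identity for simplicity)
d = rng.standard_normal(N)

# Model-space solution: solve the M x M system (J^T J + lam Cm^-1) m = J^T d.
m_model_space = np.linalg.solve(J.T @ J + lam * np.linalg.inv(Cm), J.T @ d)

# Data-space solution: solve only an N x N system for the representer
# coefficients b, then expand m = Cm J^T b.
b = np.linalg.solve(J @ Cm @ J.T + lam * np.eye(N), d)
m_data_space = Cm @ J.T @ b
```

The REBOCC approximation goes one step further and keeps only a subset of the N representers, shrinking the system below N x N with little loss of model detail.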


2011 ◽  
Vol 29 (7) ◽  
pp. 1317-1330 ◽  
Author(s):  
I. Fiorucci ◽  
G. Muscari ◽  
R. L. de Zafra

Abstract. The Ground-Based Millimeter-wave Spectrometer (GBMS) was designed and built at the State University of New York at Stony Brook in the early 1990s and since then has carried out many measurement campaigns of stratospheric O3, HNO3, CO and N2O at polar and mid-latitudes. Its HNO3 data set shed light on HNO3 annual cycles over the Antarctic continent and contributed to the validation of both generations of the satellite-based JPL Microwave Limb Sounder (MLS). Following the increasing need for long-term data sets of stratospheric constituents, we resolved to establish a long-term GBMS observation site at the Arctic station of Thule (76.5° N, 68.8° W), Greenland, beginning in January 2009, in order to track the long- and short-term interactions between the changing climate and the seasonal processes tied to the ozone depletion phenomenon. Furthermore, we updated the retrieval algorithm, adapting the Optimal Estimation (OE) method to GBMS spectral data in order to conform to the standard of the Network for the Detection of Atmospheric Composition Change (NDACC) microwave group, and to provide our retrievals with a set of averaging kernels that allow more straightforward comparisons with other data sets. The new OE algorithm was applied to GBMS HNO3 data sets from 1993 South Pole observations to date, in order to produce HNO3 version 2 (v2) profiles. A sample of results obtained at Antarctic latitudes in fall and winter and at mid-latitudes is shown here. In most conditions, v2 inversions show a sensitivity (i.e., sum of column elements of the averaging kernel matrix) of 100 ± 20 % from 20 to 45 km altitude, with somewhat worse (better) sensitivity in the Antarctic winter lower (upper) stratosphere. The 1σ uncertainty on HNO3 v2 mixing ratio vertical profiles depends on altitude and is estimated at ~15 % or 0.3 ppbv, whichever is larger.
Comparisons of v2 with former (v1) GBMS HNO3 vertical profiles, obtained employing the constrained matrix inversion method, show that v1 and v2 profiles are overall consistent. The main difference is at the HNO3 mixing ratio maximum in the 20–25 km altitude range, which is smaller in v2 than v1 profiles by up to 2 ppbv at mid-latitudes and during the Antarctic fall. This difference suggests a better agreement of GBMS HNO3 v2 profiles with both UARS/MLS and EOS Aura/MLS HNO3 data than previous v1 profiles.
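The sensitivity diagnostic quoted above (column sums of the averaging kernel matrix near 100 %) follows from standard Optimal Estimation algebra: A = (Kᵀ Se⁻¹ K + Sa⁻¹)⁻¹ Kᵀ Se⁻¹ K. The sketch below uses generic OE matrices with illustrative sizes and noise levels, not the GBMS retrieval itself.

```python
import numpy as np

rng = np.random.default_rng(2)
n_alt, n_chan = 30, 60
K = 0.1 * rng.standard_normal((n_chan, n_alt))  # Jacobian (weighting functions)
Se_inv = np.eye(n_chan) / 0.05**2               # inverse measurement-noise covariance
Sa_inv = np.eye(n_alt) / 1.0**2                 # inverse a priori covariance

# Gain matrix G and averaging kernel matrix A = G K.
G = np.linalg.solve(K.T @ Se_inv @ K + Sa_inv, K.T @ Se_inv)
A = G @ K

# Sensitivity per retrieval level: sum of column elements of A, as in the
# abstract; values near 1 mean the retrieval is dominated by the measurement
# rather than the a priori.
sensitivity = A.sum(axis=0)
```

Where the measurement constrains the profile well, each column of A sums to roughly one; a degraded sensitivity (as in the Antarctic winter lower stratosphere) shows up as column sums pulled away from unity toward the a priori.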


2017 ◽  
Author(s):  
Peter Berg ◽  
Chantal Donnelly ◽  
David Gustafsson

Abstract. Updating climatological forcing data to near-current dates is compelling for impact modelling, e.g. to update model simulations or to simulate recent extreme events. Hydrological simulations are generally sensitive to bias in the meteorological forcing data, especially relative to the data used for calibrating the model. The lack of daily-resolution data at a global scale has previously been addressed by adjusting re-analysis data towards global gridded observations. However, existing data sets of this type have been produced for a fixed past time period, determined by the main global observational data sets. Long delays between updates of these data sets leave a data gap between the present and the end of the data set. Further, hydrological forecasts require initialisation of the current state of the snow, soil, lake (and sometimes river) storage. This is normally achieved by forcing the model with observed meteorological conditions for an extended spin-up period, typically at a daily time step, to calculate the initial state. Here, we present a method named GFD (Global Forcing Data) that combines different data sets to produce near real-time updated hydrological forcing data compatible with the products covering the climatological period. GFD closely resembles the established WFDEI method (Weedon et al., 2014), but uses updated climatological observations, and for the near real-time period it uses interim products generated with similar methods. This allows GFD to produce updated forcing data, including the previous calendar month, around the 10th of each month. We present the GFD method and the different data sets produced, which are evaluated against the WFDEI data set, as well as with hydrological simulations using the HYPE model over Europe and the Arctic region.
We show that GFD performs similarly to WFDEI and that the updating significantly reduces the bias of the reanalysis data, although less effectively for the last two months of the updating cycle. When GFD is extended to the current day with operational meteorological forecasts for real-time updates, a large drift appears in the hydrological simulations due to the bias of the meteorological forecasting model.
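The core WFDEI-style adjustment can be sketched in a few lines: daily reanalysis values are rescaled so that each month's mean matches the gridded monthly observation. This is an illustrative sketch for a flux-like variable such as precipitation; the real method applies different corrections per variable (e.g. additive shifts for temperature), and the numbers below are invented.

```python
import numpy as np

def scale_month(daily_reanalysis, monthly_obs_mean):
    """Multiplicative bias correction: rescale one month of daily reanalysis
    values so their mean equals the observed monthly mean."""
    model_mean = daily_reanalysis.mean()
    if model_mean == 0.0:
        return daily_reanalysis.copy()   # dry month in the model: nothing to rescale
    return daily_reanalysis * (monthly_obs_mean / model_mean)

rng = np.random.default_rng(3)
daily = rng.gamma(shape=0.8, scale=3.0, size=30)   # 30 days of precipitation (mm/day)
obs_mean = 2.0                                     # observed monthly mean (mm/day)
adjusted = scale_month(daily, obs_mean)
```

Because only the monthly mean is constrained, the day-to-day variability of the reanalysis is preserved, which is what makes the product usable for daily-time-step hydrological spin-up.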


2021 ◽  
Vol 8 ◽  
Author(s):  
Amelia Moura ◽  
Brian Beck ◽  
Renee Duffey ◽  
Lucas McEachron ◽  
Margaret Miller ◽  
...  

In the past decade, the field of coral reef restoration has experienced a proliferation of data detailing the source, genetics, and performance of coral strains used in research and restoration. Resource managers track the multitude of permits, species, restoration locations, and performance across multiple stakeholders while researchers generate large data sets and data pipelines detailing the genetic, genomic, and phenotypic variants of corals. Restoration practitioners, in turn, maintain records on fragment collection, genet performance, outplanting location and survivorship. While each data set is important in its own right, collectively they can provide deeper insights into coral biology and better guide coral restoration endeavors – unfortunately, current data sets are siloed with limited ability to cross-mine information for deeper insights and hypothesis testing. Herein we present the Coral Sample Registry (CSR), an online resource that establishes the first step in integrating diverse coral restoration data sets. Developed in collaboration with academia, management agencies, and restoration practitioners in the South Florida area, the CSR centralizes information on sample collection events by issuing a unique accession number to each entry. Accession numbers can then be incorporated into existing and future data structures. Each accession number is unique and corresponds to a specific collection event of coral tissue, whether for research, archiving, or restoration purposes. As such the accession number can serve as the key to unlock the diversity of information related to that sample’s provenance and characteristics across any and all data structures that include the accession number field. The CSR is open-source and freely available to users, designed to be suitable for all coral species in all geographic regions. 
Our goal is that this resource will be adopted by researchers, restoration practitioners, and managers to efficiently track coral samples through all data structures and thus enable the unlocking of a broader array of insights.
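The accession-number idea reduces to a simple data structure: one registry that issues a unique identifier per collection event, which every other database can then carry as a foreign key. The sketch below is hypothetical (the class and field names are invented, not the CSR schema).

```python
import itertools
from dataclasses import dataclass

@dataclass
class CollectionEvent:
    accession: str
    species: str
    site: str
    purpose: str  # e.g. "research", "archiving", or "restoration"

class SampleRegistry:
    """Issues one unique accession number per coral collection event."""

    def __init__(self, prefix="CSR"):
        self.prefix = prefix
        self._counter = itertools.count(1)
        self._events = {}

    def register(self, species, site, purpose):
        """Record a collection event and return its new accession number."""
        acc = f"{self.prefix}-{next(self._counter):06d}"
        self._events[acc] = CollectionEvent(acc, species, site, purpose)
        return acc

    def lookup(self, accession):
        """Resolve an accession number back to its collection event."""
        return self._events[accession]

reg = SampleRegistry()
a1 = reg.register("Acropora cervicornis", "Key Largo", "restoration")
a2 = reg.register("Orbicella faveolata", "Dry Tortugas", "research")
```

Genetic, permitting, and outplanting data sets that each store the accession field can then be joined on it, which is exactly the cross-mining the registry is meant to enable.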


Geophysics ◽  
2019 ◽  
Vol 85 (1) ◽  
pp. M1-M13 ◽  
Author(s):  
Yichuan Wang ◽  
Igor B. Morozov

For seismic monitoring of injected fluids during enhanced oil recovery or geologic CO2 sequestration, it is useful to measure time-lapse (TL) variations of acoustic impedance (AI). AI gives direct connections to the mechanical and fluid-related properties of the reservoir or CO2 storage site; however, evaluation of its subtle TL variations is complicated by the low-frequency and scaling uncertainties of this attribute. We have developed three enhancements of TL AI analysis to resolve these issues. First, following waveform calibration (cross-equalization) of the monitor seismic data sets to the baseline one, the reflectivity difference was evaluated from the attributes measured during the calibration. Second, a robust approach to AI inversion was applied to the baseline data set, based on calibration of the records by using the well-log data and spatially variant stacking and interval velocities derived during seismic data processing. This inversion method is straightforward and does not require subjective selections of parameterization and regularization schemes. Unlike joint or statistical inverse approaches, this method does not require prior models and produces accurate fitting of the observed reflectivity. Third, the TL AI difference is obtained directly from the baseline AI and reflectivity difference but without the uncertainty-prone subtraction of AI volumes from different seismic vintages. The above approaches are applied to TL data sets from the Weyburn CO2 sequestration project in southern Saskatchewan, Canada. High-quality baseline and TL AI-difference volumes are obtained. TL variations within the reservoir zone are observed in the calibration time-shift, reflectivity-difference, and AI-difference images, which are interpreted as being related to the CO2 injection.
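The link between reflectivity and acoustic impedance that such workflows exploit is the recursion AI_{i+1} = AI_i (1 + r_i) / (1 - r_i), where r_i = (AI_{i+1} - AI_i) / (AI_{i+1} + AI_i). The toy sketch below shows this recursion and how a TL AI difference follows from the baseline AI plus the monitor reflectivity, without subtracting two independently inverted AI volumes; the layer values are invented, not the Weyburn data.

```python
import numpy as np

def impedance_from_reflectivity(ai0, r):
    """Integrate interface reflectivity downward from a known top-layer AI."""
    ai = [ai0]
    for ri in r:
        ai.append(ai[-1] * (1.0 + ri) / (1.0 - ri))
    return np.array(ai)

# Baseline: four layers of acoustic impedance (arbitrary units).
ai_true = np.array([3000.0, 3300.0, 3100.0, 3500.0])
r_base = np.diff(ai_true) / (ai_true[1:] + ai_true[:-1])   # interface reflectivity
ai_rec = impedance_from_reflectivity(ai_true[0], r_base)   # reconstructs ai_true

# Monitor survey: a fluid change lowers AI in one layer only.
ai_mon = ai_true.copy()
ai_mon[2] = 2900.0
r_mon = np.diff(ai_mon) / (ai_mon[1:] + ai_mon[:-1])

# TL AI difference built from the baseline AI and the monitor reflectivity.
dai = impedance_from_reflectivity(ai_true[0], r_mon) - ai_rec
```

The difference is localized to the perturbed layer, which is the behaviour the cross-equalized reflectivity-difference images are designed to capture.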

