Past, present, and future of geophysical inversion—A new millennium analysis

Geophysics ◽  
2001 ◽  
Vol 66 (1) ◽  
pp. 21-24 ◽  
Author(s):  
Sven Treitel ◽  
Larry Lines

Geophysicists have been working on solutions to the inverse problem since the dawn of our profession. An interpreter infers subsurface properties on the basis of observed data sets, such as seismograms or potential field recordings. A rough model of the process that produces the recorded data resides within the interpreter’s brain; the interpreter then uses this rough mental model to reconstruct subsurface properties from the observed data. In modern parlance, the inference of subsurface properties from observed data is identified with the solution of a so‐called “inverse problem.” In contrast, the “forward problem” consists of the determination of the data that would be recorded for a given subsurface configuration and under the assumption that given laws of physics hold. Until the early 1960s, geophysical inversion was carried out almost exclusively within the geophysicist’s brain. Since then, we have learned to make the geophysical inversion process much more quantitative and versatile by recourse to a growing body of theory, along with the computer power to reduce this theory to practice. We should point out the obvious, however, namely that no theory and no computer algorithm can presumably replace the ultimate arbiter who decides whether the results of an inversion make sense or nonsense: the geophysical interpreter. Perhaps our descendants writing a future third Millennium review article can report that a machine has been solving the inverse problem without a human arbiter. For the time being, however, what might be called “unsupervised geophysical inversion” remains but a dream.

2018 ◽  
Vol 26 (3) ◽  
pp. 423-452 ◽  
Author(s):  
H. Thomas Banks ◽  
Michele L. Joyner

Abstract In this review we discuss methodology to ascertain the amount of information in given data sets with respect to the determination of model parameters with desired levels of uncertainty. We do this in the context of least-squares-based inverse problem formulations (ordinary, weighted, iteratively reweighted or "generalized", etc.). The ideas are illustrated with several examples of interest in the biological and environmental sciences.
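As a minimal illustration of the ordinary versus weighted least squares formulations mentioned above (a sketch assuming a hypothetical linear model with heteroscedastic noise, not an example drawn from the review itself):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear model y = 2x + 1 observed with noise whose standard
# deviation grows with x (heteroscedastic data).
x = np.linspace(0.0, 10.0, 50)
sigma = 0.1 + 0.05 * x
y = 2.0 * x + 1.0 + rng.normal(0.0, sigma)

A = np.column_stack([x, np.ones_like(x)])

# Ordinary least squares: all observations weighted equally.
theta_ols, *_ = np.linalg.lstsq(A, y, rcond=None)

# Weighted least squares: each row scaled by 1/sigma_i, so noisier points
# carry less information about the parameters.
Aw = A / sigma[:, None]
theta_wls, *_ = np.linalg.lstsq(Aw, y / sigma, rcond=None)

# Parameter covariance under the weighted formulation; its diagonal gives
# the uncertainty levels attainable from this data set.
cov_wls = np.linalg.inv(Aw.T @ Aw)
std_err = np.sqrt(np.diag(cov_wls))
```

Comparing `std_err` across candidate data sets or experimental designs is one simple way to quantify how much information the data carry about the model parameters.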


Author(s):  
Douglas L. Dorset

The quantitative use of electron diffraction intensity data for the determination of crystal structures represents the pioneering achievement in the electron crystallography of organic molecules, an effort largely begun by B. K. Vainshtein and his co-workers. However, despite numerous representative structure analyses yielding results consistent with X-ray determinations, this entire effort was viewed with considerable mistrust by many crystallographers. This was no doubt due to the rather high crystallographic R-factors reported for some structures and, more importantly, the failure to convince many skeptics that the measured intensity data were adequate for ab initio structure determinations. We have recently demonstrated the utility of these data sets for structure analyses by direct phase determination based on the probabilistic estimate of three- and four-phase structure invariant sums. Examples include the structure of diketopiperazine using Vainshtein's 3D data, a similar 3D analysis of the room-temperature structure of thiourea, and a zonal determination of the urea structure, the latter also based on data collected by the Moscow group.


2021 ◽  
Vol 2021 (2) ◽  
Author(s):  
DianYu Liu ◽  
ChuanLe Sun ◽  
Jun Gao

Abstract The possible non-standard interactions (NSIs) of neutrinos with matter play an important role in the global determination of neutrino properties. In our study we select various data sets from LHC measurements at 13 TeV with integrated luminosities of 35–139 fb−1, including production of a single jet, photon, W/Z boson, or charged lepton accompanied by large missing transverse momentum. We derive constraints on neutral-current NSIs with quarks imposed by the different data sets in a framework of either effective operators or simplified Z′ models. We use theoretical predictions of NSI-induced production at next-to-leading order in QCD matched with parton showering, which stabilizes the theory predictions and results in more robust constraints. In a simplified Z′ model we obtain 95% CLs upper limits on the conventional NSI strength ϵ of 0.042 and 0.0028 for a Z′ mass of 0.2 and 2 TeV, respectively. We also discuss possible improvements from future runs of the LHC with higher luminosities.


Analysis ◽  
2020 ◽  
Vol 40 (1) ◽  
pp. 39-45
Author(s):  
Yasser Khalili ◽  
Dumitru Baleanu

Abstract In the present work, interior spectral data are used to investigate the inverse problem for a diffusion operator with an impulse on the half line. We show that the potential functions q0(x) and q1(x) can be uniquely established by taking a set of values of the eigenfunctions at some internal point together with one spectrum.


2004 ◽  
Vol 67 (9) ◽  
pp. 2024-2032 ◽  
Author(s):  
FUMIKO KASUGA ◽  
MASAMITSU HIROTA ◽  
MASAMICHI WADA ◽  
TOSHIHIKO YUNOKAWA ◽  
HAJIME TOYOFUKU ◽  
...  

The Ministry of Health, Labor and Welfare (former MHW) of Japan issued a Directive in 1997 advising restaurants and caterers to freeze portions of both raw food and cooked dishes for at least 2 weeks. This system has been useful for determining vehicle foods at outbreaks. Enumeration of bacteria in samples of stored food provides data on pathogen concentrations in the implicated food. Data on Salmonella concentrations in vehicle foods associated with salmonellosis outbreaks were collected in Japan between 1989 and 1998. The 39 outbreaks that occurred during this period were categorized by the settings where the outbreaks took place, and epidemiological data from each outbreak were summarized. Characteristics of the outbreak groups were analyzed and compared. The effect of the new food-storage system on the determination of bacterial concentration was evaluated, and freezing and nonfreezing conditions prior to microbial examination were compared with respect to the dose-response relationship. Data from outbreaks in which the implicated foods had been kept frozen suggested an apparent correlation between the Salmonella dose ingested and the disease rate. Combined with the results of epidemiological investigation, such quantitative data on the ingested pathogen could provide complete dose-response data sets.


2008 ◽  
Vol 44-46 ◽  
pp. 871-878 ◽  
Author(s):  
Chu Yang Luo ◽  
Jun Jiang Xiong ◽  
R.A. Shenoi

This paper outlines a new technique to address the paucity of data in determining fatigue life and performance based on reliability concepts. Two new randomized models are presented for estimating the safe life and the p-S-N curve, using standard statistical analysis procedures while dealing with small samples of incomplete data. Confidence-level formulations for the safe life and the p-S-N curve are also given, and the concepts are then applied to the determination of both. Two sets of fatigue tests, for the safe life and for the p-S-N curve, are conducted to validate the presented method, demonstrating the practical use of the proposed technique.
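A rough numerical sketch of the kind of p-S-N estimation described above (the stress levels, scatter, and sample sizes are hypothetical, and a plain normal quantile stands in for the paper's small-sample confidence-level formulation):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical fatigue data: 5 specimens at each of 4 stress levels (MPa),
# with log-life scattering around a Basquin-type line log10(N) = a + b*log10(S).
S = np.repeat([200.0, 250.0, 300.0, 350.0], 5)
logN = 20.0 - 5.0 * np.log10(S) + rng.normal(0.0, 0.1, S.size)

# Fit the median S-N curve by least squares.
A = np.column_stack([np.ones_like(S), np.log10(S)])
coef, *_ = np.linalg.lstsq(A, logN, rcond=None)
resid = logN - A @ coef
s = np.sqrt(resid @ resid / (len(S) - 2))  # residual scatter in log-life

# p-S-N curve for 95% survival probability: shift the median curve down by
# z_p residual standard deviations (z_0.95 ≈ 1.645; rigorous small-sample
# work replaces this with a one-sided tolerance factor at a stated
# confidence level, as the paper does).
z_p = 1.645
logN_p95 = A @ coef - z_p * s
```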


2012 ◽  
Vol 21 (05) ◽  
pp. 1250048
Author(s):  
L. IORIO

We analytically work out the long-term orbital perturbations induced by the leading-order perturbing potential arising from the local modification of Newton's inverse-square law due to a topology ℝ² × 𝕊¹ with a compactified dimension of radius R, recently proposed by Floratos and Leontaris. We restrict neither to any specific spatial direction of the asymmetry axis nor to particular orbital configurations of the test particle; thus, our results are quite general. Nonvanishing long-term variations occur for all the usual osculating Keplerian orbital elements, apart from the semimajor axis, which is left unaffected. By using recent improvements in the determination of the orbital motion of Saturn from Cassini data, we preliminarily infer R ≳ 4-6 kau. As a complementary approach, the putative topological effects should be explicitly modeled and solved for with a modified version of the ephemerides' dynamical models, with which the same data sets should be reprocessed.


Geophysics ◽  
2012 ◽  
Vol 77 (3) ◽  
pp. A9-A12 ◽  
Author(s):  
Kees Wapenaar ◽  
Joost van der Neut ◽  
Jan Thorbecke

Deblending of simultaneous-source data is usually considered to be an underdetermined inverse problem, which can be solved by an iterative procedure, assuming additional constraints like sparsity and coherency. By exploiting the fact that seismic data are spatially band-limited, deblending of densely sampled sources can be carried out as a direct inversion process without imposing these constraints. We applied the method to numerically modeled data, and it suppressed the crosstalk well when the blended data consisted of responses to adjacent, densely sampled sources.
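The band-limitation argument can be illustrated with a 1-D toy problem (entirely hypothetical, and far simpler than the actual seismic blending operator): an underdetermined sampling problem becomes directly invertible once the unknown is restricted to a low-wavenumber basis.

```python
import numpy as np

rng = np.random.default_rng(1)

n, k = 64, 8  # dense grid size; number of retained low-wavenumber modes

# Band-limited "model": a combination of the k lowest cosine modes.
t = np.arange(n)
F = np.cos(2.0 * np.pi * np.outer(t, np.arange(k)) / n)  # n x k basis
m_true = F @ rng.normal(size=k)

# Underdetermined measurement: only every 4th sample is observed
# (16 equations for 64 unknowns).
B = np.eye(n)[::4]
d = B @ m_true

# Direct inversion: because the model lives in a k-dimensional band-limited
# subspace, B @ F is tall and has full column rank, so no sparsity or
# coherency constraints are needed.
c_est, *_ = np.linalg.lstsq(B @ F, d, rcond=None)
m_est = F @ c_est
```

The recovered `m_est` matches `m_true` exactly (up to round-off), mirroring the point that spatial band-limitation turns an apparently underdetermined problem into a well-posed one.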


Author(s):  
Sarasij Das ◽  
Nagendra Rao P S

This paper is the outcome of an attempt to mine recorded power system operational data in order to gain new insight into practical power system behavior. Data mining, in general, is essentially the discovery of new relations between data sets by analyzing well-known or recorded data. In this effort we make use of the recorded data of the Southern regional grid of India. Some interesting relations at the total system level between frequency, total MW/MVAr generation, and average system voltage have been obtained. The aim of this work is to highlight the potential of data mining for power system applications, as well as some of the concerns that need to be addressed to make such efforts more useful.
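A minimal sketch of the kind of system-level relation such mining can surface (the samples below are synthetic stand-ins, not the Southern regional grid records): under a simple governor-droop assumption, frequency falls as total generation rises, and a correlation coefficient makes the relation explicit.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "recorded" operating points: total MW generation and system
# frequency, with a droop-like dependence plus measurement noise.
gen_mw = rng.uniform(3000.0, 4000.0, 500)
freq_hz = 50.0 - 0.0004 * (gen_mw - 3500.0) + rng.normal(0.0, 0.01, 500)

# Pearson correlation between total generation and frequency.
r = np.corrcoef(gen_mw, freq_hz)[0, 1]
```

On real grid records one would compute the same statistic per time window, and against MVAr totals and average voltage as well, before reading any operational meaning into it.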

