Model-based redatuming of seismic data: An inverse-filter approach

Geophysics
2018
Vol 83 (2)
pp. Q1-Q13
Author(s):  
Thomas Planès ◽  
Roel Snieder ◽  
Satyan Singh

Standard model-based redatuming techniques allow focusing of the direct waves at the new datum, but the focus can be degraded by surface multiples and internal multiples in the overburden. We demonstrate that if the medium above the redatuming level is known, these multiples can be handled correctly. We compute the exact focusing functions, free of multiples, using an inverse-filter approach. These focusing functions create downward-radiating and upward-radiating virtual sources at the new datum. The surface responses to these virtual sources are then used to compute the desired redatumed data set through multidimensional deconvolution. The redatumed data set corresponds to a virtual acquisition made at the new datum, from which the imprint of the overburden is completely removed. We test the technique on 2D acoustic synthetic examples corresponding to a seismic exploration context and an acoustic nondestructive-testing context.
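As a rough illustration of the multidimensional-deconvolution step, the following sketch (Python/NumPy; the array layout, variable names, and damping factor are our assumptions, not the authors' code) solves U = R D for the datum response R, one frequency slice at a time, by damped least squares.

    import numpy as np

    def mdd_damped(U, D, eps=1e-3):
        """Per-frequency multidimensional deconvolution: solve U = R D for R.
        U, D: (n_freq, n_rec, n_src) up- and downgoing wavefields at the datum."""
        n_freq, n_rec, _ = U.shape
        R = np.zeros((n_freq, n_rec, n_rec), dtype=complex)
        for k in range(n_freq):
            Dk = D[k]
            G = Dk @ Dk.conj().T                   # normal operator D D^H
            damp = eps * np.trace(G).real / n_rec  # damping scaled to the data
            R[k] = U[k] @ Dk.conj().T @ np.linalg.inv(G + damp * np.eye(n_rec))
        return R

In practice the choice of damping and the handling of evanescent energy matter greatly; the sketch only shows the algebra of the deconvolution step.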

Geophysics
1984
Vol 49 (8)
pp. 1369-1373
Author(s):  
Q. Bristow ◽  
J. G. Conaway ◽  
P. G. Killeen

The application of digital inverse-filter deconvolution techniques to seismic data has been routine for many years. More recently, these techniques have been extended to natural gamma-ray logging in order to improve the spatial resolution of the recorded logs (Czubek, 1971; Conaway and Killeen, 1978a, b). Early work in this field (Scott, 1963) involved an iterative procedure that required repeated processing of an entire log data set. Such a technique does not lend itself to continuous on-line deconvolution of a log while the logging operation is in progress. The inverse digital filter approach, by contrast, is particularly well suited for implementation in a computer-based borehole logging data acquisition system. Such a system had been developed at the Geological Survey of Canada (G.S.C.) by 1978 and was described by Bristow and Killeen (1978) and Bristow (1977, 1979).
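A minimal sketch of the on-line use the authors emphasize: a short FIR inverse filter applied sample by sample as the log is acquired, so no repeated processing of the full log is needed. The three-point coefficients below are hypothetical, not the published GSC filter.

    import numpy as np

    def online_deconvolve(samples, h_inv):
        """Apply a short FIR inverse filter one sample at a time (on-line use)."""
        buf = np.zeros(len(h_inv))
        out = np.empty(len(samples))
        for i, x in enumerate(samples):
            buf[1:] = buf[:-1]        # shift the delay line
            buf[0] = x
            out[i] = buf @ h_inv      # running inner product = convolution
        return out

    # hypothetical sharpening filter: unit gain at DC, negative side lobes
    h_inv = np.array([-0.25, 1.5, -0.25])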


2021
Author(s):  
Junjie Shi ◽  
Jiang Bian ◽  
Jakob Richter ◽  
Kuan-Hsun Chen ◽  
Jörg Rahnenführer ◽  
...  

Abstract. The predictive performance of a machine learning model depends strongly on the corresponding hyper-parameter setting, so hyper-parameter tuning is often indispensable. Normally such tuning requires the dedicated machine learning model to be trained and evaluated on centralized data to obtain a performance estimate. However, in a distributed machine learning scenario, it is not always possible to collect all the data from all nodes due to privacy concerns or storage limitations. Moreover, if data have to be transferred through low-bandwidth connections, the transfer itself reduces the time available for tuning. Model-Based Optimization (MBO) is a state-of-the-art method for tuning hyper-parameters, but its application to distributed machine learning models or federated learning has received little research attention. This work proposes a framework, MODES, that allows MBO to be deployed on resource-constrained distributed embedded systems. Each node trains an individual model based on its local data, and the goal is to optimize the combined prediction accuracy. The framework offers two optimization modes: (1) MODES-B considers the whole ensemble as a single black box and optimizes the hyper-parameters of each individual model jointly, and (2) MODES-I treats all models as clones of the same black box, which allows it to efficiently parallelize the optimization in a distributed setting. We evaluate MODES by optimizing the hyper-parameters of a random forest and a multi-layer perceptron. The experimental results demonstrate that, with an improvement in mean accuracy (MODES-B), run-time efficiency (MODES-I), and statistical stability for both modes, MODES outperforms the baseline, i.e., tuning with MBO on each node individually using its local sub-data set.
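For readers unfamiliar with MBO, the hedged sketch below shows a single iteration: fit a surrogate to the hyper-parameter settings evaluated so far, then propose the candidate with the highest expected improvement. The surrogate choice, function names, and candidate handling are our assumptions, not the MODES implementation.

    import numpy as np
    from scipy.stats import norm
    from sklearn.ensemble import RandomForestRegressor

    def propose_next(X_obs, y_obs, candidates):
        """One MBO step: surrogate fit + expected-improvement maximization."""
        forest = RandomForestRegressor(n_estimators=200).fit(X_obs, y_obs)
        # per-tree predictions give a cheap mean/uncertainty estimate
        per_tree = np.stack([t.predict(candidates) for t in forest.estimators_])
        mu, sigma = per_tree.mean(axis=0), per_tree.std(axis=0) + 1e-9
        best = y_obs.max()                 # accuracy: higher is better
        z = (mu - best) / sigma
        ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
        return candidates[np.argmax(ei)]

The proposed setting is then trained and evaluated, appended to (X_obs, y_obs), and the loop repeats until the tuning budget is exhausted.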


Geophysics
2006
Vol 71 (5)
pp. U67-U76
Author(s):  
Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because of the cost of computing the Hessian, so an efficient approximation is introduced, achieved by computing only a limited number of diagonals in the operators involved. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts compared with a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at roughly two orders of magnitude lower cost, but it is dip limited, though in a controllable way, compared with the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data have highly irregular sampling along the shot coordinate and suffer from significant near-surface effects. Approximate regularization/datuming returns common-receiver data that are superior in appearance to conventionally datumed data.
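The core algebra can be written compactly; the NumPy sketch below is our simplification (a single frequency, a dense operator, main diagonal only), not the paper's limited-diagonal implementation.

    import numpy as np

    def regularized_datum(A, d, eps=1e-2, diag_only=False):
        """Weighted, damped least squares: minimize ||A m - d||^2 + eps ||m||^2.
        A maps the datum wavefield m to surface recordings d (one frequency)."""
        rhs = A.conj().T @ d
        if diag_only:
            # keep only the Hessian's main diagonal: avoids the dense solve
            return rhs / (np.einsum('ij,ij->j', A.conj(), A).real + eps)
        H = A.conj().T @ A                 # full Hessian, the expensive part
        return np.linalg.solve(H + eps * np.eye(H.shape[0]), rhs)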


2021
Vol 11 (15)
pp. 7104
Author(s):  
Xu Yang ◽  
Ziyi Huan ◽  
Yisong Zhai ◽  
Ting Lin

Nowadays, personalized recommendation based on knowledge graphs has become a research hot spot due to its good recommendation effect, and it is the subject of this paper. First, we study the construction of knowledge graphs and build a movie knowledge graph, using the Neo4j graph database to store and visualize the movie data. Then we study TransE, the classical translation model in knowledge-graph representation learning, and improve the algorithm through a cross-training method that uses information from the neighboring feature structures of the entities in the knowledge graph; the negative-sampling process of the TransE algorithm is improved as well. The experimental results show that the improved TransE model vectorizes entities and relations more accurately. Finally, we construct recommendation models by combining knowledge graphs with ranking learning and with a neural network: the Bayesian personalized recommendation model based on knowledge graphs (KG-BPR) and the neural network recommendation model based on knowledge graphs (KG-NN). The semantic information of entities and relations in the knowledge graph is embedded into vector space using the improved TransE method, and we compare the results. The item-entity vectors containing external knowledge information are integrated into the BPR model and the neural network, respectively, making up for the item's own lack of knowledge information. Experimental analysis on the MovieLens-1M data set shows that the two proposed recommendation models effectively improve the accuracy, recall, F1-score, and MAP of the recommendations.
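As background, the baseline TransE update the authors start from scores a triple (h, r, t) by ||h + r - t|| and applies a margin loss against corrupted triples. The sketch below is plain TransE with uniform negative sampling, not the improved cross-training or negative-sampling variants proposed in the paper.

    import numpy as np

    def transe_epoch(E, R, triples, lr=0.01, margin=1.0, rng=None):
        """One pass of vanilla TransE with margin loss and corrupted tails.
        E: (n_entities, dim) entity embeddings; R: (n_relations, dim)."""
        rng = rng or np.random.default_rng()
        for h, r, t in triples:
            t_neg = rng.integers(E.shape[0])       # uniform negative sampling
            d_pos = E[h] + R[r] - E[t]
            d_neg = E[h] + R[r] - E[t_neg]
            if margin + np.linalg.norm(d_pos) - np.linalg.norm(d_neg) > 0:
                g_pos = d_pos / (np.linalg.norm(d_pos) + 1e-9)
                g_neg = d_neg / (np.linalg.norm(d_neg) + 1e-9)
                E[h] -= lr * (g_pos - g_neg)       # gradient of the hinge loss
                E[t] += lr * g_pos
                E[t_neg] -= lr * g_neg
                R[r] -= lr * (g_pos - g_neg)
        E /= np.maximum(1.0, np.linalg.norm(E, axis=1, keepdims=True))  # unit ball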


Geophysics
2006
Vol 71 (5)
pp. C81-C92
Author(s):  
Helene Hafslund Veire ◽  
Hilde Grude Borgos ◽  
Martin Landrø

Effects of pressure and fluid saturation can have the same degree of impact on seismic amplitudes and differential traveltimes in the reservoir interval; thus, they are often inseparable by analysis of a single stacked seismic data set. In such cases, time-lapse AVO analysis offers an opportunity to discriminate between the two effects. To utilize information about pressure- and saturation-related changes in reservoir modeling and simulation, we quantify the uncertainty in the estimations. One way of analyzing uncertainties is to formulate the problem in a Bayesian framework, in which the solution is represented by a probability density function (PDF), providing estimates of uncertainties as well as direct estimates of the properties. A stochastic model for estimating pressure and saturation changes from time-lapse seismic AVO data is investigated within a Bayesian framework. Well-known rock-physics relationships are used to set up a prior stochastic model. PP reflection-coefficient differences are used to establish a likelihood model linking reservoir variables and time-lapse seismic data. The methodology incorporates correlation between the different variables of the model as well as spatial dependencies for each variable. In addition, possible bottlenecks causing large uncertainties in the estimations can be identified through sensitivity analysis of the system. The method has been tested on 1D synthetic data and on field time-lapse seismic AVO data from the Gullfaks Field in the North Sea.
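In the linear-Gaussian special case, the Bayesian machinery described above has a closed form; the generic sketch below (the forward operator G and the covariances are placeholders, not the paper's rock-physics model) shows how a posterior mean and covariance, i.e., property estimates plus their uncertainties, fall out of prior and likelihood.

    import numpy as np

    def gaussian_posterior(G, d, m0, Cm, Cd):
        """Posterior of m for d = G m + e, with m ~ N(m0, Cm), e ~ N(0, Cd)."""
        S = G @ Cm @ G.T + Cd              # data-space covariance
        K = Cm @ G.T @ np.linalg.inv(S)    # Kalman-style gain
        m_post = m0 + K @ (d - G @ m0)     # direct estimate of the properties
        C_post = Cm - K @ G @ Cm           # remaining uncertainty
        return m_post, C_post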


Geophysics
2015
Vol 80 (5)
pp. B115-B129
Author(s):  
Rie Kamei ◽  
Takayuki Miyoshi ◽  
R. Gerhard Pratt ◽  
Mamoru Takanashi ◽  
Shogo Masaya

2013
Vol 17 (7)
pp. 2781-2796
Author(s):  
S. Shukla ◽  
J. Sheffield ◽  
E. F. Wood ◽  
D. P. Lettenmaier

Abstract. Global seasonal hydrologic prediction is crucial to mitigating the impacts of droughts and floods, especially in the developing world. Hydrologic predictability at seasonal lead times (i.e., 1–6 months) comes from knowledge of initial hydrologic conditions (IHCs) and seasonal climate forecast skill (FS). In this study we quantify the contributions of two primary components of IHCs – soil moisture and snow water content – and FS (of precipitation and temperature) to seasonal hydrologic predictability globally on a relative basis throughout the year. We do so by conducting two model-based experiments using the variable infiltration capacity (VIC) macroscale hydrology model, one based on ensemble streamflow prediction (ESP) and another based on Reverse-ESP (Rev-ESP), both for a 47-year re-forecast period (1961–2007). We compare cumulative runoff (CR), soil moisture (SM), and snow water equivalent (SWE) forecasts from each experiment with a VIC model-based reference data set (generated using observed atmospheric forcings) and estimate the ratio of the root mean square error (RMSE) of the two experiments for each forecast initialization date and lead time, to determine the relative contribution of IHCs and FS to seasonal hydrologic predictability. We find that, in general, the contribution of IHCs to seasonal hydrologic predictability is highest in the arid and snow-dominated (high-latitude) climate regions of the Northern Hemisphere during forecast periods starting on 1 January and 1 October. In mid-latitude regions, such as the western US, the influence of IHCs is greatest during the forecast period starting on 1 April. In the arid and warm temperate dry-winter regions of the Southern Hemisphere, the IHCs dominate during forecast periods starting on 1 April and 1 July. In equatorial humid and monsoonal climate regions, the contribution of FS is generally higher than that of IHCs through most of the year. Based on our findings, we argue that despite the limited FS (mainly for precipitation), better estimates of the IHCs could improve the current level of seasonal hydrologic forecast skill over many regions of the globe during at least part of the year.
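The ESP/Rev-ESP comparison reduces to an RMSE ratio per initialization date and lead time; a schematic version follows (the array layout, function names, and the reading of the ratio are our assumptions):

    import numpy as np

    def rmse(x, ref):
        return np.sqrt(np.mean((x - ref) ** 2))

    def ihc_vs_fs_ratio(esp, rev_esp, reference):
        """RMSE(ESP) / RMSE(Rev-ESP) for one start date and lead time.
        esp, rev_esp: (n_members, n_times) ensemble forecasts; loosely, a
        ratio below 1 suggests the IHCs contribute more than the FS."""
        return rmse(esp.mean(axis=0), reference) / rmse(rev_esp.mean(axis=0), reference)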


Geophysics
2019
Vol 84 (1)
pp. C57-C74
Author(s):  
Abdulrahman A. Alshuhail ◽  
Dirk J. Verschuur

Because the earth is predominantly anisotropic, the anisotropy of the medium needs to be included in seismic imaging to avoid mispositioned reflectors and unfocused images. Deriving accurate anisotropic velocities from seismic reflection measurements is a highly nonlinear and ambiguous process. To mitigate the nonlinearity and the trade-offs between parameters, we have included anisotropy in the so-called joint migration inversion (JMI) method, in which we limit ourselves to the case of transverse isotropy with a vertical symmetry axis. The JMI method is based on strictly separating the scattering effects in the data from the propagation effects. The scattering information is encoded in the reflectivity operators, whereas the phase information is encoded in the propagation operators. This strict separation makes the method more robust, in that it can appropriately handle a wide range of starting models, even when the traveltime differences exceed half a cycle. The method also uses internal multiples in estimating reflectivities and anisotropic velocities. Including internal multiples in the inversion not only reduces the crosstalk in the final image, but can also reduce the trade-off between the anisotropic parameters, because internal multiples usually carry more of an imprint of the subsurface parameters than primaries do. The inverse problem is parameterized in terms of a reflectivity, a vertical velocity, a horizontal velocity, and a fixed [Formula: see text] value. The method is demonstrated on several synthetic models and a marine data set from the North Sea. Our results indicate that using JMI for anisotropic inversion makes the inversion robust to highly erroneous initial models. Moreover, internal multiples can contain valuable information on the subsurface parameters, which can help to reduce the trade-off between anisotropic parameters in the inversion.
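To make the role of the two velocities concrete, here is a hedged one-way extrapolation sketch for the elliptical special case of VTI anisotropy; the paper's propagation operators are more general, and all names below are ours.

    import numpy as np

    def elliptic_vti_phase_shift(P, omega, kx, v_vert, v_hor, dz):
        """One-way phase shift over dz with the elliptical VTI dispersion
        relation: kz^2 = (omega/v_vert)^2 - (v_hor/v_vert)^2 * kx^2."""
        kz2 = (omega / v_vert) ** 2 - (v_hor / v_vert) ** 2 * kx ** 2
        kz = np.sqrt(np.maximum(kz2, 0.0))   # evanescent part zeroed for brevity
        return P * np.exp(1j * kz * dz)      # P: wavefield in the (omega, kx) domain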


Entropy
2018
Vol 20 (9)
pp. 642
Author(s):  
Erlandson Saraiva ◽  
Adriano Suzuki ◽  
Luis Milan

In this paper, we study the performance of Bayesian computational methods for estimating the parameters of a bivariate survival model based on the Ali–Mikhail–Haq copula with Weibull marginal distributions. The estimation procedure is based on Markov chain Monte Carlo (MCMC) algorithms. We present three versions of the Metropolis–Hastings algorithm: independent Metropolis–Hastings (IMH), random walk Metropolis (RWM), and Metropolis–Hastings with a natural candidate-generating density (MH). Since constructing a good candidate-generating density for IMH and RWM may be difficult, we also describe how to update a parameter of interest using the slice sampling (SS) method. A simulation study was carried out to compare the performances of IMH, RWM, and SS, using the sample root mean square error as the performance indicator. The results show that the SS algorithm is an effective alternative to the IMH and RWM methods when simulating values from the posterior distribution, especially for small sample sizes. We also applied these methods to a real data set.
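Because the paper singles out slice sampling as the robust choice, here is a compact univariate slice sampler with step-out and shrinkage (after Neal, 2003); the step width and draw count are arbitrary tuning constants, not values from the paper.

    import numpy as np

    def slice_sample(logpdf, x0, w=1.0, n_draws=1000, rng=None):
        """Univariate slice sampling with step-out (Neal, 2003)."""
        rng = rng or np.random.default_rng()
        draws, x = np.empty(n_draws), x0
        for i in range(n_draws):
            log_y = logpdf(x) + np.log(rng.random())  # height under the density
            left = x - w * rng.random()
            right = left + w
            while logpdf(left) > log_y:               # step out to cover the slice
                left -= w
            while logpdf(right) > log_y:
                right += w
            while True:                               # shrink until accepted
                x_new = rng.uniform(left, right)
                if logpdf(x_new) > log_y:
                    x = x_new
                    break
                if x_new < x:
                    left = x_new
                else:
                    right = x_new
            draws[i] = x
        return draws

For example, slice_sample(lambda x: -0.5 * x ** 2, 0.0) draws from a standard normal without any candidate-generating density to tune.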


Geophysics
2017
Vol 82 (3)
pp. R199-R217
Author(s):  
Xintao Chai ◽  
Shangxu Wang ◽  
Genyang Tang

Seismic data are nonstationary due to subsurface anelastic attenuation and dispersion effects. These effects, also referred to as the earth's [Formula: see text]-filtering effects, can diminish seismic resolution. We previously developed a method of nonstationary sparse reflectivity inversion (NSRI) for resolution enhancement, which avoids the intrinsic instability associated with inverse [Formula: see text] filtering and generates superior [Formula: see text] compensation results. Applying NSRI to data sets that contain multiples (addressing surface-related multiples only) requires a demultiple preprocessing step, because NSRI cannot distinguish primaries from multiples and will treat the multiples as interference convolved with incorrect [Formula: see text] values. However, multiples contain information about subsurface properties. To use the information carried by multiples, we adapt NSRI, using the feedback model and NSRI theory, to nonstationary seismic data with surface-related multiples. Consequently, not only are the benefits of NSRI (e.g., circumventing the intrinsic instability associated with inverse [Formula: see text] filtering) retained, but multiples are also accounted for. Our method is limited to a 1D implementation. Theoretical and numerical analyses verify that, given a wavelet, the input [Formula: see text] values primarily affect the inverted reflectivities and exert little effect on the estimated multiples; i.e., multiple estimation need not consider [Formula: see text]-filtering effects explicitly. There are, however, benefits to NSRI considering multiples: the periodicity and amplitude of the multiples imply the positions of the reflectivities and the amplitude of the wavelet, which helps overcome the scaling and shifting ambiguities of conventional formulations in which multiples are not considered. Experiments using a 1D algorithm on a synthetic data set, the publicly available Pluto 1.5 data set, and a marine data set support these findings and reveal the stability, capabilities, and limitations of the proposed method.
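The nonstationarity being inverted can be illustrated with a toy forward model: each reflectivity spike carries a wavelet attenuated according to its traveltime. The sketch below applies an amplitude-only constant-Q filter (dispersion and the inversion itself are omitted; all names are ours).

    import numpy as np

    def nonstationary_trace(r, wavelet, dt, Q):
        """Toy earth Q-filter: constant-Q amplitude decay per spike delay."""
        n = len(r)
        W = np.fft.rfft(wavelet, n)
        f = np.fft.rfftfreq(n, dt)
        trace = np.zeros(n)
        for i in np.nonzero(r)[0]:
            tau = i * dt
            spec = W * np.exp(-np.pi * f * tau / Q)  # traveltime-dependent decay
            spec *= np.exp(-2j * np.pi * f * tau)    # shift wavelet to the spike
            trace += r[i] * np.fft.irfft(spec, n)
        return trace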

