Coupling direct inversion to common-shot image-domain velocity analysis

Geophysics ◽  
2018 ◽  
Vol 83 (5) ◽  
pp. R497-R514 ◽  
Author(s):  
Yubing Li ◽  
Hervé Chauris

Migration velocity analysis is a technique used to estimate the large-scale structure of the subsurface velocity model controlling the kinematics of wave propagation. For more stable results, recent studies have proposed replacing migration, the adjoint of Born modeling, with the direct inverse of the modeling operator in the extended subsurface-offset domain. Following the same strategy, we have developed a two-way-wave-equation-based inversion velocity analysis (IVA) approach for original surface-oriented shot gathers. We use the differential semblance optimization (DSO) objective function to evaluate the quality of the inverted images as a function of shot position and to derive the associated gradient, an essential element for updating the macromodel. We evaluate the advantages and limitations through applications to 2D synthetic data sets, first on simple models with a single reflector embedded in various background velocities and then on the Marmousi model. The direct inverse attenuates migration smiles by compensating for geometric spreading and uneven illumination. We slightly modified the original DSO objective function to remove spurious oscillations around interface positions in the velocity gradient; these oscillations arise because the locations of events in the image domain depend on the macromodel. We also pay attention to the presence of triplicated wavefields: IVA proves robust even when artifacts are observed in the migrated seismic section. The velocity gradient leads to a stable update, especially after Gaussian smoothing over a wavelength distance. Coupling common-shot direct inversion to velocity analysis offers new possibilities for a future extension to 3D.
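The DSO idea for common-shot images can be illustrated with a toy sketch (purely schematic, not the paper's wave-equation implementation): the objective penalizes variation of the inverted image along the shot axis, so that with a kinematically correct macromodel all shot images coincide and the objective vanishes.

```python
import numpy as np

def dso_objective(images, ds):
    """Differential semblance over common-shot images.

    images: (n_shots, nz, nx) array, one inverted image per shot.
    ds: shot spacing. Penalizes variation of the image with shot
    position via a finite-difference derivative along the shot axis.
    """
    d_images = np.diff(images, axis=0) / ds
    return 0.5 * np.sum(d_images ** 2)
```

With a correct macromodel every shot produces the same image and the objective is exactly zero; any shot-to-shot variation makes it positive.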

Geophysics ◽  
2017 ◽  
Vol 82 (3) ◽  
pp. S207-S223 ◽  
Author(s):  
Hervé Chauris ◽  
Emmanuel Cocher

Migration velocity analysis (MVA) is a technique defined in the image domain to determine the background velocity model controlling the kinematics of wave propagation. In the presence of discontinuous interfaces, the velocity gradient used to iteratively update the velocity model exhibits spurious oscillations. For more stable results, we replace the migration part with an inversion scheme. By definition, migration is the adjoint of the Born modeling operator, whereas inversion is its asymptotic inverse. We have developed new expressions in the 1D and 2D cases based on two-way wave-equation operators. The objective function measures the quality of the images obtained by inversion in the extended domain depending on the subsurface offset. In terms of implementation, the new approach is very similar to classic MVA. A 1D analysis shows that the oscillatory terms around interface positions can be removed by multiplying the inversion result by the velocity raised to a specific power before evaluating the objective function. Several 2D synthetic data sets are discussed through the computation of the gradient needed to update the model parameters. Even for discontinuous reflectivity models, the new approach provides results without artificial oscillations. The model update corresponds to the gradient of an existing objective function, which was not the case for the horizontal-contraction approach proposed as an alternative to deal with gradient artifacts. It also correctly handles low-velocity anomalies, contrary to the horizontal-contraction approach. Inversion velocity analysis offers new perspectives for the applicability of image-domain velocity analysis.
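A minimal sketch of the extended-domain objective described above, with the subsurface-offset annihilator |h| and the velocity-power weighting. The exponent used in the paper's 1D analysis is not reproduced here, so `power` is left as a free parameter (an assumption of this sketch):

```python
import numpy as np

def dso_extended(image_h, h, velocity, power=0.0):
    """DSO objective in the subsurface-offset extended domain.

    image_h: (nh, nz, nx) extended image; h: (nh,) subsurface-offset
    axis; velocity: (nz, nx) background model. The image is first
    weighted by velocity**power (assumed form of the correction), then
    the annihilator |h| penalizes energy away from zero offset.
    """
    weighted = image_h * velocity[np.newaxis, :, :] ** power
    annihilated = np.abs(h)[:, np.newaxis, np.newaxis] * weighted
    return 0.5 * np.sum(annihilated ** 2)
```

An image perfectly focused at zero subsurface offset gives a zero objective; defocused energy at nonzero offset is penalized quadratically in |h|.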


Geophysics ◽  
2018 ◽  
Vol 83 (5) ◽  
pp. R475-R495
Author(s):  
Emmanuel Cocher ◽  
Hervé Chauris ◽  
René-Édouard Plessix

Migration velocity analysis is a family of methods aiming at automatically recovering large-scale trends of the velocity model from primary reflection data. We studied an image-domain version in which the model is extended with the subsurface offset and the differential semblance optimization objective function is used. To incorporate first-order surface multiples in this method, the standard migration step is replaced by a least-squares iterative scheme that determines an extended reflectivity model explaining both primaries and multiples. Hence, this iterative migration velocity analysis strategy takes the form of a nested optimization problem, with gradient-based minimization for both the inner loop (migration) and the outer loop (macromodel estimation). The outer-loop gradient is unstable: its behavior depends on the number of iterations of the inner loop. This problem is addressed by slightly modifying the outer-loop objective function: A “filter” operator attenuating unwanted energy in the extended reflectivity is applied before evaluating the focusing of the reflectivity images. Simple synthetic numerical examples illustrate that this modification improves the stability of the gradient. In addition, a less expensive outer-gradient computation is proposed that does not degrade the background velocity updates.
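The nested structure can be caricatured on a scalar toy problem (purely illustrative; the real inner loop is an iterative extended Born migration with two-way wave operators): an inner gradient descent solves a least-squares "migration" for the reflectivity, and an outer measure evaluates the focusing of the filtered result.

```python
import numpy as np

def inner_migration(v, d, n_inner=50, lr=0.2):
    """Toy inner loop: minimize ||v * r - d||^2 over the reflectivity r
    by gradient descent, standing in for iterative least-squares
    migration. v is a scalar 'modeling operator' here."""
    r = np.zeros_like(d)
    for _ in range(n_inner):
        r -= lr * v * (v * r - d)
    return r

def outer_objective(r, weights):
    """Toy outer measure: focusing of the reflectivity after a 'filter'
    operator (here just a weighting) has been applied, mimicking the
    modified objective function of the abstract."""
    return 0.5 * np.sum((weights * r) ** 2)
```

The point of the toy is structural: the outer objective is evaluated on the *output* of the inner iterations, which is why its gradient depends on how many inner iterations were run.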


Geophysics ◽  
2008 ◽  
Vol 73 (6) ◽  
pp. S241-S249 ◽  
Author(s):  
Xiao-Bi Xie ◽  
Hui Yang

We have derived a broadband sensitivity kernel that relates the residual moveout (RMO) in prestack depth migration (PSDM) to velocity perturbations in the migration-velocity model. We have compared the kernel with the RMO directly measured from the migration image. The consistency between the sensitivity kernel and the measured sensitivity map validates the theory and the numerical implementation. Based on this broadband sensitivity kernel, we propose a new tomography method for migration-velocity analysis and updating — specifically, for the shot-record PSDM and shot-index common-image gather. As a result, time-consuming angle-domain analysis is not required. We use a fast one-way propagator and multiple forward scattering and single backscattering approximations to calculate the sensitivity kernel. Using synthetic data sets, we can successfully invert velocity perturbations from the migration RMO. This wave-equation-based method naturally incorporates the wave phenomena and is best teamed with the wave-equation migration method for velocity analysis. In addition, the new method maintains the simplicity of the ray-based velocity analysis method, with the more accurate sensitivity kernels replacing the rays.
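Once the sensitivity kernel is discretized as a matrix K mapping cellwise velocity (or slowness) perturbations to the RMO measurements, the tomographic update reduces to a linear least-squares problem. A minimal damped normal-equations sketch (the damping term is an assumption added for numerical stability, not part of the paper):

```python
import numpy as np

def invert_rmo(K, rmo, damping=1e-3):
    """Least-squares inversion of residual moveout for a velocity
    perturbation. K: (n_measurements, n_cells) sensitivity-kernel
    matrix; rmo: (n_measurements,) measured residual moveout.
    Solves the damped normal equations (K^T K + eps I) dv = K^T rmo."""
    A = K.T @ K + damping * np.eye(K.shape[1])
    return np.linalg.solve(A, K.T @ rmo)
```

With consistent data and negligible damping, the true perturbation is recovered exactly on well-conditioned kernels.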


Geophysics ◽  
2019 ◽  
Vol 84 (2) ◽  
pp. R165-R174 ◽  
Author(s):  
Marcelo Jorge Luz Mesquita ◽  
João Carlos Ribeiro Cruz ◽  
German Garabito Callapino

Estimation of an accurate velocity macromodel is an important step in seismic imaging. We have developed an approach based on coherence measurements and finite-offset (FO) beam stacking. The algorithm is an FO common-reflection-surface tomography, which aims to determine the best layered depth-velocity model by finding the model that maximizes a semblance objective function calculated from the amplitudes in common-midpoint (CMP) gathers stacked over a predetermined aperture. We develop the subsurface velocity model with a stack of layers separated by smooth interfaces. The algorithm is applied layer by layer from the top downward in four steps per layer. First, by automatic or manual picking, we estimate the reflection times of events that describe the interfaces in a time-migrated section. Second, we convert these times to depth using the velocity model via application of Dix’s formula and the image rays to the events. Third, by using ray tracing, we calculate kinematic parameters along the central ray and build a paraxial FO traveltime approximation for the FO common-reflection-surface method. Finally, starting from CMP gathers, we calculate the semblance of the selected events using this paraxial traveltime approximation. After repeating this algorithm for all selected CMP gathers, we use the mean semblance values as an objective function for the target layer. When this coherence measure is maximized, the model is accepted and the process is completed. Otherwise, the process restarts from step two with the updated velocity model. Because the inverse problem we are solving is nonlinear, we use very fast simulated annealing to search the velocity parameters in the target layers. We test the method on synthetic and real data sets to study its use and advantages.
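Step four above rests on the classic semblance coherence measure. A minimal sketch along a picked traveltime curve (real implementations sum over a short time window around the curve; this single-sample version is for illustration only):

```python
import numpy as np

def semblance(gather, t_samples):
    """Semblance coherence along a picked traveltime curve.

    gather: (n_traces, n_t) CMP gather; t_samples: integer time-sample
    index of the moveout curve on each trace. Classic semblance:
    energy of the stack over the sum of trace energies, in [0, 1]."""
    amps = gather[np.arange(len(t_samples)), t_samples]
    num = np.sum(amps) ** 2
    den = len(amps) * np.sum(amps ** 2)
    return num / den if den > 0 else 0.0
```

When the traveltime curve tracks a coherent event the amplitudes stack constructively and semblance approaches 1; a wrong velocity misaligns the amplitudes and semblance drops toward 0.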


2021 ◽  
Author(s):  
Andrew J Kavran ◽  
Aaron Clauset

Abstract
Background: Large-scale biological data sets are often contaminated by noise, which can impede accurate inferences about underlying processes. Such measurement noise can arise from endogenous biological factors like cell cycle and life history variation, and from exogenous technical factors like sample preparation and instrument variation.
Results: We describe a general method for automatically reducing noise in large-scale biological data sets. This method uses an interaction network to identify groups of correlated or anti-correlated measurements that can be combined or “filtered” to better recover an underlying biological signal. Similar to the process of denoising an image, a single network filter may be applied to an entire system, or the system may first be decomposed into distinct modules and a different filter applied to each. Applied to synthetic data with known network structure and signal, network filters accurately reduce noise across a wide range of noise levels and structures. Applied to a machine-learning task of predicting changes in human protein expression in healthy and cancerous tissues, network filtering prior to training increases accuracy by up to 43% compared with using unfiltered data.
Conclusions: Network filters are a general way to denoise biological data and can account for both correlation and anti-correlation between different measurements. Furthermore, we find that partitioning a network prior to filtering can significantly reduce errors in networks with heterogeneous data and correlation patterns, and this approach outperforms existing diffusion-based methods. Our results on proteomics data indicate the broad potential utility of network filters for applications in systems biology.
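One assumed form of such a sign-aware network filter (a sketch consistent with the description above, not the authors' exact operator): each measurement is shrunk toward the mean of its network neighbors, with anti-correlated neighbors entering with flipped sign so their shared signal reinforces rather than cancels.

```python
import numpy as np

def network_filter(values, adjacency, signs, alpha=0.5):
    """Blend each noisy measurement with its neighborhood mean.

    values: (n,) noisy measurements; adjacency: (n, n) 0/1 matrix;
    signs: (n, n) entries +1 (correlated) or -1 (anti-correlated);
    alpha: blend weight toward the neighborhood mean."""
    filtered = values.copy()
    for i in range(len(values)):
        nbrs = np.nonzero(adjacency[i])[0]
        if len(nbrs) == 0:
            continue  # isolated node: keep the raw measurement
        neigh_mean = np.mean(signs[i, nbrs] * values[nbrs])
        filtered[i] = (1 - alpha) * values[i] + alpha * neigh_mean
    return filtered
```

Because the sign matrix flips anti-correlated neighbors before averaging, a pair of perfectly anti-correlated measurements is left intact rather than averaged toward zero.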


Geophysics ◽  
2021 ◽  
pp. 1-35
Author(s):  
M. Javad Khoshnavaz

Building an accurate velocity model plays a vital role in routine seismic imaging workflows. Normal-moveout-based seismic velocity analysis is a popular method for building velocity models. However, traditional velocity analysis methodologies are generally not capable of handling amplitude variations across moveout curves, specifically polarity reversals caused by amplitude-versus-offset anomalies. I present a normal-moveout-based velocity analysis approach that circumvents this shortcoming by modifying the conventional semblance function to include polarity and amplitude correction terms, computed from the correlation coefficients between the seismic traces in the velocity analysis scanning window and a reference trace. The proposed workflow is thus suitable for any class of amplitude-versus-offset effects. The approach is demonstrated on four synthetic data examples with different conditions and on a field data set consisting of a common-midpoint gather. Lateral resolution enhancement with the proposed workflow is evaluated by comparing its results against conventional semblance and against three semblance-based velocity analysis algorithms developed to handle amplitude variations across moveout curves caused by seismic attenuation and class II amplitude-versus-offset anomalies. The results show that the proposed workflow is superior to the alternatives in handling such anomalies.
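A sketch of the polarity-correction idea (the exact published formula is not reproduced here): each trace in the analysis window is multiplied by the sign of its correlation coefficient with a reference trace, so that class II AVO polarity reversals no longer cancel in the stack.

```python
import numpy as np

def ab_semblance(window, ref):
    """Semblance with a correlation-based polarity correction.

    window: (n_traces, n_t) samples in the scanning window;
    ref: (n_t,) reference trace. Each trace is sign-corrected by its
    correlation coefficient with ref before stacking."""
    cc = np.array([np.corrcoef(tr, ref)[0, 1] for tr in window])
    corrected = np.sign(cc)[:, None] * window
    stack = corrected.sum(axis=0)
    den = window.shape[0] * np.sum(corrected ** 2)
    return np.sum(stack ** 2) / den if den > 0 else 0.0
```

For a gather whose far offsets are an exact polarity flip of the near offsets, a conventional stack sums to zero, while the corrected semblance recovers the full coherence of 1.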


Sensors ◽  
2019 ◽  
Vol 19 (14) ◽  
pp. 3158
Author(s):  
Jian Yang ◽  
Xiaojuan Ban ◽  
Chunxiao Xing

With the rapid development of mobile networks and smart terminals, mobile crowdsourcing has aroused the interest of relevant scholars and industries. In this paper, we propose a new solution to the problem of user selection in mobile crowdsourcing systems. Existing user selection schemes mainly (1) find a subset of users that maximizes crowdsourcing quality under a given budget constraint, or (2) find a subset of users that minimizes cost while meeting a minimum crowdsourcing quality requirement. However, these solutions fall short of simultaneously maximizing the quality of service of the task and minimizing its cost. Inspired by the marginalism principle in economics, we select a new user only when the marginal gain of the newly joined user exceeds the payment and the marginal cost associated with integration. We model the scheme as a marginalism problem of mobile crowdsourcing user selection (MCUS-marginalism). We rigorously prove the MCUS-marginalism problem to be NP-hard, and we propose a greedy random adaptive procedure with annealing randomness (GRASP-AR) to maximize the gain and minimize the cost of the task. The effectiveness and efficiency of our approach are verified by large-scale experimental evaluations on both real-world and synthetic data sets.
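The marginalism rule has a simple greedy core, sketched below (the paper's GRASP-AR adds randomized candidate lists and annealing on top of this; `gain` and `cost` are caller-supplied placeholders):

```python
def greedy_marginal_selection(users, gain, cost):
    """Add users one at a time while the best remaining candidate's
    marginal gain strictly exceeds their cost.

    users: iterable of candidate identifiers;
    gain(selected, u): marginal gain of adding u to the selection;
    cost(u): payment plus integration cost of u."""
    selected = []
    remaining = list(users)
    while remaining:
        best = max(remaining, key=lambda u: gain(selected, u) - cost(u))
        if gain(selected, best) <= cost(best):
            break  # no candidate is worth their cost: stop
        selected.append(best)
        remaining.remove(best)
    return selected
```

With a modular gain (each user contributes a fixed quality) the rule simply keeps the users whose quality strictly exceeds their cost.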


Geophysics ◽  
2019 ◽  
Vol 84 (3) ◽  
pp. R411-R427 ◽  
Author(s):  
Gang Yao ◽  
Nuno V. da Silva ◽  
Michael Warner ◽  
Di Wu ◽  
Chenhao Yang

Full-waveform inversion (FWI) is a promising technique for recovering earth models in exploration geophysics and global seismology. FWI is generally formulated as the minimization of an objective function defined as the L2-norm of the data residuals. The nonconvex nature of this objective function is one of the main obstacles to the successful application of FWI. A key manifestation of this nonconvexity is cycle skipping, which happens if the predicted data are more than half a cycle away from the recorded data. We have developed the concept of intermediate data for tackling cycle skipping. This intermediate data set is created to sit between the predicted and recorded data, and it is less than half a cycle away from the predicted data. Inverting the intermediate data rather than the cycle-skipped recorded data can then circumvent cycle skipping. We applied this concept to invert cycle-skipped first arrivals. First, we picked the first breaks of the predicted data and the recorded data. Second, we linearly scaled down the time difference between the two first breaks of each shot into a series of time shifts, one per trace in the shot, whose maximum was less than half a cycle. Third, we shifted the predicted data by the corresponding time shifts to create the intermediate data. Finally, we inverted the intermediate data rather than the recorded data. Because the intermediate data are not cycle-skipped and contain the traveltime information of the recorded data, FWI with intermediate data updates the background velocity model in the correct direction. Thus, it produces a background velocity model accurate enough for conventional FWI to rebuild the intermediate- and short-wavelength components of the velocity model. Our numerical examples using synthetic data validate the intermediate-data concept for tackling cycle skipping and demonstrate its effectiveness when applied to first arrivals.
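The intermediate-data construction for first arrivals can be sketched as follows (one plausible reading of the per-shot scaling rule: the first-break time differences are scaled so no shift exceeds half the dominant period, then each predicted trace is moved by its scaled shift):

```python
import numpy as np

def make_intermediate(pred, fb_pred, fb_rec, period, dt):
    """Build intermediate data by partially shifting predicted traces
    toward the recorded first breaks.

    pred: (n_traces, n_t) predicted data; fb_pred, fb_rec: (n_traces,)
    first-break times in seconds; period: dominant period (s);
    dt: sample interval (s)."""
    dt_fb = fb_rec - fb_pred                 # raw first-break differences
    max_shift = np.max(np.abs(dt_fb))
    half = 0.5 * period
    scale = min(1.0, half / max_shift) if max_shift > 0 else 1.0
    shifts = np.rint(scale * dt_fb / dt).astype(int)   # samples
    out = np.zeros_like(pred)
    for i, s in enumerate(shifts):           # integer shift, zero-padded
        if s >= 0:
            out[i, s:] = pred[i, :pred.shape[1] - s]
        else:
            out[i, :s] = pred[i, -s:]
    return out
```

When the first-break mismatch is already below half a period the traces are shifted all the way onto the recorded first breaks; larger mismatches are capped so the intermediate data stay within half a cycle of the predicted data.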


2019 ◽  
Vol 7 (3) ◽  
pp. SE113-SE122 ◽  
Author(s):  
Yunzhi Shi ◽  
Xinming Wu ◽  
Sergey Fomel

Salt boundary interpretation is important for understanding salt tectonics and for velocity model building for seismic migration. Conventional methods consist of computing salt attributes and extracting salt boundaries. We have formulated the problem as 3D image segmentation and evaluated an efficient approach based on deep convolutional neural networks (CNNs) with an encoder-decoder architecture. To train the model, we design a data generator that extracts randomly positioned subvolumes from a large-scale 3D training data set, applies data augmentation, and feeds a large number of subvolumes into the network, with salt/nonsalt binary labels generated by thresholding the velocity model serving as ground truth. We test the model on validation data sets and compare the blind-test predictions with the ground truth. Our results indicate that our method is capable of automatically capturing subtle salt features from the 3D seismic image with little or no manual input. We further test the model on a field example to demonstrate the generalization of this deep CNN method across different data sets.
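The label-generation and subvolume-extraction steps of such a data generator can be sketched as follows (the salt-velocity threshold is an assumed value, since the abstract does not give one; salt velocity is roughly 4.4 to 4.6 km/s):

```python
import numpy as np

def salt_labels(velocity, v_salt=4450.0):
    """Binary salt/nonsalt labels by thresholding the velocity model;
    returns a uint8 mask usable as ground truth for segmentation."""
    return (velocity >= v_salt).astype(np.uint8)

def random_subvolume(volume, labels, shape, rng):
    """Extract a randomly positioned training subvolume and its labels.

    volume, labels: 3D arrays of the same shape; shape: subvolume
    dimensions; rng: numpy Generator for reproducible sampling."""
    starts = [rng.integers(0, d - s + 1)
              for d, s in zip(volume.shape, shape)]
    sl = tuple(slice(a, a + s) for a, s in zip(starts, shape))
    return volume[sl], labels[sl]
```

In a full pipeline, each extracted pair would then pass through augmentation (flips, rotations) before being batched into the encoder-decoder network.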


Geophysics ◽  
1993 ◽  
Vol 58 (1) ◽  
pp. 91-100 ◽  
Author(s):  
Claude F. Lafond ◽  
Alan R. Levander

Prestack depth migration still suffers from the problems associated with building appropriate velocity models. The two main after‐migration, before‐stack velocity analysis techniques currently used, depth focusing and residual moveout correction, have found good use in many applications but have also shown their limitations in the case of very complex structures. To address this issue, we have extended the residual moveout analysis technique to the general case of heterogeneous velocity fields and steep dips, while keeping the algorithm robust enough to be of practical use on real data. Our method is not based on analytic expressions for the moveouts and requires no a priori knowledge of the model, but instead uses geometrical ray tracing in heterogeneous media, layer‐stripping migration, and local wavefront analysis to compute residual velocity corrections. These corrections are back projected into the velocity model along raypaths in a way that is similar to tomographic reconstruction. While this approach is more general than existing migration velocity analysis implementations, it is also much more computer intensive and is best used locally around a particularly complex structure. We demonstrate the technique using synthetic data from a model with strong velocity gradients and then apply it to a marine data set to improve the positioning of a major fault.
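The tomography-style back projection described above can be caricatured as follows (a schematic of spreading residual velocity corrections over the cells each raypath crosses, weighted by path length; not the authors' actual wavefront-analysis algorithm):

```python
import numpy as np

def backproject(residuals, ray_cells, ray_lengths, n_cells):
    """Length-weighted back projection of residual corrections.

    residuals: one correction per ray; ray_cells: per ray, the indices
    of the model cells it crosses; ray_lengths: per ray, the path
    length in each of those cells. Cell updates are the length-weighted
    average over all rays crossing the cell."""
    num = np.zeros(n_cells)
    den = np.zeros(n_cells)
    for r, cells, lengths in zip(residuals, ray_cells, ray_lengths):
        for c, l in zip(cells, lengths):
            num[c] += l * r
            den[c] += l
    return np.divide(num, den, out=np.zeros(n_cells), where=den > 0)
```

Cells crossed by no ray receive no update, which is one reason the authors note the method is best used locally around a complex structure where ray coverage is dense.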

