Determination of Velocity Model Using NIP-Wave Tomographic Inversion: Application to Synthetic and Real Data

2018 ◽  
Author(s):  
A. Hendriyana


Author(s):  
P.L. Nikolaev

This article deals with a method for the binary classification of images containing small text. The classification is based on the fact that the text can have two orientations: it can be positioned horizontally and read from left to right, or it can be turned 180 degrees, so that the image must be rotated before the text can be read. This type of text is found on the covers of a variety of books, so when recognizing covers it is necessary to first determine the orientation of the text before recognizing it directly. The article presents the development of a deep neural network for determining the text orientation in the context of book cover recognition. The results of training and testing a convolutional neural network on synthetic data, as well as examples of the network operating on real data, are presented.
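As a rough illustration of such a network, the sketch below defines a small convolutional classifier in PyTorch that maps a grayscale cover crop to one of two orientation classes (upright vs. rotated 180 degrees). The architecture, input size, and class encoding are assumptions for illustration only, not the author's exact design.

```python
import torch
import torch.nn as nn

class OrientationNet(nn.Module):
    """Binary classifier: is the text upright (0) or rotated 180 degrees (1)?"""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = OrientationNet()
logits = model(torch.randn(4, 1, 64, 256))  # batch of four synthetic cover crops
print(logits.argmax(dim=1))                 # 0 = upright, 1 = rotated 180 degrees
```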


Mathematics ◽  
2020 ◽  
Vol 8 (5) ◽  
pp. 766
Author(s):  
Rashad A. R. Bantan ◽  
Ramadan A. Zeineldin ◽  
Farrukh Jamal ◽  
Christophe Chesneau

The Deanship of Scientific Research (DSR) established by King Abdulaziz University (KAU) provides research programs for its staff and researchers and encourages them to submit proposals in this regard. The Distinct Research Study (DRS) is one of these programs. It is available throughout the year, and KAU staff can submit up to three proposals at the same time. The rules of the DRS program are simple and easy, so it contributes to increasing the international rank of KAU. The authors are offered financial and moral rewards after publishing articles from these proposals in Thomson-ISI journals. In this paper, a multilayer perceptron (MLP) artificial neural network (ANN) is employed to determine the factors that most affect the number of ISI-published articles. The study uses real data on projects completed between 2011 and April 2019.
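A minimal sketch of this kind of analysis is given below: a small MLP regressor is fitted to project records, and the input factors are then ranked by permutation importance. The feature names and the synthetic data are hypothetical stand-ins for the real DSR project records.

```python
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
features = ["budget", "duration_months", "team_size", "pi_h_index"]  # hypothetical
X = rng.normal(size=(200, len(features)))
# synthetic target: articles depend mostly on budget and the PI's h-index
y = 0.8 * X[:, 0] + 0.3 * X[:, 3] + rng.normal(scale=0.2, size=200)

mlp = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0).fit(X, y)
imp = permutation_importance(mlp, X, y, n_repeats=20, random_state=0)
for name, score in sorted(zip(features, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")   # most influential factors first
```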


Geophysics ◽  
2021 ◽  
pp. 1-50
Author(s):  
German Garabito ◽  
José Silas dos Santos Silva ◽  
Williams Lima

In land seismic data processing, the prestack time migration (PSTM) image remains the standard imaging output, but a reliable migrated image of the subsurface depends on the accuracy of the migration velocity model. We have adopted two new algorithms for time-domain migration velocity analysis based on wavefield attributes of the common-reflection-surface (CRS) stack method. These attributes, extracted from multicoverage data, were successfully applied to build the velocity model in the depth domain through tomographic inversion of the normal-incidence-point (NIP) wave. However, there is no practical and reliable method for determining an accurate and geologically consistent time-migration velocity model from these CRS attributes. We introduce an interactive method to determine the migration velocity model in the time domain based on the application of NIP wave attributes and the CRS stacking operator for diffractions, to generate synthetic diffractions on the reflection events of the zero-offset (ZO) CRS stacked section. In the ZO data with diffractions, the poststack time migration (post-STM) is applied with a set of constant velocities, and the migration velocities are then selected through a focusing analysis of the simulated diffractions. We also introduce an algorithm to automatically calculate the migration velocity model from the CRS attributes picked for the main reflection events in the ZO data. We determine the precision of our diffraction focusing velocity analysis and the automatic velocity calculation algorithms using two synthetic models. We also applied them to real 2D land data with low quality and low fold to estimate the time-domain migration velocity model. The velocity models obtained through our methods were validated by applying them in the Kirchhoff PSTM of real data, in which the velocity model from the diffraction focusing analysis provided significant improvements in the quality of the migrated image compared to the legacy image and to the migrated image obtained using the automatically calculated velocity model.
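The sketch below illustrates the focusing idea on a toy zero-offset diffraction: the section is migrated with a set of constant velocities using a simple Kirchhoff-style hyperbola summation, and a varimax norm serves as the focusing measure. The geometry, the summation scheme, and the choice of varimax are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def const_velocity_migration(data, dt, dx, v):
    """Zero-offset time migration: sum along t(h) = sqrt(t0^2 + 4 h^2 / v^2)."""
    nt, nx = data.shape
    image = np.zeros_like(data)
    t0 = np.arange(nt) * dt
    for ix in range(nx):                                  # image trace
        h = (np.arange(nx) - ix) * dx                     # distance to input traces
        t = np.sqrt(t0[:, None] ** 2 + 4.0 * h[None, :] ** 2 / v ** 2)
        it = np.rint(t / dt).astype(int)
        for jx in range(nx):                              # input trace
            ok = it[:, jx] < nt
            image[ok, ix] += data[it[ok, jx], jx]
    return image

def varimax(panel):
    """Focusing measure: larger when energy is concentrated in few samples."""
    a2 = panel ** 2
    return a2.ravel().dot(a2.ravel()) / (a2.sum() ** 2 + 1e-12)

# synthetic zero-offset diffraction with true velocity 2000 m/s
dt, dx, v_true = 0.004, 25.0, 2000.0
nt, nx = 256, 64
data = np.zeros((nt, nx))
apex_t0, apex_x = 0.4, (nx // 2) * dx
for ix in range(nx):
    t = np.sqrt(apex_t0 ** 2 + 4.0 * (ix * dx - apex_x) ** 2 / v_true ** 2)
    if t / dt < nt:
        data[int(round(t / dt)), ix] = 1.0
for v in (1600.0, 1800.0, 2000.0, 2200.0):                # velocity scan
    print(v, varimax(const_velocity_migration(data, dt, dx, v)))
```

At the true velocity the diffraction collapses to its apex, so the focusing measure peaks there; slower or faster trial velocities leave the event under- or over-migrated and less focused.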


Geophysics ◽  
2019 ◽  
Vol 84 (2) ◽  
pp. R165-R174 ◽  
Author(s):  
Marcelo Jorge Luz Mesquita ◽  
João Carlos Ribeiro Cruz ◽  
German Garabito Callapino

Estimation of an accurate velocity macromodel is an important step in seismic imaging. We have developed an approach based on coherence measurements and finite-offset (FO) beam stacking. The algorithm is an FO common-reflection-surface tomography, which aims to determine the best layered depth-velocity model by finding the model that maximizes a semblance objective function calculated from the amplitudes in common-midpoint (CMP) gathers stacked over a predetermined aperture. We develop the subsurface velocity model with a stack of layers separated by smooth interfaces. The algorithm is applied layer by layer from the top downward in four steps per layer. First, by automatic or manual picking, we estimate the reflection times of events that describe the interfaces in a time-migrated section. Second, we convert these times to depth using the velocity model via application of Dix’s formula and the image rays to the events. Third, by using ray tracing, we calculate kinematic parameters along the central ray and build a paraxial FO traveltime approximation for the FO common-reflection-surface method. Finally, starting from CMP gathers, we calculate the semblance of the selected events using this paraxial traveltime approximation. After repeating this algorithm for all selected CMP gathers, we use the mean semblance values as an objective function for the target layer. When this coherence measure is maximized, the model is accepted and the process is completed. Otherwise, the process restarts from step two with the updated velocity model. Because the inverse problem we are solving is nonlinear, we use very fast simulated annealing to search the velocity parameters in the target layers. We test the method on synthetic and real data sets to study its use and advantages.
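As a hedged sketch of the objective function, the code below evaluates Neidell-Taner semblance along a traveltime curve in one CMP gather; the paraxial FO-CRS traveltime is abstracted as a callable, and the toy gather uses a simple hyperbolic moveout in its place.

```python
import numpy as np

def semblance(gather, offsets, traveltime, dt, half_win=2):
    """Neidell-Taner semblance along a traveltime curve in one CMP gather."""
    nt, nx = gather.shape
    it = np.array([int(round(traveltime(h) / dt)) for h in offsets])
    num, den = 0.0, 0.0
    for w in range(-half_win, half_win + 1):    # small window around the curve
        idx = np.clip(it + w, 0, nt - 1)
        amps = gather[idx, np.arange(nx)]
        num += np.sum(amps) ** 2
        den += nx * np.sum(amps ** 2)
    return num / (den + 1e-12)

# toy gather: one reflector with hyperbolic moveout (t0 = 0.8 s, v = 2500 m/s)
dt, offsets = 0.004, np.arange(0.0, 2000.0, 100.0)
gather = np.zeros((512, offsets.size))
tt = lambda h: (0.8 ** 2 + (h / 2500.0) ** 2) ** 0.5
for j, h in enumerate(offsets):
    gather[int(round(tt(h) / dt)), j] = 1.0
print(semblance(gather, offsets, tt, dt))       # ~1.0 when the curve fits the event
```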


Author(s):  
Augusto César de Mendonça Brasil

This chapter presents, in a consolidated manner, a step-by-step methodology to estimate the electrical energy potential of industrial wood residues, considering the dependence of power-plant efficiency on plant size. A function relating overall efficiency to power was obtained from a best-fit curve of real data taken both from the literature and from Brazilian biomass-fired power plants. The methodology was applied to determine the electrical energy potential of wood-industry residues in the State of Pará (data collected in 2004). Two cases were analyzed: one in which a constant electrical efficiency of 25% was assumed (independently of the amount of residues generated) and another in which the proposed efficiency-versus-power function was used. Results show that in the State of Pará, the 675 existing sawmills generated 2.95 × 10⁶ t of residues on a dry basis. When the dependence of efficiency on plant size is not considered, the electrical energy potential and average installed power (3140.4 GWh and 2 MWe) are overestimated compared with the methodology proposed herein (1868.8 GWh and 1 MWe). The present methodology, treating efficiency as a function of power, results in an average efficiency of 12.3% (lower than 25%).
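The back-of-the-envelope sketch below reproduces the structure of the calculation for a single plant: electrical energy is thermal energy times efficiency, where efficiency is either held constant at 25% or taken from a power-dependent curve, in which case the installed power must be found by fixed-point iteration. The logistic efficiency curve, the heating value, and the operating hours are assumed values for illustration, not the chapter's fitted inputs.

```python
import math

LHV_MJ_PER_KG = 15.3      # assumed lower heating value of dry wood residue
HOURS_PER_YEAR = 7000.0   # assumed annual operating hours of the plant

def eta(power_mw):
    """Hypothetical efficiency-vs-power curve: small plants are less efficient."""
    return 0.25 / (1.0 + math.exp(-1.5 * (power_mw - 2.0)))

def potential(residues_t_per_year, efficiency=None):
    """Electrical potential (GWh/yr) and installed power (MWe) of one plant."""
    thermal_gwh = residues_t_per_year * 1000.0 * LHV_MJ_PER_KG / 3.6e6
    if efficiency is not None:                 # constant-efficiency case
        e_gwh = thermal_gwh * efficiency
        return e_gwh, e_gwh * 1000.0 / HOURS_PER_YEAR
    p_mw = 1.0                                 # eta depends on P, P on eta: iterate
    for _ in range(50):
        p_mw = thermal_gwh * eta(p_mw) * 1000.0 / HOURS_PER_YEAR
    return p_mw * HOURS_PER_YEAR / 1000.0, p_mw

# residues of an average sawmill: 2.95e6 t over 675 mills ~ 4370 t/yr
print(potential(4370.0, efficiency=0.25))      # constant 25% efficiency
print(potential(4370.0))                       # size-dependent efficiency
```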


Mathematics ◽  
2020 ◽  
Vol 8 (8) ◽  
pp. 1260
Author(s):  
Jose M. Calabuig ◽  
Luis M. García-Raffi ◽  
Albert García-Valiente ◽  
Enrique A. Sánchez-Pérez

We show a simple model of the dynamics of a viral process based on the determination of the Kaplan-Meier curve P of the virus. Together with the function of newly infected individuals I, this model allows us to predict the evolution of the resulting epidemic process in terms of the number E of deceased patients plus individuals who have overcome the disease. Our model takes as its starting point the representation of E as the convolution of I and P. It allows introducing information about latent patients (patients who have already been cured but are still potentially infectious) and re-infected individuals. We also provide three methods for the estimation of P using real data, all of them based on the minimization of the quadratic error: an exact solution using the associated Lagrangian function and the Karush-Kuhn-Tucker conditions, a Monte Carlo computational scheme acting on the total set of local minima, and a genetic algorithm for approximating the global minima. Although calculating the exact solutions of all the linear systems arising from the use of the Lagrangian naturally gives the best optimization result, the huge number of such systems that appear as the time variable increases makes numerical methods necessary. We have chosen genetic algorithms. Indeed, we show that the results obtained in this way provide good solutions for the model.
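A minimal numerical sketch of the model is given below: the resolved series E is generated as the convolution of I with a survival-type curve P, and P is then re-estimated by linear least squares. Clipping stands in, very crudely, for the non-negativity constraints handled exactly by the KKT conditions in the paper; the synthetic I and P are illustrative.

```python
import numpy as np

T = 60
days = np.arange(T)
I = 100.0 * np.exp(-0.5 * ((days - 20.0) / 6.0) ** 2)           # new infections per day
P_true = np.clip(1.0 - np.exp(-(days - 10.0) / 5.0), 0.0, 1.0)  # resolution curve
E = np.convolve(I, P_true)[:T]                                  # deceased + recovered

# lower-triangular convolution matrix: (A @ P)[t] = sum_s I[t - s] * P[s]
A = np.array([[I[t - s] if t >= s else 0.0 for s in range(T)] for t in range(T)])
P_est = np.linalg.lstsq(A, E, rcond=None)[0]
P_est = np.clip(P_est, 0.0, 1.0)               # crude stand-in for the KKT constraints
print(np.max(np.abs(P_est - P_true)))          # near zero on clean synthetic data
```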


Author(s):  
Khayra Bencherif ◽  
Mimoun Malki ◽  
Djamel Amar Bensaber

This article describes how the Linked Open Data Cloud project allows data providers to publish structured data on the web according to the Linked Data principles. In this context, several link discovery frameworks have been developed for connecting entities contained in knowledge bases. To achieve high effectiveness in the link discovery task, a suitable link configuration is required to specify the similarity conditions. Unfortunately, such configurations are specified manually, which makes the link discovery task tedious and more difficult for users. In this article, the authors address this drawback by proposing a novel approach for the automatic determination of link specifications. The proposed approach is based on a neural network model that combines a set of existing metrics into a compound one. The authors evaluate the effectiveness of the proposed approach in three experiments using real data sets from the LOD Cloud. In addition, the proposed approach is compared against existing link specification approaches and is shown to outperform them in most experiments.
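The sketch below illustrates the idea of a learned compound measure: two baseline similarities (a character-level ratio and a token Jaccard index) are fed to a small neural network that decides whether two entity labels should be linked. The metrics, the tiny hand-made training set, and the network size are hypothetical stand-ins for the metrics and LOD Cloud data used in the article.

```python
from difflib import SequenceMatcher

import numpy as np
from sklearn.neural_network import MLPClassifier

def metrics(a, b):
    """Two baseline similarities for a pair of entity labels."""
    ratio = SequenceMatcher(None, a.lower(), b.lower()).ratio()
    ta = set(a.lower().replace(",", " ").split())
    tb = set(b.lower().replace(",", " ").split())
    return [ratio, len(ta & tb) / len(ta | tb)]

pairs = [("Barack Obama", "Obama, Barack", 1), ("Paris", "Paris, France", 1),
         ("J. S. Bach", "Johann Sebastian Bach", 1), ("Berlin", "Dublin", 0),
         ("Apple Inc.", "Pineapple", 0), ("Mozart", "Madrid", 0)]
X = np.array([metrics(a, b) for a, b, _ in pairs])
y = np.array([label for _, _, label in pairs])

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, y)
print(clf.predict(np.array([metrics("London", "London, UK")])))  # expect a link (1)
```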


Geophysics ◽  
1996 ◽  
Vol 61 (6) ◽  
pp. 1846-1858 ◽  
Author(s):  
Claudio Bagaini ◽  
Umberto Spagnolini

Continuation to zero offset [better known as dip moveout (DMO)] is a standard tool for seismic data processing. In this paper, the concept of DMO is extended by introducing a set of operators: the continuation operators. These operators, which are implemented in integral form with a defined amplitude distribution, perform the mapping between common shot or common offset gathers for a given velocity model. The application of the shot continuation operator to dip‐independent velocity analysis allows a direct implementation in the acquisition domain by exploiting the comparison between real data and data continued in the shot domain. Shot and offset continuation allow the restoration of missing shots or missing offsets by using a velocity model provided by common shot velocity analysis or another dip‐independent velocity analysis method.


Geophysics ◽  
1993 ◽  
Vol 58 (1) ◽  
pp. 91-100 ◽  
Author(s):  
Claude F. Lafond ◽  
Alan R. Levander

Prestack depth migration still suffers from the problems associated with building appropriate velocity models. The two main after‐migration, before‐stack velocity analysis techniques currently used, depth focusing and residual moveout correction, have found good use in many applications but have also shown their limitations in the case of very complex structures. To address this issue, we have extended the residual moveout analysis technique to the general case of heterogeneous velocity fields and steep dips, while keeping the algorithm robust enough to be of practical use on real data. Our method is not based on analytic expressions for the moveouts and requires no a priori knowledge of the model, but instead uses geometrical ray tracing in heterogeneous media, layer‐stripping migration, and local wavefront analysis to compute residual velocity corrections. These corrections are back projected into the velocity model along raypaths in a way that is similar to tomographic reconstruction. While this approach is more general than existing migration velocity analysis implementations, it is also much more computer intensive and is best used locally around a particularly complex structure. We demonstrate the technique using synthetic data from a model with strong velocity gradients and then apply it to a marine data set to improve the positioning of a major fault.
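A very reduced sketch of the back-projection step is shown below: each traveltime residual is distributed over the cells its ray traverses, weighted by path length, in a SIRT-style iteration. The random ray-cell matrix is purely illustrative; in the method described above it would come from geometrical ray tracing in the heterogeneous model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_rays, n_cells = 40, 25
L = rng.uniform(0.0, 1.0, size=(n_rays, n_cells))   # path length of ray i in cell j
L[L < 0.7] = 0.0                                    # each ray crosses only a few cells
ds_true = rng.normal(scale=0.01, size=n_cells)      # unknown slowness perturbation
dt = L @ ds_true                                    # measured residual traveltimes

ds = np.zeros(n_cells)                              # reconstructed perturbation
for _ in range(100):                                # SIRT-style iterations
    resid = dt - L @ ds
    ds += (L.T @ (resid / (L.sum(axis=1) + 1e-12))) / (L.sum(axis=0) + 1e-12)
print(np.linalg.norm(dt - L @ ds))                  # data residual shrinks
```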


Geophysics ◽  
2013 ◽  
Vol 78 (2) ◽  
pp. R107-R124 ◽  
Author(s):  
Yaser Gholami ◽  
Romain Brossier ◽  
Stéphane Operto ◽  
Vincent Prieux ◽  
Alessandra Ribodetti ◽  
...  

It is necessary to account for anisotropy in full waveform inversion (FWI) of wide-azimuth and wide-aperture seismic data in most geologic environments, both for correct depth positioning of reflectors and for reliable estimation of wave speeds as a function of the direction of propagation. In this framework, choosing a suitable anisotropic subsurface parameterization is a central issue in monoparameter and multiparameter FWI, because this parameterization defines the influence of each physical parameter class on the data as a function of the scattering angle, and hence the resolution of the parameter reconstruction and the potential trade-off between different parameter classes. We apply monoparameter and multiparameter frequency-domain acoustic vertical transverse isotropic FWI to synthetic and real wide-aperture data, representative of the Valhall oil field. We first show that reliable monoparameter FWI can be performed to build a high-resolution velocity model (for the vertical, horizontal, or normal-moveout velocity), provided that the background models of the two Thomsen parameters describe the long wavelengths of the subsurface sufficiently accurately. Alternatively, we show the feasibility of the joint reconstruction of two wave speeds (e.g., the vertical and horizontal wave speeds) with limited trade-off effects, while the Thomsen parameter [Formula: see text] is kept fixed during the inversion. However, the fact that the combined wave speeds influence the data over only a limited range of scattering angles can significantly hamper the resolution with which the two wave speeds are imaged. These conclusions, inferred from the application to the real data, are fully consistent with those drawn from the theoretical parameterization analysis of acoustic vertical transverse isotropic FWI performed in the companion report.

