Unsupervised physics-based neural networks for seismic migration

2019 ◽  
Vol 7 (3) ◽  
pp. SE189-SE200 ◽  
Author(s):  
Janaki Vamaraju ◽  
Mrinal K. Sen

We have developed a novel framework for combining physics-based forward models and neural networks to advance seismic processing and inversion algorithms. Migration is an effective tool in seismic data processing and imaging. Over the years, the scope of these algorithms has broadened; today, migration is a central step in the seismic data processing workflow. However, no single migration technique is suitable for all kinds of data and all styles of acquisition, and there is always a trade-off among the accuracy, cost, and flexibility of these algorithms. Machine-learning and artificial-intelligence methods, on the other hand, have proven immensely successful in applications in which big data are available, and their applicability is being extensively investigated in scientific disciplines such as exploration geophysics with the goal of reducing exploration and development costs. In this context, we have used a special kind of unsupervised recurrent neural network and its variants, Hopfield neural networks and the Boltzmann machine, to solve the problems of Kirchhoff and reverse time migration. We use the network to migrate seismic data in a least-squares sense, using simulated annealing to globally optimize the cost function of the neural network. The weights and biases of the neural network are derived from the physics-based forward models that are used to generate the seismic data. The optimal configuration of the neural network after training corresponds to the minimum energy of the network and thus gives the reflectivity solution of the migration problem. Using synthetic examples, we determine that (1) Hopfield neural networks are fast and efficient and (2) they provide reflectivity images with mitigated migration artifacts and improved spatial resolution. Specifically, the presented approach minimizes the artifacts that arise from limited aperture, low subsurface illumination, coarse sampling, and gaps in the data.
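As a minimal sketch of the idea (our own toy construction, not the authors' implementation), the least-squares misfit ||d - Gm||^2 for a linear forward operator G can be written as a Hopfield-style energy with weights W = G^T G and bias b = G^T d, and minimized over binary reflectivity states by simulated annealing:

```python
import numpy as np

# Toy sketch (not the authors' implementation): the misfit ||d - G m||^2
# expands into the Hopfield energy  E(m) = m^T (G^T G) m - 2 (G^T d)^T m + const,
# which simulated annealing minimizes over binary reflectivity states.
rng = np.random.default_rng(0)

n_data, n_model = 40, 20
G = rng.normal(size=(n_data, n_model))               # stand-in forward operator
m_true = (rng.random(n_model) > 0.7).astype(float)   # sparse binary reflectivity
d = G @ m_true                                       # "observed" seismic data

W = G.T @ G                                          # Hopfield weights from physics
b = G.T @ d                                          # Hopfield bias from the data

def energy(m):
    return m @ W @ m - 2.0 * b @ m

m = rng.integers(0, 2, n_model).astype(float)        # random initial state
e0 = energy(m)
m_best, e_best = m.copy(), e0
T = 5.0                                              # initial annealing temperature
for _ in range(4000):
    i = rng.integers(n_model)
    trial = m.copy()
    trial[i] = 1.0 - trial[i]                        # flip one binary neuron
    dE = energy(trial) - energy(m)
    if dE < 0 or rng.random() < np.exp(-dE / T):
        m = trial                                    # Metropolis acceptance
        if energy(m) < e_best:
            m_best, e_best = m.copy(), energy(m)
    T *= 0.999                                       # geometric cooling schedule

print(float(np.mean(m_best == m_true)))              # agreement with true reflectivity
```

In the paper's setting, G would be the Kirchhoff or reverse time modeling operator, and the minimum-energy state of the network is the reflectivity image.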

Geophysics ◽  
2020 ◽  
Vol 85 (4) ◽  
pp. WA41-WA52 ◽  
Author(s):  
Dario Grana ◽  
Leonardo Azevedo ◽  
Mingliang Liu

Among the large variety of mathematical and computational methods for estimating reservoir properties such as facies and petrophysical variables from geophysical data, deep machine-learning algorithms have gained significant popularity for their ability to obtain accurate solutions for geophysical inverse problems in which the physical models are partially unknown. Solutions of classification and inversion problems are generally not unique, and uncertainty quantification studies are required to quantify the uncertainty in the model predictions and determine the precision of the results. Probabilistic methods, such as Monte Carlo approaches, provide a reliable way to capture the variability of the set of possible models that match the measured data. Here, we focused on the classification of facies from seismic data and benchmarked the performance of three different algorithms: recurrent neural network, Monte Carlo acceptance/rejection sampling, and Markov chain Monte Carlo. We tested and validated these approaches at the well locations by comparing classification predictions to the reference facies profile. The accuracy of the classification results is measured by the mismatch between the predictions and the log facies profile. Our study found that when the training data set of the neural network is large enough and the prior information about the transition probabilities of the facies in the Monte Carlo approach is not informative, machine-learning methods lead to more accurate solutions; however, the uncertainty of the solution might be underestimated. When some prior knowledge of the facies model is available, for example, from nearby wells, Monte Carlo methods provide solutions with similar accuracy to the neural network and allow a more robust quantification of the uncertainty of the solution.
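A minimal sketch of the Monte Carlo acceptance/rejection approach (all numbers are illustrative assumptions, not the paper's data): facies profiles are drawn from a Markov-chain prior and accepted with probability proportional to the Gaussian likelihood of an observed seismic attribute.

```python
import numpy as np

# Toy acceptance/rejection facies classifier; means, stds, the transition
# matrix, and the observed attribute values are all hypothetical.
rng = np.random.default_rng(1)

means = np.array([0.0, 1.0])         # attribute means for facies 0 (shale), 1 (sand)
stds = np.array([0.4, 0.4])
P_trans = np.array([[0.9, 0.1],      # assumed prior transition probabilities
                    [0.2, 0.8]])

obs = np.array([0.1, -0.2, 0.9, 1.1, 1.0, 0.2])  # observed attribute along the well

def sample_profile():
    """Draw one facies profile from the Markov-chain prior."""
    f = [rng.integers(2)]
    for _ in range(len(obs) - 1):
        f.append(rng.choice(2, p=P_trans[f[-1]]))
    return np.array(f)

def log_like(f):
    return float(np.sum(-0.5 * ((obs - means[f]) / stds[f]) ** 2))

# Normalize by the best pointwise profile so the acceptance probability <= 1.
f_best = np.argmin(np.abs(obs[:, None] - means), axis=1)
log_like_max = log_like(f_best)

accepted = []
while len(accepted) < 200:
    f = sample_profile()
    if np.log(rng.random()) < log_like(f) - log_like_max:
        accepted.append(f)      # keep prior samples that explain the data

posterior = np.mean(accepted, axis=0)   # pointwise P(facies = sand | data)
print(posterior.round(2))
```

The spread of the accepted set is what provides the uncertainty quantification that a single deterministic classifier lacks.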


Geophysics ◽  
2020 ◽  
Vol 85 (4) ◽  
pp. U87-U98
Author(s):  
Jing Zheng ◽  
Jerry M. Harris ◽  
Dongzhuo Li ◽  
Badr Al-Rumaih

Automatically picking event arrival times and classifying the corresponding phases are important steps in seismic data processing. Traditional arrival-time picking algorithms usually need 3C seismograms to classify the event phase. However, a large number of borehole seismic data sets are recorded by arrays of hydrophones or distributed acoustic sensing elements whose sensors are 1C and cannot be analyzed for particle motion or phase polarization. With the development of deep-learning techniques, researchers have applied the convolutional neural network (CNN) to seismic phase autopicking. In previous work, CNNs were applied to 3C seismograms to detect phases and pick arrivals. We have extended this work to 1C seismic data and focused on two main points. One is the effect of the label vector on phase-detection performance. The other is to propose an architecture that deals with training data that insufficiently cover the range of signal-to-noise ratios encountered in practice. Two novel findings emerge from this analysis. First, the width of the label vector can be designed through signal time-frequency analysis. Second, a combined CNN and recurrent neural network architecture is better suited to designing a P- and S-phase detector that copes with insufficient training data for 1C recordings in time-lapse seismic monitoring. We perform experiments and analysis using synthetic and field time-lapse seismic recordings; the experiments show that the proposed approach is effective for 1C seismic data processing in time-lapse monitoring surveys.
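A combined CNN and RNN phase detector for 1C traces can be sketched as follows (a minimal PyTorch construction of our own, not the authors' exact architecture): Conv1d layers extract local waveform features, a bidirectional GRU models temporal context, and a linear head emits per-sample logits for {noise, P, S}.

```python
import torch
import torch.nn as nn

# Minimal CNN+RNN sketch for per-sample P/S phase detection on 1C traces.
# Layer sizes are illustrative, not taken from the paper.
class PhaseDetector(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
        )
        self.rnn = nn.GRU(32, 32, batch_first=True, bidirectional=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                 # x: (batch, 1, n_samples)
        h = self.cnn(x).transpose(1, 2)   # -> (batch, n_samples, 32)
        h, _ = self.rnn(h)                # temporal context in both directions
        return self.head(h)               # per-sample class logits

model = PhaseDetector()
traces = torch.randn(4, 1, 512)           # four dummy 1C traces
logits = model(traces)
print(logits.shape)
```

Per-sample logits make the label a vector over time, which is where the abstract's point about choosing the label-vector width via time-frequency analysis enters.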


2021 ◽  
Vol 16 (93) ◽  
pp. 21-37
Author(s):  
Yuriy N. Lavrenkov

We consider the synthesis of a hybrid convolutional neural network with a modular topology-based architecture, which makes it possible to arrange a parallel convolutional computing system that combines energy transfer with data processing in order to simulate complex functions of natural biological neural populations. The interlayer neural commutation system, based on distributed resonance circuits with layers of electromagnetic metamaterial between the inductive elements, serves as the basis for simulating the interaction between astrocyte networks and the neural clusters responsible for information processing. Data processing is therefore considered both at the level of signal transmission through neural elements and as the interaction of artificial neurons with the astrocytic networks that ensure their functioning. The resulting two-level data processing system solves the target problem by means of a neural network committee. The specific arrangement of the network allows the training procedure to be implemented and configured using properties that are absent from networks consisting of neural populations alone. Training of the convolutional network is based on a preliminary analysis of rhythmic activity, in which artificial astrocytes play the main role of interneural switches. Analyzing the signals moving through the network allows the variable components to be adjusted so that information from the training batches is represented in the available memory circuits in the most efficient way. Moreover, during training we observe neuron activity in various areas in order to distribute the computational load evenly across the network modules and achieve maximum performance. The trained convolutional network is then used to determine the optimal path for an object that moves using energy drawn from the environment.


2020 ◽  
Vol 221 (2) ◽  
pp. 1211-1225 ◽  
Author(s):  
Y X Zhao ◽  
Y Li ◽  
B J Yang

SUMMARY One of the difficulties in desert seismic data processing is the large spectral overlap between noise and reflected signals. Existing denoising algorithms usually degrade the resolution and fidelity of seismic data while denoising, which hinders the recovery of information about underground structures and lithology. To address this problem, we combine a traditional method with deep learning and propose a new feature-extraction and denoising strategy based on a convolutional neural network, namely VMDCNN. In addition, we build a training set from field and synthetic seismic data to optimize the network parameters. The processing results for synthetic and field seismic records show that the proposed method can effectively suppress noise that shares the same frequency band as the reflected signals, with almost no energy loss in the reflected signals. The processing results meet the requirements of high signal-to-noise ratio, high resolution, and high fidelity for seismic data processing.
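The CNN half of such a strategy can be sketched as a small supervised denoiser (our own toy construction; the VMD part of VMDCNN, presumably a variational-mode-decomposition feature-extraction stage, is omitted here, and the synthetic traces are illustrative):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy denoising CNN: learn to map noisy synthetic traces back to a clean
# wavelet-like signal whose spectrum overlaps with the noise.
t = torch.linspace(0, 1, 256)
wavelet = torch.sin(40 * t) * torch.exp(-((t - 0.5) ** 2) / 0.02)
clean = wavelet[None, None, :].repeat(32, 1, 1)   # batch of 32 clean traces
noisy = clean + 0.3 * torch.randn(32, 1, 256)     # independent noise per trace

net = nn.Sequential(
    nn.Conv1d(1, 16, 9, padding=4), nn.ReLU(),
    nn.Conv1d(16, 16, 9, padding=4), nn.ReLU(),
    nn.Conv1d(16, 1, 9, padding=4),               # predicted clean trace
)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

loss0 = loss_fn(net(noisy), clean).item()         # loss before training
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(net(noisy), clean)
    loss.backward()
    opt.step()

print(loss0, loss.item())                         # loss should drop markedly
```

In the paper's setting, the training pairs come from field and synthetic records rather than an analytic wavelet, and the network is tuned to preserve reflected-signal energy.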


2021 ◽  
Author(s):  
Bhadra S Kumar ◽  
Nagavarshini Mayakkannan ◽  
N Sowmya Manojna ◽  
V. Srinivasa Chakravarthy

Artificial feedforward neural networks perform a wide variety of classification and function-approximation tasks with high accuracy. Unlike their artificial counterparts, biological neural networks require a supply of adequate energy delivered to single neurons by a network of cerebral microvessels. Since energy is a limited resource, a natural question is whether the cerebrovascular network can ensure maximum performance of the neural network while consuming minimum energy. Should the cerebrovascular network also be trained, along with the neural network, to achieve such an optimum? To answer these questions in a simplified modeling setting, we constructed an Artificial Neurovascular Network (ANVN) comprising a multilayered perceptron (MLP) connected to a vascular tree structure. The root node of the vascular tree is connected to an energy source, and the terminal nodes of the tree supply energy to the hidden neurons of the MLP. The energy delivered by the terminal vascular nodes to the hidden neurons determines the biases of those neurons. The "weights" on the branches of the vascular tree describe the energy distribution from each parent node to its child nodes, and they are updated by a kind of "backpropagation" of the energy-demand error generated by the hidden neurons. We observed that higher performance was achieved at lower energy levels when the vascular network was trained along with the neural network, indicating that the vascular network needs to be trained to ensure efficient neural performance. We also observed that below a certain network size, the energetic dynamics of the network in the space of per capita energy consumption vs. classification accuracy approaches a fixed-point attractor for various initial conditions; once the number of hidden neurons increases beyond a threshold, the fixed point appears to vanish, giving place to a line of attractors. The model also showed that when resources are limited, the energy consumption of neurons is strongly correlated with their individual contribution to the network's performance.
Author summary: The limited availability of resources played a significant role in shaping evolution, and the brain is no different. It is known to have tremendous computing power at a significantly lower cost than artificial computing systems. Artificial neural networks typically aim at minimizing output error and maximizing accuracy. A biological network like the brain has the added constraint of energy availability, which might force it to choose an optimal solution that provides the best possible accuracy while consuming minimum energy. The intricate vascular network that ensures adequate energy to the brain might be a systematically trained layout rather than a hard-wired anatomical structure. Through this work, we explore how an artificial neural network behaves when it is made dependent on an energy-supply network and how training the energy-supply network influences the performance of the neural network. Our model concluded that training the vascular energy network is highly desirable and that, when the neural network is small, the energy consumed by each neuron is a direct readout of its contribution to network performance.
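The vascular-tree mechanism can be illustrated with a toy two-level tree (our own construction under assumed update rules, not the paper's model): each branch node splits its incoming energy according to softmax weights, and the weights are nudged by each leaf neuron's energy-demand error.

```python
import numpy as np

# Toy vascular tree: a root feeding 2 branch nodes, each feeding 2
# terminal nodes that supply 4 hidden neurons. The per-neuron demands
# below are hypothetical; the update rule is a heuristic stand-in for
# the paper's "backpropagation" of energy-demand error.
w_root = np.zeros(2)          # weights: root -> branch nodes
w_branch = np.zeros((2, 2))   # weights: branch -> terminal nodes

demand = np.array([0.4, 0.1, 0.3, 0.2])   # neuron energy demands (sum to 1)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for _ in range(500):
    split = softmax(w_root)
    supply = np.concatenate([split[i] * softmax(w_branch[i]) for i in range(2)])
    err = demand - supply                   # per-neuron energy-demand error
    # push energy toward under-supplied leaves and branches
    w_branch += 0.5 * err.reshape(2, 2)
    w_root += 0.5 * err.reshape(2, 2).sum(axis=1)

split = softmax(w_root)
supply = np.concatenate([split[i] * softmax(w_branch[i]) for i in range(2)])
print(np.abs(demand - supply).round(3))     # residual supply error per neuron
```

Because the softmax splits conserve total energy, the tree can only match the demand profile by redistributing a fixed budget, which is the constraint the ANVN study exploits.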


2022 ◽  
Vol 13 (1) ◽  
pp. 0-0

EEG analysis aims to help scientists better understand the brain and to support physicians in diagnosis, treatment choices, and brain-computer interfaces. Artificial neural networks are among the most effective learning algorithms for performing computing tasks similar to those of biological neurons in the human brain. In some problems, a neural network model's performance might significantly degrade and overfit due to irrelevant features that negatively influence the model. Swarm-optimization algorithms are robust techniques that can be applied to find optimal solutions to such problems. In this paper, the Grey Wolf Optimizer (GWO) and Particle Swarm Optimization (PSO) algorithms are applied to feature selection and to the training of a feed-forward neural network (FFNN). The performance of the FFNN in terms of test accuracy, precision, recall, and F1 score is investigated. Furthermore, five other machine-learning algorithms are implemented for comparison. Experimental results show that the neural network model optimized via GWO outperforms all the other algorithms.
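The GWO core update can be sketched in a few lines of numpy (shown here minimizing a toy sphere function; in the paper's setting the fitness would instead score an FFNN trained on the selected EEG features):

```python
import numpy as np

# Minimal Grey Wolf Optimizer: the three best wolves (alpha, beta, delta)
# guide the pack, and the exploration factor `a` decays from 2 to 0.
rng = np.random.default_rng(0)

def fitness(X):
    return np.sum(X ** 2, axis=1)        # stand-in objective (sphere function)

n_wolves, dim, n_iter = 20, 5, 200
X = rng.uniform(-10, 10, (n_wolves, dim))

for t in range(n_iter):
    a = 2 - 2 * t / n_iter               # exploration -> exploitation schedule
    order = np.argsort(fitness(X))
    alpha, beta, delta = X[order[:3]]    # three best wolves of this iteration
    new_X = np.zeros_like(X)
    for leader in (alpha, beta, delta):
        A = a * (2 * rng.random((n_wolves, dim)) - 1)
        C = 2 * rng.random((n_wolves, dim))
        new_X += leader - A * np.abs(C * leader - X)   # encircling update
    X = new_X / 3.0                      # average of the three guides

best_fit = fitness(X).min()
print(best_fit)
```

For feature selection, each wolf position is typically thresholded into a binary feature mask before evaluating the FFNN, a detail omitted in this sketch.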


2020 ◽  
Vol 2020 (10) ◽  
pp. 54-62
Author(s):  
Oleksii VASYLIEV

The problem of applying neural networks to calculate the ratings used in banking when deciding whether to grant loans to borrowers is considered. The task is to determine the borrower's rating function from a set of statistical data on the effectiveness of loans provided by the bank. When constructing a regression model to calculate the rating function, its general form must be known in advance; the task then reduces to calculating the parameters that enter the expression for the rating function. In contrast, when neural networks are used, there is no need to specify a general form for the rating function. Instead, a particular neural network architecture is chosen, and its parameters are calculated on the basis of the statistical data. Importantly, the same architecture can be used to process different sets of statistical data. The disadvantages of using neural networks include the need to calculate a large number of parameters, and there is no universal algorithm for determining the optimal architecture. As an example of using neural networks to determine a borrower's rating, a model system is considered in which the rating is given by a known non-analytical rating function. A neural network with two inner layers, containing three and two neurons, respectively, with sigmoid activation functions is used for the modeling. It is shown that the neural network restores the borrower's rating function with quite acceptable accuracy.
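The described model experiment can be sketched with scikit-learn (the rating function, features, and data sizes below are our own assumptions; only the 3-and-2-neuron sigmoid architecture comes from the article):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Sample a hypothetical rating function to produce "statistical data",
# then fit a network with two hidden layers of 3 and 2 sigmoid neurons.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (500, 2))          # borrower features (e.g. income, debt load)
rating = 1.0 / (1.0 + np.exp(-(3.0 * X[:, 0] - 2.0 * X[:, 1])))  # stand-in rating

net = MLPRegressor(hidden_layer_sizes=(3, 2), activation="logistic",
                   solver="lbfgs", max_iter=5000, random_state=0)
net.fit(X, rating)
r2 = net.score(X, rating)                # R^2 of the restored rating function
print(r2)
```

No general form for the rating function is supplied to the network; only the sampled (features, rating) pairs are used, which is the article's central point.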


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Idris Kharroubi ◽  
Thomas Lim ◽  
Xavier Warin

We study the approximation of backward stochastic differential equations (BSDEs for short) with a constraint on the gains process. We first discretize the constraint by applying a so-called facelift operator at the times of a grid. We show that this discretely constrained BSDE converges to the continuously constrained one as the mesh of the grid goes to zero. We then focus on approximating the discretely constrained BSDE, for which we adopt a machine-learning approach. We show that the facelift can be approximated by an optimization problem over a class of neural networks under constraints on the network and its derivative. We then derive an algorithm that converges to the discretely constrained BSDE as the number of neurons goes to infinity, and we conclude with numerical experiments.


Author(s):  
Saša Vasiljević ◽  
Jasna Glišović ◽  
Nadica Stojanović ◽  
Ivan Grujić

According to the World Health Organization, air pollution with PM10 and PM2.5 (PM: particulate matter) is a significant problem that can have serious consequences for human health. Vehicles, as one of the main sources of PM10 and PM2.5 emissions, pollute the air and the environment both by creating particles through burning fuel in the engine and through the wear of various elements in some vehicle systems. In this paper, the authors predict the formation of PM10 and PM2.5 particles generated by brake-system wear using an artificial neural network (ANN). The neural network model was created from particle concentrations that were measured experimentally, and the validity of the trained network was checked by comparing the experimentally measured particle amounts with the prediction results. The experimental results were obtained on an inertial braking dynamometer, where braking was performed in several modes, that is, under different braking parameters (simulated vehicle speed, brake-system pressure, temperature, braking time, and braking torque). During braking, the concentrations of PM10 and PM2.5 particles were measured simultaneously. A total of 196 measurements were performed, and these data were used for the training, validation, and verification of the neural network. In the simulations, two types of neural networks were compared: one with a single output and one with two outputs. For each type, the network was trained using three different backpropagation algorithms, and the experimental and simulation results were compared. More accurate predictions were obtained with the single-output network for both particulate sizes, and the smallest error was found for the network trained with the Levenberg-Marquardt backpropagation algorithm.
The aim of creating such a prediction model is to show that neural networks can predict the emission of particles generated by brake wear, which can further be used in modern traffic systems such as traffic control. In addition, this approach could be applied to wear in other vehicle systems, such as the clutch or tires.
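The single-output vs. two-output comparison can be sketched with scikit-learn on synthetic stand-in data (not the authors' 196 dynamometer measurements; note also that sklearn trains with L-BFGS or Adam, whereas the paper's best results used Levenberg-Marquardt backpropagation, which sklearn does not provide):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in: 196 "measurements" of 5 braking parameters and the
# resulting PM10/PM2.5 amounts, with an assumed near-linear dependence.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (196, 5))   # speed, pressure, temperature, time, torque
pm10 = 2.0 * X[:, 0] + 1.5 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.05, 196)
pm25 = 0.6 * pm10 + rng.normal(0, 0.05, 196)
Y = np.column_stack([pm10, pm25])

# Two single-output networks (one per particulate size)...
single = [MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                       max_iter=5000, random_state=0).fit(X, Y[:, k])
          for k in range(2)]
# ...vs. one network predicting both outputs at once.
dual = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                    max_iter=5000, random_state=0).fit(X, Y)

single_r2 = [m.score(X, Y[:, k]) for k, m in enumerate(single)]
dual_r2 = dual.score(X, Y)
print(single_r2, dual_r2)
```

In practice, the comparison would be done on held-out validation data rather than the training set shown here.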

