A Boltzmann machine for high-resolution prestack seismic inversion

2019 ◽  
Vol 7 (3) ◽  
pp. SE215-SE224 ◽  
Author(s):  
Son Dang Thai Phan ◽  
Mrinal K. Sen

Seismic inversion is a popular approach that aims at predicting indicative properties to support the geologic interpretation process. Existing inversion techniques show weaknesses when dealing with complex geologic areas, where uncertainties arise from the guiding model provided by the interpreters. We have developed a prestack seismic inversion algorithm using a machine-learning algorithm called the Boltzmann machine. Unlike common inversion approaches, this stochastic neural network does not require a starting model to guide the solution; however, low-frequency models are required to convert the inversion-derived reflectivity terms to absolute elastic P- and S-impedance as well as density. Our algorithm incorporates a single-layer Hopfield neural network whose neurons can be treated as the desired reflectivity terms. The optimization process seeks the global minimum by combining the network with a stochastic model update from the mean-field annealing algorithm. We also use a Z-shaped sample-sorting scheme and first-order Tikhonov regularization to improve the lateral continuity of the results and to stabilize the inversion. The algorithm is applied to a 2D field data set to invert for high-resolution P- and S-impedance sections that better capture features away from the reservoir zone. The resulting models are strongly supported by the well results and reveal realistic features that are not clearly displayed in the model-based deterministic inversion result. In combination with well-log analyses, the new features appear to be a good prospect for hydrocarbons.
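The optimization idea named in the abstract can be illustrated with a minimal mean-field annealing sketch: a Hopfield-style energy is relaxed by deterministic tanh updates while a temperature parameter is lowered. The coupling matrix `W`, bias `b`, and cooling schedule below are illustrative assumptions, not the authors' prestack formulation; the neuron states merely stand in for the reflectivity terms.

```python
import numpy as np

def mean_field_annealing(W, b, t_init=10.0, t_final=0.01,
                         cooling=0.95, sweeps=50, seed=None):
    """Minimize the Hopfield energy E(m) = -0.5 m^T W m - b^T m by
    annealing the mean-field update m = tanh((W m + b) / T)."""
    rng = np.random.default_rng(seed)
    m = rng.uniform(-0.1, 0.1, size=len(b))  # near-zero start, no starting model
    T = t_init
    while T > t_final:
        for _ in range(sweeps):
            m = np.tanh((W @ m + b) / T)  # synchronous mean-field update
        T *= cooling                      # geometric cooling schedule
    return m  # relaxed neuron states, standing in for reflectivities

# Toy quadratic problem: W and b encode an assumed data-misfit objective.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
W = -(A.T @ A)            # negative-definite coupling -> convex toy energy
b = rng.normal(size=5)
print(mean_field_annealing(W, b, seed=1))
```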

Geophysics ◽  
2014 ◽  
Vol 79 (1) ◽  
pp. M1-M10 ◽  
Author(s):  
Leonardo Azevedo ◽  
Ruben Nunes ◽  
Pedro Correia ◽  
Amílcar Soares ◽  
Luis Guerreiro ◽  
...  

Due to the nature of seismic inversion problems, there are multiple possible solutions that can equally fit the observed seismic data while diverging from the real subsurface model. Consequently, it is important to assess how inverse-impedance models are converging toward the real subsurface model. For this purpose, we evaluated a new methodology that combines the multidimensional scaling (MDS) technique with an iterative geostatistical elastic seismic inversion algorithm. The geostatistical inversion algorithm inverted partial angle stacks directly for acoustic and elastic impedance (AI and EI) models. It was based on a genetic algorithm in which the model perturbation at each iteration was performed by means of stochastic sequential simulation. To assess the reliability and convergence of the inverted models at each step, the simulated models can be projected into a metric space computed by MDS. This projection allows similar models to be distinguished from variable ones and the convergence of inverted models toward the real impedance models to be assessed. The geostatistical inversion results of a synthetic data set, in which the real AI and EI models are known, were plotted in this metric space along with the known impedance models. We applied the same principle to a real data set using a cross-validation technique. These examples reveal that MDS is a valuable tool for evaluating the convergence of the inverse methodology and the impedance-model variability from one iteration of the inversion to the next. In particular, the geostatistical inversion algorithm we evaluated retrieves reliable impedance models while still producing a set of simulated models with considerable variability.
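A minimal sketch of the projection step, under assumptions not taken from the paper (flattened synthetic "impedance" vectors with Euclidean dissimilarities; the paper's realizations come from stochastic sequential simulation): realizations from successive iterations are embedded in 2D with metric MDS alongside the known model, and shrinking distances to that model indicate convergence.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

# Hypothetical stand-ins: each row is one flattened impedance model;
# later iterations are drawn closer to the known (real) model.
rng = np.random.default_rng(42)
real = rng.normal(size=200)
models = np.array([real + rng.normal(scale=1.0 / (it + 1), size=200)
                   for it in range(6)        # 6 iterations ...
                   for _ in range(10)])      # ... 10 realizations each
stack = np.vstack([models, real])            # append the real model last

# Project pairwise Euclidean dissimilarities into 2D with metric MDS.
D = squareform(pdist(stack))
xy = MDS(n_components=2, dissimilarity="precomputed",
         random_state=0).fit_transform(D)

# Distance of each realization to the real model in the metric space:
# decreasing mean distance per iteration signals convergence.
d_to_real = np.linalg.norm(xy[:-1] - xy[-1], axis=1)
print(d_to_real.reshape(6, 10).mean(axis=1))
```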


Author(s):  
T. Zh. Mazakov ◽  
D. N. Narynbekovna

Nowadays, security is a major concern, and face recognition techniques have been studied worldwide because the face can be used for the extraction of distinguishing facial features. An analysis of the commonly used face recognition techniques has been carried out. This paper presents a system for face recognition, for identification and verification purposes, that uses principal component analysis (PCA) with a backpropagation neural network (BPNN); the face recognition system is implemented with neural networks. The neural network is used to produce an output pattern from an input pattern. The facial recognition system is implemented in MATLAB using the neural networks toolbox. A backpropagation neural network is a multilayered network whose weights are adjusted during training on the basis of a sigmoidal activation function. Backpropagation is a supervised learning algorithm that trains the network on a set of input and output data and calculates how the error changes when weights are increased or decreased. This paper also covers the background and future perspectives of face recognition techniques and how these techniques can be improved.
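The PCA-plus-BPNN pipeline can be sketched in a few lines. The paper's system is in MATLAB; this is an assumed scikit-learn equivalent, and the digits data set is only a stand-in for a face database (avoiding any download): PCA compresses each image to its leading "eigenface" coefficients, and an MLP with sigmoidal activations, trained by backpropagation, maps those coefficients to identity labels.

```python
from sklearn.datasets import load_digits   # stand-in for a face database
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# PCA feature extraction followed by a backpropagation-trained MLP
# with a sigmoidal (logistic) activation, mirroring the BPNN described.
clf = make_pipeline(
    PCA(n_components=40, whiten=True, random_state=0),
    MLPClassifier(hidden_layer_sizes=(64,), activation="logistic",
                  max_iter=500, random_state=0))
clf.fit(X_train, y_train)
print(f"verification accuracy: {clf.score(X_test, y_test):.3f}")
```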


2021 ◽  
Vol 502 (3) ◽  
pp. 3200-3209
Author(s):  
Young-Soo Jo ◽  
Yeon-Ju Choi ◽  
Min-Gi Kim ◽  
Chang-Ho Woo ◽  
Kyoung-Wook Min ◽  
...  

We constructed a far-ultraviolet (FUV) all-sky map based on observations from the Far Ultraviolet Imaging Spectrograph (FIMS) aboard the Korean microsatellite Science and Technology SATellite-1. For the ~20 per cent of the sky not covered by FIMS observations, predictions from a deep artificial neural network were used. Seven data sets were chosen as input parameters: five all-sky maps (Hα, E(B − V), N(H i), and two X-ray bands) together with Galactic longitudes and latitudes. Of the pixels in the observed FIMS data set, 70 per cent were randomly selected for training as target parameters and the remaining 30 per cent were used for validation. A simple four-layer neural network architecture, consisting of three convolution layers and a dense layer at the end, was adopted, with an individual activation function for each convolution layer; each convolution layer was followed by a dropout layer. The predicted FUV intensities exhibited good agreement with Galaxy Evolution Explorer observations made in a similar FUV wavelength band at high Galactic latitudes. As a sample application of the constructed map, a dust-scattering simulation was conducted with model optical parameters and a Galactic dust model for a region that included both observed and predicted pixels. Overall, FUV intensities in the observed and predicted regions were reproduced well.
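The described architecture (three convolution layers, each with its own activation and a following dropout layer, plus a dense layer at the end) is concrete enough to sketch. Channel widths, kernel sizes, the specific activations, and the patch shape below are assumptions; the per-pixel dense output is implemented here as a 1×1 convolution.

```python
import torch
import torch.nn as nn

class FUVNet(nn.Module):
    """Sketch of the described four-layer network: three convolution
    layers, each followed by dropout, then a dense output layer."""
    def __init__(self, in_channels=7, p_drop=0.2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(), nn.Dropout2d(p_drop),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.Tanh(), nn.Dropout2d(p_drop),   # individual activations
            nn.Conv2d(64, 32, kernel_size=3, padding=1),
            nn.ELU(), nn.Dropout2d(p_drop),
        )
        self.head = nn.Conv2d(32, 1, kernel_size=1)  # dense layer per pixel

    def forward(self, x):
        return self.head(self.features(x))

# 7 input channels: Halpha, E(B-V), N(HI), two X-ray bands, lon, lat.
x = torch.randn(4, 7, 64, 64)   # a batch of sky patches (assumed shape)
print(FUVNet()(x).shape)        # -> torch.Size([4, 1, 64, 64])
```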


2019 ◽  
Vol 19 (4) ◽  
pp. 1003-1016 ◽  
Author(s):  
Yasamin Keshmiri Esfandabadi ◽  
Maxime Bilodeau ◽  
Patrice Masson ◽  
Luca De Marchi

Ultrasonic wavefield imaging with a non-contact technology can provide detailed information about the health status of an inspected structure. However, the high spatial resolution that is often necessary for accurate damage quantification typically demands a long scanning time. In this work, we investigate a novel methodology to acquire high-resolution wavefields with a reduced number of measurement points so as to minimize the acquisition time. The methodology is based on the combination of compressive sensing and convolutional neural networks to recover high-spatial-frequency information from low-resolution images. A data set was built from 652 wavefield images acquired with a laser Doppler vibrometer, describing guided ultrasonic wave propagation in eight different structures, with and without various simulated defects. Of those 652 images, 326 cases without a defect and 326 cases with a defect were used as the training database for the convolutional neural network, and a further 273 wavefield images were used as a testing database to validate the proposed methodology. For quantitative evaluation, two image-quality metrics were calculated and compared with those achieved by different recovery methods or by training the convolutional neural network with a non-wavefield image data set. The results demonstrate the capability of the technique to enhance image resolution and quality, and to preserve similarity to the wavefield acquired on the full high-resolution grid, while reducing the number of measurement points to 10% of a full grid of scan points.
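A minimal sketch of the recovery side, with assumptions throughout (an SRCNN-style network and a random 10% sampling mask stand in for the authors' compressive-sensing pipeline and network): a wavefield image sampled at a fraction of the scan points is zero-filled to the full grid and a small CNN restores the dense image.

```python
import torch
import torch.nn as nn

class WavefieldSR(nn.Module):
    """Assumed SRCNN-style restorer: maps a sparsely sampled, zero-filled
    wavefield image on the full grid to a high-resolution estimate."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, 5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 1, 5, padding=2))

    def forward(self, x):
        return self.net(x)

# Simulate compressive acquisition: keep a random 10% of scan points,
# zero-fill the rest, then let the CNN restore the dense wavefield.
full = torch.randn(1, 1, 128, 128)            # stand-in wavefield image
mask = (torch.rand_like(full) < 0.10).float() # ~10% of the scan points
restored = WavefieldSR()(full * mask)
print(restored.shape)                         # torch.Size([1, 1, 128, 128])
```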


2021 ◽  
Vol 87 (8) ◽  
pp. 577-591
Author(s):  
Fengpeng Li ◽  
Jiabao Li ◽  
Wei Han ◽  
Ruyi Feng ◽  
Lizhe Wang

Inspired by the outstanding achievements of deep learning, supervised deep learning representation methods for high-spatial-resolution remote sensing image scene classification have obtained state-of-the-art performance. However, supervised deep learning representation methods need a considerable amount of labeled data to capture class-specific features, which limits their application when only a few labeled training samples are available. To address this issue, an unsupervised deep learning representation method for high-resolution remote sensing image scene classification is proposed in this work. The proposed method, based on contrastive learning, narrows the distance between positive view pairs (color channels belonging to the same image) and widens the gap between negative view pairs (color channels from different images) to obtain class-specific representations of the input data without any supervised information. The classifier uses the features extracted by the convolutional neural network (CNN)-based feature extractor, together with the labels of the training data, to fit the space of each category, and then makes predictions in the testing procedure using linear regression. Compared with existing unsupervised deep learning representation methods for high-resolution remote sensing image scene classification, the contrastive learning CNN achieves state-of-the-art performance on three benchmark data sets of different scales: the small-scale RSSCN7 data set, the midscale aerial image data set, and the large-scale NWPU-RESISC45 data set.
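The positive/negative pairing over color channels can be made concrete with an NT-Xent-style contrastive loss; the loss form, temperature, and embedding dimension below are assumptions, not the paper's exact objective. Rows of the two embedding batches come from two color channels of the same images, so matching rows are positives and all other rows serve as negatives.

```python
import torch
import torch.nn.functional as F

def channel_contrastive_loss(emb_a, emb_b, temperature=0.5):
    """Assumed NT-Xent-style loss: emb_a[i] and emb_b[i] embed two color
    channels of image i (positive pair); emb_b[j], j != i, are negatives."""
    za = F.normalize(emb_a, dim=1)
    zb = F.normalize(emb_b, dim=1)
    logits = za @ zb.t() / temperature    # cosine similarities
    targets = torch.arange(za.size(0))    # positives on the diagonal
    return F.cross_entropy(logits, targets)

# Two channel-views of a batch of 8 scenes, embedded by some CNN backbone.
za, zb = torch.randn(8, 128), torch.randn(8, 128)
print(channel_contrastive_loss(za, zb).item())
```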


Author(s):  
Yasser Khan

Telecommunication customer churn is considered a major cause of lost revenue and of erosion of the customer base of voice, multimedia, and broadband service providers, so there is a strong need to understand the contributory factors of churn. Here, factors drawn from data sets obtained from major Pakistani telecom operators are used for modeling, and a comparative technical evaluation is carried out on the basis of the results obtained from the candidate techniques. This research study mainly comprises the proposition of a conceptual framework for telecom customer churn that leads to the creation of a predictive model, which is trained, tested, and evaluated on a data set taken from the Pakistani telecom industry and provides accurate and reliable outcomes. Of the four prevailing statistical and machine-learning algorithms, the artificial neural network is found to be the most reliable model, followed by the decision tree; logistic regression is placed last when considering performance metrics such as accuracy, recall, precision, and the ROC curve. The results reveal that the main parameters responsible for customer churn are data rate, call failure rate, mean time to repair, and monthly billing amount. On the basis of these parameters, the artificial neural network achieved 79% higher efficiency than the low-performing statistical techniques.
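The comparison described can be reproduced in outline. The data below are synthetic stand-ins (the operators' data sets are proprietary), with four informative features mirroring the drivers named in the study; only the three technique families named in the abstract are compared, with assumed hyperparameters.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the telecom data: four informative features
# (data rate, call failure rate, mean time to repair, monthly billing),
# with churners as the minority class.
X, y = make_classification(n_samples=3000, n_features=4, n_informative=4,
                           n_redundant=0, weights=[0.8], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "logistic regression": LogisticRegression(),
    "decision tree": DecisionTreeClassifier(max_depth=5, random_state=0),
    "neural network": MLPClassifier(hidden_layer_sizes=(16,),
                                    max_iter=1000, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: accuracy={acc:.3f} ROC-AUC={auc:.3f}")
```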


2020 ◽  
Vol 13 (5) ◽  
pp. 2185-2196
Author(s):  
Stephan Rasp

Over the last couple of years, machine learning parameterizations have emerged as a potential way to improve the representation of subgrid processes in Earth system models (ESMs). So far, all studies were based on the same three-step approach: first a training dataset was created from a high-resolution simulation, then a machine learning algorithm was fitted to this dataset, before the trained algorithm was implemented in the ESM. The resulting online simulations were frequently plagued by instabilities and biases. Here, coupled online learning is proposed as a way to combat these issues. Coupled learning can be seen as a second training stage in which the pretrained machine learning parameterization, specifically a neural network, is run in parallel with a high-resolution simulation. The high-resolution simulation is kept in sync with the neural network-driven ESM through constant nudging. This enables the neural network to learn from the tendencies that the high-resolution simulation would produce if it experienced the states the neural network creates. The concept is illustrated using the Lorenz 96 model, where coupled learning is able to recover the “true” parameterizations. Further, detailed algorithms for the implementation of coupled learning in 3D cloud-resolving models and the superparameterization framework are presented. Finally, outstanding challenges and issues not resolved by this approach are discussed.
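The nudging step at the heart of coupled learning can be sketched with the single-level Lorenz 96 model (here standing in for the high-resolution simulation; the paper's illustration uses the two-level system, and the time step and relaxation time below are assumptions). The high-resolution state is relaxed toward the neural-network-driven model state, and its free tendency at the visited state is what the parameterization would train on.

```python
import numpy as np

def lorenz96_tendency(x, forcing=8.0):
    """Single-level Lorenz 96: dx_k/dt = (x_{k+1} - x_{k-2}) x_{k-1} - x_k + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def nudged_step(x_hr, x_nn, dt=0.01, tau=0.1):
    """One coupled-learning step sketch: advance the high-resolution state
    x_hr while nudging it toward the NN-driven model state x_nn with
    relaxation time tau; the free tendency at the visited state is the
    training target for the parameterization."""
    free_tend = lorenz96_tendency(x_hr)
    nudge = (x_nn - x_hr) / tau          # keeps the two runs in sync
    x_hr_next = x_hr + dt * (free_tend + nudge)
    return x_hr_next, free_tend

rng = np.random.default_rng(0)
x_hr = rng.normal(size=40)               # K = 40 grid points
x_nn = x_hr + 0.1 * rng.normal(size=40)  # slightly drifted NN-driven state
x_hr, target = nudged_step(x_hr, x_nn)
print(x_hr[:4], target[:4])
```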


2012 ◽  
Author(s):  
Teoh Yeong Kin ◽  
Suzanawati Abu Hasan ◽  
Norhisam Bulot ◽  
Mohammad Hafiz Ismail
