Transforming seismic data into pseudo-RGB images to predict CO2 leakage using pre-learned convolutional neural network weights

Author(s):  
Rafael Pires de Lima ◽  
Youzuo Lin ◽  
Kurt J. Marfurt

Geophysics ◽ 
2021 ◽  
pp. 1-77
Author(s):  
Hanchen Wang ◽  
Tariq Alkhalifah

The sheer volume of time-lapse data often demands significant event detection and source location effort, especially in settings such as shale gas exploration, where large numbers of micro-seismic events are recorded. In many cases, real-time monitoring and location of these events are essential to production decisions. Conventional methods face considerable drawbacks: traveltime-based methods require traveltime picking on often noisy data, while migration and waveform inversion methods require expensive wavefield solutions and prior event detection. Both approaches involve human intervention, which becomes impractical when many sources need to be located, as is common in micro-seismic monitoring. Machine learning has recently been used to identify micro-seismic events or to locate their sources once they have been identified and picked. We propose a novel artificial neural network framework that maps seismic data directly to potential source locations, without any event picking or detection. To this end, we train two convolutional neural networks on labeled synthetic acoustic data containing simulated micro-seismic events. The first network classifies the number of events in the data and uses a global average pooling layer to reduce computational cost while maintaining high performance. The second network predicts the source locations and other source features, such as peak frequencies and amplitudes. To reduce the size of the network input, we correlate the recorded traces with a central reference trace, allowing the network to focus on the curvature of the data near the zero-lag region. We train the networks to handle single-event, multi-event, and no-event segments extracted from the data. Tests on a simple vertically varying model and on a more realistic Otway field model demonstrate the approach's versatility and potential.
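The abstract names two concrete design choices: correlating each recorded trace with a central reference trace so the input shrinks and events are centred on the zero-lag region, and an event-count classifier that ends in global average pooling instead of wide dense layers. The PyTorch sketch below illustrates both; the layer widths, the `max_lag` window, and the three-class setup are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def correlate_with_reference(gather, max_lag=64):
    """Cross-correlate every trace with the central reference trace and
    keep only lags near zero, so the network input stays small and the
    event curvature is centred on the zero-lag region.

    gather: (n_receivers, n_samples) tensor of recorded traces.
    Returns a (n_receivers, 2 * max_lag + 1) correlation panel.
    """
    n_receivers, n_samples = gather.shape
    ref = gather[n_receivers // 2]                # central reference trace
    kernel = ref.flip(0).view(1, 1, -1)           # flipped -> correlation via conv1d
    padded = F.pad(gather.unsqueeze(1), (n_samples - 1, n_samples - 1))
    xcorr = F.conv1d(padded, kernel).squeeze(1)   # (n_receivers, 2*n_samples - 1)
    zero_lag = n_samples - 1
    return xcorr[:, zero_lag - max_lag : zero_lag + max_lag + 1]

class EventCountClassifier(nn.Module):
    """Small CNN that ends in global average pooling (GAP) rather than
    wide fully connected layers, keeping the parameter count low."""
    def __init__(self, n_classes=3):              # e.g. zero, one, or multiple events
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, n_classes, 1),          # one feature map per class
        )

    def forward(self, x):                         # x: (batch, 1, n_receivers, n_lags)
        maps = self.features(x)
        return maps.mean(dim=(2, 3))              # GAP -> (batch, n_classes) logits
```

Ending in a 1x1 convolution followed by GAP, rather than flattening into dense layers, is what keeps the classifier cheap while it only has to count events rather than localize them.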


Sensors ◽  
2017 ◽  
Vol 17 (12) ◽  
pp. 2930 ◽  
Author(s):  
Søren Skovsen ◽  
Mads Dyrmann ◽  
Anders Mortensen ◽  
Kim Steen ◽  
Ole Green ◽  
...  

Author(s):  
C. Yang ◽  
F. Rottensteiner ◽  
C. Heipke

Abstract. Land use and land cover are two important variables in remote sensing. Land use information is commonly stored in geospatial databases. To update such databases, we present a new approach that determines land cover and classifies land use objects using convolutional neural networks (CNN). High-resolution aerial images and derived data such as digital surface models serve as input. An encoder-decoder based CNN is used for land cover classification; we found a composite of the infrared band and height data to outperform RGB images for this task. We also propose a CNN-based methodology for predicting the land use labels of objects from the geospatial databases, using masks representing object shape, the RGB images, and the pixel-wise land cover class scores as input. For this task, we developed a two-branch network in which the first branch considers the whole area of an image while the second branch focuses on a smaller relevant area. We evaluated our methods on two test sites and achieved overall accuracies of up to 89.6% and 81.7% for land cover and land use, respectively. We also tested our land cover classification on the Vaihingen dataset of the ISPRS 2D semantic labelling challenge and achieved an overall accuracy of 90.7%.
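The land use part of this method reduces to a compact pattern: both branch inputs stack the object mask, the RGB image, and the pixel-wise land cover scores, and one branch sees the whole patch while the other sees a smaller relevant area before the two are fused. A minimal PyTorch sketch follows; the channel counts, backbone depth, and fusion by concatenation are assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn

class TwoBranchLandUseNet(nn.Module):
    """Sketch of the two-branch idea: a global branch over the whole
    patch and a local branch over a zoomed-in relevant area. Input
    channels: object mask (1) + RGB (3) + land cover scores (n_lc)."""
    def __init__(self, n_lc_classes=8, n_lu_classes=10):
        super().__init__()
        in_ch = 1 + 3 + n_lc_classes
        def branch():
            return nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # (batch, 64)
            )
        self.global_branch = branch()
        self.local_branch = branch()
        self.classifier = nn.Linear(2 * 64, n_lu_classes)

    def forward(self, full_patch, local_patch):
        g = self.global_branch(full_patch)    # context from the whole object area
        l = self.local_branch(local_patch)    # detail from the smaller relevant area
        return self.classifier(torch.cat([g, l], dim=1))
```

Feeding the land cover scores in as extra channels is what lets the land use classifier build on the output of the encoder-decoder land cover stage rather than re-learning it from the raw imagery.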


2018 ◽  
Author(s):  
Rollyn Labuguen (P) ◽  
Vishal Gaurav ◽  
Salvador Negrete Blanco ◽  
Jumpei Matsumoto ◽  
Kenichi Inoue ◽  
...  

Abstract. Understanding animal behavior in its natural habitat is a challenging task. One of the primary steps in analyzing animal behavior is feature detection. In this study, we propose the use of a deep convolutional neural network (CNN) to locate monkey features in raw RGB images of monkeys in their natural environment. We train the model to identify features such as the nose and shoulders of the monkey, reaching a model loss of about 0.01.
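Landmark detection of this kind is commonly posed as heatmap regression: the network outputs one heatmap per body part and is trained with a mean-squared-error loss against Gaussian target maps, a formulation under which loss values on the order of 0.01 are plausible. The abstract does not specify the architecture, so everything in this PyTorch sketch, from the layer sizes to the heatmap formulation itself, is an assumption.

```python
import torch
import torch.nn as nn

class MonkeyKeypointNet(nn.Module):
    """Sketch of heatmap-based feature localization: regress one
    heatmap per body part (e.g. nose, left/right shoulder) from an
    RGB image; peak positions give the feature locations."""
    def __init__(self, n_keypoints=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, n_keypoints, 1),   # one heatmap per feature
            nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
        )

    def forward(self, rgb):                  # rgb: (batch, 3, H, W)
        return self.net(rgb)                 # (batch, n_keypoints, H, W)

# Trained against Gaussian target heatmaps; the absolute loss value
# depends on how the targets are scaled.
criterion = nn.MSELoss()
```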


Procedia CIRP ◽  
2020 ◽  
Vol 93 ◽  
pp. 1292-1297 ◽  
Author(s):  
Markus Kreutz ◽  
Abderrahim Ait Alla ◽  
Anatoli Eisenstadt ◽  
Michael Freitag ◽  
Klaus-Dieter Thoben
