Unsupervised deep learning with higher-order total-variation regularization for multidimensional seismic data reconstruction

Geophysics ◽  
2021 ◽  
pp. 1-62
Author(s):  
Thomas André Larsen Greiner ◽  
Jan Erik Lie ◽  
Odd Kolbjørnsen ◽  
Andreas Kjelsrud Evensen ◽  
Espen Harris Nilsen ◽  
...  

In 3D marine seismic acquisition, the seismic wavefield is not sampled uniformly in the spatial directions. This leads to a seismic wavefield consisting of irregularly and sparsely populated traces, with large gaps between consecutive sail lines, especially at near offsets. The problem of reconstructing the complete seismic wavefield from a subsampled and incomplete wavefield is formulated as an underdetermined inverse problem. We investigate unsupervised deep learning based on a convolutional neural network (CNN) for multidimensional wavefield reconstruction of irregularly populated traces defined on a regular grid. The proposed network is based on an encoder-decoder architecture with an overcomplete latent representation and includes regularization penalties to stabilize the solution. We propose a combination of penalties consisting of an L2-norm penalty on the network parameters and first- and second-order total-variation (TV) penalties on the model. We demonstrate the performance of the proposed method on broadband synthetic data and on field data represented by constant-offset gathers from a source-over-cable data set from the Barents Sea. In the field-data example, we compare the results to a full production flow from a contractor company based on a 5D Fourier interpolation approach. In this example, our approach reconstructs the wavefield with less noise in the sparse near offsets than the industry approach, which leads to improved structural definition of the near offsets in the migrated sections.
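As an illustration of the penalty combination described above, here is a minimal PyTorch sketch of an unsupervised reconstruction loss with an L2 penalty on the network parameters and first- and second-order TV terms on the model estimate. The network `net`, latent input `z`, and the weights `l2_w`, `a1`, `a2` are illustrative assumptions, not the authors' exact configuration.

```python
import torch

def tv1(x):
    # First-order total variation: sum of absolute first differences
    # along the two spatial axes of a 2D model estimate.
    return (x[:, 1:] - x[:, :-1]).abs().sum() + (x[1:, :] - x[:-1, :]).abs().sum()

def tv2(x):
    # Second-order total variation: absolute second differences,
    # penalizing curvature rather than slope.
    return (x[:, 2:] - 2 * x[:, 1:-1] + x[:, :-2]).abs().sum() + \
           (x[2:, :] - 2 * x[1:-1, :] + x[:-2, :]).abs().sum()

def reconstruction_loss(net, z, d_obs, mask, l2_w=1e-4, a1=1e-3, a2=1e-3):
    """Data misfit on observed traces only, plus the combined penalties.

    mask is 1 where a trace was recorded and 0 in the gaps, so the
    network is free to fill the unobserved positions.
    """
    m = net(z).squeeze()
    misfit = ((mask * (m - d_obs)) ** 2).sum()
    l2 = sum((p ** 2).sum() for p in net.parameters())
    return misfit + l2_w * l2 + a1 * tv1(m) + a2 * tv2(m)
```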

2021 ◽  
Vol 87 (8) ◽  
pp. 577-591
Author(s):  
Fengpeng Li ◽  
Jiabao Li ◽  
Wei Han ◽  
Ruyi Feng ◽  
Lizhe Wang

Inspired by the outstanding achievements of deep learning, supervised deep learning representation methods for high-spatial-resolution remote sensing image scene classification have obtained state-of-the-art performance. However, supervised deep learning representation methods need a considerable amount of labeled data to capture class-specific features, which limits their application when only a few labeled training samples are available. An unsupervised deep learning representation method for high-resolution remote sensing image scene classification is proposed in this work to address this issue. The proposed method, based on contrastive learning, narrows the distance between positive view pairs (color channels belonging to the same image) and widens the gap between negative view pairs (color channels from different images) to obtain class-specific representations of the input data without any supervised information. The classifier uses features extracted by the convolutional neural network (CNN)-based feature extractor, together with the labels of the training data, to delimit the feature space of each category, and then makes predictions on the test data using linear regression. Compared with existing unsupervised deep learning representation methods for high-resolution remote sensing image scene classification, the contrastive learning CNN achieves state-of-the-art performance on three benchmark data sets of different scales: the small-scale RSSCN7 data set, the midscale aerial image data set, and the large-scale NWPU-RESISC45 data set.
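A minimal sketch of the channel-based contrastive objective described above, written as an NT-Xent-style loss in PyTorch. The specific channels used, the `encoder` (assumed to accept single-channel input), and the temperature are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def channel_contrastive_loss(encoder, images, temperature=0.5):
    """NT-Xent-style loss over color-channel views (illustrative).

    Two channels of the same image form a positive pair; channels
    drawn from different images in the batch act as negatives.
    """
    v1 = encoder(images[:, 0:1])     # (B, D) embeddings of channel 0
    v2 = encoder(images[:, 1:2])     # (B, D) embeddings of channel 1
    z = F.normalize(torch.cat([v1, v2]), dim=1)   # (2B, D)
    sim = z @ z.t() / temperature                 # pairwise cosine similarities
    sim.fill_diagonal_(float('-inf'))             # exclude self-pairs
    B = images.size(0)
    # The positive for row i is its partner view at index i+B (or i-B).
    targets = torch.cat([torch.arange(B) + B, torch.arange(B)])
    return F.cross_entropy(sim, targets)
```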


2021 ◽  
Vol 40 (10) ◽  
pp. 768-777
Author(s):  
Vemund S. Thorkildsen ◽  
Leiv-J. Gelius ◽  
Enders A. Robinson

If an optical hologram is broken into pieces, a virtual object can still be reconstructed from each of the fragments. This reconstruction is possible because each diffraction point emits waves that reach every point of the hologram; thus, the entire object is encoded into each subset of the hologram. Analogous to the broken hologram, undersampled seismic data that violate the Nyquist-Shannon sampling theorem may still give a well-resolved image of the subsurface. A theoretical framework for this idea has already been introduced in the literature and denoted holistic migration. However, the general lack of demonstrations on seismic field data inspired the study presented here. Since the optical hologram is diffraction-driven, we propose to employ diffraction-separated data, rather than conventional reflection data, as input for holistic migration. We follow the original idea and regularly undersample the data spatially. Such a sampling strategy produces coherent noise in the image domain, so we introduce a novel signal-processing technique to remove it. The feasibility of the proposed approach is demonstrated on the Sigsbee2a controlled data set and on field data from the Barents Sea.
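A minimal NumPy sketch of the regular spatial undersampling described above, under the assumption that decimation keeps every k-th trace on the acquisition grid; the noise-removal step itself is not specified in the abstract and is not sketched here.

```python
import numpy as np

def regular_undersample(gather, keep_every=4):
    """Keep every keep_every-th trace of a (traces x samples) gather,
    zeroing the rest so the grid geometry is preserved.

    Regular (rather than random) decimation aliases coherent events,
    which is why the resulting image-domain noise is coherent.
    """
    out = np.zeros_like(gather)
    out[::keep_every] = gather[::keep_every]
    return out
```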


2021 ◽  
Vol 13 (2) ◽  
pp. 190
Author(s):  
Bouthayna Msellmi ◽  
Daniele Picone ◽  
Zouhaier Ben Rabah ◽  
Mauro Dalla Mura ◽  
Imed Riadh Farah

In this research study, we deal with remote sensing data analysis over the high-dimensional space formed by hyperspectral images. This task is generally complex due to the large spectral and spatial richness and the presence of mixed pixels. Several spectral unmixing methods have therefore been proposed to discriminate mixed spectra by estimating the classes and their abundance rates. However, while information about mixed-pixel composition is very useful for some applications, it is insufficient for many others: one also needs the spatial localization of the classes detected during the spectral unmixing process. To solve this problem and specify the spatial location of the different land-cover classes within a mixed pixel, sub-pixel mapping techniques were introduced. This manuscript presents a novel sub-pixel mapping process relying on K-SVD (K-singular value decomposition) dictionary learning and total variation as a spatial regularizer (SMKSVD-TV: Sub-pixel Mapping based on K-SVD dictionary learning and Total Variation). The proposed approach adopts total variation as a spatial regularization term to keep edges smooth, and a spatial dictionary pre-constructed with the K-SVD dictionary training algorithm to capture more spatial configurations at the sub-pixel level. It was tested and validated on three real hyperspectral data sets. The experimental results reveal that using a learned spatial dictionary with isotropic total variation improves the sub-pixel spatial localization of the classes while taking pre-learned spatial patterns into account. The results also make clear that the K-SVD dictionary learning algorithm can be applied to construct a spatial dictionary tailored to each data set.
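A minimal NumPy sketch of the isotropic total-variation term used above as the spatial regularizer; the K-SVD dictionary learning stage is omitted, and the boundary handling and `eps` smoothing are assumptions for illustration.

```python
import numpy as np

def isotropic_tv(m, eps=1e-8):
    """Isotropic total variation of a 2D abundance/label map:
    the sum over pixels of sqrt(dx^2 + dy^2)."""
    # Replicate the last row/column so the differences keep the map's shape.
    dx = np.diff(m, axis=1, append=m[:, -1:])  # horizontal differences
    dy = np.diff(m, axis=0, append=m[-1:, :])  # vertical differences
    return np.sqrt(dx**2 + dy**2 + eps).sum()
```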


Businesses ◽  
2021 ◽  
Vol 1 (1) ◽  
pp. 51-71
Author(s):  
Oussama Benbrahim Ansari

Spatial clustering is a fundamental instrument in modern geo-marketing. The complexity of handling high-dimensional, geo-referenced data in the context of distribution networks poses important challenges for marketers trying to identify customer segments with useful pattern similarities. The increasing availability of geo-referenced data also places more pressure on existing geo-marketing methods and makes it more difficult to detect hidden or non-linear relationships between variables. In recent years, artificial neural networks have become established in disciplines such as engineering, medical diagnosis, and finance for solving complex problems, owing to their high performance and accuracy. The purpose of this paper is to perform market segmentation using unsupervised deep learning with self-organizing maps in the B2B industrial automation market across the United States. The results of this study demonstrate high clustering performance (4 × 4 neurons) as well as a significant dimensionality reduction achieved by the self-organizing maps. The high level of visualization the maps provide for the initially unorganized data set allows a comprehensive interpretation of the different clusters and patterns across space. The cluster centroids were identified as footprints for assigning new marketing channels to ensure better market coverage.
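A minimal sketch of a 4 × 4 self-organizing map for customer segmentation, using the MiniSom package as one possible implementation; the feature matrix and training parameters below are illustrative assumptions, not the study's actual data or settings.

```python
import numpy as np
from minisom import MiniSom  # pip install minisom

# Hypothetical customer feature matrix: rows are customers, columns
# are standardized attributes (revenue, order frequency, geo features, ...).
X = np.random.rand(500, 12)

# 4 x 4 map, matching the grid size reported above.
som = MiniSom(4, 4, X.shape[1], sigma=1.0, learning_rate=0.5, random_seed=42)
som.random_weights_init(X)
som.train_random(X, num_iteration=5000)

# Each customer is assigned to its best-matching unit; the 4 x 4 grid
# of units plays the role of the cluster map described above.
segments = np.array([som.winner(x) for x in X])
```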


2019 ◽  
Vol 2019 (1) ◽  
pp. 360-368
Author(s):  
Mekides Assefa Abebe ◽  
Jon Yngve Hardeberg

Various whiteboard image degradations greatly reduce the legibility of pen-stroke content as well as the overall quality of the images. Consequently, researchers have addressed the problem with a range of image enhancement techniques. Most state-of-the-art approaches apply common image processing techniques such as background-foreground segmentation, text extraction, contrast and color enhancement, and white balancing. However, such conventional enhancement methods are incapable of recovering severely degraded pen-stroke content and produce artifacts in the presence of complex pen-stroke illustrations. To surmount these problems, the authors have proposed a deep learning based solution. They have contributed a new whiteboard image data set and adopted two deep convolutional neural network architectures for whiteboard image quality enhancement. Their evaluations of the trained models demonstrated superior performance over the conventional methods.


2020 ◽  
Vol 17 (3) ◽  
pp. 299-305 ◽  
Author(s):  
Riaz Ahmad ◽  
Saeeda Naz ◽  
Muhammad Afzal ◽  
Sheikh Rashid ◽  
Marcus Liwicki ◽  
...  

This paper presents a deep learning benchmark on a complex data set known as KFUPM Handwritten Arabic TexT (KHATT). The KHATT data set consists of complex patterns of handwritten Arabic text lines. This paper contributes in three main aspects: (1) pre-processing, (2) a deep learning based approach, and (3) data augmentation. The pre-processing step includes pruning extra white space and de-skewing skewed text lines. We deploy a deep learning approach based on Multi-Dimensional Long Short-Term Memory (MDLSTM) networks and Connectionist Temporal Classification (CTC). MDLSTM has the advantage of scanning the Arabic text lines in all directions (horizontal and vertical) to cover dots, diacritics, strokes, and fine inflections. Combining data augmentation with the deep learning approach yields a clear and promising improvement in results, raising the character recognition (CR) rate to 80.02% from the 75.08% baseline.
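Stock PyTorch offers no MDLSTM layer, so the sketch below substitutes a bidirectional LSTM over the width axis of a text-line feature map and attaches the CTC loss the paper uses; all sizes, the alphabet of 80 characters, and the feature extractor are assumptions for illustration.

```python
import torch
import torch.nn as nn

class LineRecognizer(nn.Module):
    """BLSTM-over-width stand-in for an MDLSTM text-line recognizer."""
    def __init__(self, feat_dim=64, hidden=128, n_classes=80):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, hidden, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * hidden, n_classes + 1)  # +1 for the CTC blank

    def forward(self, x):                  # x: (batch, width, feat_dim)
        h, _ = self.rnn(x)
        return self.fc(h).log_softmax(-1)  # (batch, width, classes + 1)

ctc = nn.CTCLoss(blank=0, zero_infinity=True)
logits = LineRecognizer()(torch.randn(2, 120, 64)).permute(1, 0, 2)  # (T, N, C)
targets = torch.randint(1, 81, (2, 30))           # dummy label sequences
input_lens = torch.full((2,), 120, dtype=torch.long)
target_lens = torch.full((2,), 30, dtype=torch.long)
loss = ctc(logits, targets, input_lens, target_lens)
```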


Author(s):  
Kyungkoo Jun

Background & Objective: This paper proposes a Fourier transform inspired method to classify human activities from time series sensor data. Methods: Our method begins by decomposing 1D input signal into 2D patterns, which is motivated by the Fourier conversion. The decomposition is helped by Long Short-Term Memory (LSTM) which captures the temporal dependency from the signal and then produces encoded sequences. The sequences, once arranged into the 2D array, can represent the fingerprints of the signals. The benefit of such transformation is that we can exploit the recent advances of the deep learning models for the image classification such as Convolutional Neural Network (CNN). Results: The proposed model, as a result, is the combination of LSTM and CNN. We evaluate the model over two data sets. For the first data set, which is more standardized than the other, our model outperforms previous works or at least equal. In the case of the second data set, we devise the schemes to generate training and testing data by changing the parameters of the window size, the sliding size, and the labeling scheme. Conclusion: The evaluation results show that the accuracy is over 95% for some cases. We also analyze the effect of the parameters on the performance.


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Yahya Albalawi ◽  
Jim Buckley ◽  
Nikola S. Nikolov

This paper presents a comprehensive evaluation of data pre-processing and word embedding techniques in the context of Arabic document classification, in the domain of health-related communication on social media. We evaluate 26 text pre-processing techniques applied to Arabic tweets within the process of training a classifier to identify health-related tweets. For this task we use the traditional machine learning classifiers KNN, SVM, Multinomial NB, and Logistic Regression. Furthermore, we report experimental results with the deep learning architectures BLSTM and CNN on the same text classification problem. Since word embeddings are more typically used as the input layer in deep networks, in the deep learning experiments we evaluate several state-of-the-art pre-trained word embeddings with the same text pre-processing applied. To achieve these goals, we use two data sets: one for both training and testing, and another only for testing the generality of our models. Our results indicate that only four of the 26 pre-processing techniques improve classification accuracy significantly. For the first data set of Arabic tweets, we found that Mazajak CBOW pre-trained word embeddings as the input to a BLSTM deep network led to the most accurate classifier, with an F1 score of 89.7%. For the second data set, Mazajak Skip-Gram pre-trained word embeddings as the input to a BLSTM led to the most accurate model, with an F1 score of 75.2% and accuracy of 90.7%, compared to an F1 score of 90.8% achieved by Mazajak CBOW for the same architecture but with a lower accuracy of 70.89%. Our results also show that the performance of the best traditional classifier we trained is comparable to that of the deep learning methods on the first data set, but significantly worse on the second.
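A minimal PyTorch sketch of the BLSTM-over-pre-trained-embeddings setup described above; the loading of the Mazajak vectors is omitted, and the hidden size, mean pooling, and binary output are assumptions for illustration.

```python
import torch
import torch.nn as nn

class BLSTMTweetClassifier(nn.Module):
    """BLSTM classifier over a frozen pre-trained embedding matrix."""
    def __init__(self, embedding_matrix, hidden=128):
        super().__init__()
        # embedding_matrix: (vocab_size, dim) float tensor, e.g. Mazajak vectors.
        self.emb = nn.Embedding.from_pretrained(embedding_matrix, freeze=True)
        self.blstm = nn.LSTM(embedding_matrix.size(1), hidden,
                             bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * hidden, 2)   # health-related vs. not

    def forward(self, token_ids):            # (batch, seq_len) token indices
        h, _ = self.blstm(self.emb(token_ids))
        return self.fc(h.mean(dim=1))        # mean-pool over time steps

# Usage with a dummy 10k-word, 300-dim embedding table:
model = BLSTMTweetClassifier(torch.randn(10_000, 300))
scores = model(torch.randint(0, 10_000, (8, 40)))  # (8, 2) logits
```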

