Parabolic dictionary learning for seismic wavefield reconstruction across the streamers

Geophysics ◽  
2018 ◽  
Vol 83 (4) ◽  
pp. V263-V282 ◽  
Author(s):  
Pierre Turquais ◽  
Endrias G. Asgedom ◽  
Walter Söllner ◽  
Leiv Gelius

Dictionary learning (DL) methods are effective tools to automatically find a sparse representation of a data set. They train a set of basis vectors on the data to capture the morphology of the redundant signals. The basis vectors are called atoms, and the set is referred to as the dictionary. This dictionary can be used to represent the data in a sparse manner with a linear combination of a few of its atoms. In conventional DL, the atoms are unstructured and are only numerically defined over a grid that has the same sampling as the data. Consequently, the atoms are unknown away from this sampling grid, and a sparse representation of the data in the dictionary domain is not sufficient information to interpolate the data. To overcome this limitation, we have developed a DL method called parabolic DL, in which each learned atom is constrained to represent an elementary waveform that has a constant amplitude along a parabolic traveltime moveout. The parabolic structure is consistent with the physics inherent to the seismic wavefield and can be used to easily interpolate or extrapolate the atoms. Hence, we have developed a parabolic DL-based process to interpolate and regularize seismic data. Briefly, it consists of learning a parabolic dictionary from the data, finding a sparse representation of the data in the dictionary domain, interpolating the dictionary atoms over the desired grid, and, finally, taking the sparse representation of the data in the interpolated dictionary domain. We examine three characteristics of this method, i.e., the parabolic structure, the sparsity promotion, and the adaptation to the data, and we conclude that they strengthen robustness to noise and to aliasing and that they increase the accuracy of the interpolation. For both synthetic and field data sets, we achieve successful seismic wavefield reconstruction across the streamers for typical 3D acquisition geometries.
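
To make the interpolation step concrete, here is a minimal NumPy sketch of how a parabolic atom, defined by an apex time t0, a curvature q, and an elementary waveform, can be evaluated on an arbitrary receiver grid; the function name and the coefficient bookkeeping are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def parabolic_atom(waveform, tau, t_axis, x_positions, t0, q):
    """Sample a parabolic atom on an arbitrary receiver grid.

    waveform : elementary waveform defined on a local time axis tau
    t0, q    : apex time and curvature of the parabolic moveout
               t(x) = t0 + q * x**2 (constant amplitude along it)
    """
    atom = np.zeros((t_axis.size, x_positions.size))
    for j, x in enumerate(x_positions):
        shift = t0 + q * x**2                         # parabolic traveltime
        atom[:, j] = np.interp(t_axis - shift, tau, waveform,
                               left=0.0, right=0.0)   # zero outside support
    return atom

# Reconstruction across the streamers then amounts to re-evaluating every
# learned atom on the dense crossline grid and summing them with the
# sparse coefficients found on the acquired grid, e.g.:
#   d_interp = sum(c_k * parabolic_atom(w_k, tau, t, x_fine, t0_k, q_k))
```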

2021 ◽  
Author(s):  
Mahdi Marsousi

The sparse representation research field and its applications have grown rapidly during the past 15 years. The use of overcomplete dictionaries in sparse representation has attracted extensive attention. Sparse representation was followed by the concept of adapting dictionaries to the input data (dictionary learning). K-SVD is a well-known dictionary learning approach and is widely used in different applications. In this thesis, a novel enhancement to the K-SVD algorithm is proposed which creates a learned dictionary with a specific number of atoms adapted to the input data set. To increase the efficiency of the orthogonal matching pursuit (OMP) method, a new sparse representation method is proposed which applies a multi-stage strategy to reduce computational cost. A new phase-included DCT (PI-DCT) dictionary is also proposed, which significantly reduces the blocking-artifact problem of the conventional DCT. The accuracy and efficiency of the proposed methods are then compared with those of recent approaches, demonstrating the promising performance of the methods proposed in this thesis.
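
The multi-stage OMP variant proposed in the thesis is not reproduced here, but the baseline OMP it accelerates is compact enough to sketch; this is a standard textbook version assuming unit-norm dictionary columns, not code from the thesis:

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Orthogonal matching pursuit: y ~= D @ x with at most n_nonzero
    active atoms; D is assumed to have unit-norm columns."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    coef = np.zeros(0)
    for _ in range(n_nonzero):
        # pick the atom most correlated with the current residual
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k not in support:
            support.append(k)
        # least-squares fit on the support, then update the residual
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x
```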


Author(s):  
Charles Sabin ◽  
Pavel Plevka

Hemihedral twinning is a crystal-growth anomaly in which a specimen is composed of two crystal domains that coincide with each other in three dimensions. However, the orientations of the crystal lattices in the two domains differ in a specific way. In diffraction data collected from hemihedrally twinned crystals, each observed intensity contains contributions from both of the domains. With perfect hemihedral twinning, the two domains have the same volumes and the observed intensities do not contain sufficient information to detwin the data. Here, the use of molecular replacement and of noncrystallographic symmetry (NCS) averaging to detwin a 2.1 Å resolution data set for Aichi virus 1 affected by perfect hemihedral twinning is described. The NCS averaging enabled the correction of errors in the detwinning introduced by the differences between the molecular-replacement model and the crystallized structure. The procedure permitted the structure to be determined from a molecular-replacement model that had 16% sequence identity and a 1.6 Å r.m.s.d. for Cα atoms in comparison to the crystallized structure. The same approach could be used to solve other data sets affected by perfect hemihedral twinning from crystals with NCS.
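
The information loss caused by perfect twinning is easy to see algebraically. For a twin fraction α, each twin-related pair of observed intensities mixes the true intensities as I_obs,1 = (1−α)I₁ + αI₂ and I_obs,2 = αI₁ + (1−α)I₂. The sketch below (illustrative, not from the paper) inverts this system and shows why it becomes singular at α = 0.5, which is exactly where model-based detwinning with NCS averaging takes over:

```python
import numpy as np

def detwin_pair(i_obs_1, i_obs_2, alpha):
    """Algebraic detwinning of a twin-related intensity pair.

    Solves I_obs1 = (1-a)*I1 + a*I2 and I_obs2 = a*I1 + (1-a)*I2.
    The determinant of the 2x2 system is (1 - 2a), so at a = 0.5
    (perfect twinning) the observations no longer determine I1 and I2,
    and phase information from a model is required instead.
    """
    if np.isclose(alpha, 0.5):
        raise ValueError("perfect twinning: algebraic detwinning is singular")
    det = 1.0 - 2.0 * alpha
    i1 = ((1.0 - alpha) * i_obs_1 - alpha * i_obs_2) / det
    i2 = ((1.0 - alpha) * i_obs_2 - alpha * i_obs_1) / det
    return i1, i2
```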


1997 ◽  
Vol 3 (S2) ◽  
pp. 933-934
Author(s):  
C. Gatts ◽  
A. Mariano

The natural ability of Artificial Neural Networks to perform pattern recognition tasks makes them a valuable tool in Electron Microscopy, especially when large data sets are involved. The application of Neural Pattern Recognition to HREM, although incipient, has already produced interesting results both for one-dimensional spectra and for 2D images. In the case of 1D spectra, e.g., a set of EELS spectra acquired during a line scan, an ART-like network can distribute the incoming spectra into classes of similarity, given a "vigilance parameter" (which sets the threshold for the correlation between two spectra to be high enough to consider them similar), defining a standard representation for each class. In order to enhance the discrimination ability of the network, the standard representations are orthonormalized, allowing subtle differences between spectra and peak overlapping to be resolved. The projection of the incoming vectors onto the basis vectors thus formed gives rise to a profile of the data set.
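
A minimal sketch of such an ART-like pass, with the vigilance test on normalized correlation followed by orthonormalization of the standard representations; the prototype update rule and the QR step are assumptions for illustration, not the authors' network:

```python
import numpy as np

def art_classify(spectra, vigilance):
    """ART-like incremental clustering: each spectrum joins the class whose
    prototype it correlates with above the vigilance threshold; otherwise
    it founds a new class."""
    prototypes, counts, labels = [], [], []
    for s in spectra:
        u = s / np.linalg.norm(s)
        corrs = [float(p @ u) for p in prototypes]
        if corrs and max(corrs) >= vigilance:
            k = int(np.argmax(corrs))
            # running-mean update keeps the prototype a "standard
            # representation" of its class
            prototypes[k] = prototypes[k] * counts[k] + u
            prototypes[k] = prototypes[k] / np.linalg.norm(prototypes[k])
            counts[k] += 1
        else:
            prototypes.append(u)
            counts.append(1)
            k = len(prototypes) - 1
        labels.append(k)
    # orthonormalize the prototypes (QR performs a Gram-Schmidt); projecting
    # the spectra onto this basis yields the profile of the data set
    Q, _ = np.linalg.qr(np.stack(prototypes, axis=1))
    profile = np.stack(spectra) @ Q
    return labels, Q, profile
```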


Geophysics ◽  
2019 ◽  
Vol 84 (3) ◽  
pp. V169-V183 ◽  
Author(s):  
Shaohuan Zu ◽  
Hui Zhou ◽  
Rushan Wu ◽  
Maocai Jiang ◽  
Yangkang Chen

In recent years, sparse representation has seen increasing application to fundamental signal- and image-processing tasks. In sparse representation, a signal can be expressed as a linear combination of a dictionary (atom signals) weighted by sparse coefficients. Dictionary learning plays a critical role in obtaining a state-of-the-art sparse representation: a good dictionary should capture the representative features of the data. The whole signal can be used as training patches to learn a dictionary, but this approach suffers from high computational cost, especially for a 3D cube. A common remedy is to randomly select some patches from the given data as training patches to accelerate the learning process. However, random selection without any prior information can damage the signal if the training patches are chosen inappropriately (e.g., patches drawn from a simple structure that are then used to recover a complex structure). We have developed a dip-oriented dictionary learning method, which incorporates an estimate of the dip field into the selection of training patches: patches with a large dip value are selected for training. It is not easy, however, to estimate an accurate dip field directly from noisy data. Hence, we first apply a curvelet-transform noise reduction method to remove fine-scale components that presumably contain mostly random noise, and we then calculate a more reliable dip field from the preprocessed data to guide the patch selection. Numerical tests on synthetic shot records and field seismic image examples demonstrate that the proposed method obtains results similar to those of a method trained on the entire data set and better denoised results than the random selection method. We also compare the performance of the proposed method with that of methods based on curvelet thresholding and rank reduction on a synthetic shot record.
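
As a rough illustration of the patch selection idea, the sketch below estimates a dip proxy with a structure tensor and keeps the steepest patches; the paper's actual dip estimation and the curvelet preprocessing step are only assumed here, not reproduced:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def local_dip(data, sigma=2.0):
    """Structure-tensor dip estimate (a stand-in for the paper's dip field;
    the curvelet denoising is assumed to have been applied already)."""
    gx = sobel(data, axis=1)   # spatial derivative
    gt = sobel(data, axis=0)   # temporal derivative
    gxx = gaussian_filter(gx * gx, sigma)
    gtt = gaussian_filter(gt * gt, sigma)
    gxt = gaussian_filter(gx * gt, sigma)
    # orientation of the dominant eigenvector of the structure tensor
    return 0.5 * np.arctan2(2.0 * gxt, gxx - gtt)

def select_training_patches(data, patch=16, n_keep=500):
    """Keep the patches whose mean absolute dip is largest, so the
    dictionary is trained on the steep / complex structures."""
    dip = np.abs(local_dip(data))
    patches, scores = [], []
    for i in range(0, data.shape[0] - patch, patch):
        for j in range(0, data.shape[1] - patch, patch):
            patches.append(data[i:i + patch, j:j + patch].ravel())
            scores.append(dip[i:i + patch, j:j + patch].mean())
    order = np.argsort(scores)[::-1][:n_keep]
    return np.stack([patches[k] for k in order])
```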


2018 ◽  
Vol 154 (2) ◽  
pp. 149-155
Author(s):  
Michael Archer

1. Yearly records of worker Vespula germanica (Fabricius) taken in suction traps at Silwood Park (28 years) and at Rothamsted Research (39 years) are examined. 2. Using the autocorrelation function (ACF), a significant negative 1-year lag followed by a lesser, non-significant positive 2-year lag was found in all, or parts of, each data set, indicating an underlying population dynamic of a 2-year cycle with a damped waveform. 3. The minimum number of years of record before the 2-year cycle with damped waveform appeared varied between 17 and 26, or the cycle was not found in some data sets. 4. Ecological factors delaying or preventing the occurrence of the 2-year cycle are considered.
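
For reference, the ACF diagnostic used in point 2 takes only a few lines of NumPy (an illustrative sketch, not the authors' analysis code):

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation function of a yearly-count series."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    c0 = float(x @ x)
    return np.array([(x[:-k] @ x[k:]) / c0 if k else 1.0
                     for k in range(max_lag + 1)])

# A 2-year cycle with a damped waveform shows up as a significant negative
# value at lag 1 (roughly outside +/-1.96/sqrt(n)) followed by a weaker
# positive value at lag 2.
```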


2018 ◽  
Vol 21 (2) ◽  
pp. 117-124 ◽  
Author(s):  
Bakhtyar Sepehri ◽  
Nematollah Omidikia ◽  
Mohsen Kompany-Zareh ◽  
Raouf Ghavami

Aims & Scope: In this research, 8 variable selection approaches were used to investigate the effect of variable selection on the predictive power and stability of CoMFA models. Materials & Methods: Three data sets, comprising 36 EPAC antagonists, 79 CD38 inhibitors, and 57 ATAD2 bromodomain inhibitors, were modelled by CoMFA. First, for all three data sets, CoMFA models with all CoMFA descriptors were created; then, by applying each variable selection method, a new CoMFA model was developed, so 9 CoMFA models were built for each data set. The obtained results show that noisy and uninformative variables affect CoMFA results. Based on the created models, applying 5 variable selection approaches, namely FFD, SRD-FFD, IVE-PLS, SRD-UVE-PLS, and SPA-jackknife, increases the predictive power and stability of CoMFA models significantly. Results & Conclusion: Among them, SPA-jackknife removes most of the variables, while FFD retains most of them. FFD and IVE-PLS are time-consuming processes, while SRD-FFD and SRD-UVE-PLS run in a few seconds. In addition, applying FFD, SRD-FFD, IVE-PLS, or SRD-UVE-PLS preserves CoMFA contour map information for both fields.


Author(s):  
Kyungkoo Jun

Background & Objective: This paper proposes a Fourier-transform-inspired method to classify human activities from time-series sensor data. Methods: Our method begins by decomposing the 1D input signal into 2D patterns, motivated by the Fourier transform. The decomposition is aided by a Long Short-Term Memory (LSTM) network, which captures the temporal dependency of the signal and produces encoded sequences. The sequences, once arranged into a 2D array, can represent the fingerprints of the signals. The benefit of such a transformation is that we can exploit recent advances in deep learning models for image classification, such as the Convolutional Neural Network (CNN). Results: The proposed model is therefore a combination of LSTM and CNN. We evaluate the model on two data sets. On the first data set, which is more standardized than the other, our model outperforms or at least matches previous work. For the second data set, we devise schemes to generate training and testing data by varying the window size, the sliding size, and the labeling scheme. Conclusion: The evaluation results show that the accuracy exceeds 95% in some cases. We also analyze the effect of the parameters on the performance.
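
A minimal PyTorch sketch of the LSTM-to-CNN pipeline described above; all layer sizes and pooling choices are illustrative guesses rather than the paper's configuration:

```python
import torch
import torch.nn as nn

class LSTMCNN(nn.Module):
    """LSTM encodes the 1D signal into a sequence; the sequence, viewed
    as a 1-channel 2D array ("fingerprint"), is classified by a CNN."""
    def __init__(self, n_features, hidden=64, n_classes=6):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.fc = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):        # x: (batch, time, n_features)
        seq, _ = self.lstm(x)    # encoded sequence: (batch, time, hidden)
        img = seq.unsqueeze(1)   # arrange as a 2D array with one channel
        feat = self.cnn(img).flatten(1)
        return self.fc(feat)
```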


2019 ◽  
Vol 73 (8) ◽  
pp. 893-901
Author(s):  
Sinead J. Barton ◽  
Bryan M. Hennelly

Cosmic ray artifacts may be present in all photo-electric readout systems. In spectroscopy, they present as random unidirectional sharp spikes that distort spectra and may have an effect on post-processing, possibly affecting the results of multivariate statistical classification. A number of methods have previously been proposed to remove cosmic ray artifacts from spectra, but the goal of removing the artifacts while making no other change to the underlying spectrum is challenging. One of the most successful and commonly applied methods for the removal of cosmic ray artifacts involves the capture of two sequential spectra that are compared in order to identify spikes. The disadvantage of this approach is that at least two recordings are necessary, which may be problematic for dynamically changing spectra, and which can reduce the signal-to-noise (S/N) ratio when compared with a single recording of equivalent duration, due to the inclusion of two instances of read noise. In this paper, a cosmic ray artifact removal algorithm is proposed that works in a similar way to the double-acquisition method but requires only a single capture, provided a data set of similar spectra is available. The method employs normalized covariance to identify a similar spectrum in the data set, from which a direct comparison reveals the presence of cosmic ray artifacts, which are then replaced with the corresponding values from the matching spectrum. The advantage of the proposed method over the double-acquisition method is investigated in the context of the S/N ratio, and the method is applied to various data sets of Raman spectra recorded from biological cells.
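
A minimal sketch of the single-capture idea, with correlation-based matching and spike replacement; the threshold, the scaling, and the function name are assumptions for illustration, not the published algorithm's exact parameters:

```python
import numpy as np

def despike(target, library, z_thresh=5.0):
    """Single-capture cosmic-ray removal sketch: pick the most similar
    spectrum in the data set by correlation (normalized covariance),
    flag points where the two differ sharply, and patch them from the
    matching spectrum."""
    # find the most similar spectrum in the library
    rs = [float(np.corrcoef(target, s)[0, 1]) for s in library]
    match = library[int(np.argmax(rs))]
    # least-squares scale of the match onto the target
    a = float(match @ target) / float(match @ match)
    diff = target - a * match
    z = (diff - np.median(diff)) / (diff.std() + 1e-12)
    spikes = z > z_thresh          # cosmic rays are positive-going spikes
    cleaned = np.where(spikes, a * match, target)
    return cleaned, spikes
```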


2013 ◽  
Vol 756-759 ◽  
pp. 3652-3658
Author(s):  
You Li Lu ◽  
Jun Luo

In the context of kernel methods, this paper puts forward two improved algorithms, called R-SVM and I-SVDD, to cope with imbalanced data sets in closed systems. R-SVM uses the K-means algorithm to cluster space samples, while I-SVDD improves the performance of the original SVDD through training on imbalanced samples. Experiments on two system-call data sets show that the two algorithms are more effective and that R-SVM has lower complexity.
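
The paper's exact formulations are not given in the abstract; as a hedged illustration of the R-SVM idea (compressing the majority class with K-means before SVM training), a scikit-learn sketch might look like this:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def r_svm_fit(X_maj, X_min, n_clusters=None):
    """Rebalancing sketch in the spirit of R-SVM: replace the majority
    class by its K-means centroids, then train an SVM on the balanced
    set. The scheme in the paper may differ in its details."""
    n_clusters = n_clusters or len(X_min)
    centroids = KMeans(n_clusters=n_clusters,
                       n_init=10).fit(X_maj).cluster_centers_
    X = np.vstack([centroids, X_min])
    y = np.r_[np.zeros(len(centroids)), np.ones(len(X_min))]
    return SVC(kernel="rbf").fit(X, y)
```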

