Seismic trace interpolation for irregularly spatial sampled data using convolutional autoencoder

Geophysics ◽  
2020 ◽  
Vol 85 (2) ◽  
pp. V119-V130 ◽  
Author(s):  
Yingying Wang ◽  
Benfeng Wang ◽  
Ning Tu ◽  
Jianhua Geng

Seismic trace interpolation is an important technique because irregular or insufficient sampling along the spatial direction may lead to inevitable errors in multiple suppression, imaging, and inversion. Many interpolation methods have been studied for irregularly sampled data. Inspired by the working ideas of the autoencoder and the convolutional neural network, we have performed seismic trace interpolation using a convolutional autoencoder (CAE). The irregularly sampled data are treated as corrupted data. By using a training data set consisting of pairs of corrupted and complete data, the CAE can automatically learn to extract features from the corrupted data and reconstruct the complete data from the extracted features. It thus avoids assumptions made by traditional trace interpolation methods, such as the linearity of events, low-rankness, or sparsity. In addition, once the CAE network training is completed, corrupted seismic data can be interpolated immediately at very low computational cost. A CAE network composed of three convolutional layers and three deconvolutional layers is designed to explore the capabilities of CAE-based seismic trace interpolation for an irregularly sampled data set. To address the scarcity of complete shot gathers in field-data applications, the network trained on synthetic data is used to initialize the network training on field data, a strategy known as transfer learning. Experiments on synthetic and field data sets indicate the validity and flexibility of the trained CAE. Compared with the curvelet-transform-based method, the CAE achieves comparable or better interpolation performance efficiently. The transfer learning strategy enhances the training efficiency on field data and improves the interpolation performance of the CAE with limited training data.
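A minimal sketch of such a three-convolution, three-deconvolution autoencoder, assuming illustrative channel counts, kernel sizes, and strides rather than the authors' exact design; training pairs are zero-filled (corrupted) gathers as input and complete gathers as target.

```python
# Sketch only: layer sizes and hyperparameters are assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class SeismicCAE(nn.Module):
    """Three convolutional (encoder) and three transposed-convolutional (decoder) layers,
    mapping a corrupted gather to a reconstructed complete gather."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Training: minimize reconstruction error between network output and the complete gather.
model = SeismicCAE()
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```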

2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Mustafa Radha ◽  
Pedro Fonseca ◽  
Arnaud Moreau ◽  
Marco Ross ◽  
Andreas Cerny ◽  
...  

Unobtrusive home sleep monitoring using wrist-worn wearable photoplethysmography (PPG) could open the way for better sleep disorder screening and health monitoring. However, PPG is rarely included in large sleep studies with gold-standard sleep annotation from polysomnography. Therefore, training data-intensive state-of-the-art deep neural networks is challenging. In this work, a deep recurrent neural network is first trained using a large sleep data set with electrocardiogram (ECG) data (292 participants, 584 recordings) to perform 4-class sleep stage classification (wake, rapid eye movement, N1/N2, and N3). A small part of its weights is then adapted to a smaller, newer PPG data set (60 healthy participants, 101 recordings) through three variations of transfer learning. The best results (Cohen's kappa of 0.65 ± 0.11, accuracy of 76.36 ± 7.57%) were achieved with the combined domain-and-decision transfer learning strategy, significantly outperforming the PPG-trained and ECG-trained baselines. This performance for PPG-based 4-class sleep stage classification is unprecedented in the literature, bringing home sleep stage monitoring closer to clinical use. The work demonstrates the merit of transfer learning in developing reliable methods for new sensor technologies by reusing similar, older non-wearable data sets. Further studies should evaluate our approach in patients with sleep disorders such as insomnia and sleep apnoea.
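A hedged sketch of the general idea: the combined domain-and-decision strategy is approximated here by unfreezing only the sensor-facing input layer and the output layer of an ECG-pretrained recurrent model, while the recurrent core stays frozen. The module names, sizes, and checkpoint path are hypothetical.

```python
# Sketch under stated assumptions; not the authors' exact architecture or training recipe.
import torch
import torch.nn as nn

class SleepStager(nn.Module):
    def __init__(self, n_features=16, n_classes=4):
        super().__init__()
        self.domain = nn.Linear(n_features, 64)    # sensor-specific feature mapping (ECG -> PPG)
        self.core = nn.GRU(64, 128, num_layers=2, batch_first=True)
        self.decision = nn.Linear(128, n_classes)  # wake / REM / N1-N2 / N3

    def forward(self, x):
        h, _ = self.core(torch.relu(self.domain(x)))
        return self.decision(h)

model = SleepStager()
model.load_state_dict(torch.load("ecg_pretrained.pt"))   # hypothetical ECG-trained checkpoint

# Freeze the recurrent core; adapt only the domain and decision layers on the PPG data.
for p in model.core.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4)
```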


2019 ◽  
Vol 2019 ◽  
pp. 1-14 ◽  
Author(s):  
Yikui Zhai ◽  
He Cao ◽  
Wenbo Deng ◽  
Junying Gan ◽  
Vincenzo Piuri ◽  
...  

Because of the lack of discriminative face representations and the scarcity of labeled training data, facial beauty prediction (FBP), which aims at assessing facial attractiveness automatically, has become a challenging pattern recognition problem. Inspired by recent promising work on fine-grained image classification using multiscale architectures to extend the diversity of deep features, BeautyNet is proposed in this paper for unconstrained facial beauty prediction. Firstly, a multiscale network is adopted to improve the discriminative power of face features. Secondly, to alleviate the computational burden of the multiscale architecture, the max-feature-map (MFM) activation function is utilized, which not only lightens the network and speeds up convergence but also benefits performance. Finally, a transfer learning strategy is introduced to mitigate the overfitting caused by the scarcity of labeled facial beauty samples and to improve BeautyNet's performance. Extensive experiments on LSFBD demonstrate that the proposed scheme outperforms state-of-the-art methods, achieving 67.48% classification accuracy.
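A short sketch of the MFM activation as it is commonly defined (e.g., in LightCNN): the channel dimension is split in half and the element-wise maximum is kept, which halves the feature maps (lightening the network) and acts as a competitive activation.

```python
# Illustrative implementation of max-feature-map (MFM); BeautyNet's exact placement of it is not shown here.
import torch
import torch.nn as nn

class MaxFeatureMap(nn.Module):
    def forward(self, x):
        a, b = torch.chunk(x, 2, dim=1)   # split channels into two halves
        return torch.max(a, b)            # element-wise max halves the channel count

# Example: a conv layer producing 64 channels followed by MFM yields 32 channels.
layer = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), MaxFeatureMap())
out = layer(torch.randn(1, 3, 224, 224))  # -> shape (1, 32, 224, 224)
```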


Animals ◽  
2020 ◽  
Vol 10 (12) ◽  
pp. 2402
Author(s):  
Jennifer Salau ◽  
Joachim Krieter

With increasing herd sizes has come an increased need for automated systems that support farmers in monitoring the health and welfare status of their livestock. Cattle are a highly sociable species, and the herd structure has an important impact on animal welfare. As the behaviour of the animals and their social interactions can be influenced by the presence of a human observer, a camera-based system that automatically detects the animals would be beneficial for analysing dairy cattle herd activity. In the present study, eight surveillance cameras were mounted above the barn area of a group of thirty-six lactating Holstein Friesian dairy cows at the Chamber of Agriculture in Futterkamp in Northern Germany. Mask R-CNN, a state-of-the-art convolutional neural network model, was trained to determine pixel-level segmentation masks for the cows in the video material. The model was pre-trained on the Microsoft Common Objects in Context (COCO) data set, and transfer learning was carried out using annotated images from the recordings as the training data set. In addition, the relationship between the size of the training data set and the performance of the model after transfer learning was analysed. The trained model achieved an average precision (intersection over union, IoU = 0.5) of 91% and 85% for the detection of bounding boxes and segmentation masks of the cows, respectively, thereby laying a solid technical basis for an automated analysis of herd activity and the use of resources in loose housing.
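A minimal sketch of this kind of transfer learning with torchvision's COCO-pretrained Mask R-CNN: the COCO classification and mask heads are replaced with heads sized for a two-class (background + cow) task and the model is then fine-tuned on the annotated barn images. The class count and data pipeline are assumptions.

```python
# Sketch following the standard torchvision fine-tuning pattern; not the study's exact setup.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

num_classes = 2  # background + cow (assumed)
model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)  # COCO weights

# Replace the COCO heads with new ones sized for the cow-detection task.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256, num_classes)

# Fine-tune on the annotated barn images (targets: boxes, labels, masks per image).
```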


Geophysics ◽  
2003 ◽  
Vol 68 (5) ◽  
pp. 1633-1638 ◽  
Author(s):  
Yanghua Wang

The spectrum of a discrete Fourier transform (DFT) is estimated by linear inversion and used to produce seismic traces with regular spatial sampling from an irregularly sampled data set. The essence of this wavefield reconstruction method is to solve the DFT inverse problem with a particular constraint that imposes a sparseness criterion on the least-squares solution. A working definition of the sparseness constraint is presented to improve stability and efficiency. A sparseness measure is then used to compare the relative sparseness of the two DFT spectra obtained from inversion with and without the sparseness constraint. It is a pragmatic indicator of the magnitude of sparseness needed for wavefield reconstruction. For seismic trace regularization, an antialiasing condition must be fulfilled for the regularized trace interval, whereas optimal trace coordinates in the output can be obtained by minimizing the distances between the newly generated traces and the original traces in the input. Application to real seismic data reveals the effectiveness of the technique and the significance of the sparseness constraint in the least-squares solution.
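A hedged sketch of the general workflow at one temporal frequency: invert a DFT operator built on the irregular coordinates with a sparseness-promoting (iteratively reweighted least-squares) constraint, then evaluate the estimated spectrum on a regular grid. The weighting scheme and damping here are illustrative and are not the paper's specific formulation.

```python
# Generic sparse-spectrum reconstruction sketch; assumptions noted above.
import numpy as np

def sparse_dft_reconstruct(x_irreg, d_irreg, wavenumbers, x_reg, n_iter=10, eps=1e-3):
    """x_irreg: irregular trace coordinates; d_irreg: data at one temporal frequency;
    wavenumbers: spatial wavenumbers to invert for; x_reg: regular output coordinates."""
    A = np.exp(2j * np.pi * np.outer(x_irreg, wavenumbers))   # forward DFT operator
    m = np.linalg.lstsq(A, d_irreg, rcond=None)[0]            # unconstrained starting spectrum
    for _ in range(n_iter):
        W = np.diag(1.0 / (np.abs(m) + eps))                  # penalize small coefficients -> sparseness
        m = np.linalg.solve(A.conj().T @ A + W, A.conj().T @ d_irreg)
    A_reg = np.exp(2j * np.pi * np.outer(x_reg, wavenumbers)) # evaluate spectrum on the regular grid
    return A_reg @ m
```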


Author(s):  
Fouzia Altaf ◽  
Syed M. S. Islam ◽  
Naeem Khalid Janjua

Deep learning has provided numerous breakthroughs in natural imaging tasks. However, its successful application to medical images is severely handicapped by the limited amount of annotated training data. Transfer learning is commonly adopted for medical imaging tasks. However, a large covariate shift between the source domain of natural images and the target domain of medical images results in poor transfer learning. Moreover, the scarcity of annotated data for medical imaging tasks causes further problems for effective transfer learning. To address these problems, we develop an augmented ensemble transfer learning technique that leads to significant performance gains over conventional transfer learning. Our technique uses an ensemble of deep learning models, where the architecture of each network is modified with extra layers to account for the dimensionality change between the images of the source and target data domains. Moreover, each model is hierarchically tuned to the target domain with augmented training data. Along with the network ensemble, we also utilize an ensemble of dictionaries based on features extracted from the augmented models. The dictionary ensemble provides an additional performance boost to our method. We first establish the effectiveness of our technique on the challenging ChestXray-14 radiography data set. Our experimental results show more than a 50% reduction in the error rate with our method as compared to the baseline transfer learning technique. We then apply our technique to a recent COVID-19 data set for binary and multi-class classification tasks. Our technique achieves 99.49% accuracy for binary classification and 99.24% for multi-class classification.
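A minimal sketch of the ensemble-of-adapted-backbones idea: several ImageNet-pretrained networks are each extended with extra layers before the decision layer and fine-tuned on augmented target data, and their soft predictions are averaged. The backbone choice, layer sizes, ensemble size, and averaging rule are assumptions; the paper's dictionary ensemble is not shown.

```python
# Sketch under stated assumptions; not the authors' exact architecture or tuning schedule.
import torch
import torch.nn as nn
from torchvision import models

def adapted_resnet(n_classes=2):
    m = models.resnet50(pretrained=True)
    # Extra layers inserted before the decision layer to absorb the natural-to-medical domain shift.
    m.fc = nn.Sequential(nn.Linear(m.fc.in_features, 512), nn.ReLU(),
                         nn.Dropout(0.5), nn.Linear(512, n_classes))
    return m

ensemble = [adapted_resnet() for _ in range(3)]   # e.g., three differently seeded members

def ensemble_predict(x):
    with torch.no_grad():
        probs = torch.stack([torch.softmax(m(x), dim=1) for m in ensemble])
    return probs.mean(dim=0)                      # average the members' soft votes
```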


2021 ◽  
Author(s):  
Nguyen Ha Huy Cuong

In agriculture, a timely and accurate estimate of ripeness in the orchard improves the post-harvest process. Choosing fruits based on their maturity stage can reduce storage costs and increase market results. In addition, estimating fruit ripeness from detected input and output indicators has had practical effects on the harvesting process, as well as on determining the amount of water needed for irrigation and the appropriate amount of fertilizer for the end of the season. In this paper, we propose a technical solution for a model that detects persimmon and green grapefruit fruit at agricultural farms in Vietnam. An aggregation model and a transfer learning method are used. The proposed model contains two object detection sub-models, a pre-trained model and a transfer model, together with a corresponding aggregation model that makes the final decision. An improved YOLO algorithm, trained on more than one hundred object types and a total of 500,000 images from the COCO image data set, is used as the pre-trained model. The transfer learning technique is then used to train the transfer model, initialized from the pre-trained model; only images are used for transfer model training. Finally, the aggregation model selects the best results from the pre-trained model and the transfer model. The proposed model improves performance while reducing the required amount of training data and the training time. The accuracy of the aggregated model is 98.20%. On a test data set of 10,000 images per class, the classifier achieves a sensitivity of 98.2% and a specificity of 97.2%, with an accuracy of 96.5% and 0.98 in training across all grades.
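A hedged sketch of the aggregation step as described: the COCO-pretrained detector and the transfer-learned detector each produce candidate detections for an image, and the aggregation model keeps the result set it judges better. The confidence-based scoring rule below is an assumption, not the paper's decision technique.

```python
# Illustrative aggregation rule only; the paper's actual decision logic is not specified here.
def aggregate(detections_pretrained, detections_transfer):
    """Each argument is a list of (label, confidence, bbox) tuples for one image."""
    def mean_conf(dets):
        return sum(conf for _, conf, _ in dets) / len(dets) if dets else 0.0
    return (detections_transfer
            if mean_conf(detections_transfer) >= mean_conf(detections_pretrained)
            else detections_pretrained)
```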


2018 ◽  
pp. 73-78
Author(s):  
V. V. Kuzmina ◽  
A. V. Khamukhin ◽  
A. I. Kononova

Experience with automating the creation of a neural network training data set for license plate recognition is presented. The main problem with training a neural network on data obtained by natural filming is that collecting the required amount of data takes a long time; in addition, after such training the network does not effectively recognize rare license plate formats. The main objective of the work is to improve the recognition quality and training speed of the neural network. To achieve this objective, the training data set is formed from automatically generated license plate images. Projective transformations are used to imitate filming distortions. The data set generated in this way includes all license plate standards, and the proportion of rare kinds is sufficient for them to be recognized effectively. Using the presented generator not only significantly accelerates training data set creation but also improves the recognition quality for rarely used license plate standards.
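A short sketch of imitating filming distortion with a random projective (perspective) warp, as could be applied to each generated plate image; the jitter range and border handling are illustrative choices.

```python
# Sketch using OpenCV; parameters are assumptions, not the authors' generator settings.
import cv2
import numpy as np

def random_projective(img, max_jitter=0.15):
    h, w = img.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    jitter = np.random.uniform(-max_jitter, max_jitter, (4, 2)) * np.float32([w, h])
    dst = (src + jitter).astype(np.float32)
    M = cv2.getPerspectiveTransform(src, dst)      # 3x3 homography from four point pairs
    return cv2.warpPerspective(img, M, (w, h), borderMode=cv2.BORDER_REPLICATE)
```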


Geophysics ◽  
2021 ◽  
pp. 1-103
Author(s):  
Jiho Park ◽  
Jihun Choi ◽  
Soon Jee Seol ◽  
Joongmoo Byun ◽  
Young Kim

Deep learning (DL) methods have recently been introduced for seismic signal processing, and many researchers have adopted these novel techniques in an attempt to construct DL models for seismic data reconstruction. The performance of DL-based methods depends heavily on what is learned from the training data. We focus on constructing a DL model that well reflects the features of the target data sets. The main goal is to integrate DL with an intuitive data analysis approach that compares similar patterns prior to the DL training stage. We have developed a sequential method consisting of two stages: (i) analyzing the training and target data sets simultaneously to determine a target-informed training set and (ii) training the DL model with this training data set to effectively interpolate the seismic data. Here, we introduce the convolutional autoencoder t-distributed stochastic neighbor embedding (CAE t-SNE) analysis, which can provide insight into the interpolation results through analysis of both the training and target data sets prior to DL model training. The proposed method was tested on synthetic and field data. Dense seismic gathers (e.g., common-shot gathers, CSGs) were used as the labeled training data set, and relatively sparse seismic gathers (e.g., common-receiver gathers, CRGs) were reconstructed in both cases. The reconstructed results and signal-to-noise ratios demonstrated that the training data can be efficiently selected using CAE t-SNE analysis and that the spatial aliasing of the CRGs was successfully alleviated by the DL model trained with this training data set, which contains the target features. These results imply that data analysis for selecting a target-informed training set is very important for successful DL interpolation. Additionally, the proposed analysis method can also be applied to investigate the similarities between training and target data sets for other DL-based seismic data reconstruction tasks.
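A hedged sketch of the CAE t-SNE step: latent codes of training and target gathers are extracted with a trained CAE encoder and embedded with t-SNE so that their overlap in feature space can be inspected before DL training. The encoder interface, patch shapes, and perplexity are assumptions.

```python
# Sketch under stated assumptions; not the authors' exact analysis pipeline.
import numpy as np
import torch
from sklearn.manifold import TSNE

def cae_tsne(encoder, train_patches, target_patches, perplexity=30):
    """encoder: trained CAE encoder module; *_patches: arrays of shape (N, 1, H, W)."""
    with torch.no_grad():
        z_train = encoder(torch.as_tensor(train_patches, dtype=torch.float32)).flatten(1)
        z_target = encoder(torch.as_tensor(target_patches, dtype=torch.float32)).flatten(1)
    z = torch.cat([z_train, z_target]).numpy()
    emb = TSNE(n_components=2, perplexity=perplexity).fit_transform(z)
    labels = np.r_[np.zeros(len(z_train)), np.ones(len(z_target))]  # 0 = training, 1 = target
    return emb, labels  # plot emb coloured by labels to judge feature overlap
```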


2001 ◽  
Vol 11 (02) ◽  
pp. 167-177 ◽  
Author(s):  
I. M. GALVÁN ◽  
P. ISASI ◽  
R. ALER ◽  
J. M. VALLS

Multilayer feedforward neural networks with the backpropagation algorithm have been used successfully in many applications. However, the level of generalization is heavily dependent on the quality of the training data; that is, some of the training patterns may be redundant or irrelevant. It has been shown that, with careful dynamic selection of training patterns, better generalization performance can be obtained. Nevertheless, such selection is carried out independently of the novel patterns to be approximated. In this paper, we present a learning method that automatically selects the training patterns most appropriate to the new sample to be predicted. This training method follows a lazy learning strategy, in the sense that it builds approximations centered around the novel sample. The proposed method has been applied to three different domains: two artificial approximation problems and a real time-series prediction problem. Results have been compared to standard backpropagation using the complete training data set, and the new method shows better generalization ability.
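A minimal sketch of the lazy-learning idea: for each novel sample, only the training patterns closest to it are selected and a small network is fitted to that local subset. The choice of k, the distance metric, and the network size are illustrative, not the paper's specific selection scheme.

```python
# Sketch under stated assumptions; the paper's pattern-selection criterion may differ.
import numpy as np
from sklearn.neural_network import MLPRegressor

def lazy_predict(X_train, y_train, x_new, k=50):
    d = np.linalg.norm(X_train - x_new, axis=1)        # distance of each pattern to the novel sample
    idx = np.argsort(d)[:k]                            # keep the k most relevant training patterns
    local_net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000)
    local_net.fit(X_train[idx], y_train[idx])          # approximation centered around x_new
    return local_net.predict(x_new.reshape(1, -1))[0]
```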

