A fast algorithm for sparse multichannel blind deconvolution

Geophysics ◽  
2016 ◽  
Vol 81 (1) ◽  
pp. V7-V16 ◽  
Author(s):  
Kenji Nose-Filho ◽  
André K. Takahata ◽  
Renato Lopes ◽  
João M. T. Romano

We have addressed blind deconvolution in a multichannel framework. Recently, a robust solution to this problem, based on a Bayesian approach called sparse multichannel blind deconvolution (SMBD), was proposed in the literature with interesting results. However, its computational complexity can be high. We have proposed a fast algorithm based on minimum entropy deconvolution, which is considerably less expensive. We designed the deconvolution filter to minimize a normalized version of the hybrid ℓ1/ℓ2-norm loss function. This is in contrast to SMBD, in which the hybrid ℓ1/ℓ2-norm function is used as a regularization term to directly determine the deconvolved signal. Results with synthetic data show that the performance of the obtained deconvolution filter is similar to that obtained in a supervised framework. Both techniques also produced similar results on a real marine data set.
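
The contrast the abstract draws — designing a filter that minimizes a normalized sparsity-promoting loss rather than regularizing the deconvolved signal itself — can be sketched compactly. The following is a minimal illustration, not the authors' implementation: one FIR deconvolution filter is fit to multichannel data by minimizing a hybrid ℓ1/ℓ2-style loss normalized by output energy. The filter length, the scale parameter eps, and the choice of optimizer are assumptions.

```python
# Minimal sketch (not the authors' code): fit one FIR deconvolution filter
# to multichannel data by minimizing a normalized hybrid l1/l2-style loss.
import numpy as np
from scipy.optimize import minimize
from scipy.signal import lfilter

def hybrid_norm(y, eps=1e-2):
    # Hybrid l1/l2 measure: quadratic near zero, linear for large |y|.
    return np.sum(np.sqrt(1.0 + (y / eps) ** 2) - 1.0)

def loss(f, channels, eps=1e-2):
    # Apply the same filter to every channel and stack the outputs.
    y = np.concatenate([lfilter(f, [1.0], x) for x in channels])
    # Dividing by the l2 norm makes the loss scale-invariant, so sparse
    # (spiky) outputs are favored instead of the trivial solution f -> 0.
    return hybrid_norm(y, eps) / (np.linalg.norm(y) + 1e-12)

rng = np.random.default_rng(0)
channels = [rng.standard_normal(500) for _ in range(3)]  # toy stand-in data
f0 = np.zeros(21)
f0[10] = 1.0  # start from a unit spike
f_opt = minimize(loss, f0, args=(channels,), method="L-BFGS-B").x
```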

Geophysics ◽  
2009 ◽  
Vol 74 (6) ◽  
pp. WCA199-WCA209 ◽  
Author(s):  
Guojian Shan ◽  
Robert Clapp ◽  
Biondo Biondi

We have extended isotropic plane-wave migration in tilted coordinates to 3D anisotropic media and applied it to a Gulf of Mexico data set. Recorded surface data are transformed to plane-wave data by slant-stack processing in the inline and crossline directions. The source plane wave and its corresponding slant-stacked data are extrapolated into the subsurface within a tilted coordinate system whose direction depends on the propagation direction of the plane wave. Images are generated by crosscorrelating these two wavefields. Because the shot sampling is sparse in the crossline direction, the source generated by slant stacking is not truly a plane-wave source but a phase-encoded source. Using 2D synthetic examples, we found that phase-encoded source migration in tilted coordinates can image steep reflectors. The field-data example shows that 3D plane-wave migration in tilted coordinates can image steeply dipping salt flanks and faults, even though a one-way wave-equation operator is used for wavefield extrapolation.
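
The slant-stack step that builds the plane-wave data can be illustrated with a toy linear tau-p transform. The nearest-sample shifting below is a simplification; a production implementation would interpolate and guard against aliasing.

```python
# Toy linear tau-p (slant-stack) transform, for illustration only.
import numpy as np

def slant_stack(data, offsets, dt, slopes):
    """data: (ntraces, nt) array; offsets in metres; dt in seconds;
    slopes (ray parameters) in s/m."""
    ntr, nt = data.shape
    out = np.zeros((len(slopes), nt))
    for ip, p in enumerate(slopes):
        for itr, x in enumerate(offsets):
            # Delay each trace by p*x (nearest sample) and sum:
            # out[ip, tau] accumulates data[itr, tau + p*x/dt].
            shift = int(round(p * x / dt))
            if abs(shift) >= nt:
                continue
            if shift >= 0:
                out[ip, : nt - shift] += data[itr, shift:]
            else:
                out[ip, -shift:] += data[itr, : nt + shift]
    return out
```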


2005 ◽  
Vol 17 (11) ◽  
pp. 2482-2507 ◽  
Author(s):  
Qi Zhao ◽  
David J. Miller

The goal of semisupervised clustering/mixture modeling is to learn the underlying groups in a given data set when some form of instance-level supervision is also available, usually labels or pairwise sample constraints. Most prior work with constraints assumes the number of classes is known, with each learned cluster assumed to be a class and, hence, subject to the given class constraints. When the number of classes is unknown, or when the one-cluster-per-class assumption is not valid, the use of constraints may actually be deleterious to learning the ground-truth data groups. We address this by (1) allowing allocation of multiple mixture components to individual classes and (2) estimating both the number of components and the number of classes. We also address new-class discovery, with components void of constraints treated as putative unknown classes. For both real-world and synthetic data, our method is shown to accurately estimate the number of classes and to compare favorably with the recent approach of Shental, Bar-Hillel, Hertz, and Weinshall (2003).
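
One core idea — allocating several mixture components to a single class rather than forcing one cluster per class — can be sketched with scikit-learn GMMs and BIC model selection. This is only a hedged illustration of that single ingredient; the paper's full method (pairwise constraints, joint estimation of the class count, new-class discovery) is not reproduced here.

```python
# Sketch: pick the number of mixture components for one class by BIC.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_class_mixture(X_class, max_components=5, seed=0):
    # Choose the number of components for one class by minimizing BIC,
    # instead of assuming the class is a single Gaussian cluster.
    best, best_bic = None, np.inf
    for k in range(1, max_components + 1):
        gmm = GaussianMixture(n_components=k, random_state=seed).fit(X_class)
        bic = gmm.bic(X_class)
        if bic < best_bic:
            best, best_bic = gmm, bic
    return best
```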


2020 ◽  
Vol 224 (3) ◽  
pp. 1505-1522
Author(s):  
Saeed Parnow ◽  
Behrooz Oskooi ◽  
Giovanni Florio

SUMMARY We define a two-step procedure to obtain reliable inverse models of the distribution of electrical conductivity at depth from apparent conductivities estimated by electromagnetic instruments such as the GEONICS EM38, EM31 or EM34-3. The first step of our procedure consists of correcting the apparent conductivities to make them consistent with the low-induction-number condition, under which these data closely approximate the true conductivity. Then, we use a linear inversion approach to obtain a conductivity model. To improve the conductivity estimation at depth, we introduce a depth-weighting function in our regularized weighted minimum-length solution algorithm. We test the whole procedure on two synthetic data sets generated with COMSOL Multiphysics for both the vertical magnetic dipole and horizontal magnetic dipole loop configurations. Our technique was also tested on a real data set, and the inversion result was compared with the one obtained using the dipole-dipole DC electrical resistivity (ER) method. Our model not only reproduces all the shallow conductive areas seen in the ER model, but also succeeds in replicating its deeper conductivity structures. In contrast, inversion of the uncorrected data yields a biased model that underestimates the true conductivity.
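
The depth-weighted, regularized minimum-length step can be sketched in a few lines. This is a generic illustration of the idea, not the authors' algorithm; the Li-Oldenburg-style weighting exponent beta and the reference depth z0 are assumptions.

```python
# Sketch of a depth-weighted, regularized minimum-length linear inversion.
import numpy as np

def depth_weighted_minlen(G, d, z, lam=1e-2, beta=2.0, z0=1.0):
    """G: (ndata, nmodel) sensitivity matrix; d: (ndata,) data;
    z: (nmodel,) cell depths."""
    # Depth weighting w(z) = (z + z0)^(-beta/2) penalizes deep cells less,
    # counteracting the decay of sensitivity with depth so that structure
    # is not forced to pile up near the surface.
    Winv2 = np.diag((z + z0) ** beta)  # inverse squared weights
    A = G @ Winv2 @ G.T + lam * np.eye(len(d))
    return Winv2 @ G.T @ np.linalg.solve(A, d)
```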


Geophysics ◽  
2017 ◽  
Vol 82 (6) ◽  
pp. U87-U97 ◽  
Author(s):  
Mohammad Javad Khoshnavaz

Oriented time-domain imaging can be orders of magnitude faster than routine techniques that rely on velocity analysis. The term “oriented” refers to techniques that use the information carried by local slopes. Time-domain dip-moveout (DMO) correction, which has often been ignored by the seismic imaging community, has attracted renewed attention in the last few years. I have developed an oriented time-domain DMO correction workflow that avoids the problematic loop between dip-dependent and dip-independent velocities that arises in classic DMO correction algorithms. The proposed approach also has advantages over previous oriented techniques: it is independent of the wavefront curvature, and the input seismic data do not need to be sorted in two different domains. The application of the technique is limited to reflectors with small curvature. The theory of the proposed technique is investigated on a simple synthetic data example and then applied to a 2D marine data set.
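
The “oriented” ingredient — estimating local slopes from the data — can be sketched with finite differences via the plane-wave relation d_x + p·d_t = 0. The DMO mapping itself (not shown) would then be driven by these slopes; this pointwise estimate is a simplification of what a robust implementation would use.

```python
# Sketch: local-slope estimation from a gather by finite differences.
import numpy as np

def local_slopes(d, dt, dx, eps=1e-8):
    """d: (nx, nt) gather; returns local slope p(x, t) in s/m from the
    plane-wave relation d_x + p * d_t = 0."""
    d_t = np.gradient(d, dt, axis=1)  # derivative along time
    d_x = np.gradient(d, dx, axis=0)  # derivative along offset/midpoint
    # Pointwise estimate; real implementations smooth or solve a
    # regularized least-squares problem (e.g. plane-wave destruction).
    return -d_x / (d_t + eps)
```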


2018 ◽  
Vol 7 (1) ◽  
pp. 55-66 ◽  
Author(s):  
Frank Oppermann ◽  
Thomas Günther

Abstract. We present a new versatile datalogger that can be used for a wide range of possible applications in geosciences. It is adjustable in signal strength and sampling frequency, battery saving and can be remotely controlled over a Global System for Mobile Communication (GSM) connection so that it saves running costs, particularly in monitoring experiments. The internet connection allows for checking functionality, controlling schedules and optimizing pre-amplification. We mainly use it for large-scale electrical resistivity tomography (ERT), where it independently registers voltage time series on three channels, while a square-wave current is injected. For the analysis of these time series we present a new approach that is based on the lock-in (LI) method, mainly known from electronic circuits. The method searches the working point (phase) using three different functions based on a mask signal, and determines the amplitude using a direct current (DC) correlation function. We use synthetic data with different types of noise to compare the new method with existing approaches, i.e. selective stacking and a modified fast Fourier transformation (FFT)-based approach that assumes a 1/f noise characteristic. All methods give comparable results, but the LI method is better than the well-established stacking method. The FFT approach can be even better, but only if the noise strictly follows the assumed characteristic. If overshoots are present in the data, which is typical in the field, FFT performs worse even with good data, which is why we conclude that the new LI approach is the most robust solution. This is also demonstrated by a field data set from a long 2-D ERT profile.
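
The core lock-in idea — finding the working point by correlating the recorded voltage with a mask signal, then reading off a DC amplitude — can be sketched as follows. This toy version uses a brute-force phase search with a single square-wave mask rather than the authors' three search functions; the sampling rate, injection frequency and phase-grid size are illustrative.

```python
# Toy lock-in detection for a square-wave injection current.
import numpy as np

def lockin_amplitude(v, fs, f0, nphase=64):
    """v: recorded voltage samples; fs: sampling rate (Hz); f0: injection
    frequency (Hz). Returns (amplitude, phase) of the square-wave signal."""
    t = np.arange(len(v)) / fs
    best_phi, best_c = 0.0, -np.inf
    # Brute-force phase search: the working point is the phase whose
    # +/-1 mask signal correlates best with the record.
    for phi in np.linspace(0.0, 2.0 * np.pi, nphase, endpoint=False):
        mask = np.sign(np.sin(2.0 * np.pi * f0 * t + phi))
        c = np.dot(v, mask)
        if c > best_c:
            best_phi, best_c = phi, c
    # DC correlation at the working point estimates the amplitude.
    return best_c / len(v), best_phi
```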


Geophysics ◽  
2005 ◽  
Vol 70 (3) ◽  
pp. V31-V43 ◽  
Author(s):  
E. J. van Dedem ◽  
D. J. Verschuur

The theory of iterative surface-related multiple elimination holds for 2D as well as 3D wavefields. The 3D prediction of surface multiples, however, requires a dense and extended distribution of sources and receivers at the surface. Since current 3D marine acquisition geometries are very sparsely sampled in the crossline direction, the direct Fresnel summation of the multiple contributions, calculated for those surface positions at which a source and a receiver are present, cannot be applied without introducing severe aliasing effects. In this newly proposed method, the regular Fresnel summation is applied to the contributions in the densely sampled inline direction, but the crossline Fresnel summation is replaced with a sparse parametric inversion. With this procedure, 3D multiples can be predicted using the available input data. The proposed method is demonstrated on a 3D synthetic data set as well as on a 3D marine data set from offshore Norway.
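
As a loose analogue of the key move — replacing an aliased crossline Fresnel summation by a sparse inversion — the toy below reconstructs a densely sampled crossline contribution function from a few samples with an L1-regularized Fourier fit before summing. The basis, the regularization weight alpha and the function names are illustrative assumptions, not the paper's parametric formulation, which inverts for kinematic parameters of the multiple contributions.

```python
# Toy analogue: anti-aliased crossline summation via sparse inversion.
import numpy as np
from sklearn.linear_model import Lasso

def sparse_crossline_sum(y_sparse, f_sparse, y_dense, alpha=1e-3, nharm=25):
    """Fit the few crossline samples f(y_sparse) sparsely in a Fourier
    basis, evaluate on the dense grid y_dense and return the summation."""
    def basis(y, L):
        k = np.arange(1, nharm + 1)
        arg = np.outer(y, k) * np.pi / L
        return np.hstack([np.cos(arg), np.sin(arg)])
    L = float(np.max(y_dense) - np.min(y_dense)) + 1e-12
    # L1 regularization keeps only a few active basis terms, which is what
    # suppresses the aliasing a plain summation of sparse samples produces.
    model = Lasso(alpha=alpha, max_iter=100_000).fit(basis(y_sparse, L), f_sparse)
    return model.predict(basis(y_dense, L)).sum()
```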


Author(s):  
Raul E. Avelar ◽  
Karen Dixon ◽  
Boniphace Kutela ◽  
Sam Klump ◽  
Beth Wemple ◽  
...  

The calibration of safety performance functions (SPFs) is a mechanism included in the Highway Safety Manual (HSM) to adjust the HSM's SPFs for use in intended jurisdictions. Critically, the quality of the calibration procedure must be assessed before using the calibrated SPFs. Multiple resources to aid practitioners in calibrating SPFs have been developed in the years following the publication of the HSM 1st edition. Similarly, the literature suggests multiple ways to assess the goodness-of-fit (GOF) of a calibrated SPF to a data set from a given jurisdiction. This paper uses the calibration results of multiple intersection SPFs on a large Mississippi safety database to examine the relationships among multiple GOF metrics. The goal is to develop a sensible single index that leverages the joint information from multiple GOF metrics to assess the overall quality of a calibration. A factor analysis applied to the calibration results revealed three underlying factors explaining 76% of the variability in the data. From these results, the authors developed an index and performed a sensitivity analysis. The key metrics were found to be, in descending order: the deviation of the cumulative residual (CURE) plot from the 95% confidence area, the mean absolute deviation, the modified R-squared, and the value of the calibration factor. This paper also presents comparisons between the index and alternative scoring strategies, as well as an effort to verify the results using synthetic data. The developed index is recommended for comprehensively assessing the quality of calibrated intersection SPFs.
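
The general recipe — standardize several GOF metrics, extract underlying factors, and score each calibration with a combined index — can be sketched with scikit-learn. The metric list and the three-factor choice follow the abstract; the equal weighting below is a placeholder, not the paper's fitted index.

```python
# Sketch: a single calibration-quality index from several GOF metrics.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis

def calibration_index(gof_table):
    """gof_table: (n_calibrations, n_metrics) array of GOF metrics, e.g.
    CURE-plot exceedance, mean absolute deviation, modified R^2, and the
    calibration factor."""
    Z = StandardScaler().fit_transform(gof_table)  # put metrics on one scale
    scores = FactorAnalysis(n_components=3, random_state=0).fit_transform(Z)
    # Collapse the three factor scores into a single index; equal weights
    # stand in for weights derived from a sensitivity analysis.
    return scores.mean(axis=1)
```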


Author(s):  
Jianping Ju ◽  
Hong Zheng ◽  
Xiaohang Xu ◽  
Zhongyuan Guo ◽  
Zhaohui Zheng ◽  
...  

Abstract. Although convolutional neural networks have achieved success in the field of image classification, there are still challenges in agricultural product quality sorting, such as machine-vision-based jujube defect detection. The performance of jujube defect detection mainly depends on the feature extraction and the classifier used. Due to the diversity of the jujube materials and the variability of the testing environment, traditional methods of manually extracting features often fail to meet the requirements of practical application. In this paper, a jujube sorting model for small data sets, based on a convolutional neural network and transfer learning, is proposed to meet the actual demands of jujube defect detection. First, the original images collected from the actual jujube sorting production line were pre-processed, and the data were augmented to establish a data set covering five categories of jujube defects. The original CNN model was then improved by embedding the SE module and by replacing the softmax loss function with the triplet loss function and the center loss function. Finally, a model pre-trained on the ImageNet image data set was trained on the jujube defect data set, so that the pre-trained parameters could fit the parameter distribution of the jujube defect images; this transfer completed the model and realized the detection and classification of jujube defects. The classification results are analyzed through classification accuracy and confusion matrices against comparison models, and visualized by heatmaps. The experimental results show that the SE-ResNet50-CL model improves fine-grained classification of jujube defects, reaching a test accuracy of 94.15%. The model has good stability and high recognition accuracy in complex environments.
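
The SE (squeeze-and-excitation) module the abstract embeds into ResNet-50 is a standard building block and can be sketched in PyTorch: global-average-pool to per-channel statistics, a two-layer bottleneck, and sigmoid gates that rescale the channels. The reduction ratio r=16 is the common default, assumed here; this is a generic SE block, not the authors' exact code.

```python
# Minimal squeeze-and-excitation block (generic, illustrative).
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Pool to per-channel statistics, pass through a small bottleneck,
    and rescale the channels with learned sigmoid gates."""
    def __init__(self, channels, r=16):  # r=16 is the common default
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // r), nn.ReLU(inplace=True),
            nn.Linear(channels // r, channels), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)  # channel gates
        return x * w
```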


Water ◽  
2021 ◽  
Vol 13 (1) ◽  
pp. 107
Author(s):  
Elahe Jamalinia ◽  
Faraz S. Tehrani ◽  
Susan C. Steele-Dunne ◽  
Philip J. Vardon

Climatic conditions and vegetation cover influence water flux in a dike, and potentially the dike stability. A comprehensive numerical simulation is computationally too expensive to be used for the near real-time analysis of a dike network. Therefore, this study investigates a random forest (RF) regressor to build a data-driven surrogate for a numerical model to forecast the temporal macro-stability of dikes. To that end, daily inputs and outputs of a ten-year coupled numerical simulation of an idealised dike (2009–2019) are used to create a synthetic data set, comprising features that can be observed from a dike surface, with the calculated factor of safety (FoS) as the target variable. The data set before 2018 is split into training and testing sets to build and train the RF. The predicted FoS is strongly correlated with the numerical FoS for data that belong to the test set (before 2018). However, the trained model shows lower performance for data in the evaluation set (after 2018) if further surface cracking occurs. This proof-of-concept shows that a data-driven surrogate can be used to determine dike stability for conditions similar to the training data, which could be used to identify vulnerable locations in a dike network for further examination.
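
The surrogate setup described — train a random forest on pre-2018 daily features to predict the numerical factor of safety, then evaluate on the held-out later period — can be sketched with scikit-learn. Column names and hyperparameters below are hypothetical.

```python
# Sketch of the data-driven surrogate: RF regression with a temporal split.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

def train_surrogate(df):
    """df: daily rows with a DatetimeIndex, surface-observable feature
    columns and a 'FoS' target column (column names are hypothetical)."""
    train = df[df.index < "2018-01-01"]
    test = df[df.index >= "2018-01-01"]
    X_tr, y_tr = train.drop(columns="FoS"), train["FoS"]
    rf = RandomForestRegressor(n_estimators=300, random_state=0)
    rf.fit(X_tr, y_tr)
    # R^2 on the post-2018 period probes extrapolation beyond training
    # conditions (e.g. after new surface cracking appears).
    score = rf.score(test.drop(columns="FoS"), test["FoS"])
    return rf, score
```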

