Fast and robust common-reflection-surface parameter estimation

Geophysics ◽  
2018 ◽  
Vol 83 (1) ◽  
pp. O1-O13 ◽  
Author(s):  
Anders U. Waldeland ◽  
Hao Zhao ◽  
Jorge H. Faccipieri ◽  
Anne H. Schistad Solberg ◽  
Leiv-J. Gelius

The common-reflection-surface (CRS) method offers a stack with a higher signal-to-noise ratio at the cost of a time-consuming semblance search to obtain the stacking parameters. We have developed a fast method for extracting the CRS parameters using local slope and curvature. We estimate the slope and curvature with the gradient structure tensor and quadratic structure tensor on stacked data, under the assumption that a stacking velocity is already available. Our method was compared with an existing slope-based method, in which the slope is extracted from prestack data. An experiment on synthetic data shows that our method has increased robustness against noise compared with the existing method. When applied to two real data sets, our method achieves accuracy comparable with the pragmatic and full semblance searches while being approximately two and four orders of magnitude faster, respectively.
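
To make the structure-tensor idea concrete, here is a minimal sketch of local slope estimation with a gradient structure tensor on a 2D stacked section (time by midpoint). It assumes unit sample spacing, the parameter names are ours, and it omits the paper's quadratic structure tensor for curvature and the mapping of slopes to CRS parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_slope_gst(section, sigma_grad=1.0, sigma_avg=4.0):
    """Local slope (time samples per trace) from the gradient
    structure tensor of a 2D stacked section (time x midpoint)."""
    # Smoothed first derivatives along time (axis 0) and midpoint (axis 1).
    gt = gaussian_filter(section, sigma_grad, order=(1, 0))
    gx = gaussian_filter(section, sigma_grad, order=(0, 1))
    # Locally averaged structure-tensor components.
    Jtt = gaussian_filter(gt * gt, sigma_avg)
    Jtx = gaussian_filter(gt * gx, sigma_avg)
    # For a local plane wave u(t - p*x): gx = -p*gt, so p = -Jtx / Jtt.
    return np.where(Jtt > 1e-12, -Jtx / np.maximum(Jtt, 1e-12), 0.0)
```

A curvature estimate would follow the same pattern, with second-order derivatives entering a quadratic structure tensor.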

Geophysics ◽  
2019 ◽  
Vol 84 (2) ◽  
pp. R165-R174 ◽  
Author(s):  
Marcelo Jorge Luz Mesquita ◽  
João Carlos Ribeiro Cruz ◽  
German Garabito Callapino

Estimation of an accurate velocity macromodel is an important step in seismic imaging. We have developed an approach based on coherence measurements and finite-offset (FO) beam stacking. The algorithm is an FO common-reflection-surface tomography, which seeks the layered depth-velocity model that maximizes a semblance objective function calculated from the amplitudes in common-midpoint (CMP) gathers stacked over a predetermined aperture. We represent the subsurface velocity model as a stack of layers separated by smooth interfaces. The algorithm is applied layer by layer from the top downward in four steps per layer. First, by automatic or manual picking, we estimate the reflection times of events that describe the interfaces in a time-migrated section. Second, we convert these times to depth using the velocity model, applying Dix's formula and image rays to the events. Third, using ray tracing, we calculate kinematic parameters along the central ray and build a paraxial FO traveltime approximation for the FO common-reflection-surface method. Finally, starting from CMP gathers, we calculate the semblance of the selected events using this paraxial traveltime approximation. After repeating this procedure for all selected CMP gathers, we use the mean semblance values as an objective function for the target layer. When this coherence measure is maximized, the model is accepted and the process is complete; otherwise, the process restarts from step two with the updated velocity model. Because the inverse problem we are solving is nonlinear, we use very fast simulated annealing to search for the velocity parameters in the target layers. We test the method on synthetic and real data sets to study its use and advantages.
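
As a sketch of the coherence measure driving the inversion, the semblance of a CMP gather evaluated along a predicted traveltime curve might look as follows; the paraxial FO-CRS traveltime that produces `t_pred` and the VFSA loop that perturbs the velocity parameters are not reproduced, and all names are ours:

```python
import numpy as np

def semblance_along_curve(gather, t_pred, dt, half_win=5):
    """Semblance of a CMP gather (nt x ntraces) along predicted
    traveltimes t_pred (seconds, one per trace), within a window of
    +/- half_win samples around the curve."""
    nt, ntr = gather.shape
    idx = np.round(t_pred / dt).astype(int)
    num = den = 0.0
    for k in range(-half_win, half_win + 1):
        rows = np.clip(idx + k, 0, nt - 1)
        amps = gather[rows, np.arange(ntr)]  # one sample per trace
        num += amps.sum() ** 2               # energy of the stacked amplitudes
        den += (amps ** 2).sum()             # total energy in the window
    return num / (ntr * den) if den > 0 else 0.0
```

Very fast simulated annealing would then propose perturbed layer velocities, re-trace the rays, and accept or reject the update according to the mean semblance over all selected CMP gathers.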


Author(s):  
Anteneh Ayanso ◽  
Paulo B. Goes ◽  
Kumar Mehta

Relational databases have increasingly become the basis for a wide range of applications that require efficient methods for exploratory search and retrieval. Top-k retrieval addresses this need: it finds a limited number of records whose attribute values are closest to those specified in a query. One approach in the recent literature is query mapping, which converts top-k queries into equivalent range queries that relational database management systems (RDBMSs) natively support. This approach combines simplicity with practicality by avoiding modifications to the query engine and specialized data structures or indexing techniques for handling top-k queries separately. This paper reviews existing query-mapping techniques in the literature and presents a range query estimation method based on cost modeling. Experiments on real-world and synthetic data sets show that the cost-based range estimation method performs at least as well as prior methods while avoiding the need to calibrate workloads on specific database contents.
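
A minimal sketch of the query-mapping idea for a single numeric attribute is shown below. In the paper the initial range comes from a cost model, whereas here it is a plain parameter, and the table/column handling and restart strategy are our illustration:

```python
import sqlite3

def top_k_via_range(conn, table, col, q, k, r0=1.0, grow=2.0, r_max=1e9):
    """Answer a top-k query (k rows with `col` closest to q) through
    range queries the RDBMS supports natively, restarting with a
    larger range whenever too few rows come back."""
    r = r0
    rows = []
    while r <= r_max:
        rows = conn.execute(
            f"SELECT rowid, {col} FROM {table} WHERE {col} BETWEEN ? AND ?",
            (q - r, q + r)).fetchall()
        if len(rows) >= k:
            rows.sort(key=lambda row: abs(row[1] - q))  # rank by distance to q
            return rows[:k]
        r *= grow  # too few rows retrieved: enlarge the range and restart
    return rows    # fewer than k matching rows exist within r_max
```

The cost-based estimator in the paper aims to make the first range large enough to avoid restarts yet small enough to keep the range query cheap.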


Sensors ◽  
2019 ◽  
Vol 19 (14) ◽  
pp. 3158
Author(s):  
Jian Yang ◽  
Xiaojuan Ban ◽  
Chunxiao Xing

With the rapid development of mobile networks and smart terminals, mobile crowdsourcing has attracted the interest of scholars and industry. In this paper, we propose a new solution to the problem of user selection in mobile crowdsourcing systems. Existing user selection schemes mainly either (1) find a subset of users that maximizes crowdsourcing quality under a given budget constraint, or (2) find a subset of users that minimizes cost while meeting a minimum crowdsourcing quality requirement. However, these solutions fall short when the goal is to maximize the quality of service of the task while also minimizing costs. Inspired by the marginalism principle in economics, we select a new user only when the marginal gain of the newly joined user exceeds the payment and the marginal cost associated with integration. We model the scheme as a marginalism problem of mobile crowdsourcing user selection (MCUS-marginalism). We rigorously prove the MCUS-marginalism problem to be NP-hard and propose a greedy random adaptive search procedure with annealing randomness (GRASP-AR) to maximize the gain and minimize the cost of the task. The effectiveness and efficiency of our approach are verified by large-scale experimental evaluations on both real-world and synthetic data sets.
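
The marginalism stopping rule can be sketched as a greedy loop. Here `quality_gain`, `pay_cost`, and `integ_cost` are hypothetical callables standing in for the paper's quality and cost models, and the randomized candidate list and annealing of GRASP-AR are omitted:

```python
def marginal_greedy_select(users, quality_gain, pay_cost, integ_cost):
    """Add the user with the largest marginal gain while that gain
    still exceeds the payment plus the marginal integration cost."""
    selected, remaining = [], list(users)
    while remaining:
        best = max(remaining, key=lambda u: quality_gain(selected, u))
        gain = quality_gain(selected, best)
        cost = pay_cost(best) + integ_cost(selected, best)
        if gain <= cost:   # marginal gain no longer covers marginal cost
            break
        selected.append(best)
        remaining.remove(best)
    return selected
```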


Geophysics ◽  
2009 ◽  
Vol 74 (4) ◽  
pp. J35-J48 ◽  
Author(s):  
Bernard Giroux ◽  
Abderrezak Bouchedda ◽  
Michel Chouteau

We introduce two new traveltime picking schemes developed specifically for crosshole ground-penetrating radar (GPR) applications. The main objective is to automate, at least partially, the traveltime picking procedure and to provide first-arrival times that are closer in quality to those of manual picking approaches. The first scheme is an adaptation of a method based on crosscorrelation of radar traces collated in gathers according to their associated transmitter-receiver angle. A detector is added to isolate the first cycle of the radar wave and to suppress secondary arrivals that might be mistaken for first arrivals. To improve the accuracy of the arrival times obtained from the crosscorrelation lags, a time-rescaling scheme is implemented to resize the radar wavelets to a common time-window length. The second method is based on the Akaike information criterion (AIC) and the continuous wavelet transform (CWT). It is not tied to the restrictive criterion of waveform similarity that underlies crosscorrelation approaches and that is not guaranteed for traces sorted in common ray-angle gathers. It also has the advantage of being fully automated. The performance of the new algorithms is tested on synthetic and real data. In all tests, adding first-cycle isolation to the original crosscorrelation scheme improves the results. In contrast, the time-rescaling approach brings limited benefits, except when strong dispersion is present in the data. In addition, the performance of crosscorrelation picking schemes degrades for data sets with disparate waveforms despite the high signal-to-noise ratio of the data. In general, the AIC-CWT approach is more versatile and performs well on all data sets. Only on data with low signal-to-noise ratios is the AIC-CWT picker superseded by the modified crosscorrelation picker.
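
The AIC stage of the second picker can be sketched in a few lines: the trace is split at every sample, each part is modeled as stationary noise, and the split minimizing the AIC is taken as the first break. This shows the criterion only, not the CWT localization step or the gather handling:

```python
import numpy as np

def aic_pick(trace, eps=1e-12):
    """First-break sample index of a single trace by the Akaike
    information criterion applied to a two-segment variance model."""
    x = np.asarray(trace, dtype=float)
    n = len(x)
    k = np.arange(1, n - 1)
    var1 = np.array([x[:i].var() for i in k]) + eps   # variance before split
    var2 = np.array([x[i:].var() for i in k]) + eps   # variance after split
    aic = k * np.log(var1) + (n - k - 1) * np.log(var2)
    return int(k[np.argmin(aic)])
```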


Geophysics ◽  
2011 ◽  
Vol 76 (3) ◽  
pp. V33-V45 ◽  
Author(s):  
Charlotte Sanchis ◽  
Alfred Hanssen

Stacking is a common technique to improve the signal-to-noise ratio (S/N) and the imaging quality of seismic data. Conventional stacking, which equally averages a collection of normal-moveout-corrected or migrated shot gathers with a common reflection point, is not always satisfactory. Instead, we propose a novel time-dependent weighted-average stacking method that uses the local correlation between each individual trace and a chosen reference trace as a measure of weight, together with a new weight normalization scheme that ensures meaningful amplitudes of the output. We propose three different reference traces, based on conventional stacking, S/N estimation, and Kalman filtering. The outputs of the enhanced stacking methods, as well as their reference traces, were compared on both synthetic data and real marine migrated subsalt data. We conclude that enhanced stacking with either the S/N-estimation or the Kalman reference trace yields consistently better results than conventional stacking, with cleaner and better-defined reflection events and a larger number of reflections. The Kalman reference method produces the best overall seismic image contrast and reveals many more reflected events, but at the cost of a higher noise level and a longer processing time. Enhanced stacking with the S/N-estimation reference is therefore a possible alternative that runs faster while still emphasizing some reflected events under the subsalt structure.
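
A minimal sketch of the weighting idea, assuming a gather shaped (time samples x traces) and the conventional stack as the reference trace; the exact window, weighting exponent, and normalization in the paper may differ:

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def weighted_stack(gather, ref, half_win=10, eps=1e-12):
    """Time-dependent weighted stack: weight each trace sample by its
    local correlation with a reference trace, then normalize the
    weights per time sample so output amplitudes stay meaningful."""
    w = 2 * half_win + 1
    num = uniform_filter1d(gather * ref[:, None], w, axis=0)
    den = np.sqrt(uniform_filter1d(gather ** 2, w, axis=0)
                  * uniform_filter1d(ref ** 2, w)[:, None]) + eps
    corr = np.clip(num / den, 0.0, 1.0)                  # local correlation
    weights = corr / (corr.sum(axis=1, keepdims=True) + eps)
    return (weights * gather).sum(axis=1)

# e.g. ref = gather.mean(axis=1) reproduces the conventional-stack reference.
```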


2015 ◽  
Vol 8 (10) ◽  
pp. 10387-10428 ◽  
Author(s):  
G. D'Amico ◽  
A. Amodeo ◽  
I. Mattis ◽  
V. Freudenthaler ◽  
G. Pappalardo

Abstract. In this paper we describe an automatic tool for the pre-processing of lidar data called ELPP (EARLINET Lidar Pre-Processor). It is one of two calculus modules of the EARLINET Single Calculus Chain (SCC), the automatic tool for the analysis of EARLINET data. The ELPP is an open source module that executes instrumental corrections and data handling of the raw lidar signals, making the lidar data ready to be processed by the optical retrieval algorithms. According to the specific lidar configuration, the ELPP automatically performs dead-time correction, atmospheric and electronic background subtraction, gluing of lidar signals, and trigger-delay correction. Moreover, the signal-to-noise ratio of the pre-processed signals can be improved by means of configurable time integration of the raw signals and/or spatial smoothing. The ELPP delivers the statistical uncertainties of the final products by means of error propagation or Monte Carlo simulations. During the development of the ELPP module, particular attention has been paid to making the tool flexible enough to handle all lidar configurations currently used within the EARLINET community. Moreover, it has been designed in a modular way to allow an easy extension to lidar configurations not yet implemented. The primary goal of the ELPP module is to enable the application of quality-assured procedures in the lidar data analysis starting from the raw lidar data. This provides the added value of full traceability of each delivered lidar product. Several tests have been performed to check the proper functioning of the ELPP module. The whole SCC has been tested with the same synthetic data sets that were used for the EARLINET algorithm inter-comparison exercise. The ELPP module has been successfully employed for the automatic near-real-time pre-processing of the raw lidar data measured during several EARLINET inter-comparison campaigns as well as during intense field campaigns.
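
Two of the instrumental corrections named above can be sketched for a photon-counting channel as follows; the non-paralyzable detector model and the far-range background window are standard choices, but the parameter names are ours, not ELPP configuration keys:

```python
import numpy as np

def preprocess_photon_counting(rate, tau, bg_bins=500):
    """Dead-time correction followed by background subtraction for a
    photon-counting lidar profile. `rate` is the measured count rate
    per range bin (same units as 1/tau); assumes rate * tau < 1."""
    rate = np.asarray(rate, dtype=float)
    # Non-paralyzable dead-time model: true = measured / (1 - measured * tau).
    corrected = rate / np.maximum(1.0 - rate * tau, 1e-6)
    # Atmospheric + electronic background: mean of the far-range tail,
    # where the atmospheric return is assumed negligible.
    return corrected - corrected[-bg_bins:].mean()
```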


Geophysics ◽  
2020 ◽  
Vol 85 (5) ◽  
pp. U109-U119
Author(s):  
Pengyu Yuan ◽  
Shirui Wang ◽  
Wenyi Hu ◽  
Xuqing Wu ◽  
Jiefu Chen ◽  
...  

A deep-learning-based workflow is proposed in this paper to solve the first-arrival picking problem for near-surface velocity model building. Traditional methods, such as the short-term-average/long-term-average method, perform poorly when the signal-to-noise ratio is low or near-surface geologic structures are complex. This challenging task is formulated as a segmentation problem, accompanied by a novel postprocessing approach that identifies picks along the segmentation boundary. The workflow includes three parts: a deep U-net for segmentation, a recurrent neural network (RNN) for picking, and a weight adaptation approach for generalizing to new data sets. In particular, we have evaluated the importance of selecting a proper loss function for training the network. Instead of taking an end-to-end approach to solve the picking problem, we emphasize the performance gain obtained by using an RNN to optimize the picks. Finally, we adopt a simple transfer learning scheme and test its robustness via a weight adaptation approach that maintains the picking performance on new data sets. Our tests on synthetic data sets reveal the advantage of our workflow compared with existing deep-learning methods that focus only on segmentation performance. Our tests on field data sets illustrate that a good postprocessing picking step is essential for correcting segmentation errors and that the overall workflow is efficient in minimizing human intervention for the first-arrival picking task.
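
A simple sketch of the boundary-extraction step that turns a segmentation output into picks is shown below: per trace, take the first time sample where the "after the first arrival" probability crosses a threshold. The paper goes further and feeds such boundary information to an RNN to optimize the picks:

```python
import numpy as np

def picks_from_segmentation(prob, threshold=0.5):
    """Per-trace first-arrival picks from a segmentation probability
    map `prob` of shape (time samples x traces); -1 marks traces with
    no threshold crossing."""
    nt, ntr = prob.shape
    picks = np.full(ntr, -1, dtype=int)
    for j in range(ntr):
        hits = np.flatnonzero(prob[:, j] >= threshold)
        if hits.size:
            picks[j] = hits[0]  # first sample classified as "after arrival"
    return picks
```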


2016 ◽  
Vol 9 (2) ◽  
pp. 491-507 ◽  
Author(s):  
Giuseppe D'Amico ◽  
Aldo Amodeo ◽  
Ina Mattis ◽  
Volker Freudenthaler ◽  
Gelsomina Pappalardo

Abstract. In this paper we describe an automatic tool for the pre-processing of aerosol lidar data called ELPP (EARLINET Lidar Pre-Processor). It is one of two calculus modules of the EARLINET Single Calculus Chain (SCC), the automatic tool for the analysis of EARLINET data. ELPP is an open source module that executes instrumental corrections and data handling of the raw lidar signals, making the lidar data ready to be processed by the optical retrieval algorithms. According to the specific lidar configuration, ELPP automatically performs dead-time correction, atmospheric and electronic background subtraction, gluing of lidar signals, and trigger-delay correction. Moreover, the signal-to-noise ratio of the pre-processed signals can be improved by means of configurable time integration of the raw signals and/or spatial smoothing. ELPP delivers the statistical uncertainties of the final products by means of error propagation or Monte Carlo simulations. During the development of ELPP, particular attention has been paid to making the tool flexible enough to handle all lidar configurations currently used within the EARLINET community. Moreover, it has been designed in a modular way to allow an easy extension to lidar configurations not yet implemented. The primary goal of ELPP is to enable the application of quality-assured procedures in the lidar data analysis starting from the raw lidar data. This provides the added value of full traceability of each delivered lidar product. Several tests have been performed to check the proper functioning of ELPP. The whole SCC has been tested with the same synthetic data sets that were used for the EARLINET algorithm inter-comparison exercise. ELPP has been successfully employed for the automatic near-real-time pre-processing of the raw lidar data measured during several EARLINET inter-comparison campaigns as well as during intense field campaigns.
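
Complementing the dead-time and background sketch given after the companion abstract above, the gluing step can be illustrated as follows: rescale the analog channel onto the photon-counting channel by a least-squares fit over a range interval where both respond linearly, then merge the two. The fit interval here is a plain parameter, whereas ELPP configures it per system:

```python
import numpy as np

def glue_signals(analog, photon, fit_lo, fit_hi):
    """Glue analog and photon-counting signals of one wavelength:
    scale the analog profile to photon-counting units via a linear
    fit over [fit_lo:fit_hi], keep the scaled analog signal at near
    range and the photon-counting signal at far range."""
    a = np.asarray(analog, dtype=float)
    p = np.asarray(photon, dtype=float)
    gain, offset = np.polyfit(a[fit_lo:fit_hi], p[fit_lo:fit_hi], 1)
    glued = gain * a + offset       # analog rescaled to photon-counting units
    glued[fit_hi:] = p[fit_hi:]     # trust photon counting at far range
    return glued
```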


Geophysics ◽  
2011 ◽  
Vol 76 (5) ◽  
pp. V79-V89 ◽  
Author(s):  
Wail A. Mousa ◽  
Abdullatif A. Al-Shuhail ◽  
Ayman Al-Lehyani

We introduce a new method for first-arrival picking based on digital color-image segmentation of energy ratios of refracted seismic data. The method uses a new color-image segmentation scheme based on projection onto convex sets (POCS). The POCS scheme requires a reference color for the first break and a single iteration to segment the first-break amplitudes from other arrivals. We tested the segmentation method on synthetic seismic data sets with various amounts of additive Gaussian noise. The proposed method performs similarly to a modified version of Coppens' method for traces with a high signal-to-noise ratio and medium-to-large offsets. Finally, we applied our method, along with the modified first-arrival picking method based on Coppens' method, to pick first arrivals on four real data sets; both were compared to first breaks that were picked manually and then interpolated. Based on an assessment error window of 20 ms with respect to the interpolated manual picks, we find that our method performs comparably to Coppens' method, depending on how difficult the first arrivals in the data are to pick. We therefore believe that our proposed method is a valuable addition to the existing methods of first-arrival picking.
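
The energy-ratio attribute whose color image is segmented can be sketched per trace as follows; this is a Coppens-style ratio of a short trailing-window energy to the cumulative energy from the trace start, with our own stabilization constant:

```python
import numpy as np

def coppens_energy_ratio(trace, nwin=30, eps=1e-12):
    """Energy ratio that spikes near the first break: energy in a
    trailing window of nwin samples ending at each sample, divided by
    the cumulative energy from the start of the trace."""
    x = np.asarray(trace, dtype=float) ** 2
    c = np.concatenate(([0.0], np.cumsum(x)))      # cumulative energy
    i = np.arange(1, len(x) + 1)
    e_win = c[i] - c[np.maximum(i - nwin, 0)]      # short-window energy
    return e_win / (c[i] + eps)                    # ratio to energy so far
```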


2013 ◽  
Vol 748 ◽  
pp. 590-594
Author(s):  
Li Liao ◽  
Yong Gang Lu ◽  
Xu Rong Chen

We propose a novel density estimation method that uses both the k-nearest-neighbor (KNN) graph and the potential field of the data points to capture the local and global data distribution information, respectively. Clustering is performed based on the computed density values: a forest of trees is built with each data point as a tree node, and clusters are formed from the trees in the forest. The new clustering method is evaluated against three popular clustering methods, K-means++, Mean Shift, and DBSCAN. Experiments on two synthetic data sets and one real data set show that our approach effectively improves the clustering results.
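
A minimal sketch of the density-then-forest idea under our own simplifications: density combines a local kNN term with a global Gaussian potential field, each point links to its nearest neighbor of higher density (a link longer than `d_cut` starts a new tree root), and the resulting trees are the clusters:

```python
import numpy as np

def density_tree_clusters(X, k=8, sigma=1.0, d_cut=2.0):
    """Cluster points by a combined kNN/potential-field density and a
    forest of nearest-higher-density links; returns root labels."""
    n = len(X)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    local = 1.0 / (np.sort(d, axis=1)[:, k] + 1e-12)   # local kNN density
    field = np.exp(-(d / sigma) ** 2).sum(axis=1)      # global potential field
    rho = local / local.max() + field / field.max()    # combined density
    parent = np.arange(n)
    for i in range(n):
        higher = np.flatnonzero(rho > rho[i])
        if higher.size:
            j = higher[np.argmin(d[i, higher])]        # nearest denser point
            if d[i, j] <= d_cut:
                parent[i] = j                          # link into its tree
    labels = parent.copy()
    for _ in range(n):                                 # follow links to roots
        labels = parent[labels]
    return labels
```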

