Enhancement of diffractions in prestack domain by means of a finite-offset double-square-root traveltime

Geophysics ◽  
2019 ◽  
Vol 84 (1) ◽  
pp. V81-V96 ◽  
Author(s):  
Tiago A. Coimbra ◽  
Jorge H. Faccipieri ◽  
João H. Speglich ◽  
Leiv-J. Gelius ◽  
Martin Tygel

Exploiting the redundancy contained in a seismic data set enables the enhancement of images that are based on stacking results. This enhancement is the goal of developing multiparametric traveltime equations that are able to approximate reflection and diffraction events in general source-receiver configurations. The main challenge in using these equations is to estimate a large number of parameters in a computationally feasible, reliable, and fast way. To obtain a better fit to diffraction traveltime events than the equations available in the literature, we have derived a finite-offset (FO) double-square-root (DSR) diffraction traveltime equation (which depends on 10 parameters in three dimensions and four parameters in two dimensions). Moreover, to reduce the number of parameters, we have developed a simplified FO-DSR diffraction traveltime equation (which depends on five parameters in three dimensions and two parameters in two dimensions) that delivers similar performance. We have developed operators that use the simplified FO-DSR traveltime equation to construct so-called diffraction-only data volumes (or, more simply, D-volumes), which improve the diffraction-extraction process. The D-volume construction has two steps: first, a stacking procedure that separates the diffraction events from the input data set and, second, a spreading procedure that enhances the quality of these diffractions. As a proof of concept, our approach has been tested on 2D/3D synthetic and 2D field data sets with successful results.
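For orientation only, the block below sketches the generic double-square-root structure that such a diffraction traveltime shares: a sum of a source-side and a receiver-side hyperbolic branch around a central source position x_{s0} and receiver position x_{r0}. The symbols (t_{s0}, p_s, M_s, ...) are illustrative placeholders, not the parameterization derived in the paper; the published FO-DSR equation should be consulted for the exact form.

```latex
% Illustrative DSR-type diffraction traveltime (placeholder notation):
t_D(\Delta x_s, \Delta x_r) \approx
    \sqrt{\left(t_{s0} + p_s\,\Delta x_s\right)^2 + M_s\,\Delta x_s^2}
  + \sqrt{\left(t_{r0} + p_r\,\Delta x_r\right)^2 + M_r\,\Delta x_r^2},
\qquad
\Delta x_s = x_s - x_{s0}, \quad \Delta x_r = x_r - x_{r0}.
```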

Author(s):  
Daniel A. Spielman ◽  
Shang-Hua Teng ◽  
Alper Üngör

We present a parallel Delaunay refinement algorithm for generating well-shaped meshes in both two and three dimensions. Like its sequential counterparts, the parallel algorithm iteratively improves the quality of a mesh by inserting new points, the Steiner points, into the input domain while maintaining the Delaunay triangulation. The Steiner points are carefully chosen from a set of candidates that includes the circumcenters of poorly shaped triangular elements. We introduce a notion of independence among candidate Steiner points that can be inserted simultaneously during Delaunay refinement, and we show that such a set of independent points can be constructed efficiently and that the number of parallel iterations is O(log² Δ), where Δ is the spread of the input, i.e., the ratio of the longest to the shortest pairwise distance among input features. In addition, we show that the parallel insertion of such a set of points can be realized by sequential Delaunay refinement algorithms, such as Ruppert's algorithm in two dimensions and Shewchuk's algorithm in three dimensions. Therefore, our parallel Delaunay refinement algorithm provides the same shape-quality and mesh-size guarantees as these sequential algorithms. For generating quasi-uniform meshes, such as those produced by Chew's algorithms, the number of parallel iterations is in fact O(log Δ). To the best of our knowledge, our algorithm is the first provably polylog(Δ)-time parallel Delaunay refinement algorithm that generates well-shaped meshes of size within a constant factor of the best possible.
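As a hedged sketch of the idea (not the authors' algorithm or data structures), the snippet below runs one parallel-style round of 2D refinement: it collects circumcenters of triangles whose smallest angle is below a threshold, greedily keeps a conflict-free ("independent") subset, and inserts that whole batch before retriangulating with SciPy. The independence rule used here (no two accepted candidates closer than the larger of their circumradii) is a simplification chosen for illustration.

```python
# Minimal sketch of one batched (parallel-style) Delaunay refinement round.
import numpy as np
from scipy.spatial import Delaunay

def bad_triangle_circumcenters(points, tri, min_angle_deg=25.0):
    """Return (circumcenter, circumradius) for triangles whose smallest angle
    is below min_angle_deg."""
    out = []
    for simplex in tri.simplices:
        a, b, c = points[simplex]
        # edge lengths opposite vertices a, b, c
        la, lb, lc = np.linalg.norm(b - c), np.linalg.norm(c - a), np.linalg.norm(a - b)
        angles = []
        for l0, l1, l2 in ((la, lb, lc), (lb, lc, la), (lc, la, lb)):
            cosang = np.clip((l1**2 + l2**2 - l0**2) / (2 * l1 * l2), -1.0, 1.0)
            angles.append(np.degrees(np.arccos(cosang)))
        if min(angles) < min_angle_deg:
            # circumcenter via the standard determinant formula
            d = 2 * (a[0]*(b[1]-c[1]) + b[0]*(c[1]-a[1]) + c[0]*(a[1]-b[1]))
            ux = ((a @ a)*(b[1]-c[1]) + (b @ b)*(c[1]-a[1]) + (c @ c)*(a[1]-b[1])) / d
            uy = ((a @ a)*(c[0]-b[0]) + (b @ b)*(a[0]-c[0]) + (c @ c)*(b[0]-a[0])) / d
            center = np.array([ux, uy])
            out.append((center, np.linalg.norm(center - a)))
    return out

def refine_one_parallel_round(points, min_angle_deg=25.0):
    tri = Delaunay(points)
    candidates = bad_triangle_circumcenters(points, tri, min_angle_deg)
    accepted = []
    # greedy independent set: largest circumradius first, reject conflicting points
    for center, radius in sorted(candidates, key=lambda cr: -cr[1]):
        if all(np.linalg.norm(center - c) > max(radius, r) for c, r in accepted):
            accepted.append((center, radius))
    if not accepted:
        return points
    return np.vstack([points, np.array([c for c, _ in accepted])])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.random((30, 2))
    print(refine_one_parallel_round(pts).shape)
```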


2011 ◽  
pp. 24-32 ◽  
Author(s):  
Nicoleta Rogovschi ◽  
Mustapha Lebbah ◽  
Younès Bennani

Most traditional clustering algorithms are limited to handling data sets that contain either continuous or categorical variables. However, data sets with mixed types of variables are common in data mining. In this paper, we introduce a weighted self-organizing map for the clustering, analysis, and visualization of mixed (continuous/binary) data. The weights and prototypes are learned simultaneously, ensuring an optimized clustering of the data: the higher a variable's weight, the more the clustering algorithm takes into account the information carried by that variable. The learning of these topological maps is combined with a weighting process over the different variables, computing weights that influence the quality of the clustering. We illustrate the power of this method with data sets taken from a public data set repository: a handwritten digit data set, the Zoo data set, and three other mixed data sets. The results show a good quality of the topological ordering and homogeneous clustering.
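The following is a hedged illustration of the general idea only, not the authors' algorithm: a tiny self-organizing map in which each variable carries a weight used in the distance computation, and the weights are re-estimated from the per-variable dispersion around the winning prototypes, so that informative variables gain influence. All function and parameter names are made up for the example.

```python
# Toy weighted SOM for mixed continuous/binary data (illustrative only).
import numpy as np

def weighted_som(X, grid=(4, 4), epochs=20, lr=0.5, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n_units, n_vars = grid[0] * grid[1], X.shape[1]
    W = rng.random((n_units, n_vars))                 # prototypes
    v = np.full(n_vars, 1.0 / n_vars)                 # variable weights
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
    for _ in range(epochs):
        for x in X:
            d = ((W - x) ** 2 * v).sum(axis=1)        # weighted distances
            bmu = np.argmin(d)
            # Gaussian neighborhood on the map grid
            h = np.exp(-((coords - coords[bmu]) ** 2).sum(axis=1) / (2 * sigma**2))
            W += lr * h[:, None] * (x - W)
        # re-estimate variable weights: low dispersion -> high weight
        bmus = np.argmin(((X[:, None, :] - W[None]) ** 2 * v).sum(axis=2), axis=1)
        disp = np.array([((X[:, j] - W[bmus, j]) ** 2).mean() for j in range(n_vars)])
        v = 1.0 / (disp + 1e-9)
        v /= v.sum()
    return W, v

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.hstack([rng.normal(size=(100, 3)), rng.integers(0, 2, size=(100, 2))])
    W, v = weighted_som(X.astype(float), grid=(3, 3))
    print(np.round(v, 3))   # learned per-variable weights
```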


2017 ◽  
Vol 6 (3) ◽  
pp. 71 ◽  
Author(s):  
Claudio Parente ◽  
Massimiliano Pepe

The purpose of this paper is to investigate the impact of weights in pan-sharpening methods applied to satellite images. Different sets of weights have been considered and compared in the IHS and Brovey methods. The first set assigns the same weight to each band, while the second uses weights obtained from the spectral radiance response; these two sets are the most common in pan-sharpening applications. The third set results from a new method, which computes the first-order moment of inertia of each band, taking the spectral response into account. To test the impact of the different weight sets, WorldView-3 satellite images have been considered. In particular, two different scenes (the first in an urban landscape, the second in a rural landscape) have been investigated. The quality of the pan-sharpened images has been analyzed by means of three quality indexes: root-mean-square error (RMSE), relative average spectral error (RASE), and Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS).
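As a hedged sketch under generic textbook formulas (not the paper's exact processing chain), the snippet below applies weighted Brovey pan-sharpening to an already-resampled multispectral stack and computes ERGAS so that different weight sets can be compared. The weight values and array shapes are illustrative stand-ins.

```python
# Weighted Brovey fusion and the ERGAS index (illustrative formulas).
import numpy as np

def brovey(ms, pan, weights):
    """ms: (bands, rows, cols) already resampled to the PAN grid."""
    w = np.asarray(weights, dtype=float)
    intensity = np.tensordot(w / w.sum(), ms, axes=1)   # weighted intensity image
    return ms * (pan / (intensity + 1e-12))

def ergas(reference, fused, ratio=4):
    """ERGAS = 100*(h/l)*sqrt(mean_b(RMSE_b^2 / mean_b^2)); ratio = l/h."""
    terms = []
    for ref_b, fus_b in zip(reference, fused):
        rmse2 = np.mean((ref_b - fus_b) ** 2)
        terms.append(rmse2 / (np.mean(ref_b) ** 2 + 1e-12))
    return 100.0 / ratio * np.sqrt(np.mean(terms))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ms = rng.random((4, 64, 64))           # stand-in for resampled multispectral bands
    pan = rng.random((64, 64))
    equal_w = [1, 1, 1, 1]                 # weight set 1: equal weights
    spectral_w = [0.35, 0.25, 0.25, 0.15]  # weight set 2: illustrative per-band weights
    for w in (equal_w, spectral_w):
        print(ergas(ms, brovey(ms, pan, w)))
```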


Geophysics ◽  
2020 ◽  
Vol 85 (4) ◽  
pp. A25-A29
Author(s):  
Lele Zhang

Migration of seismic reflection data leads to artifacts due to the presence of internal multiple reflections. Recent developments have shown that these artifacts can be avoided using Marchenko redatuming or Marchenko multiple elimination. These are powerful concepts, but their implementation comes at considerable computational cost. We have derived a scheme that images the subsurface of the medium with significantly reduced computational cost and artifacts. This scheme is based on the projected Marchenko equations. The measured reflection response is required as input, and a data set containing primary reflections and nonphysical primary reflections is created. The original and retrieved data sets are migrated, the migration images are multiplied with each other, and the square root is taken to give the artifact-reduced image. We show the underlying theory and demonstrate the effectiveness of this scheme with a 2D numerical example.
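The final combination step described above is simple enough to sketch directly. The snippet below is a hedged stand-in (not the author's code): it assumes two migration images of equal size are already available, one from the measured reflection response and one from the retrieved data set, and it uses a sign-preserving square root, which is an implementation choice; the abstract only states that the square root is taken.

```python
# Combine two migration images: multiply sample by sample, then take the root.
import numpy as np

def combine_images(img_measured, img_retrieved):
    """Artifacts present in only one of the two images are suppressed by the
    product; the sign-preserving root restores the original amplitude scale."""
    product = img_measured * img_retrieved
    return np.sign(product) * np.sqrt(np.abs(product))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img_a = rng.normal(size=(200, 300))   # stand-in migration images
    img_b = rng.normal(size=(200, 300))
    print(combine_images(img_a, img_b).shape)
```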


2005 ◽  
Vol 5 (7) ◽  
pp. 1835-1841 ◽  
Author(s):  
S. Noël ◽  
M. Buchwitz ◽  
H. Bovensmann ◽  
J. P. Burrows

Abstract. A first validation of water vapour total column amounts derived from measurements of the SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY (SCIAMACHY) in the visible spectral region has been performed. For this purpose, SCIAMACHY water vapour data have been determined for the year 2003 using an extended version of the Differential Optical Absorption Spectroscopy (DOAS) method, called Air-Mass-Corrected DOAS (AMC-DOAS). The SCIAMACHY results are compared with corresponding water vapour measurements from the Special Sensor Microwave Imager (SSM/I) and with model data from the European Centre for Medium-Range Weather Forecasts (ECMWF). In confirmation of previous results, SCIAMACHY-derived water vapour columns are typically slightly lower than both SSM/I and ECMWF data, especially over ocean areas. However, these deviations are much smaller than the observed scatter of the data, which is caused by the different temporal and spatial sampling and resolution of the data sets. For example, the overall difference with respect to ECMWF data is only -0.05 g/cm², whereas the typical scatter is on the order of 0.5 g/cm². Both values show almost no variation over the year. In addition, first monthly means of SCIAMACHY water vapour data have been computed. The quality of these monthly means is currently limited by the availability of calibrated SCIAMACHY spectra. Nevertheless, first comparisons with ECMWF data show that SCIAMACHY (and similar instruments) can provide a new, independent global water vapour data set.
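For readers unfamiliar with how such comparison numbers are obtained, the snippet below shows one plausible way the overall difference (bias) and scatter quoted above could be computed from collocated total-column values; the variable names and the synthetic data are illustrative, not the AMC-DOAS processing code.

```python
# Bias and scatter between collocated water vapour columns (illustrative).
import numpy as np

def bias_and_scatter(wv_sciamachy, wv_reference):
    """Return mean difference and standard deviation of collocated pairs, g/cm^2."""
    diff = wv_sciamachy - wv_reference
    return diff.mean(), diff.std()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.uniform(0.5, 5.0, size=10_000)            # e.g. collocated ECMWF columns
    scia = ref + rng.normal(-0.05, 0.5, size=ref.size)  # synthetic SCIAMACHY-like values
    bias, scatter = bias_and_scatter(scia, ref)
    print(f"bias = {bias:.3f} g/cm^2, scatter = {scatter:.3f} g/cm^2")
```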


Geophysics ◽  
2018 ◽  
Vol 83 (2) ◽  
pp. U9-U22 ◽  
Author(s):  
Jide Nosakare Ogunbo ◽  
Guy Marquis ◽  
Jie Zhang ◽  
Weizhong Wang

Geophysical joint inversion requires setting a few parameters for optimum performance of the process. However, there are as yet no detailed procedures for selecting the various parameters of a joint inversion. Previous works on the joint inversion of electromagnetic (EM) and seismic data have reported parameter choices for data sets acquired with the same dimensional geometry (either two dimensions or three dimensions), and few address mixed geometries. None has discussed parameter selection for the joint inversion of methods with different geometries (for example, 2D seismic traveltime and pseudo-2D frequency-domain EM data). With the advantage of affordable computational cost and the adequacy of a 1D EM approximation in a horizontally layered sedimentary environment, we are able to set optimum joint inversion parameters for a structurally constrained joint inversion of 2D seismic traveltime and pseudo-2D EM data for hydrocarbon exploration. From synthetic experiments, even in the presence of noise, we are able to prescribe rules for optimum parameter setting in the joint inversion, including the choice of the initial model and the cross-gradient weighting. We apply these rules to field data to reconstruct a more reliable subsurface velocity model than the one obtained by traveltime inversion alone. We expect that this approach will be useful for the joint inversion of seismic traveltime and frequency-domain EM data for hydrocarbon production.
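As a hedged illustration of what "structurally constrained" means here, the expression below sketches a generic joint objective in which a cross-gradient term couples the two models. The symbols, the regularization R, and the weights λ and β are placeholders, not the notation or parameter values used in the paper; β plays the role of the cross-gradient weighting mentioned above.

```latex
% Generic cross-gradient-coupled joint objective (placeholder notation):
\Phi(\mathbf{m}_{\mathrm{seis}}, \mathbf{m}_{\mathrm{EM}}) =
   \left\| \mathbf{d}_{\mathrm{seis}} - F_{\mathrm{seis}}(\mathbf{m}_{\mathrm{seis}}) \right\|^2
 + \left\| \mathbf{d}_{\mathrm{EM}}   - F_{\mathrm{EM}}(\mathbf{m}_{\mathrm{EM}}) \right\|^2
 + \lambda \left( R(\mathbf{m}_{\mathrm{seis}}) + R(\mathbf{m}_{\mathrm{EM}}) \right)
 + \beta \left\| \nabla \mathbf{m}_{\mathrm{seis}} \times \nabla \mathbf{m}_{\mathrm{EM}} \right\|^2 .
```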


2000 ◽  
Vol 20 (1) ◽  
pp. 7-15 ◽  
Author(s):  
R. Heintzmann ◽  
G. Kreth ◽  
C. Cremer

Fluorescent confocal laser scanning microscopy allows improved imaging of microscopic objects in three dimensions. However, the resolution along the axial direction is three times worse than the resolution in the lateral directions. A method to overcome this axial limitation is tilting the object under the microscope, so that the optical axis points in different directions relative to the sample. A new technique for simultaneous reconstruction from a number of such axial tomographic confocal data sets was developed and used for high-resolution reconstruction of 3D data from both experimental and virtual microscopic data sets. The reconstructed images have a highly improved 3D resolution, comparable to the lateral resolution of a single deconvolved data set. Axial tomographic imaging in combination with simultaneous data reconstruction also opens the possibility of a more precise quantification of 3D data. The color images of this publication can be accessed at http://www.esacp.org/acp/2000/20-1/heintzmann.htm. At this web address, an interactive 3D viewer is additionally provided for browsing the 3D data. This Java applet displays three orthogonal slices of the data set, which are dynamically updated by user mouse clicks or keystrokes.
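The snippet below is a deliberately simplified stand-in for the geometry of the approach, not the published simultaneous reconstruction: several 3D stacks recorded with the object tilted by known angles are rotated back into a common frame and averaged. The actual method performs a joint, PSF-aware reconstruction; this only illustrates how tilted acquisitions are brought into registration.

```python
# Combine axially tilted confocal stacks in a common frame (illustrative only).
import numpy as np
from scipy.ndimage import rotate

def combine_axial_tomographic_stacks(stacks, angles_deg):
    """stacks: list of 3D arrays (z, y, x); angles_deg: tilt angle of each stack."""
    common = []
    for stack, angle in zip(stacks, angles_deg):
        # rotate back about the y axis, i.e. in the (z, x) plane
        common.append(rotate(stack, -angle, axes=(0, 2), reshape=False, order=1))
    return np.mean(common, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.random((32, 32, 32))
    views = [rotate(base, a, axes=(0, 2), reshape=False, order=1) for a in (0, 45, 90)]
    print(combine_axial_tomographic_stacks(views, (0, 45, 90)).shape)
```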


Author(s):  
Mustapha Lebbah ◽  
Younès Bennani ◽  
Nicoleta Rogovschi

This paper introduces a probabilistic self-organizing map for topographic clustering, analysis, and visualization of multivariate binary data, or of categorical data using a binary coding. We propose a probabilistic formalism dedicated to binary data in which cells are represented by a Bernoulli distribution. Each cell is characterized by a prototype with the same binary coding as used in the data space and by the probability of being different from this prototype. The learning algorithm that we propose, Bernoulli on self-organizing map, is an application of the standard EM algorithm. We illustrate the power of this method with six data sets taken from a public data set repository. The results show a good quality of the topological ordering and homogeneous clustering.
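The snippet below is a hedged sketch of the basic idea (not the authors' exact algorithm): EM for a Bernoulli mixture whose components sit on a map grid, with responsibilities smoothed by a fixed neighborhood kernel, which is the essence of a probabilistic self-organizing map for binary data. Grid size, smoothing width, and initialization are illustrative.

```python
# EM for a grid of Bernoulli cells with neighborhood coupling (illustrative).
import numpy as np

def bernoulli_som_em(X, grid=(3, 3), epochs=30, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    k = grid[0] * grid[1]
    theta = rng.uniform(0.25, 0.75, size=(k, d))         # Bernoulli parameters per cell
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
    dist2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    H = np.exp(-dist2 / (2 * sigma ** 2))
    H /= H.sum(axis=1, keepdims=True)                    # neighborhood kernel on the map
    for _ in range(epochs):
        # E-step: Bernoulli log-likelihood per cell, smoothed over neighbors
        loglik = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T
        loglik = loglik @ H.T
        r = np.exp(loglik - loglik.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update cell prototypes' probabilities (with Laplace smoothing)
        theta = (r.T @ X + 1.0) / (r.sum(axis=0)[:, None] + 2.0)
    return theta, r

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = (rng.random((200, 10)) < 0.3).astype(float)      # synthetic binary data
    theta, r = bernoulli_som_em(X, grid=(2, 2))
    print(theta.shape, r.shape)
```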


2016 ◽  
Vol 25 (3) ◽  
pp. 431-440 ◽  
Author(s):  
Archana Purwar ◽  
Sandeep Kumar Singh

Abstract. Data quality is an important concern in data mining: the validity of mining algorithms is reduced if the data are not of good quality. Data quality can be assessed in terms of missing values (MV) as well as noise present in the data set. Various imputation techniques have been studied for MV, but little attention has been given to noise in earlier work. Moreover, to the best of our knowledge, no one has used density-based spatial clustering of applications with noise (DBSCAN) for MV imputation. This paper proposes a novel technique, density-based imputation (DBSCANI), built on density-based clustering to deal with incomplete values in the presence of noise. The density-based clustering algorithm proposed by Kriegel et al. groups objects according to their density in spatial databases: high-density regions form clusters, and low-density regions contain the noise objects of the data set. Experiments have been performed on the Iris data set from the life-science domain and on Jain's (2D) data set from the shape data sets. The performance of the proposed method is evaluated using the root-mean-square error (RMSE) and compared with the existing K-means imputation (KMI). Results show that our method is more noise resistant than KMI on the data sets under study.
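The snippet below is a hedged sketch of the general idea of density-based imputation, not the published DBSCANI procedure: complete rows are clustered with DBSCAN, each missing entry is filled from the nearest cluster's feature mean, and DBSCAN "noise" points are excluded from the imputation statistics. Parameter values and the fallback rule are illustrative choices.

```python
# Density-based imputation sketch using scikit-learn's DBSCAN.
import numpy as np
from sklearn.cluster import DBSCAN

def dbscan_impute(X, eps=0.5, min_samples=5):
    X = X.astype(float).copy()
    missing = np.isnan(X)
    complete = ~missing.any(axis=1)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X[complete])
    clusters = [X[complete][labels == c] for c in set(labels) if c != -1]
    if not clusters:                                  # everything flagged as noise
        col_means = np.nanmean(X, axis=0)
        return np.where(missing, col_means, X)
    centroids = np.array([c.mean(axis=0) for c in clusters])
    for i in np.where(missing.any(axis=1))[0]:
        obs = ~missing[i]
        # nearest cluster centroid measured on the observed features only
        d = ((centroids[:, obs] - X[i, obs]) ** 2).sum(axis=1)
        best = centroids[np.argmin(d)]
        X[i, ~obs] = best[~obs]                       # impute from that cluster
    return X

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(150, 4))
    X[rng.random(X.shape) < 0.05] = np.nan            # inject missing values
    X_imputed = dbscan_impute(X, eps=1.0, min_samples=5)
    print(np.isnan(X_imputed).sum())                  # 0 remaining missing entries
```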


2016 ◽  
Author(s):  
Brecht Martens ◽  
Diego G. Miralles ◽  
Hans Lievens ◽  
Robin van der Schalie ◽  
Richard A. M. de Jeu ◽  
...  

Abstract. The Global Land Evaporation Amsterdam Model (GLEAM) is a set of algorithms dedicated to the estimation of terrestrial evaporation and root-zone soil moisture from satellite data. Ever since its development in 2011, the model has been regularly revised, aiming at the optimal incorporation of new satellite-observed geophysical variables and an improved representation of physical processes. In this study, the next version of this model (v3) is presented. Key changes relative to the previous version include (1) a revised formulation of the evaporative stress, (2) an optimized drainage algorithm, and (3) a new soil moisture data assimilation system. GLEAM v3 is used to produce three new data sets of terrestrial evaporation and root-zone soil moisture, including a 35-year data set spanning the period 1980–2014 (v3.0a, based on satellite-observed soil moisture, vegetation optical depth and snow water equivalents, reanalysis air temperature and radiation, and a multi-source precipitation product) and two fully satellite-based data sets. The latter two share most of their forcing, except for the vegetation optical depth and soil moisture products, which are based on observations from different passive and active C- and L-band microwave sensors (European Space Agency Climate Change Initiative data sets) for the first data set (v3.0b, spanning the period 2003–2015) and on observations from the Soil Moisture and Ocean Salinity satellite for the second data set (v3.0c, spanning the period 2011–2015). These three data sets are described in detail, compared against analogous data sets generated using the previous version of GLEAM (v2), and validated against measurements from 64 eddy-covariance towers and 2338 soil moisture sensors across a broad range of ecosystems. Results indicate that the quality of the v3 soil moisture is consistently better than that of v2: average correlations against in situ surface soil moisture measurements increase from 0.61 to 0.64 for the v3.0a data set, and the representation of soil moisture in the second layer improves as well, with correlations increasing from 0.47 to 0.53. Similar improvements are observed for the two fully satellite-based data sets. Despite regional differences, the quality of the evaporation fluxes remains overall similar to that obtained using the previous version of GLEAM, with average correlations against eddy-covariance measurements between 0.78 and 0.80 for the three data sets. These global data sets of terrestrial evaporation and root-zone soil moisture are now openly available at http://GLEAM.eu and may be used for large-scale hydrological applications, climate studies, and research on land-atmosphere feedbacks.

