Density log correction for borehole effects and its impact on well-to-seismic tie: Application on a North Sea data set

2020 ◽  
Vol 8 (1) ◽  
pp. T43-T53
Author(s):  
Isadora A. S. de Macedo ◽  
Jose Jadsom S. de Figueiredo ◽  
Matias C. de Sousa

Reservoir characterization requires accurate elastic logs. It is necessary to guarantee that the logging tool is stable during the drilling process to avoid compromising the measurements of the physical properties of the formation in the vicinity of the well. Irregularities along the borehole may occur, especially when the drilling device passes through unconsolidated formations. These affect the signals recorded by the logging tool, and the measurements may be influenced more by the drilling mud than by the formation. The caliper log indicates the change in the diameter of the borehole with depth and can be used as an indicator of the quality of other logs whose data have been degraded by enlargement or shrinkage of the borehole wall. Damaged well-log data, particularly density and velocity profiles, affect the quality and accuracy of the well-to-seismic tie. To investigate the effects of borehole enlargement on the well-to-seismic tie, an analysis of density log correction was performed. This approach uses Doll's geometric factor to correct the density log for wellbore enlargement using the caliper readings. Because the wavelet is an important factor in the well tie, we tested our methodology with both statistical and deterministic wavelet estimations. In both cases, the results using the real data set from the Viking Graben field (North Sea) indicated up to a 7% improvement in the correlation between the real and synthetic seismic traces for the well-to-seismic tie when the density correction was applied.
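
A minimal sketch of the idea described above, assuming a Doll-inspired geometric-factor weighting between mud and formation response; the exponential form, scale parameter, and helper names are illustrative assumptions, not the authors' exact formulation.

```python
# Hedged sketch: caliper-based density correction and a well-tie correlation score.
import numpy as np

def geometric_factor(caliper_in, bit_size_in, scale_in=4.0):
    """Map borehole enlargement (caliper minus bit size) to a weight J in [0, 1)
    describing how much of the tool response is dominated by the mud column.
    The exponential form is an assumed, Doll-inspired approximation."""
    standoff = np.clip(caliper_in - bit_size_in, 0.0, None)
    return 1.0 - np.exp(-standoff / scale_in)

def correct_density(rho_log, caliper_in, bit_size_in, rho_mud=1.1):
    """Assume rho_log ~ J*rho_mud + (1 - J)*rho_formation and invert for the
    formation density wherever the borehole is enlarged."""
    j = geometric_factor(caliper_in, bit_size_in)
    return (rho_log - j * rho_mud) / np.clip(1.0 - j, 1e-3, None)

def tie_correlation(real_trace, synthetic_trace):
    """Correlation coefficient used to score the well-to-seismic tie."""
    return np.corrcoef(real_trace, synthetic_trace)[0, 1]
```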

Testing is essential in data warehouse systems for decision making because the accuracy, validity, and correctness of the data depend on it. Considering the characteristics and complexity of data warehouses, this paper aims to show the scope of automated testing in assuring reliable data warehouse solutions. First, we developed a data set generator for creating synthetic but near-real data; then, with the help of a hand-coded Extraction, Transformation and Loading (ETL) routine, anomalies in the synthesized data were classified. To assure the quality of data in a data warehouse and to illustrate the importance of Extraction, Transformation and Loading, several important test cases were identified. After that, to ensure data quality, automated testing procedures were embedded in the hand-coded ETL routine. Statistical analysis revealed a substantial enhancement in data quality when the automated testing procedures were applied, reinforcing that automated testing gives promising results for data warehouse quality. For effective and easy maintenance of distributed data, a novel architecture was proposed. Although the desired results of this research were achieved and the objectives are promising, the results still need to be validated in a real-life environment, as this research was carried out in a simulated environment, which may not always reflect real-life behaviour. Hence, the full potential of the proposed architecture cannot be seen until it is deployed to manage real, globally distributed data.
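
A minimal sketch of what automated data-quality checks embedded in a hand-coded ETL routine could look like, in the spirit of the paper; the table names, columns, and rules are hypothetical, not taken from the study.

```python
# Illustrative post-load data-quality checks for a toy warehouse in SQLite.
import sqlite3

CHECKS = {
    "row_count_matches_source":
        "SELECT (SELECT COUNT(*) FROM staging_sales) = (SELECT COUNT(*) FROM dw_sales)",
    "no_null_business_keys":
        "SELECT COUNT(*) = 0 FROM dw_sales WHERE customer_id IS NULL",
    "no_orphan_foreign_keys":
        "SELECT COUNT(*) = 0 FROM dw_sales s "
        "LEFT JOIN dim_customer c ON s.customer_id = c.customer_id "
        "WHERE c.customer_id IS NULL",
}

def run_quality_checks(conn: sqlite3.Connection) -> dict:
    """Run each check after the load step and report pass/fail per rule."""
    results = {}
    for name, sql in CHECKS.items():
        results[name] = bool(conn.execute(sql).fetchone()[0])
    return results
```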


Geophysics ◽  
2016 ◽  
Vol 81 (4) ◽  
pp. U25-U38 ◽  
Author(s):  
Nuno V. da Silva ◽  
Andrew Ratcliffe ◽  
Vetle Vinje ◽  
Graham Conroy

Parameterization lies at the center of anisotropic full-waveform inversion (FWI) with multiparameter updates. This is because FWI aims to update the long and short wavelengths of the perturbations. Thus, it is important that the parameterization accommodates this. Recently, there has been an intensive effort to determine the optimal parameterization, centering the fundamental discussion mainly on the analysis of radiation patterns for each one of these parameterizations, and aiming to determine which is best suited for multiparameter inversion. We have developed a new parameterization in the scope of FWI, based on the concept of kinematically equivalent media, as originally proposed in other areas of seismic data analysis. Our analysis is also based on radiation patterns, as well as the relation between the perturbation of this set of parameters and perturbation in traveltime. The radiation pattern reveals that this parameterization combines some of the characteristics of parameterizations with one velocity and two Thomsen’s parameters and parameterizations using two velocities and one Thomsen’s parameter. The study of perturbation of traveltime with perturbation of model parameters shows that the new parameterization is less ambiguous when relating these quantities in comparison with other more commonly used parameterizations. We have concluded that our new parameterization is well-suited for inverting diving waves, which are of paramount importance to carry out practical FWI successfully. We have demonstrated that the new parameterization produces good inversion results with synthetic and real data examples. In the latter case of the real data example from the Central North Sea, the inverted models show good agreement with the geologic structures, leading to an improvement of the seismic image and flatness of the common image gathers.
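
A minimal sketch of the kind of trade-off the abstract discusses, assuming a standard VTI description in Thomsen parameters and converting it to velocity-like quantities that control the kinematics of diving waves; this illustrates kinematically oriented parameterizations in general and is not the authors' specific choice.

```python
# Standard Thomsen relations for a VTI medium (weak-anisotropy convention).
import numpy as np

def thomsen_to_velocities(vp0, delta, epsilon):
    v_nmo = vp0 * np.sqrt(1.0 + 2.0 * delta)       # short-spread NMO velocity
    v_hor = vp0 * np.sqrt(1.0 + 2.0 * epsilon)     # horizontal P velocity
    eta = (epsilon - delta) / (1.0 + 2.0 * delta)  # anellipticity
    return v_nmo, v_hor, eta

print(thomsen_to_velocities(vp0=3000.0, delta=0.05, epsilon=0.15))
```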


Geophysics ◽  
2013 ◽  
Vol 78 (2) ◽  
pp. G15-G24 ◽  
Author(s):  
Pejman Shamsipour ◽  
Denis Marcotte ◽  
Michel Chouteau ◽  
Martine Rivest ◽  
Abderrezak Bouchedda

The flexibility of geostatistical inversions in geophysics is limited by the use of stationary covariances, which, implicitly and mostly for mathematical convenience, assumes statistical homogeneity of the studied field. For fields showing sharp contrasts due, for example, to faults or folds, an approach based on the use of nonstationary covariances for cokriging inversion was developed. The approach was tested on two synthetic cases and one real data set. Inversion results based on the nonstationary covariance were compared to the results from the stationary covariance for two synthetic models. The nonstationary covariance better recovered the known synthetic models. With the real data set, the nonstationary assumption resulted in a better match with the known surface geology.
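
A hedged sketch of the key ingredient such an approach relies on: a covariance whose range varies with position. The Paciorek-Schervish form below is an illustrative, valid nonstationary kernel, not necessarily the one used in the paper.

```python
# Nonstationary Gaussian covariance with a locally varying range (1D example).
import numpy as np

def nonstationary_cov(x, length_scales, sigma2=1.0):
    """Covariance matrix for 1D positions x with a position-dependent range."""
    xi, xj = np.meshgrid(x, x, indexing="ij")
    li, lj = np.meshgrid(length_scales, length_scales, indexing="ij")
    norm = np.sqrt(2.0 * li * lj / (li**2 + lj**2))
    return sigma2 * norm * np.exp(-((xi - xj) ** 2) / (li**2 + lj**2))

x = np.linspace(0.0, 100.0, 50)
ranges = np.where(x < 50.0, 5.0, 20.0)   # e.g. a shorter range on one side of a fault
C = nonstationary_cov(x, ranges)
```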


2012 ◽  
Vol 82 (9) ◽  
pp. 1615-1629 ◽  
Author(s):  
Bhupendra Singh ◽  
Puneet Kumar Gupta

1994 ◽  
Vol 1 (2/3) ◽  
pp. 182-190 ◽  
Author(s):  
M. Eneva

Abstract. The use of finite data sets and study volumes of limited size may result in significant spurious effects when estimating the scaling properties of various physical processes. These effects are examined with an example featuring the spatial distribution of induced seismic activity in Creighton Mine (northern Ontario, Canada). The events studied in the present work occurred during a three-month period, March-May 1992, within a volume of approximate size 400 x 400 x 180 m³. Two sets of microearthquake locations are studied: Data Set 1 (14,338 events) and Data Set 2 (1654 events). Data Set 1 includes the more accurately located events and amounts to about 30 per cent of all recorded data. Data Set 2 represents the portion of the first data set formed by the most accurately located and strongest microearthquakes. The spatial distribution of events in the two data sets is examined for scaling behaviour using the method of generalized correlation integrals featuring various moments q. From these, generalized correlation dimensions are estimated using the slope method. Similar estimates are made for randomly generated point sets using the same numbers of events and the same study volumes as for the real data. Uniform and monofractal random distributions are used for these simulations. In addition, samples from the real data are randomly extracted and their dimension spectra are examined as well. The spectra for the uniform and monofractal random generations show spurious multifractality due only to the use of finite numbers of data points and the limited size of the study volume. Comparing these with the spectra of dimensions for Data Set 1 and Data Set 2 allows us to estimate the bias likely to be present in the estimates for the real data. The strong multifractality suggested by the spectrum for Data Set 2 appears to be largely spurious; the spatial distribution, while different from uniform, could originate from a monofractal process. The spatial distribution of microearthquakes in Data Set 1 is either monofractal as well, or only weakly multifractal. In all similar studies, comparisons of results from real data and simulated point sets may help distinguish between genuine and artificial multifractality, without necessarily resorting to large numbers of data.
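
A minimal sketch of the generalized correlation integral approach described above: C_q(r) is computed over a range of radii and the correlation dimension D_q is estimated from the slope of log C_q versus log r. The normalization and the uniform surrogate data are illustrative assumptions; the author's exact implementation may differ.

```python
# Generalized correlation dimension D_q from the slope method (toy surrogate data).
import numpy as np

def correlation_dimension(points, q, radii):
    """points: (N, 3) event locations; returns a slope estimate of D_q."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    log_c = []
    for r in radii:
        counts = (d < r).sum(axis=1) - 1          # neighbours of each event within r
        p = counts / (n - 1)
        if q == 1:
            c = np.exp(np.mean(np.log(np.clip(p, 1e-12, None))))
        else:
            c = np.mean(p ** (q - 1)) ** (1.0 / (q - 1))
        log_c.append(np.log(c))
    return np.polyfit(np.log(radii), log_c, 1)[0]  # slope ~ D_q over this radius range

rng = np.random.default_rng(0)
events = rng.uniform(0, [400, 400, 180], size=(1000, 3))   # uniform random surrogate
print(correlation_dimension(events, q=2, radii=np.logspace(1.0, 2.0, 10)))
```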


Geophysics ◽  
2014 ◽  
Vol 79 (1) ◽  
pp. M1-M10 ◽  
Author(s):  
Leonardo Azevedo ◽  
Ruben Nunes ◽  
Pedro Correia ◽  
Amílcar Soares ◽  
Luis Guerreiro ◽  
...  

Due to the nature of seismic inversion problems, there are multiple possible solutions that can equally fit the observed seismic data while diverging from the real subsurface model. Consequently, it is important to assess how inverse-impedance models are converging toward the real subsurface model. For this purpose, we evaluated a new methodology that combines the multidimensional scaling (MDS) technique with an iterative geostatistical elastic seismic inversion algorithm. The geostatistical inversion algorithm inverted partial angle stacks directly for acoustic and elastic impedance (AI and EI) models. It was based on a genetic algorithm in which the model perturbation at each iteration was performed by means of stochastic sequential simulation. To assess the reliability and convergence of the inverted models at each step, the simulated models can be projected into a metric space computed by MDS. This projection allowed distinguishing similar from variable models and assessing the convergence of inverted models toward the real impedance ones. The geostatistical inversion results of a synthetic data set, in which the real AI and EI models are known, were plotted in this metric space along with the known impedance models. We applied the same principle to a real data set using a cross-validation technique. These examples revealed that MDS is a valuable tool for evaluating the convergence of the inverse methodology and the impedance model variability at each iteration of the inversion process. In particular, the geostatistical inversion algorithm we evaluated retrieves reliable impedance models while still producing a set of simulated models with considerable variability.
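
A minimal sketch of the MDS projection step described above, assuming Euclidean distances between flattened impedance models and a classical (eigendecomposition-based) MDS; the distance measure and the random ensemble are illustrative stand-ins.

```python
# Project an ensemble of simulated impedance models into a 2D metric space.
import numpy as np

def classical_mds(models, n_dims=2):
    """models: (n_models, n_samples) flattened impedance traces/volumes."""
    d = np.linalg.norm(models[:, None, :] - models[None, :, :], axis=-1)  # pairwise distances
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    b = -0.5 * j @ (d ** 2) @ j                  # double-centered squared distances
    eigval, eigvec = np.linalg.eigh(b)
    order = np.argsort(eigval)[::-1][:n_dims]
    return eigvec[:, order] * np.sqrt(np.clip(eigval[order], 0.0, None))

rng = np.random.default_rng(1)
ensemble = rng.normal(size=(30, 500))            # e.g. 30 simulated AI models per iteration
coords = classical_mds(ensemble)                 # 2D coordinates, one point per model
```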


Geophysics ◽  
1990 ◽  
Vol 55 (5) ◽  
pp. 527-538 ◽  
Author(s):  
E. Crase ◽  
A. Pica ◽  
M. Noble ◽  
J. McDonald ◽  
A. Tarantola

Nonlinear elastic waveform inversion has advanced to the point where it is now possible to invert real multiple‐shot seismic data. The iterative gradient algorithm that we employ can readily accommodate robust minimization criteria which tend to handle many types of seismic noise (noise bursts, missing traces, etc.) better than the commonly used least‐squares minimization criteria. Although there are many robust criteria from which to choose, we have tested only a few. In particular, the Cauchy criterion and the hyperbolic secant criterion perform very well in both noise‐free and noise‐added inversions of numerical data. Although the real data set, which we invert using the sech criterion, is marine (pressure sources and receivers) and is very much dominated by unconverted P waves, we can, for the most part, resolve the short wavelengths of both P impedance and S impedance. The long wavelengths of velocity (the background) are assumed known. Because we are deriving nearly all impedance information from unconverted P waves in this inversion, data acquisition geometry must have sufficient multiplicity in subsurface coverage and a sufficient range of offsets, just as in amplitude‐versus‐offset (AVO) inversion. However, AVO analysis is implicitly contained in elastic waveform inversion algorithms as part of the elastic wave equation upon which the algorithms are based. Because the real‐data inversion is so large—over 230,000 unknowns (340,000 when density is included) and over 600,000 data values—most statistical analyses of parameter resolution are not feasible. We qualitatively verify the resolution of our results by inverting a numerical data set which has the same acquisition geometry and corresponding long wavelengths of velocity as the real data, but has semirandom perturbations in the short wavelengths of P and S impedance.
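
A short sketch contrasting the least-squares misfit with the robust Cauchy and hyperbolic-secant (sech) criteria mentioned above: the robust criteria grow much more slowly for large residuals, which is why noise bursts and missing traces are down-weighted. The scale handling and constant offsets are illustrative choices.

```python
# Residual misfit functions: least squares vs. robust Cauchy and sech criteria.
import numpy as np

def misfit(residual, criterion="l2", scale=1.0):
    r = residual / scale
    if criterion == "l2":
        return 0.5 * r**2
    if criterion == "cauchy":
        return np.log(1.0 + r**2)          # negative log of a Cauchy density
    if criterion == "sech":
        return np.logaddexp(r, -r)         # numerically stable log(cosh r) + const
    raise ValueError(criterion)

r = np.array([0.1, 1.0, 10.0, 100.0])
for c in ("l2", "cauchy", "sech"):
    print(c, misfit(r, c))
```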


Geophysics ◽  
2004 ◽  
Vol 69 (1) ◽  
pp. 212-221 ◽  
Author(s):  
Kevin P. Dorrington ◽  
Curtis A. Link

Neural-network prediction of well-log data using seismic attributes is an important reservoir characterization technique because it allows extrapolation of log properties throughout a seismic volume. The strength of neural networks in pattern recognition is key to their success in delineating the complex nonlinear relationship between seismic attributes and log properties. We have found that good neural-network generalization of well-log properties can be accomplished using a small number of seismic attributes. This study presents a new method for seismic attribute selection using a genetic-algorithm approach. The genetic-algorithm attribute selection uses neural-network training results to choose the optimal number and type of seismic attributes for porosity prediction. We apply the genetic-algorithm attribute-selection method to the C38 reservoir in the Stratton field 3D seismic data set. Eleven wells with porosity logs are used to train a neural network using genetic-algorithm-selected attribute combinations. A histogram of 50 genetic-algorithm attribute-selection runs indicates that amplitude-based attributes are the best porosity predictors for this data set. On average, the genetic algorithm selected four attributes for optimal porosity-log prediction, although the number of attributes chosen ranged from one to nine. A predicted porosity volume was generated using the best genetic-algorithm attribute combination based on an average cross-validation correlation coefficient. This volume suggests a network of channel sands within the C38 reservoir.
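
A minimal sketch of the general scheme, assuming binary masks over attributes evolved by a simple genetic algorithm and scored by the cross-validated correlation of a small neural network predicting porosity; the synthetic data, network size, and GA settings are illustrative assumptions, not the paper's configuration.

```python
# Toy genetic-algorithm attribute selection scored by neural-network cross-validation.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_samples, n_attr = 200, 12                    # synthetic stand-in for the well data
X = rng.normal(size=(n_samples, n_attr))
porosity = X[:, 0] - 0.5 * X[:, 3] + 0.1 * rng.normal(size=n_samples)

def fitness(mask):
    if not mask.any():
        return -1.0
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    pred = cross_val_predict(net, X[:, mask], porosity, cv=5)
    return np.corrcoef(pred, porosity)[0, 1]   # cross-validation correlation

pop = rng.random((20, n_attr)) < 0.3           # initial population of attribute masks
for generation in range(15):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]    # keep the best half
    children = parents.copy()
    cut = rng.integers(1, n_attr, size=len(children))
    for i, c in enumerate(cut):                # one-point crossover
        children[i, c:] = parents[(i + 1) % len(parents), c:]
    children ^= rng.random(children.shape) < 0.05   # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected attributes:", np.flatnonzero(best))
```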


2021 ◽  
Vol 11 (4) ◽  
pp. 1643-1666
Author(s):  
Ahmed M. Elatrash ◽  
Mohammad A. Abdelwahhab ◽  
Hamdalla A. Wanas ◽  
Samir I. El-Naggar ◽  
Hasan M. Elshayeb

Abstract. The quality of a hydrocarbon reservoir is strongly controlled by the depositional and diagenetic facies of the given rock. Therefore, building a precise geological/depositional model of the reservoir rock is critical to reducing risk while exploring for petroleum. Ultimate reservoir characterization for constructing an adequate geological model remains challenging due to the general insufficiency of data, particularly when integrating them through combined approaches. In this paper, we integrated seismic geomorphology, sequence stratigraphy, and sedimentology to efficiently characterize the Upper Miocene, incised-valley-fill Abu Madi Formation in the South Mansoura Area (Onshore Nile Delta, Egypt). The Abu Madi Formation in the study area is a SW-NE-trending reservoir fairway consisting of alternating sequences of shales and channel-fill sandstones of Messinian age, built by River Nile sediment supply during the Messinian Salinity Crisis (MSC). Hence, it comprises a range of continental to coastal depositional facies. We utilized a data set including seismic data, a complete set of well logs, and core samples. We performed seismic attribute analysis, particularly spectral decomposition, over stratal slices to outline the geometry of the incised-valley fill. Moreover, well-log analysis was carried out to distinguish facies and lithofacies associations and to define their paleo-depositional environments; the well log-based sequence stratigraphic setting was examined beforehand as well. Furthermore, mineralogical composition and post-depositional diagenesis were identified through petrographic analysis of thin sections taken from the core samples. Linking these approaches and their impact on reservoir quality determination was intended to shed light on a successful integrated reservoir characterization, capable of giving robust insight into the depositional facies and the associated petroleum potential. The results show that the MSC Abu Madi Formation constitutes a third-order depositional sequence of fluvial to estuarine units infilling the Eonile canyon, with five sedimentary facies associations (overbank mud, fluvial channel complex, estuarine mud, tidal channels, and tidal bars) trending SW-NE with a Y-shaped channel geometry. The fluvial facies association (zones 1 and 3) is rich in coarse-grained sandstones deposited in a subaerial setting, has significantly higher reservoir quality, and acts as the best reservoir facies of the area. Although dissolution of detrital components, mainly feldspars, created secondary porosity and improved the reservoir quality of the MSC Abu Madi sediments, the continental fluvial channel facies represent the main fluid-flow conduits, where marine influence is limited.
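
A brief sketch of the spectral-decomposition step mentioned above: a short-time Fourier transform of each trace yields iso-frequency amplitude maps that can highlight channel geometry on stratal slices. The trace, sampling rate, and target frequency below are hypothetical, chosen only to make the example self-contained.

```python
# Iso-frequency amplitude extraction from a single synthetic trace.
import numpy as np
from scipy.signal import stft

fs = 250.0                                   # samples per second (4 ms sampling)
t = np.arange(0, 2.0, 1.0 / fs)
trace = np.sin(2 * np.pi * 30 * t) * np.exp(-((t - 1.0) ** 2) / 0.02)

freqs, times, spec = stft(trace, fs=fs, nperseg=64, noverlap=48)
amp_30hz = np.abs(spec[np.argmin(np.abs(freqs - 30.0)), :])  # 30 Hz amplitude vs. time
```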


2021 ◽  
Author(s):  
Xi Cheng

Abstract. To solve the problem of the low accuracy of traditional travel-route recommendation algorithms, a travel route recommendation algorithm based on interest theme and distance matching is proposed in this paper. First, users' real historical travel footprints are obtained through analysis. Then, the user's interest-theme and distance-matching preferences are derived from the user's stay at each scenic spot. Finally, an optimal travel route calculation method is designed for a given travel time limit, starting point, and end point. Experiments on a real data set from the Flickr social network showed that the proposed algorithm achieves higher precision and recall than the traditional algorithm that considers only the interest theme and the algorithm that considers only distance matching.
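
A minimal sketch of the core scoring idea described above, assuming a weighted combination of interest-theme match and distance match subject to a travel-time budget; the weighting and the feasibility check are illustrative, not the paper's exact algorithm.

```python
# Score a candidate route by interest-theme match and distance match under a time budget.
import numpy as np

def route_score(theme_user, theme_spots, stay_times, travel_times,
                time_budget, w_theme=0.6, w_dist=0.4):
    """theme_user: preference vector; theme_spots: (n_spots, n_themes) matrix;
    stay_times: per-spot stay; travel_times: per-hop travel between spots."""
    total_time = stay_times.sum() + travel_times.sum()
    if total_time > time_budget:
        return -np.inf                                  # infeasible route
    theme_match = theme_spots @ theme_user              # per-spot interest match
    dist_match = 1.0 / (1.0 + travel_times)             # prefer short hops
    return w_theme * theme_match.mean() + w_dist * dist_match.mean()
```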

