Automated phase attribute-based picking applied to reflection seismics

Geophysics ◽  
2016 ◽  
Vol 81 (2) ◽  
pp. V141-V150 ◽  
Author(s):  
Emanuele Forte ◽  
Matteo Dossi ◽  
Michele Pipan ◽  
Anna Del Ben

We have applied an attribute-based autopicking algorithm to reflection seismics with the aim of reducing the influence of the user’s subjectivity on the picking results and making interpretation faster than manual and semiautomated techniques. Our picking procedure uses the cosine of the instantaneous phase to automatically detect and mark as a horizon any recorded event characterized by lateral phase continuity. A patching procedure, which exploits horizon parallelism, can be used to connect consecutive horizons marking the same event but separated by noise-related gaps. The picking process marks all coherent events regardless of their reflection strength; therefore, a large number of independent horizons can be constructed. To facilitate interpretation, horizons marking different phases of the same reflection can be automatically grouped together, and specific horizons from each reflection can be selected using different possible methods. In the phase method, the algorithm reconstructs the reflected wavelets by averaging the cosine of the instantaneous phase along each horizon. The resulting wavelets are then locally analyzed and compared through crosscorrelation, allowing the recognition and selection of specific reflection phases. In cases where the reflected wavelets cannot be recovered due to shape-altering processing or a low signal-to-noise ratio, the energy method uses the reflection strength to group together subparallel horizons within the same energy package and to select those satisfying either energy or arrival time criteria. These methods can be applied automatically to all the picked horizons or to horizons individually selected by the interpreter for specific analysis. We show examples of application to 2D reflection seismic data sets in complex geologic and stratigraphic conditions, critically reviewing the performance of the whole process.
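As an illustration of the core attribute, the sketch below (not the authors' implementation; gap patching, wavelet reconstruction, and the phase/energy grouping methods are omitted) computes the cosine of the instantaneous phase with a Hilbert transform and greedily tracks its local maxima from trace to trace to form horizons.

```python
# Minimal sketch, assuming a 2D seismic section as a NumPy array (samples x traces).
import numpy as np
from scipy.signal import hilbert

def cos_inst_phase(section):
    """Cosine of the instantaneous phase, computed per trace from the analytic signal."""
    analytic = hilbert(section, axis=0)
    return np.cos(np.angle(analytic))

def pick_horizons(section, max_jump=2):
    """Greedy lateral tracking of zero-phase samples (local maxima of cos(phase)).

    Simplification: horizons are seeded only on the first trace and stop at the
    first noise-related gap; the published algorithm is more elaborate.
    """
    c = cos_inst_phase(section)
    n_samples, n_traces = c.shape
    # candidate picks per trace: local maxima of cos(instantaneous phase)
    peaks = [np.where((c[1:-1, j] > c[:-2, j]) & (c[1:-1, j] > c[2:, j]))[0] + 1
             for j in range(n_traces)]
    horizons = []
    for t0 in peaks[0]:
        horizon, t = [t0], t0
        for j in range(1, n_traces):
            cand = peaks[j][np.abs(peaks[j] - t) <= max_jump]
            if cand.size == 0:
                break                      # gap: stop tracking this horizon
            t = cand[np.argmin(np.abs(cand - t))]
            horizon.append(t)
        horizons.append(horizon)
    return horizons
```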

Geophysics ◽  
1958 ◽  
Vol 23 (3) ◽  
pp. 557-573 ◽  
Author(s):  
M. Pieuchot ◽  
H. Richard

The small signal‐to‐noise ratio encountered in the Sahara required the development of special techniques. The gentle dips and low frequencies permitted the use of a pattern of 100 shot holes recorded by an array of 100 or more geophones per trace with the linear dimensions of the arrays of the order of 100 m. The large structural dimensions allowed the compositing of as many as 5 records into a single trace. Seismic reflection exploration was made economically feasible by the use of pneumatic hammers for drilling and the less expensive nitrates for explosives. The experimental procedures leading to the selection of the techniques are described.


Geophysics ◽  
2013 ◽  
Vol 78 (1) ◽  
pp. O1-O7 ◽  
Author(s):  
Wen-kai Lu ◽  
Chang-Kai Zhang

The instantaneous phase estimated by the Hilbert transform (HT) is susceptible to noise; we propose a robust approach for estimating the instantaneous phase in noisy situations. The main procedure of the proposed method is to apply an adaptive filter in the time-frequency domain and then calculate the analytic signal. By assuming that frequency components with higher amplitudes have higher signal-to-noise ratios, a zero-phase adaptive filter, constructed from the time-frequency amplitude spectrum, enhances the frequency components with higher amplitudes and suppresses those with lower amplitudes. The estimation of instantaneous frequency, which is defined as the derivative of the instantaneous phase, is also improved by the proposed robust instantaneous phase estimation method. Synthetic and field data sets are used to demonstrate the performance of the proposed method for estimating the instantaneous phase and frequency, compared with the HT and short-time Fourier transform methods.
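A minimal sketch of the general idea, assuming SciPy's STFT/ISTFT and a simple amplitude-based weighting; the exact filter construction in the paper may differ.

```python
# Hedged sketch: weight the STFT by its own normalised amplitude (a zero-phase
# adaptive filter) to favour high-SNR components, invert, then take the
# instantaneous phase and frequency of the analytic signal.
import numpy as np
from scipy.signal import stft, istft, hilbert

def robust_instantaneous_phase(x, fs, nperseg=64, power=1.0):
    f, t, S = stft(x, fs=fs, nperseg=nperseg)
    amp = np.abs(S)
    # weights built from the time-frequency amplitude spectrum; 'power' controls
    # how strongly weak (presumably noisy) components are suppressed
    weights = (amp / (amp.max(axis=0, keepdims=True) + 1e-12)) ** power
    _, x_filt = istft(S * weights, fs=fs, nperseg=nperseg)
    x_filt = x_filt[:len(x)]
    phase = np.angle(hilbert(x_filt))                     # instantaneous phase
    inst_freq = np.gradient(np.unwrap(phase)) * fs / (2 * np.pi)  # Hz
    return phase, inst_freq
```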


1995 ◽  
Vol 31 (2) ◽  
pp. 193-204 ◽  
Author(s):  
Koen Grijspeerdt ◽  
Peter Vanrolleghem ◽  
Willy Verstraete

A comparative study of several recently proposed one-dimensional sedimentation models has been made. This was achieved by fitting the models to steady-state and dynamic concentration profiles obtained in a down-scaled secondary decanter. The models were evaluated with several a posteriori model selection criteria. Since the purpose of the modelling task is to perform on-line simulations, the calculation time was used as one of the selection criteria. Finally, the practical identifiability of the models for the available data sets was also investigated. It could be concluded that the model of Takács et al. (1991) gave the most reliable results.
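For illustration of a posteriori model comparison, the hypothetical sketch below fits two candidate settling-velocity functions to observed data and compares them by AIC and fit time; the Takács-style double exponential is a commonly cited form, not necessarily the paper's exact formulation, and the data names are placeholders.

```python
# Illustrative sketch only: least-squares fit, information criterion, and timing
# as simple model selection criteria.
import time
import numpy as np
from scipy.optimize import curve_fit

def vesilind(X, v0, n):                  # single-exponential settling model
    return v0 * np.exp(-n * X)

def takacs(X, v0, rh, rp):               # double-exponential (Takács-type) model
    return np.clip(v0 * (np.exp(-rh * X) - np.exp(-rp * X)), 0.0, None)

def aic(residuals, n_params):
    n = residuals.size
    return n * np.log(np.sum(residuals**2) / n) + 2 * n_params

def compare(models, X, v_obs):
    for name, (f, p0) in models.items():
        t0 = time.perf_counter()
        popt, _ = curve_fit(f, X, v_obs, p0=p0, maxfev=10000)
        dt = time.perf_counter() - t0
        print(f"{name}: AIC={aic(v_obs - f(X, *popt), len(popt)):.1f}, "
              f"fit time={dt * 1e3:.1f} ms")

# usage with hypothetical concentration/settling-velocity observations X, v_obs:
# compare({"Vesilind": (vesilind, [8, 0.5]), "Takacs": (takacs, [8, 0.4, 2.5])}, X, v_obs)
```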


Author(s):  
Christian Luksch ◽  
Lukas Prost ◽  
Michael Wimmer

We present a real-time rendering technique for photometric polygonal lights. Our method uses a numerical integration technique based on a triangulation to calculate noise-free diffuse shading. We include a dynamic point in the triangulation that provides a continuous near-field illumination resembling the shape of the light emitter and its characteristics. We evaluate the accuracy of our approach with a diverse selection of photometric measurement data sets in a comprehensive benchmark framework. Furthermore, we provide an extension for specular reflection on surfaces with arbitrary roughness that facilitates the use of existing real-time shading techniques. Our technique is easy to integrate into real-time rendering systems and extends the range of possible applications with photometric area lights.
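A much-simplified sketch of the underlying integration step: the polygon is fan-triangulated and each triangle's diffuse contribution is approximated with a single centroid sample, assuming a Lambertian receiver and a user-supplied photometric intensity function. The paper's dynamic triangulation point, noise-free integration scheme, and specular extension are not reproduced here.

```python
import numpy as np

def triangle_solid_angle(a, b, c):
    """Solid angle subtended at the origin by triangle (a, b, c) (Van Oosterom-Strackee)."""
    num = np.abs(np.dot(a, np.cross(b, c)))
    la, lb, lc = np.linalg.norm(a), np.linalg.norm(b), np.linalg.norm(c)
    den = la * lb * lc + np.dot(a, b) * lc + np.dot(a, c) * lb + np.dot(b, c) * la
    return 2.0 * np.arctan2(num, den)

def diffuse_from_polygon(shading_point, normal, polygon, intensity):
    """Approximate diffuse irradiance from a photometric polygonal light.

    `intensity(direction)` is an assumed callable returning the emitter's
    photometric intensity toward a unit direction (e.g. from goniometric data).
    """
    verts = [np.asarray(v, float) - shading_point for v in polygon]
    total = 0.0
    for i in range(1, len(verts) - 1):                 # fan triangulation
        a, b, c = verts[0], verts[i], verts[i + 1]
        omega = triangle_solid_angle(a, b, c)          # triangle's solid angle
        centroid = (a + b + c) / 3.0
        d = centroid / np.linalg.norm(centroid)
        cos_theta = max(np.dot(normal, d), 0.0)        # receiver cosine term
        total += intensity(-d) * cos_theta * omega     # single sample per triangle
    return total
```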


2017 ◽  
Vol 21 (9) ◽  
pp. 4747-4765 ◽  
Author(s):  
Clara Linés ◽  
Micha Werner ◽  
Wim Bastiaanssen

Abstract. The implementation of drought management plans contributes to reducing the wide range of adverse impacts caused by water shortage. A crucial element of the development of drought management plans is the selection of appropriate indicators and their associated thresholds to detect drought events and monitor their evolution. Drought indicators should be able to detect emerging drought processes that will lead to impacts with sufficient lead time to allow measures to be taken effectively. However, in the selection of appropriate drought indicators, the connection to the final impacts is often disregarded. This paper explores the utility of remotely sensed data sets to detect early stages of drought at the river basin scale and to determine how much time can be gained to inform operational land and water management practices. Six remote sensing data sets with different spectral origins and measurement frequencies are considered, complemented by a group of classical in situ hydrologic indicators. Their predictive power to detect past drought events is tested in the Ebro Basin. Qualitative (binary information based on media records) and quantitative (crop yields) data on drought events and impacts spanning a period of 12 years are used as a benchmark in the analysis. Results show that early signs of drought impacts can be detected up to 6 months before impacts are reported in newspapers, with the best correlation-anticipation relationships found for the standardised precipitation index (SPI), the normalised difference vegetation index (NDVI) and evapotranspiration (ET). Soil moisture (SM) and land surface temperature (LST) also offer good anticipation but with weaker correlations, while gross primary production (GPP) presents moderate positive correlations only for some of the rain-fed areas. Although classical hydrological information from water levels and water flows provided better anticipation than the remote sensing indicators in most of the areas, their correlations were weaker. The indicators show consistent behaviour with respect to the different levels of crop yield in rain-fed areas among the analysed years, with SPI, NDVI and ET again providing the strongest correlations. Overall, the results confirm the ability of remote sensing products to anticipate reported drought impacts, making them a useful source of information to support drought management decisions.
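As a hedged illustration of how anticipation can be quantified, the sketch below correlates a lagged indicator series with an impact series (e.g., monthly counts of reported drought impacts); the study's actual statistical procedure may differ, and the series names are placeholders.

```python
import numpy as np

def lagged_correlations(indicator, impacts, max_lead=6):
    """Pearson r between indicator values `lead` months before impacts, for each lead."""
    indicator = np.asarray(indicator, float)
    impacts = np.asarray(impacts, float)
    out = {}
    for lead in range(max_lead + 1):
        x = indicator[:len(indicator) - lead] if lead else indicator
        y = impacts[lead:]
        out[lead] = np.corrcoef(x, y)[0, 1]
    return out

# e.g. pick the lead time with the strongest correlation-anticipation trade-off:
# corrs = lagged_correlations(spi_series, impact_series)
# best_lead = max(corrs, key=lambda k: abs(corrs[k]))
```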


2014 ◽  
Vol 2014 ◽  
pp. 1-12 ◽  
Author(s):  
Jun Jiang ◽  
Lianping Guo ◽  
Kuojun Yang ◽  
Huiqing Pan

Vertical resolution is an essential specification of a digital storage oscilloscope (DSO), and the key to improving it is to increase the number of digitizing bits and to lower noise. Averaging is a typical method to improve the signal-to-noise ratio (SNR) and the effective number of bits (ENOB). However, existing averaging algorithms are restricted by the repetitiveness of the signal and are affected by gross quantization errors, so their ability to suppress noise and improve resolution is limited. This paper proposes an information entropy-based data fusion and average-based decimation filtering algorithm, which builds on conventional averaging and incorporates the relevant theory of information entropy, to improve oscilloscope resolution. For a single acquired signal under oversampling, gross quantization errors are eliminated by exploiting the maximum entropy of the sample data; the valid samples are then fused, and the remaining noise is filtered by average-based decimation. No subjective assumptions or constraints are imposed on the signal under test throughout the process, and the oscilloscope's analog bandwidth at the actual sampling rate is not affected.
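A hedged sketch of the averaging-and-decimation part of the idea; a simple median-deviation test stands in for the paper's maximum-entropy gross-error rejection, which is an assumption of this example.

```python
import numpy as np

def average_decimate(samples, factor=16, k=3.0):
    """Decimate oversampled data by `factor`, averaging only non-outlier samples per block.

    Rejecting gross errors before averaging and then decimating raises the SNR
    (and hence the effective resolution) of a single acquisition.
    """
    samples = np.asarray(samples, float)
    n_blocks = len(samples) // factor
    blocks = samples[:n_blocks * factor].reshape(n_blocks, factor)
    med = np.median(blocks, axis=1, keepdims=True)
    mad = np.median(np.abs(blocks - med), axis=1, keepdims=True) + 1e-12
    keep = np.abs(blocks - med) <= k * mad      # reject gross errors within each block
    masked = np.where(keep, blocks, np.nan)
    return np.nanmean(masked, axis=1)           # averaged, decimated output
```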


2020 ◽  
Vol 24 (3) ◽  
pp. 251-264
Author(s):  
Paula Lacomba Montes ◽  
Alejandro Campos Uribe

This paper reports on the primary school design processes carried out around the 1940s in the County of Hertfordshire in Great Britain, which later evolved into innovative strategies developed by Mary and David Medd at the Ministry of Education from the late 1950s. The whole process, undertaken over more than three decades, reveals a way of breaking with the traditional spatial conception of a school. The survey of the period covered has allowed an in-depth understanding of how learning spaces could be transformed by challenging the conventional school model of closed rooms, suggesting a new way of understanding learning spaces as a group of Centres rather than classrooms. Historians have thoroughly shown the ample scope of this process, which involved many professionals, fostering a true cross-disciplinary endeavour in which the curriculum and the learning spaces were developed in close collaboration. A selection of schools built in the county has been used to typologically analyse how architectural changes began to arise and later flourished at the Ministry of Education. The Medds indeed played a significant role through the development of a design process known as the Built-in variety and the Planning Ingredients. A couple of examples clarify some of these strategies, revealing how the design of educational space could successfully respond to an active way of learning.


Author(s):  
Anastasiia Ivanitska ◽  
Dmytro Ivanov ◽  
Ludmila Zubik

An analysis of the available methods and models for generating recommendations for potential buyers in networked information systems is carried out, with the aim of developing effective advertising selection modules. The effectiveness of machine learning technologies for analysing user preferences, based on processing data on purchases made by users with similar profiles, is substantiated. A recommendation model based on machine learning technology is proposed, its operation is tested on test data sets, and the adequacy of the model is assessed using RMSE. Keywords: behavior prediction; advertising based on similarity; collaborative filtering; matrix factorization; big data; machine learning
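Since the keywords name matrix factorization and RMSE, the sketch below shows a generic SGD-trained matrix factorization recommender evaluated by RMSE; it is an assumption about the general approach, not the authors' module.

```python
import numpy as np

def train_mf(triples, n_users, n_items, k=16, lr=0.01, reg=0.05, epochs=20, seed=0):
    """Factorize a sparse rating matrix from (user, item, rating) triples by SGD."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, k))   # user latent factors
    Q = 0.1 * rng.standard_normal((n_items, k))   # item latent factors
    for _ in range(epochs):
        for u, i, r in triples:
            pu = P[u].copy()
            err = r - pu @ Q[i]                   # prediction error for this rating
            P[u] += lr * (err * Q[i] - reg * pu)
            Q[i] += lr * (err * pu - reg * Q[i])
    return P, Q

def rmse(triples, P, Q):
    """Root-mean-square error of predicted vs. observed ratings."""
    errs = [r - P[u] @ Q[i] for u, i, r in triples]
    return float(np.sqrt(np.mean(np.square(errs))))
```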


Complexity ◽  
2018 ◽  
Vol 2018 ◽  
pp. 1-16 ◽  
Author(s):  
Yiwen Zhang ◽  
Yuanyuan Zhou ◽  
Xing Guo ◽  
Jintao Wu ◽  
Qiang He ◽  
...  

The K-means algorithm is one of the ten classic algorithms in the area of data mining and has been studied by researchers in numerous fields for a long time. However, the value of the clustering number k in the K-means algorithm is not always easy to determine, and the selection of the initial centers is vulnerable to outliers. This paper proposes an improved K-means clustering algorithm called the covering K-means algorithm (C-K-means). The C-K-means algorithm can not only acquire efficient and accurate clustering results but also self-adaptively provide a reasonable number of clusters based on the data features. It includes two phases: the initialization of the covering algorithm (CA) and the Lloyd iteration of the K-means. The first phase executes the CA, which self-organizes and recognizes the number of clusters k based on the similarities in the data; it requires neither the number of clusters to be prespecified nor the initial centers to be manually selected. Therefore, it has a “blind” feature, that is, k is not preselected. The second phase performs the Lloyd iteration based on the results of the first phase. The C-K-means algorithm combines the advantages of CA and K-means. Experiments are carried out on the Spark platform, and the results verify the good scalability of the C-K-means algorithm. This algorithm can effectively solve the problem of large-scale data clustering. Extensive experiments on real data sets show that the accuracy and efficiency of the C-K-means algorithm outperform those of existing algorithms under both sequential and parallel conditions.
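A minimal sketch of the two-phase structure described above; the covering-radius heuristic used for initialization here is an assumption standing in for the paper's CA, and no Spark parallelism is shown.

```python
import numpy as np

def covering_init(X, radius):
    """Phase 1 (stand-in for CA): create a new center whenever a point is uncovered."""
    centers = [X[0]]
    for x in X[1:]:
        if min(np.linalg.norm(x - c) for c in centers) > radius:
            centers.append(x)                    # uncovered point: new cluster center
    return np.array(centers)                     # number of rows = adaptively found k

def lloyd(X, centers, n_iter=100):
    """Phase 2: standard Lloyd iterations starting from the covering centers."""
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(len(centers))])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# usage: labels, centers = lloyd(X, covering_init(X, radius=1.0))
```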

