A DISCUSSION OF STEEP‐DIP SEISMIC COMPUTING METHODS

Geophysics ◽  
1949 ◽  
Vol 14 (2) ◽  
pp. 109-122 ◽  
Author(s):  
R. B. Rice

This paper presents a comparison of the results obtained by the application of several standard computing techniques to the computation of steep‐dip seismic data. Formulas for the horizontal displacements and depths of reflection points are derived for the various methods, assuming velocity to be a parabolic function of depth, and the results obtained for a typical velocity function over a wide range of reflection and stepout times are compared graphically. In addition, the overall effect of applying these methods to the computation of a specific steep‐dip asymmetric structural profile is studied.
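For readers unfamiliar with the straight-path scheme compared in the paper, the sketch below shows the simplest version of such a computation, using a single constant average velocity rather than Rice's parabolic velocity function; the function name and the example values are illustrative only and are not taken from the paper.

```python
import math

def straight_path_reflection_point(t0, dt_dx, v_avg):
    """Minimal straight-path (constant-velocity) dip computation.

    t0    : two-way normal-incidence reflection time at the shotpoint [s]
    dt_dx : surface gradient of two-way reflection time, dT/dx [s/m]
    v_avg : assumed constant average velocity down to the reflector [m/s]

    Returns (updip horizontal displacement, depth) of the reflection point [m].
    """
    sin_dip = v_avg * dt_dx          # from dT/dx = sin(dip) / V for a plane reflector
    sin_dip = max(-1.0, min(1.0, sin_dip))
    dip = math.asin(sin_dip)
    h = v_avg * t0 / 2.0             # perpendicular (slant) distance from shot to reflector
    return h * math.sin(dip), h * math.cos(dip)

# purely illustrative values: t0 = 2.0 s, stepout 0.2 ms/m, average velocity 3000 m/s
print(straight_path_reflection_point(2.0, 0.0002, 3000.0))
```

A curved-path method would replace the constant v_avg with ray tracing through the assumed velocity-depth function, which is precisely the difference the paper quantifies.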

Geophysics ◽  
1950 ◽  
Vol 15 (1) ◽  
pp. 80-93 ◽  
Author(s):  
R. B. Rice

This paper is a continuation of a previous one in which the author gave a comparison of several commonly used steep‐dip seismic computing techniques on the basis of the horizontal displacements and depths of the reflection points they produce, assuming a rather “slow” parabolic increase of velocity with depth. In this paper, the comparison of these methods is extended to the case of a second and considerably “faster” velocity function and to the computation of a reflecting horizon through a fault zone. In addition, two other straight‐path methods and three purely mathematical methods are introduced and compared in the same manner with the curved‐path method.


2004 ◽  
Vol 126 (3) ◽  
pp. 473-481 ◽  
Author(s):  
B. Jacod ◽  
C. H. Venner ◽  
P. M. Lugt

The effect of longitudinal roughness on the friction in EHL contacts is investigated by means of numerical simulations. In the theoretical model the Eyring equation is used to describe the rheological behavior of the lubricant. First the relative friction variation caused by a single harmonic roughness component is computed as a function of the amplitude and wavelength for a wide range of operating conditions. From the results a curve fit formula is derived for the relative friction variation as a function of the out-of-contact geometry of the waviness and a newly derived parameter characterizing the response of the lubricant to pressure variations. Subsequently, the case of a superposition of two harmonic components is considered. It is shown that for the effect on friction such a combined pattern can be represented by a single equivalent wave. The amplitude and the wavelength of the equivalent wave can be determined from a nonlinear relation in terms of the amplitudes and wavelengths of the individual harmonic components. Finally the approach is applied to the prediction of the effect of a real roughness profile (many components) on the friction. From a comparison of the results with full numerical simulations it appears that the simplified approach is quite accurate.
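As background for the rheological model named above, the following is a minimal sketch of how an Eyring fluid limits shear stress, and hence friction, in a lubricated contact. The Barus viscosity-pressure law, the parameter values, and the crude "mean shear over mean pressure" reduction are assumptions made for illustration; they are not the authors' model or parameters.

```python
import numpy as np

def eyring_friction(p, h, du, eta0=1.0e-2, alpha=2.0e-8, tau0=5.0e6):
    """Crude friction-coefficient estimate for an EHL contact with an Eyring fluid.

    p   : pressure over the contact [Pa] (array)
    h   : film thickness at the same points [m] (array)
    du  : sliding speed [m/s]
    eta0, alpha : Barus viscosity-pressure parameters (assumed values)
    tau0        : Eyring stress [Pa] (assumed value)
    """
    eta = eta0 * np.exp(alpha * p)                   # Barus pressure-viscosity law
    tau_newtonian = eta * du / h                     # shear stress a Newtonian fluid would carry
    tau = tau0 * np.arcsinh(tau_newtonian / tau0)    # Eyring law caps the shear stress
    return tau.mean() / p.mean()                     # mean shear / mean pressure ~ friction coefficient

# a smooth Hertzian-like pressure profile and a uniform 200 nm film, for illustration
x = np.linspace(-1.0, 1.0, 201)
p = 1.0e9 * np.sqrt(np.clip(1.0 - x**2, 0.0, None))
print(eyring_friction(p, h=2.0e-7 * np.ones_like(p), du=1.0))
```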


Author(s):  
С.А. Свердлов

The problem of text comprehension, and of methods for assessing it, is fundamental to modern psychology, pedagogy, and linguistics. Many attempts have been made to develop an assessment method that reflects an objective picture of how the reader internalizes the material of a text, while remaining suitable for standardization and for application to a wide range of texts of different content and structure, ideally with minimal expenditure of effort by the evaluator and the reader. This article reviews current approaches to assessing text comprehension, their advantages and disadvantages, and formulates a new, promising assessment method based on the Latent Dirichlet Allocation model. The method then undergoes a validity study in which its results are compared with those of the most commonly used modern assessment methods, and conclusions are drawn about its applicability in real conditions and its prospects across a range of applied problems.
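Since the abstract only names the underlying model, the sketch below shows one way an LDA-based comprehension measure could be set up with scikit-learn: fit a topic model and compare the topic distribution of the source text with that of a reader's written recall. This is an illustration of the general idea, not the author's procedure; in practice the topic model would be trained on a much larger corpus.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def topic_overlap(source_text, recall_text, n_topics=5):
    """Rough comprehension proxy: cosine similarity between LDA topic
    distributions of the source text and the reader's free recall."""
    vec = CountVectorizer(stop_words="english")
    # two documents are used here only to keep the sketch self-contained;
    # a real study would fit the vocabulary and topics on a larger corpus
    X = vec.fit_transform([source_text, recall_text])
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    theta = lda.fit_transform(X)                      # per-document topic weights
    theta = theta / theta.sum(axis=1, keepdims=True)  # normalize to distributions
    a, b = theta
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(topic_overlap("The cell membrane regulates transport of molecules.",
                    "The reader recalls that membranes control what enters cells."))
```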


Author(s):  
Awder Mohammed Ahmed ◽  
Adnan Mohsin Abdulazeez

Multi-label classification addresses problems in which more than one class label is assigned to each instance. Many real-world multi-label classification tasks are high-dimensional due to digital technologies, which degrades the performance of traditional multi-label classifiers. Feature selection is a common and successful approach to this problem: it retains relevant features and eliminates redundant ones to reduce dimensionality. Several feature selection methods have been applied successfully in multi-label learning. Most of them are wrapper methods that employ a multi-label classifier in their search, running a classifier at each step; this incurs a high computational cost, so they suffer from scalability issues. To deal with this issue, filter methods have been introduced that evaluate feature subsets using information-theoretic mechanisms instead of running classifiers. Most existing research and review papers deal with feature selection for single-label data, whereas multi-label classification has recently found a wide range of real-world applications such as image classification, emotion analysis, text mining, and bioinformatics. Moreover, researchers have recently focused on applying swarm intelligence methods to select prominent features of multi-label data. To the best of our knowledge, no review paper covers swarm intelligence-based methods for multi-label feature selection. Thus, in this paper, we provide a comprehensive review of the swarm intelligence and evolutionary computing methods of feature selection proposed for multi-label classification tasks. To this end, we investigate most of the well-known and state-of-the-art methods and categorize them from different perspectives. We then present the main characteristics of existing multi-label feature selection techniques and compare them analytically. We also introduce benchmarks, evaluation measures, and standard datasets to facilitate research in this field. Moreover, we perform experiments to compare existing works, and at the end of this survey we outline challenges, issues, and open problems of this field for researchers to consider in the future.
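To make the kind of method under review concrete, here is a minimal sketch of a wrapper-style binary particle swarm optimization (BPSO) for multi-label feature selection, using scikit-learn's multi-label k-NN and Hamming loss as the fitness. All parameter choices are illustrative, and the sketch does not correspond to any specific method surveyed in the paper.

```python
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import hamming_loss

def fitness(mask, Xtr, Xte, ytr, yte):
    """Hamming loss of a k-NN multi-label classifier trained on the selected features."""
    sel = mask.astype(bool)
    if not sel.any():
        return 1.0
    clf = KNeighborsClassifier(n_neighbors=3).fit(Xtr[:, sel], ytr)
    return hamming_loss(yte, clf.predict(Xte[:, sel]))

def binary_pso(X, y, n_particles=20, n_iter=30, seed=0):
    rng = np.random.default_rng(seed)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=seed)
    d = X.shape[1]
    pos = (rng.random((n_particles, d)) < 0.5).astype(float)   # feature masks (0/1)
    vel = rng.normal(0.0, 1.0, (n_particles, d))
    pbest = pos.copy()
    pbest_f = np.array([fitness(p, Xtr, Xte, ytr, yte) for p in pos])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, d))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        # sigmoid transfer function turns velocities into bit-flip probabilities
        pos = (rng.random((n_particles, d)) < 1.0 / (1.0 + np.exp(-vel))).astype(float)
        f = np.array([fitness(p, Xtr, Xte, ytr, yte) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return pbest[np.argmin(pbest_f)].astype(bool)

# small synthetic multi-label problem, for illustration only
X, y = make_multilabel_classification(n_samples=200, n_features=40, n_labels=3, random_state=0)
print("selected features:", np.flatnonzero(binary_pso(X, y)))
```

A filter-style variant would simply replace the classifier-based fitness with an information-theoretic score, which is the distinction the survey draws between wrapper and filter approaches.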


1975 ◽  
Vol 15 (1) ◽  
pp. 81
Author(s):  
W. Pailthorpe ◽  
J. Wardell

During the past two years, much publicity has been given to the direct indication of hydrocarbon accumulations by "Bright Spot" reflections: the very high amplitude reflections from a shale to gas-sand or gas-sand to water-sand interface. It was soon generally realised, however, that this phenomenon was of limited occurrence, being mostly restricted to young, shallow, sand and shale sequences such as the United States Gulf Coast. A more widely detectable indication of hydrocarbons was found to be the reflection from a fluid interface, such as the gas to water interface, within the reservoir. This reflection is characterised by its flatness, being a fluid interface, and is often called the "Flat Spot".

Model studies show that the flat spots have a wide range of amplitudes, from very high for shallow gas to water contacts, to very low for deep oil to water contacts. However, many of the weaker flat spots on good recent marine seismic data have an adequate signal to random noise ratio for detection, and the problem is to separate and distinguish them from the other stronger reflections close by. In this respect the unique flatness of the fluid contact reflection can be exploited by dip discriminant processes, such as velocity filtering, to separate it from the generally dipping reflectors at its boundaries. A limiting factor in the detection of the deeper flat spots is the frequency bandwidth of the seismic data. Since the separation between the flat spot reflection and the upper and lower boundary reflections of the reservoir is often small, relatively high frequency data are needed to resolve these separate reflections. Correct display of the seismic data can be critical to flat spot detection, and some degree of vertical exaggeration of the seismic section is often required to increase apparent dips, and thus make the flat spots more noticeable.

The flat spot is generally a smaller target than the structural features that conventional seismic surveys are designed to find and map, and so a denser than normal grid of seismic lines is required adequately to map most flat spots.
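The dip-discriminant idea mentioned above can be sketched with a crude f-k (frequency-wavenumber) filter that passes only near-zero-dip energy. Real velocity filtering is considerably more careful about tapering and spatial aliasing, and the threshold value and array shapes used here are purely illustrative.

```python
import numpy as np

def flat_event_filter(section, dt, dx, max_dip=2.0e-4):
    """Keep only nearly flat events (|time dip| below max_dip, in s/m),
    emphasising possible flat-spot reflections.

    section : 2D array (n_traces, n_samples), dt in s, dx in m."""
    n_x, n_t = section.shape
    F = np.fft.fft2(section)
    kx = np.fft.fftfreq(n_x, d=dx)[:, None]     # spatial frequency [1/m]
    f = np.fft.fftfreq(n_t, d=dt)[None, :]      # temporal frequency [1/s]
    # an event with time dip p (s/m) maps to the line kx = -p * f in the f-k plane;
    # pass only energy whose apparent dip |kx / f| is below max_dip
    with np.errstate(divide="ignore", invalid="ignore"):
        dip = np.abs(kx / f)
    mask = (dip <= max_dip) | (f == 0)
    return np.real(np.fft.ifft2(F * mask))
```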


Geophysics ◽  
2020 ◽  
pp. 1-104
Author(s):  
Volodya Hlebnikov ◽  
Thomas Elboth ◽  
Vetle Vinje ◽  
Leiv-J. Gelius

The presence of noise in towed marine seismic data is a long-standing problem. The various types of noise present in marine seismic records are never truly random. Instead, seismic noise is more complex and often challenging to attenuate in seismic data processing. Therefore, we examine a wide range of real data examples contaminated by different types of noise, including swell noise, seismic interference noise, strumming noise, passing vessel noise, vertical particle velocity noise, streamer hit and fishing gear noise, snapping shrimp noise, spike-like noise, cross-feed noise and noise from streamer-mounted devices. The noise examples investigated focus only on data acquired with analogue group-forming. Each noise type is classified based on its origin, coherency and frequency content. We then demonstrate how the noise component can be effectively attenuated through industry-standard seismic processing techniques. In this tutorial, we avoid presenting the finest details of either the physics of the different types of noise themselves or the noise attenuation algorithms applied. Rather, we focus on presenting the noise problems themselves and show how well the community is able to address such noise. Our aim is that, based on the provided insights, the geophysical community will gain an appreciation of some of the most common types of noise encountered in marine towed seismic, in the hope of inspiring more researchers to focus their attention on noise problems with greater potential industry impact.
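As one illustration of the kind of industry-standard processing alluded to (not a method taken from the tutorial), the sketch below attenuates swell-like noise by thresholding anomalously strong low-frequency energy trace by trace against the median level across the gather; the cutoff frequency and threshold factor are assumed values.

```python
import numpy as np

def attenuate_swell(gather, dt, fmax=15.0, factor=2.0):
    """Simplified swell-noise attenuation by frequency-domain amplitude thresholding.

    gather : 2D array (n_traces, n_samples), dt : sample interval [s]
    fmax   : upper limit of the swell-noise band [Hz] (assumed)
    factor : how far above the cross-gather median a coefficient may rise (assumed)."""
    n_tr, n_samp = gather.shape
    spec = np.fft.rfft(gather, axis=1)
    freqs = np.fft.rfftfreq(n_samp, d=dt)
    band = freqs <= fmax
    amp = np.abs(spec[:, band])                  # low-frequency amplitude per trace
    med = np.median(amp, axis=0)                 # reference level across traces
    excess = amp > factor * med
    scale = np.where(excess, (factor * med) / np.maximum(amp, 1e-12), 1.0)
    spec[:, band] *= scale                       # clip only the anomalous coefficients
    return np.fft.irfft(spec, n=n_samp, axis=1)
```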


1995 ◽  
Vol 10 ◽  
pp. 331-332
Author(s):  
F.J. Rogers

The equation of state of astrophysical plasmas is, for a wide range of stars, nearly ideal with only small non-ideal Coulomb corrections. Calculating the equation of state of an ionizing plasma from a ground-state-ion, ideal-gas model is easy, whereas fundamental methods to include the small Coulomb corrections are difficult. Attempts to include excited bound states are also complicated by many-body effects that weaken and broaden these states. Nevertheless, the high quality of current observations, particularly seismic data, dictates that the best possible models should be used. The equation of state used in the OPAL opacity tables is based on many-body quantum statistical methods (Rogers, 1994; 1986; 1981) and is suitable for the modeling of seismic data. Extensive tables of the OPAL equation of state are now available. These tables cover the temperature range 5 × 10⁻³ to 1 × 10⁸ K, the density range 10⁻¹⁴ to 10⁵ g/cm³, the hydrogen mass fraction (X) range 0.0 to 0.8, and the metallicity (Z) range 0.0 to 0.04.
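For context on the "easy" baseline mentioned above (a ground-state-ion, ideal-gas model with no Coulomb corrections), here is a minimal sketch of the Saha equation for pure hydrogen. It is only the textbook ionization-balance calculation and has nothing to do with the many-body machinery behind OPAL itself; the example conditions are arbitrary.

```python
import numpy as np

# physical constants (SI)
k_B = 1.380649e-23            # Boltzmann constant [J/K]
m_e = 9.1093837e-31           # electron mass [kg]
h   = 6.62607015e-34          # Planck constant [J s]
chi = 13.6 * 1.602176634e-19  # hydrogen ionization energy [J]

def hydrogen_ionization_fraction(T, n):
    """Ionization fraction of pure hydrogen from the Saha equation
    (ground-state ions, ideal gas, no Coulomb corrections).

    T : temperature [K], n : total hydrogen number density [m^-3]."""
    S = (2.0 * np.pi * m_e * k_B * T / h**2) ** 1.5 * np.exp(-chi / (k_B * T))
    # x^2 / (1 - x) = S / n  ->  quadratic in the ionization fraction x
    return (-S + np.sqrt(S**2 + 4.0 * n * S)) / (2.0 * n)

# e.g. envelope-like conditions, purely for illustration
print(hydrogen_ionization_fraction(T=2.0e4, n=1.0e23))
```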


Author(s):  
H. K. Moon ◽  
B. Glezer

In spite of very significant progress in analytical and numerical methods during recent years, experimental techniques are still essential tools for the development of cooled turbine nozzles. This paper describes the major elements of the development process for cooled turbine nozzles with a primary emphasis on advanced experimental heat transfer techniques. Thermochromic liquid crystals were used to measure the internal (coolant side) heat transfer coefficients of a practical vane cooling design which has a combination of different heat transfer augmenting devices. A comparison of the results and analytical predictions provided validations of existing correlations which were developed from the generic cases (usually one type of augmenting device). The overall cooling design was evaluated in a full-scale annular hot cascade which maintained heat transfer similarity. The freestream turbulence level was measured with an in-house developed heat flux probe. Cooling effectiveness distribution was evaluated from the surface metal temperatures mapped with an in-house developed wide range temperature pyrometer. The test results led to the fine-tuning of the nozzle vane cooling design.
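The abstract does not describe how the thermochromic liquid crystal data were reduced to heat transfer coefficients. As one common possibility, the sketch below assumes a transient technique with a semi-infinite-wall conduction model and solves the one-dimensional solution for the heat transfer coefficient; the wall properties, temperatures and event time are all assumed values, not data from the paper.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import erfcx   # exp(x^2) * erfc(x), avoids overflow for large arguments

def htc_from_tlc(t_event, T_event, T_init, T_gas, k_wall=0.19, alpha_wall=1.1e-7):
    """Heat transfer coefficient from a transient liquid-crystal test on a
    semi-infinite wall suddenly exposed to flow at T_gas.

    t_event : time at which the crystal colour change is observed [s]
    T_*     : event, initial and driving gas temperatures [degC or K]
    k_wall, alpha_wall : wall conductivity [W/m/K] and diffusivity [m^2/s]
                         (values here are typical of acrylic and are assumed)."""
    theta = (T_event - T_init) / (T_gas - T_init)
    def residual(h):
        beta = h * np.sqrt(alpha_wall * t_event) / k_wall
        # semi-infinite solid: theta = 1 - exp(beta^2) * erfc(beta) = 1 - erfcx(beta)
        return (1.0 - erfcx(beta)) - theta
    return brentq(residual, 1.0, 1.0e5)   # bracket the root in [1, 1e5] W/m^2/K

print(htc_from_tlc(t_event=20.0, T_event=35.0, T_init=20.0, T_gas=60.0))
```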


2020 ◽  
Vol 10 (20) ◽  
pp. 7037 ◽  
Author(s):  
Robert Ojstersek ◽  
Borut Buchmeister ◽  
Natasa Vujica Herzog

In the time of Industry 4.0, the dynamic adaptation of companies to global market demands plays a key role in ensuring sustainable financial and time justification. Financial accessibility, broad user-friendliness, and credible results of visual computing methods and data-driven simulation modeling enable a higher degree of usability in small, medium, and large enterprises. This paper presents an innovative method for modelling and simulating workplaces in manufacturing based on visual data captured with a spherical camera. The presented approach uses simulation scenarios to investigate the optimization of manual or collaborative workplaces. We evaluated and compared three simulated scenarios, the results of which highlight the potential for improvement regarding manufacturing productivity and cost. In addition, ergonomic analyses of a manual assembly workplace were performed using existing evaluation metrics. The results show the possibility of creating a three-dimensional model of a workplace captured with a spherical camera, which not only describes the model dimensionally but also adds terminological and other production parameters obtained through the analysis of manufacturing system videos. The appropriateness of introducing collaborative workstations is further confirmed by the Ovako working posture analysing system (OWAS) and rapid upper limb assessment (RULA) ergonomic analyses, which demonstrate the sustainable limits of manual assembly workplaces.


Geophysics ◽  
1981 ◽  
Vol 46 (2) ◽  
pp. 106-120 ◽  
Author(s):  
Frank J. Feagin

Relatively little attention has been paid to the final output of today’s sophisticated seismic data processing procedures—the seismic section display. We first examine significant factors relating to those displays and then describe a series of experiments that, by varying those factors, let us specify displays that maximize interpreters’ abilities to detect reflections buried in random noise.

The study.—From psychology of perception and image enhancement literature and from our own research, these conclusions were reached: (1) Seismic reflection perceptibility is best for time scales in the neighborhood of 1.875 inches/sec because, for common seismic frequencies, the eye‐brain spatial frequency response is a maximum near that value. (2) An optimized gray scale for variable density sections is nonlinearly related to digital data values on a plot tape. The nonlinearity is composed of two parts: (a) that which compensates for nonlinearity inherent in human perception, and (b) the nonlinearity required to produce histogram equalization, a modern image enhancement technique.

The experiments.—The experiments involved 37 synthetic seismic sections composed of simple reflections embedded in filtered random noise. Reflection signal‐to‐noise (S/N) ratio was varied over a wide range, as were other display parameters, such as scale, plot mode, photographic density contrast, gray scale, and reflection dip angle. Twenty‐nine interpreters took part in the experiments. The sections were presented, one at a time, to each interpreter; the interpreter then proceeded to mark all recognizable events. Marked events were checked against known data and errors recorded. Detectability thresholds in terms of S/N ratios were measured as a function of the various display parameters.

Some of the more important conclusions are: (1) With our usual types of displays, interpreters can pick reflections about 6 or 7 dB below noise with a 50 percent probability. (2) Perceptibility varies from one person to another by 2.5 to 3.0 dB. (3) For displays with a 3.75 inch/sec scale and low contrast photographic paper (a common situation), variable density (VD) and variable area‐wiggly trace (VA‐WT) sections are about equally effective from a perceptibility standpoint. (4) However, for displays with small scales and for displays with higher contrast, variable density is significantly superior. A VD section with all parameters optimized shows about 8 dB perceptibility advantage over an optimized VA‐WT section. (5) Detectability drops as dip angle increases. VD is slightly superior to VA‐WT, even at large scales, for steep dip angles. (6) An interpreter gains typically about 2 dB by foreshortening, although there is a wide variation from one individual to another.
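Histogram equalization, named in point (2b) above, can be sketched in a few lines. This is just the generic image-processing operation applied to plot-tape amplitude values, not the specific perception-compensated gray scale developed in the paper.

```python
import numpy as np

def equalized_gray_levels(amplitudes, n_levels=256):
    """Histogram-equalized mapping from seismic amplitudes to display gray levels,
    so that all gray levels are used roughly equally often.

    amplitudes : array of plot-tape data values."""
    flat = np.asarray(amplitudes, dtype=float).ravel()
    hist, edges = np.histogram(flat, bins=n_levels)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]                               # cumulative distribution in [0, 1]
    # map each amplitude through the CDF, then scale to the available gray levels
    levels = np.interp(flat, edges[:-1], cdf) * (n_levels - 1)
    return levels.reshape(np.shape(amplitudes)).astype(np.uint8)

# illustrative use on a synthetic "section" of filtered random noise
section = np.random.default_rng(0).normal(size=(48, 500))
print(equalized_gray_levels(section).min(), equalized_gray_levels(section).max())
```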

