Memory-assisted compression of seismic data: Tackling a large alphabet-size problem by statistical methods

Author(s):  
Ali Payani ◽  
Afshin Abdi ◽  
Faramarz Fekri

2012 ◽  
Vol 23 (05) ◽  
pp. 969-984 ◽  
Author(s):  
SABINE BRODA ◽  
ANTÓNIO MACHIAVELO ◽  
NELMA MOREIRA ◽  
ROGÉRIO REIS

In this paper, the relation between the Glushkov automaton and the partial derivative automaton of a given regular expression, in terms of transition complexity, is studied. The average transition complexity of the Glushkov automaton was proved by Nicaud to be linear in the size of the corresponding expression. This result was obtained using an upper bound on the number of transitions of the Glushkov automaton. Here we present a new quadratic construction of the Glushkov automaton that leads to a more elegant and straightforward implementation, and that allows the exact counting of the number of transitions. Based on that, a better estimation of the average size is presented. Asymptotically, and as the alphabet size grows, the number of transitions per state is on average 2. Broda et al. computed an upper bound for the ratio of the number of states of the partial derivative automaton to the number of states of the Glushkov automaton, which is about ½ for large alphabet sizes. Here we show how to obtain an upper bound for the number of transitions in the partial derivative automaton, which we then use to get an average-case approximation. In conclusion, asymptotically and for large alphabets, the size of the partial derivative automaton is half that of the Glushkov automaton. This is corroborated by experiments, even for small alphabets and small regular expressions.
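The transition counts discussed above come from the standard position-automaton construction: one state per letter occurrence, plus an initial state, with transitions given by the First and Follow sets. A minimal sketch of that textbook construction (my own AST encoding, not the paper's quadratic algorithm):

```python
from collections import defaultdict
from itertools import count

def glushkov(expr):
    """Glushkov (position) automaton of a regex AST.

    AST nodes: ('sym', a), ('alt', e1, e2), ('cat', e1, e2), ('star', e).
    States are positions 1..n plus an implicit initial state; transitions
    go from the initial state to each First position, and from position p
    to each position in Follow(p).
    """
    ctr = count(1)
    letter = {}                    # position -> letter at that position
    follow = defaultdict(set)      # position -> set of follow positions

    def walk(e):
        kind = e[0]
        if kind == 'sym':
            p = next(ctr)
            letter[p] = e[1]
            return False, {p}, {p}                 # (nullable, first, last)
        if kind == 'alt':
            n1, f1, l1 = walk(e[1])
            n2, f2, l2 = walk(e[2])
            return n1 or n2, f1 | f2, l1 | l2
        if kind == 'cat':
            n1, f1, l1 = walk(e[1])
            n2, f2, l2 = walk(e[2])
            for p in l1:            # a last position of e1 can be
                follow[p] |= f2     # followed by a first position of e2
            return (n1 and n2,
                    f1 | f2 if n1 else f1,
                    l2 | l1 if n2 else l2)
        if kind == 'star':
            _, f, l = walk(e[1])
            for p in l:             # loop back from last to first positions
                follow[p] |= f
            return True, f, l
        raise ValueError(kind)

    nullable, first, last = walk(expr)
    return first, last, dict(follow), letter, nullable

def num_transitions(expr):
    """Count the transitions of the Glushkov automaton."""
    first, last, follow, letter, _ = glushkov(expr)
    return len(first) + sum(len(s) for s in follow.values())
```

For example, (a|b)*a has three positions and nine transitions; the per-state transition count on such star-heavy expressions illustrates why the exact average requires counting rather than the coarse upper bound.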


Integers ◽  
2011 ◽  
Vol 11 (6) ◽  
Author(s):  
Stefan Gerhold

We investigate the number of sets of words that can be formed from a finite alphabet, counted by the total length of the words in the set. An explicit expression for the counting sequence is derived from the generating function, and asymptotics for large alphabet size and large total word length are discussed. Moreover, we derive a Gaussian limit law for the number of words in a random finite language.
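Over a k-letter alphabet there are k^n words of length n, so (assuming nonempty words, which the abstract does not spell out) the counting sequence is the coefficient sequence of the product ∏_{n≥1} (1 + x^n)^(k^n): each word contributes an independent "in the set or not" factor. A small sketch extracting those coefficients by truncated series multiplication:

```python
from math import comb

def finite_language_counts(k, N):
    """Coefficients c[0..N] of prod_{n>=1} (1 + x^n)^(k^n) mod x^(N+1).

    c[L] = number of sets of distinct nonempty words over a k-letter
    alphabet whose lengths sum to L (sketch; empty word excluded).
    """
    c = [0] * (N + 1)
    c[0] = 1                                   # the empty language
    for n in range(1, N + 1):
        m = k ** n                             # words of length n
        # (1 + x^n)^m truncated at degree N, via the binomial theorem
        factor = [comb(m, j) for j in range(N // n + 1)]
        new = [0] * (N + 1)
        for d, cd in enumerate(c):
            if cd == 0:
                continue
            for j, fj in enumerate(factor):
                if d + n * j > N:
                    break
                new[d + n * j] += cd * fj
        c = new
    return c
```

For k = 1 this degenerates to partitions into distinct parts, a quick sanity check; for k = 2 the sequence starts 1, 2, 5, 16, and the coefficients grow roughly like the dominant factor k^N, consistent with the large-alphabet asymptotics being the regime of interest.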


Geophysics ◽  
2010 ◽  
Vol 75 (1) ◽  
pp. P1-P9 ◽  
Author(s):  
Osama A. Ahmed ◽  
Radwan E. Abdel-Aal ◽  
Husam AlMustafa

Statistical methods, such as linear regression and neural networks, are commonly used to predict reservoir properties from seismic attributes. However, a huge number of attributes can be extracted from seismic data and an efficient method for selecting an attribute subset with the highest correlation to the property being predicted is essential. Most statistical methods, however, lack an optimized approach for this attribute selection. We propose to predict reservoir properties from seismic attributes using abductive networks, which use iterated polynomial regression to derive high-degree polynomial predictors. The abductive networks simultaneously select the most relevant attributes and construct an optimal nonlinear predictor. We applied the approach to predict porosity from seismic data of an area within the 'Uthmaniyah portion of the Ghawar oil field, Saudi Arabia. The data consisted of normal seismic amplitude, acoustic impedance, 16 other seismic attributes, and porosity logs from seven wells located in the study area. Out of 27 attributes, the abductive network selected only the best two to six attributes and produced a more accurate and robust porosity prediction than using the more common neural-network predictors. In addition, the proposed method requires no effort in choosing the attribute subset or tweaking their parameters.
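Abductive (GMDH-type) networks grow layers of low-order polynomial units and keep only the units that predict well on held-out data, which is how attribute selection and model construction happen simultaneously. A minimal single-layer sketch on synthetic data (attribute indices, coefficients, and the 2-input quadratic unit are illustrative assumptions, not the paper's actual model or data):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# Synthetic stand-in for seismic attributes: 27 candidate attributes,
# but only attributes 3 and 7 actually drive the "porosity" target
# (all indices and coefficients are hypothetical).
n, n_attr = 200, 27
X = rng.normal(size=(n, n_attr))
y = (0.8 * X[:, 3] - 0.5 * X[:, 7] + 0.3 * X[:, 3] * X[:, 7]
     + 0.05 * rng.normal(size=n))

train, valid = slice(0, 150), slice(150, None)

def quad_design(xi, xj):
    # Ivakhnenko's two-input quadratic polynomial unit
    return np.column_stack([np.ones_like(xi), xi, xj,
                            xi * xj, xi**2, xj**2])

# Fit every attribute pair on the training split, rank by validation
# error: selection and polynomial fitting happen in one pass.
best = None
for i, j in combinations(range(n_attr), 2):
    A = quad_design(X[train, i], X[train, j])
    w, *_ = np.linalg.lstsq(A, y[train], rcond=None)
    pred = quad_design(X[valid, i], X[valid, j]) @ w
    err = np.sqrt(np.mean((pred - y[valid]) ** 2))
    if best is None or err < best[0]:
        best = (err, (i, j), w)

rmse, pair, w = best
print(pair, round(rmse, 3))
```

On this synthetic data the winning unit recovers the two informative attributes; a full abductive network would stack further layers of such units, stopping when held-out error no longer improves.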


2020 ◽  
Vol 66 (3) ◽  
pp. 1474-1481
Author(s):  
Hamed Narimani ◽  
Mohammadali Khosravifard

Geophysics ◽  
1967 ◽  
Vol 32 (3) ◽  
pp. 414-414
Author(s):  
Daniel Silverman

In recent years there has been a great surge of interest in the geophysical industry in the digital processing of seismic data. This activity involves the application of statistical methods to the analysis of time series. In a way it is part of the general subject of communication theory. However, the direction this work has taken is in many respects quite divergent from communication theory as it is used in the communications industry. The divergence is so great, in fact, that if it were not for a small group of workers in this field in the early 1950s, it is doubtful whether we would today be in a position to do what is now rapidly becoming standard operating practice in the geophysical industry.


1978 ◽  
Vol 48 ◽  
pp. 7-29
Author(s):  
T. E. Lutz

This review paper deals with the use of statistical methods to evaluate systematic and random errors associated with trigonometric parallaxes. First, systematic errors which arise when using trigonometric parallaxes to calibrate luminosity systems are discussed. Next, determination of the external errors of parallax measurement is reviewed, and observatory corrections are discussed. Schilt's point is emphasized: since the causes of these systematic differences between observatories are not known, the computed corrections cannot be applied appropriately. However, modern parallax work is sufficiently accurate that observatory corrections must be determined if full use is to be made of the potential precision of the data. To this end, it is suggested that a prior experimental design is required. Past experience has shown that accidental overlap of observing programs will not suffice to determine meaningful observatory corrections.


1973 ◽  
Vol 18 (11) ◽  
pp. 562-562
Author(s):  
B. J. WINER
