COVARIANCE FUNCTION MODELLING IN LOCAL GEODETIC APPLICATIONS USING THE SIMPLEX METHOD

2016, Vol 22 (2), pp. 342-357
Author(s): Carlo Iapige De Gaetani, Noemi Emanuela Cazzaniga, Riccardo Barzaghi, Mirko Reguzzoni, Barbara Betti

Collocation has been widely applied in geodesy for estimating the gravity field of the Earth, both locally and globally. In particular, it is the standard geodetic method for combining all available data into an integrated estimate of any functional of the anomalous potential T. The key point of the method is the definition of proper covariance functions of the data. Covariance function models have been proposed by many authors, together with the related software. In this paper a new method for finding suitable covariance models is devised. The covariance fitting problem is reduced to an optimization problem in Linear Programming and solved using the Simplex Method. The procedure has been implemented in FORTRAN95 and tested on simulated and real data sets. These first tests show that the proposed method is a reliable tool for estimating proper covariance function models for use in the collocation procedure.
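As an illustration of how such a fit can be posed as a linear program, the sketch below fits non-negative coefficients of a few basis covariance functions to empirical covariance values under a minimax criterion, solved with SciPy's HiGHS dual simplex. The data, basis functions, and criterion are assumptions for the example, not the authors' exact formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Empirical covariance values at lags psi (hypothetical estimates).
psi = np.linspace(0.0, 2.0, 21)                        # lag in degrees
c_emp = 40.0 * np.exp(-psi / 0.5) * np.cos(2.0 * psi)  # illustrative data

# Basis covariance models: damped cosines with fixed shape parameters
# (a stand-in for the paper's model family, which is not reproduced here).
decays = [0.3, 0.5, 1.0]
freqs = [0.0, 1.0, 2.0]
B = np.column_stack([np.exp(-psi / d) * np.cos(f * psi)
                     for d in decays for f in freqs])
n = B.shape[1]

# Minimize the maximum misfit t subject to |B x - c_emp| <= t and x >= 0.
# Decision vector z = [x_1..x_n, t].
c_obj = np.r_[np.zeros(n), 1.0]
A_ub = np.block([[ B, -np.ones((len(psi), 1))],
                 [-B, -np.ones((len(psi), 1))]])
b_ub = np.r_[c_emp, -c_emp]
res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * (n + 1), method="highs-ds")

x, t = res.x[:n], res.x[n]
print("coefficients:", np.round(x, 3), " max misfit:", round(t, 4))
```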

2003, Vol 1, pp. 143-147
Author(s): D. Arabelos, C. C. Tscherning

Abstract. Gravity anomaly data generated using Wenzel's GPM98A model complete to degree 1800, from which OSU91A has been subtracted, have been used to estimate covariance functions for a set of globally covering equal-area blocks of size 22.5° × 22.5° at the Equator, having a 2.5° overlap. For each block an analytic covariance function model was determined. The models are based on 4 parameters: the depth to the Bjerhammar sphere (which determines the correlation), the free-air gravity anomaly variance, a scale factor of the OSU91A error degree-variances, and a maximal summation index, N, of the error degree-variances. The depth to the Bjerhammar sphere varies from −134 km to nearly zero, N varies from 360 to 40, the scale factor from 0.03 to 38.0, and the gravity variance from 1081 to 24 (10 µm s⁻²)². The parameters are interpreted in terms of the quality of the data used to construct OSU91A and GPM98A, and of general conditions such as the occurrence of mountain chains. The variation of the parameters shows that it is necessary to use regional covariance models in order to obtain a realistic signal-to-noise ratio in global applications.

Key words. GOCE mission, covariance function, spacewise approach
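For illustration, the sketch below evaluates an isotropic covariance function from degree variances attenuated by a Bjerhammar-sphere depth. The degree-variance law and all parameter values are placeholders; a real run would use the OSU91A error degree-variances, which are not reproduced here.

```python
import numpy as np
from scipy.special import eval_legendre

R = 6371.0     # mean Earth radius [km]
depth = 5.0    # assumed depth to the Bjerhammar sphere [km]
RB = R - depth
N = 360        # maximal summation index
alpha = 1.0    # scale factor of the (here synthetic) degree-variances

# Hypothetical degree variances sigma_n (illustrative decay law only).
n = np.arange(2, N + 1)
sigma_n = alpha * 400.0 / ((n - 1) * (n + 2))

def covariance(psi_deg):
    """Isotropic covariance at spherical distance psi (degrees)."""
    t = np.cos(np.radians(psi_deg))
    s = (RB / R) ** (2 * n + 2)   # Bjerhammar-sphere attenuation
    return np.sum(sigma_n * s * eval_legendre(n, t))

for psi in (0.0, 0.5, 1.0, 2.0):
    print(f"C({psi:3.1f} deg) = {covariance(psi):10.3f}")
```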


Author(s): Sunitha Yeddula, K. Lakshmaiah

Record linkage is the process of matching records from several databases that refer to the same entities. When applied to a single database, this process is known as deduplication. Increasingly, matched data are becoming important in many application areas, because they can contain information that is not available otherwise, or that would be too costly to acquire. Removing duplicate records from a single database is a crucial step in the data cleaning process, because duplicates can severely influence the outcomes of any subsequent data processing or data mining. With the increasing size of today's databases, the complexity of the matching process has become one of the major challenges for record linkage and deduplication. In recent years, various indexing techniques have been developed for record linkage and deduplication. They aim to reduce the number of record pairs to be compared in the matching process by removing obvious non-matching pairs, while at the same time maintaining high matching quality. This paper presents a survey of variations of six indexing techniques. Their complexity is analyzed, and their performance and scalability are evaluated within an experimental framework using both synthetic and real data sets. These experiments highlight that one of the most important factors for efficient and accurate indexing for record linkage and deduplication is the proper definition of blocking keys.
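As a minimal illustration of the idea, the sketch below implements standard blocking, the simplest indexing technique of this kind; the records and the blocking-key definition are made up for the example.

```python
from collections import defaultdict
from itertools import combinations

records = [  # hypothetical records: (id, surname, postcode)
    (1, "smith", "2000"), (2, "smyth", "2000"),
    (3, "smith", "2600"), (4, "jones", "2000"),
]

def blocking_key(rec):
    """Illustrative key: first letter of surname + 2-digit postcode prefix.
    Real systems typically use phonetic encodings such as Soundex."""
    _, surname, postcode = rec
    return surname[0] + postcode[:2]

# Build the inverted index: key -> records sharing that key.
index = defaultdict(list)
for rec in records:
    index[blocking_key(rec)].append(rec)

# Candidate pairs are generated only within blocks, pruning obvious
# non-matches; only these pairs reach the expensive comparison step.
candidates = [pair for block in index.values()
              for pair in combinations(block, 2)]
print(candidates)   # one candidate pair instead of all 6 possible pairs
```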


Geophysics, 2003, Vol 68 (1), pp. 168-180
Author(s): Valentine Mikhailov, Armand Galdeano, Michel Diament, Alexei Gvishiani, Sergei Agayan, ...

Results of Euler deconvolution strongly depend on the selection of viable solutions. Synthetic calculations using multiple causative sources show that Euler solutions cluster in the vicinity of causative bodies even when they do not group densely about the perimeter of the bodies. We have developed a clustering technique to serve as a tool for selecting appropriate solutions. The clustering technique uses a methodology based on artificial intelligence, and it was originally designed to classify large data sets. It is based on a geometrical approach to study object concentration in a finite metric space of any dimension. The method uses a formal definition of cluster and includes free parameters that search for clusters of given properties. Tests on synthetic and real data showed that the clustering technique successfully outlines causative bodies more accurately than other methods used to discriminate Euler solutions. In complex field cases, such as the magnetic field in the Gulf of Saint Malo region (Brittany, France), the method provides dense clusters, which more clearly outline possible causative sources. In particular, it allows one to trace offshore the main inland tectonic structures and to study their interrelationships in the Gulf of Saint Malo. The clusters provide solutions associated with particular bodies, or parts of bodies, allowing the analysis of different clusters of Euler solutions separately. This may allow computation of average parameters for individual causative bodies. Those measurements of the anomalous field that yield clusters also form dense clusters themselves. Application of this clustering technique thus outlines areas where the influence of different causative sources is more prominent. This allows one to focus on these areas for more detailed study, using different window sizes, structural indices, etc.
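The authors' clustering algorithm itself is not reproduced here; as a rough stand-in, the sketch below filters synthetic Euler solutions by density-based clustering (DBSCAN from scikit-learn), keeping only solutions that fall in dense clusters. All data and parameter values are illustrative.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)

# Hypothetical Euler solutions (x, y, depth): a dense cluster near a
# causative body at (10, 10, 2) plus scattered spurious solutions.
good = rng.normal(loc=(10.0, 10.0, 2.0), scale=0.3, size=(80, 3))
spurious = rng.uniform(low=0.0, high=20.0, size=(40, 3))
solutions = np.vstack([good, spurious])

# Density-based clustering: points with at least `min_samples` neighbours
# within radius `eps` join clusters; the rest are labelled -1 (noise).
labels = DBSCAN(eps=0.8, min_samples=10).fit_predict(solutions)

accepted = solutions[labels >= 0]
print(f"kept {len(accepted)} of {len(solutions)} solutions")
print("cluster centroid:", accepted.mean(axis=0).round(2))
```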


2005, Vol 37 (3), pp. 706-725
Author(s): Chunsheng Ma

Variograms and covariance functions are the fundamental tools for modeling dependent data observed over time, space, or space-time. This paper aims at constructing nonseparable spatio-temporal variograms and covariance models. Special attention is paid to an intrinsically stationary spatio-temporal random field whose covariance function is of Schoenberg-Lévy type. The correlation structure is studied for its increment process and for its partial derivative with respect to the time lag, as well as for the superposition over time of a stationary spatio-temporal random field. As another approach, we investigate the permissibility of the linear combination of certain separable spatio-temporal covariance functions to be a valid covariance, and obtain a subclass of stationary spatio-temporal models isotropic in space.
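For intuition, the sketch below contrasts a separable product model with a nonseparable one (a Gneiting-type form, used here purely as an illustration; it is not the Schoenberg-Lévy construction studied in the paper) and checks the factorization property that separability implies.

```python
import numpy as np

sigma2, a, c = 1.0, 0.5, 1.0          # illustrative parameters
alpha, gamma, beta = 1.0, 0.5, 1.0    # shape parameters in (0,1] / [0,1]

def c_separable(h, u):
    """Separable model: product of a spatial and a temporal covariance."""
    return sigma2 * np.exp(-c * np.abs(h)) * np.exp(-a * np.abs(u))

def c_nonseparable(h, u):
    """A Gneiting-type nonseparable space-time covariance (illustrative)."""
    psi = a * np.abs(u) ** (2 * alpha) + 1.0
    return sigma2 / psi * np.exp(-c * np.abs(h) ** (2 * gamma)
                                 / psi ** (beta * gamma))

# Separability test: C(h,u) * C(0,0) == C(h,0) * C(0,u) for separable models.
for C in (c_separable, c_nonseparable):
    lhs = C(1.0, 1.0) * C(0.0, 0.0)
    rhs = C(1.0, 0.0) * C(0.0, 1.0)
    print(C.__name__, "factorizes:", np.isclose(lhs, rhs))
```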



2021, Vol 5 (1), p. 37
Author(s): Till Schubert, Jan Martin Brockmann, Johannes Korte, Wolf-Dieter Schuh

In time series analyses, covariance modeling is an essential part of stochastic methods such as prediction or filtering. For practical use, general families of covariance functions with great flexibility are necessary to model complex correlation structures such as negative correlations. Thus, families of covariance functions should be as versatile as possible by including a high variety of basis functions. Another drawback of some common covariance models is that they are parameterized in a way that does not allow all parameters to vary. In this work, we elaborate on the affiliation of several established covariance functions, such as exponential, Matérn-type, and damped oscillating functions, to the general class of covariance functions defined by autoregressive moving average (ARMA) processes. Furthermore, we present advanced limit cases that also belong to this class and enable a higher variability of the shape parameters and, consequently, of the representable covariance functions. For prediction tasks in applications with spatial data, the covariance function must be positive semi-definite in the respective domain. We provide conditions on the shape parameters that must be fulfilled for positive semi-definiteness of the covariance function in higher input dimensions.
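As an example of this connection, the sketch below computes the autocovariance of an AR(2) process with complex characteristic roots, which produces the damped oscillating covariance shape mentioned above; the coefficients are illustrative.

```python
import numpy as np

phi1, phi2, sigma2 = 1.2, -0.8, 1.0   # illustrative AR(2) coefficients
# Complex characteristic roots (phi1**2 + 4*phi2 < 0) give a damped
# oscillating autocovariance, one member of the ARMA covariance class.
assert phi1**2 + 4 * phi2 < 0

# Start values from the Yule-Walker equations in closed form.
gamma0 = sigma2 * (1 - phi2) / ((1 + phi2) * ((1 - phi2) ** 2 - phi1 ** 2))
gamma1 = phi1 * gamma0 / (1 - phi2)

# Autocovariance recursion: gamma_k = phi1*gamma_{k-1} + phi2*gamma_{k-2}.
gam = [gamma0, gamma1]
for k in range(2, 25):
    gam.append(phi1 * gam[-1] + phi2 * gam[-2])

# Normalized covariances oscillate with negative lobes while decaying.
print(np.round(np.array(gam) / gamma0, 3))
```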


Author(s): Armaghan Heidarzade, Nezam Mahdavi-Amiri, Iraj Mahdavi

Type-2 fuzzy sets are generalizations of ordinary fuzzy sets, in which membership grades are themselves characterized by fuzzy membership functions. Here, the problem of finding the distance between two interval type-2 fuzzy sets (IT2-FSs) is considered. Based on a new definition of the centroid of an IT2-FS, a formula for the distance between two IT2-FSs is introduced, and an algorithm for computing it is described. The proposed distance formula is incorporated into Yang and Shih's clustering algorithm to obtain a clustering method for interval type-2 fuzzy data sets. The applicability of the proposed distance formula is evaluated on artificial and real data sets, with reasonable results.
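A minimal sketch of a centroid-based distance for discretized IT2-FSs follows. The centroid proxy used here (centroids of the lower and upper membership functions) and the membership functions themselves are assumptions for the example; the paper's centroid definition is not reproduced.

```python
import numpy as np

def centroid_interval(x, mu_lower, mu_upper):
    """Centroids of the lower and upper membership functions of an IT2-FS
    on a discretized domain x. (A crude proxy; exact IT2-FS centroids are
    usually obtained with Karnik-Mendel-type procedures.)"""
    cl = np.sum(x * mu_lower) / np.sum(mu_lower)
    cu = np.sum(x * mu_upper) / np.sum(mu_upper)
    return min(cl, cu), max(cl, cu)

def it2fs_distance(A, B, x):
    """Illustrative distance: midpoint difference of centroid intervals."""
    a = centroid_interval(x, *A)
    b = centroid_interval(x, *B)
    return abs((a[0] + a[1]) / 2 - (b[0] + b[1]) / 2)

# Two hypothetical Gaussian-shaped IT2-FSs on the same domain.
x = np.linspace(0, 10, 201)
gauss = lambda c, s: np.exp(-0.5 * ((x - c) / s) ** 2)
A = (0.6 * gauss(4.0, 1.0), gauss(4.0, 1.4))   # (lower, upper) memberships
B = (0.6 * gauss(6.0, 1.0), gauss(6.0, 1.4))

print("d(A, B) =", round(it2fs_distance(A, B, x), 3))
```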


2003, Vol 8 (4), pp. 283-290
Author(s): E. Lesauskiene, K. Dučinskas

In this article we use widely applicable classes of separable and nonseparable spatio-temporal covariance models. One objective of this paper is to show how the use of overly complicated covariance functions can be avoided. Assuming a regression model for the mean function, analytical expressions for the optimal linear predictor (universal kriging) and its mean squared prediction error (MSPE) are obtained. Parameterized spatio-temporal covariance functions are fitted to real data, and prediction values and MSPE are presented. The results are visualized with the freely available software Gstat.
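For illustration, the sketch below solves a universal kriging system with a separable exponential space-time covariance; the observations, covariance parameters, and linear-trend design matrix are all assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical observations: (x, y, t) coordinates and values z.
coords = rng.uniform(0, 10, size=(30, 3))
z = coords @ np.array([0.5, -0.2, 0.1]) + rng.normal(0, 0.3, 30)

def cov(p, q, sill=1.0, rs=3.0, rt=2.0):
    """Separable exponential covariance: spatial range rs, temporal rt."""
    hs = np.linalg.norm(p[:2] - q[:2])
    ht = abs(p[2] - q[2])
    return sill * np.exp(-hs / rs) * np.exp(-ht / rt)

n = len(z)
C = np.array([[cov(coords[i], coords[j]) for j in range(n)] for i in range(n)])
F = np.c_[np.ones(n), coords]            # linear trend in x, y, t

p0 = np.array([5.0, 5.0, 5.0])           # prediction point
c0 = np.array([cov(coords[i], p0) for i in range(n)])
f0 = np.r_[1.0, p0]

# Universal kriging system: [[C, F], [F.T, 0]] [w; mu] = [c0; f0].
K = np.block([[C, F], [F.T, np.zeros((F.shape[1], F.shape[1]))]])
sol = np.linalg.solve(K, np.r_[c0, f0])
w, mu = sol[:n], sol[n:]

z_hat = w @ z                              # optimal linear prediction
mspe = cov(p0, p0) - w @ c0 - mu @ f0      # mean squared prediction error
print(f"prediction: {z_hat:.3f}, MSPE: {mspe:.4f}")
```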


Author(s): S. H. Alizadeh Moghaddam, M. Mokhtarzade, A. Alizadeh Naeini, S. A. Alizadeh Moghaddam

Rational function models (RFMs) are among the most appealing models and are extensively applied in the geometric correction of satellite images and in map production. Overfitting is a common issue with terrain-dependent RFMs that degrades the accuracy of RFM-derived geospatial products. This issue, which results from the large number of RFM parameters, leads to ill-posedness of the RFMs. To tackle this problem, a fast and robust statistical approach is proposed and compared with the Tikhonov regularization (TR) method, a frequently used remedy for RFM overfitting. In the proposed method, a statistical significance test is applied to identify the RFM parameters that are resistant to overfitting. The performance of the proposed method was evaluated on two real data sets of Cartosat-1 satellite images. The results demonstrate the efficiency of the proposed method in terms of the achievable level of accuracy, showing an improvement of 50-80% over TR.
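A minimal sketch of the TR baseline follows: Tikhonov (ridge) regularization of an ill-posed least-squares problem of the kind that arises with terrain-dependent RFMs. The design matrix and regularization weight are illustrative, and the authors' significance-test selection is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical ill-posed design: many correlated RFM terms, few GCPs.
n_gcp, n_terms = 20, 39           # e.g. third-order RFM numerator terms
A = rng.normal(size=(n_gcp, n_terms))
A[:, 1:] += 0.95 * A[:, :1]       # near-collinear columns -> ill-posedness
y = A[:, 0] * 2.0 + rng.normal(0, 0.01, n_gcp)

# Tikhonov (ridge) solution: minimize ||A x - y||^2 + lam * ||x||^2,
# i.e. x = (A'A + lam I)^(-1) A'y. lam would normally be tuned, e.g. by
# L-curve or cross-validation; 1e-2 is just an illustrative choice.
lam = 1e-2
x_tr = np.linalg.solve(A.T @ A + lam * np.eye(n_terms), A.T @ y)

x_ls, *_ = np.linalg.lstsq(A, y, rcond=None)   # unregularized comparison
print("||x_ls|| =", round(np.linalg.norm(x_ls), 2),
      " ||x_tr|| =", round(np.linalg.norm(x_tr), 2))
```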


1996, Vol 33 (9), pp. 101-108
Author(s): Agnès Saget, Ghassan Chebbo, Jean-Luc Bertrand-Krajewski

The first flush phenomenon in urban wet weather discharges is presently a controversial subject. Scientists disagree about its reality and about its influence on the sizing of treatment works. These disagreements result mainly from the unclear definition of the phenomenon. The objective of this article is first to provide a simple and clear definition of the first flush, and then to apply it to real data to obtain results on its frequency of occurrence. The data originate from the French database on the quality of urban wet weather discharges. We use 80 events from 7 separately sewered basins and 117 events from 7 combined sewered basins. The main result is that the first flush phenomenon is very scarce, in any case too scarce to serve as the basis for a treatment strategy against the pollution generated by urban wet weather discharges.
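One common way to make such a definition precise is the dimensionless mass-volume curve, e.g. declaring a first flush when at least 80% of the pollutant mass is carried by the first 30% of the runoff volume. The sketch below applies that criterion to made-up event data; the thresholds are illustrative and not necessarily the paper's own definition.

```python
import numpy as np

# Hypothetical within-event time series: flow rate q and concentration c.
q = np.array([0.2, 0.8, 1.5, 1.0, 0.6, 0.3, 0.1])     # m^3/s
c = np.array([300., 250., 120., 60., 40., 30., 20.])  # mg/L

load = q * c                         # pollutant mass flux per step
V = np.cumsum(q) / np.sum(q)         # cumulative volume fraction
M = np.cumsum(load) / np.sum(load)   # cumulative mass fraction

# "80/30" criterion: first flush if >= 80% of the mass is carried
# by the first 30% of the volume (thresholds are an illustrative choice).
mass_at_30pct = np.interp(0.30, V, M)
print(f"mass fraction at 30% volume: {mass_at_30pct:.2f}",
      "-> first flush" if mass_at_30pct >= 0.80 else "-> no first flush")
```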

