Assessment of Tscherning-Rapp covariance in Earth gravity modeling using gravity gradient and GPS/leveling observations

Author(s):  
Hadi Heydarizadeh Shali
Sabah Ramouz
Abdolreza Safari
Riccardo Barzaghi

Determination of the Earth's gravity field with high accuracy requires different complementary data, as well as methods to combine these data in an optimized procedure. Recently developed resources such as GPS, GRACE, and GOCE provide various data with different distributions, which makes it possible to reach this aim. Least Squares Collocation (LSC) is one of the methods that combine different data types via a covariance function, which correlates the different parameters involved in the procedure. One way to construct such covariance functions involves two steps within the remove-compute-restore (RCR) procedure: first, calculation of an empirical covariance function from observations from which the gravitational effects of the global gravity field (long wavelengths) and of topography/bathymetry have been subtracted, and then fitting the Tscherning-Rapp analytical covariance model to the empirical one. According to the corresponding studies, the accuracy of LSC is directly related to the ability to localize the covariance function, which itself depends on the data distribution. In this study, we have analyzed the data distribution and the geometrical fitting factors for GPS/leveling and GOCE gradient data by considering various case studies with different data distributions. To make the assessment of the covariance determination possible, the residual observations were divided into two datasets, namely observations and control points. The observation points served as input data within the LSC procedure using the Tscherning-Rapp covariance model, and the control points were used to evaluate the accuracy of the LSC prediction of the gravity gradient, gravity anomaly, and geoid, and hence of the covariance estimation. The results of this study show that the Tscherning-Rapp (1974) covariance model performs differently for different quantities. For example, it models the empirical covariance of the gravity gradient accurately enough, but requires further analysis for gravity anomalies and GPS/leveling quantities to reach optimized results in terms of the standard deviation of the differences between the computed values and the control points.
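The two-step covariance construction described in this abstract — an empirical covariance estimated from residual observations, then an analytical model fitted to it — can be sketched as follows. For brevity the sketch uses a planar isotropic exponential model C(d) = C0·exp(−d/L) as a stand-in for the full Tscherning-Rapp degree-variance model; the distance binning and parameter names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def empirical_covariance(coords, residuals, bin_width):
    """Step 1: average products of residual pairs in distance bins."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    prod = residuals[:, None] * residuals[None, :]
    iu = np.triu_indices(len(residuals))          # each pair once, incl. C(0)
    bins = (d[iu] / bin_width).astype(int)
    counts = np.bincount(bins)
    with np.errstate(invalid="ignore"):           # empty bins become NaN
        cov = np.bincount(bins, weights=prod[iu]) / counts
    centers = (np.arange(len(cov)) + 0.5) * bin_width
    return centers, cov

def fit_exponential(dist, emp_cov):
    """Step 2: fit C(d) = C0*exp(-d/L) by log-linear least squares."""
    mask = np.isfinite(emp_cov) & (emp_cov > 0)
    slope, intercept = np.polyfit(dist[mask], np.log(emp_cov[mask]), 1)
    return np.exp(intercept), -1.0 / slope        # C0 (variance), length L
```

The Tscherning-Rapp model would replace `fit_exponential` with a fit of its degree-variance parameters, but the empirical step and the fitting logic keep the same shape.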

2020
Vol 50
pp. 65-75
Author(s):  
Sabah Ramouz
Yosra Afrasteh
Mirko Reguzzoni
Abdolreza Safari

Abstract. Covariance determination, the heart of Least Squares Collocation (LSC) gravity field modeling, is based on fitting an analytical covariance to the empirical covariance, which stems from gravimetric data. The main objective of this study is to test different local covariance strategies over four regions in Iran with different topography and spatial data distribution. For this purpose, Least Squares Collocation based on the Remove-Compute-Restore technique is implemented. In the Remove step, gravity reduction is more effective in regions with a denser distribution and a rougher topography. In the Compute step, the assessment of the Collocation estimates on the gravity anomaly control points illustrates that data density is more relevant than topography roughness for a good covariance determination. Moreover, among the different attempts at localizing the covariance estimation, a recursive approach that corrects the covariance parameters based on the agreement between the Least Squares Collocation estimates and the control points shows the best performance. Furthermore, covariance localization in a region with sparse or badly distributed observations is a challenging task and may not necessarily improve the Collocation gravity modeling. Indeed, the geometrical fit between the empirical and analytical covariances, which is usually taken as a qualitative check of the precision of the covariance determination, is not always an adequate criterion.
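The Compute step assessed above can be sketched as the standard collocation estimate ŝ = C_sp (C_pp + D)⁻¹ ℓ, evaluated at control points and compared against their observed values. The exponential covariance and all numbers below are illustrative assumptions, not the covariance model actually fitted in the study.

```python
import numpy as np

def cov(a, b, c0=1.0, L=10.0):
    """Illustrative isotropic exponential covariance between two point sets."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return c0 * np.exp(-d / L)

def lsc_predict(obs_xy, obs_val, pred_xy, noise_var=0.0):
    """Collocation estimate s_hat = C_sp (C_pp + D)^-1 l."""
    Cpp = cov(obs_xy, obs_xy) + noise_var * np.eye(len(obs_val))
    Csp = cov(pred_xy, obs_xy)
    return Csp @ np.linalg.solve(Cpp, obs_val)
```

Predicting at held-out control points and taking the standard deviation of the differences from their observed values gives the kind of assessment criterion the abstract refers to; in the noiseless case the estimate reproduces the observations exactly.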


Author(s):  
Roman Flury
Reinhard Furrer

Abstract. We discuss the experiences and results of the AppStatUZH team's participation in the comprehensive and unbiased comparison of different spatial approximations conducted in the Competition for Spatial Statistics for Large Datasets. In each sub-competition, we estimated the parameters of the covariance model from a likelihood function and predicted missing observations with simple kriging. We approximated the covariance model either with covariance tapering or with a compactly supported Wendland covariance function.
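The two approximations named in this abstract can be sketched in a few lines: a compactly supported Wendland function, and covariance tapering, i.e. the elementwise product of the full covariance matrix with a Wendland taper, which zeroes out long-range entries and makes the matrix sparse. The specific Wendland member (ψ₃,₁) and the taper range used below are illustrative assumptions.

```python
import numpy as np

def wendland(d, theta):
    """Wendland psi_{3,1} function: (1-r)_+^4 (4r+1), zero beyond range theta."""
    r = np.asarray(d, dtype=float) / theta
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

def taper(full_cov, d, theta):
    """Covariance tapering: elementwise (Schur) product of the full covariance
    with a compactly supported taper; the result stays positive semi-definite."""
    return full_cov * wendland(d, theta)
```

Beyond the taper range every entry is exactly zero, so sparse linear algebra can be used for the kriging solve — the point of tapering for large datasets.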


2021
Author(s):  
Dominik Hirling
Peter Horvath

Cell segmentation is a fundamental problem in biology, for which convolutional neural networks currently yield the best results. In this paper, we present HarmonicNet, a modification of the popular StarDist and SplineDist architectures. While StarDist and SplineDist describe an object by the lengths of equiangular rays and by control points, respectively, our network uses Fourier descriptors, predicting for every pixel of the image a coefficient vector that implicitly defines the resulting segmentation. We evaluate our model on three different datasets and show that Fourier descriptors can achieve a high level of accuracy with a small number of coefficients. HarmonicNet is also capable of accurately segmenting objects that are not star-shaped, a case in which StarDist performs suboptimally according to our experiments.
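The shape representation underlying this abstract — Fourier descriptors of an object boundary — can be sketched independently of the network itself: treat the boundary points as complex numbers, take the FFT, and keep only the lowest frequencies. How many coefficients to keep, and the symmetric low-pass truncation used here, are illustrative choices, not HarmonicNet's prediction head.

```python
import numpy as np

def fourier_descriptors(contour, n_coef):
    """Keep the n_coef lowest positive and negative frequencies of a boundary."""
    z = contour[:, 0] + 1j * contour[:, 1]   # boundary points as complex numbers
    F = np.fft.fft(z)
    kept = np.zeros_like(F)
    kept[:n_coef] = F[:n_coef]               # DC term + low positive frequencies
    kept[-n_coef:] = F[-n_coef:]             # matching negative frequencies
    return kept

def reconstruct(descriptors):
    """Inverse FFT of the truncated spectrum gives the smoothed contour."""
    z = np.fft.ifft(descriptors)
    return np.stack([z.real, z.imag], axis=1)
```

Smooth shapes concentrate their spectrum at low frequencies, which is why a short coefficient vector suffices; a circle, for instance, is reconstructed essentially exactly from a handful of descriptors.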


1988
Vol 51 (5)
pp. 373-383
Author(s):  
Frank L. Bryan
Silvia C. Michanie
Persia Alvarez
Aurelio Paniagua

Hazard analyses were conducted at four street-vending stands in the Dominican Republic. Temperatures of foods were measured during cooking, display (holding), and reheating (when done). Samples were taken at each step of the operation and at 5- to 6-h intervals during display. Foods usually attained temperatures exceeding 90°C at the geometric center during cooking and reheating. At three of the stands, foods (e.g., fish, chickens, pork pieces) were fried and held until sold. Leftovers were held overnight at ambient temperatures in the home of the vendor or in a locked compartment of the stand. They were usually reheated early in the morning and displayed until sold. During the holding interval, aerobic mesophilic counts progressively increased with time, from about 10³/g after cooking to between 10⁵ and 10⁹/g. The higher counts were usually associated with holding overnight. Escherichia coli (in water, milk, and cheese samples), Bacillus cereus (in bean and rice samples), and Clostridium perfringens (in meat, chicken, and bean samples) were isolated, but usually in numbers below 10³/g. At the other stand, foods (e.g., beans, rice, meat, and chicken) were cooked just before serving as complete meals. There were no leftovers. This operation was less hazardous, although there were many sanitary deficiencies. Recommendations for prevention and control of microbial hazards (mainly reducing holding time, periodic reheating, and requesting reheating just before purchasing) are given. The need and suggestions for implementing educational activities to alert and inform those concerned about hazards and preventive measures are presented.


Author(s):  
U Sezgin
L D Seneviratne
S W E Earles

Two obstacle avoidance criteria are developed that utilize the kinematic redundancy of serial redundant manipulators with revolute joints tracking predetermined end-effector paths. The first criterion is based on the instantaneous distances between selected points along the manipulator, called configuration control points (CCPs), and the vertices of the obstacles. The optimized joint configurations are obtained by maximizing these distances, so that the links of the manipulator are configured away from the obstacles. The second criterion takes a different approach and is based on Voronoi boundaries, which represent the equidistant paths between two obstacles. The optimized joint configurations are obtained by minimizing the distances between the CCPs and control points selected on the Voronoi boundaries. The validity of both criteria is demonstrated through computer simulations.
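The first (CCP-distance) criterion can be illustrated on a planar two-link arm, where the configuration choice reduces to elbow-up versus elbow-down for a given end-effector position. The link lengths, the placement of the CCPs (elbow and tip), and the obstacle below are illustrative assumptions; the paper's manipulators and optimization are more general.

```python
import numpy as np

def two_link_ik(x, y, l1=1.0, l2=1.0):
    """Both inverse-kinematics solutions (elbow-up / elbow-down)."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    sols = []
    for s in (+1.0, -1.0):
        t2 = s * np.arccos(c2)
        t1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(t2),
                                           l1 + l2 * np.cos(t2))
        sols.append((t1, t2))
    return sols

def clearance(thetas, vertices, l1=1.0, l2=1.0):
    """Min distance from the CCPs (here: elbow and tip) to obstacle vertices."""
    t1, t2 = thetas
    elbow = np.array([l1 * np.cos(t1), l1 * np.sin(t1)])
    tip = elbow + np.array([l2 * np.cos(t1 + t2), l2 * np.sin(t1 + t2)])
    return min(np.linalg.norm(p - v) for p in (elbow, tip) for v in vertices)
```

Choosing the configuration that maximizes `clearance` implements the first criterion for this toy case: with an obstacle below the arm, the elbow-up solution is selected.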


1999
Vol 82 (4)
pp. 883-892
Author(s):  
Eleni Ioannou-Kakouri
Maria Aletrari
Eftychia Christou
Artemisia Hadjioannou-Ralli
Athena Koliou
...

Abstract. Aflatoxins (AFs) B1, B2, G1, and G2 in locally produced and imported foodstuffs (nuts, cereals, oily seeds, pulses, etc.) were monitored and controlled systematically and effectively from 1992 to 1996. The proportion of samples (peanuts, pistachios, etc.) with total AFs above the Cyprus maximum level (ML) of 10 μg/kg fluctuated between 0.7 and 6.9%. The results indicate the effectiveness of monitoring, as well as the need for constant surveillance and control, especially at critical control points (sites of import, primary storage, etc.), to prevent unfit products from entering the Cyprus market. The control included sampling, retainment, analysis, and destruction of foodstuff lots with AF levels above the MLs. The highest incidence of aflatoxin contamination was observed in peanut butter (56.7%), and the highest level of AF B1 was found in peanuts (700 μg/kg). Levels of AF M1 in raw and pasteurized milk analyzed in 1993, 1995, and 1996 were within both the Cyprus ML (0.5 μg/L) and the lower ML (0.05 μg/L) of some European countries. Only 12% of samples had detectable levels of AF M1. Analyses were performed by immunochemical methods. When recoveries were lower than 80%, the AF levels were corrected for recovery.
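The recovery correction mentioned in the last sentence is simple arithmetic. The convention sketched below — dividing the measured level by the fractional recovery only when recovery falls below 80% — is an assumed reading of that sentence, not a formula stated in the paper.

```python
def recovery_corrected(measured_ug_per_kg, recovery, threshold=0.80):
    """Correct a measured aflatoxin level for analytical recovery
    (assumed convention: divide by the fractional recovery only when
    it falls below the threshold; otherwise report the level as-is)."""
    if not 0.0 < recovery <= 1.0:
        raise ValueError("recovery must be a fraction in (0, 1]")
    if recovery < threshold:
        return measured_ug_per_kg / recovery
    return measured_ug_per_kg
```

For example, a measurement of 10 μg/kg at 50% recovery would be reported as 20 μg/kg under this convention, while the same measurement at 90% recovery would be left unchanged.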

