Study of Calculation of Terrain Correction Using the Square Pattern and Sloped Triangle Method in the Karangsambung Area

2020 ◽  
Vol 17 (2) ◽  
pp. 9
Author(s):  
Rafi Salam ◽  
Eko Januari Wahyudi ◽  
Susanti Alawiyah

Conventional assessments of terrain correction are carried out by laying transparent paper containing the Hammer chart over topographic maps and then estimating the elevation of each compartment. This procedure has disadvantages: the number of compartments is too small for areas with many topographic variations, and there is subjectivity from the observer in estimating compartment heights. This research aims to overcome these problems and obtain more accurate terrain correction values. In this research, terrain correction is estimated using the square pattern and sloped triangle method, which divides the area around the measurement point into zones containing square-shaped and triangular compartments. The research starts with testing the program on synthetic data to see the effect of rock bodies on the terrain correction value. The program was then applied to Karangsambung to see the influence of the surrounding topography on the terrain correction, and subsequently to the gravity data, with the results compared against calculations using the Hammer chart. Based on the synthetic data test, the terrain correction from a rock body measuring 10 x 10 km with a height difference of 1000 m from the station no longer has a significant effect beyond a distance of 20 km. The topography around Karangsambung in the form of the South Serayu Ranges, with an altitude of 1000 m at a distance of 20–30 km, contributes 0.05 mGal to the terrain correction, while the Quaternary volcano, with an altitude of 3000 m at a distance of 30–40 km, contributes 0.1 mGal. The results of applying the program to the gravity data show that the square pattern method is able to correct errors from the Hammer chart of up to 3 mGal. The difference between the two methods becomes larger at stations located on slopes, because estimating the height difference in a sloping area is more difficult.
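To make the baseline concrete, the sketch below evaluates the classical flat-topped Hammer compartment formula that the square pattern and sloped triangle results are compared against. It is a minimal illustration in Python, not the authors' program; the density, radii and compartment count are assumed values.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def hammer_compartment_correction(r_inner, r_outer, dz, density=2670.0, n_compartments=8):
    """Terrain correction (mGal) of one flat-topped Hammer compartment.

    r_inner, r_outer : inner/outer radii of the zone (m)
    dz               : height difference between station and mean compartment elevation (m)
    density          : assumed rock density (kg/m^3)
    n_compartments   : number of compartments in the ring
    """
    ring = (r_outer - r_inner
            + math.sqrt(r_inner**2 + dz**2)
            - math.sqrt(r_outer**2 + dz**2))
    g_si = 2.0 * math.pi * G * density * ring / n_compartments  # m/s^2
    return g_si * 1.0e5  # 1 mGal = 1e-5 m/s^2

# Example: a zone from 2 to 4 km with a 1000 m height difference
print(hammer_compartment_correction(2000.0, 4000.0, 1000.0))
```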

2021 ◽  
Vol 9 (5) ◽  
pp. 1153-1221
Author(s):  
Sean D. Willett ◽  
Frédéric Herman ◽  
Matthew Fox ◽  
Nadja Stalder ◽  
Todd A. Ehlers ◽  
...  

Abstract. Thermochronometry provides one of the few methods to quantify rock exhumation rate and history, including potential changes in exhumation rate. Thermochronometric ages can resolve rates, accelerations, and complex histories by exploiting different closure temperatures and path lengths using data distributed in elevation. We investigate how the resolution of an exhumation history is determined by the distribution of ages and their closure temperatures through an error analysis of the exhumation history problem. We define the sources of error in terms of resolution, model error and methodological bias for the inverse method used by Herman et al. (2013), which combines data with different closure temperatures and elevations. The error analysis provides a series of tests addressing the various types of bias, including the criticism that thermochronometric data tend to produce a false inference of faster erosion rates towards the present day because of a spatial correlation bias. Tests based on synthetic data demonstrate that the inverse method used by Herman et al. (2013) has no methodological or model bias towards increasing erosion rates. We do find significant resolution errors with sparse data, but these errors are not systematic, tending rather to leave inferred erosion rates at or near a Bayesian prior. To explain the difference in conclusions between our analysis and that of other work, we examine other approaches and find that previously published model tests contained an error in the geotherm calculation, resulting in an incorrect age prediction. Our reanalysis and interpretation show that the original results of Herman et al. (2013) are correctly calculated and presented, with no evidence for a systematic bias.
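The kind of inversion being tested can be pictured as a regularized least-squares recovery of erosion rates from ages with different closure temperatures. The sketch below is a toy version under an assumed constant geothermal gradient, assumed closure temperatures and an assumed prior; it is not the Herman et al. (2013) code.

```python
import numpy as np

def invert_exhumation(ages_myr, closure_temps_c, geotherm_c_per_km=25.0,
                      dt_myr=2.0, prior_km_per_myr=0.35, prior_var=0.25):
    """Regularized least-squares recovery of erosion rates in time bins.

    Each age/closure-temperature pair is converted to a closure depth
    (depth = Tc / geothermal gradient) that must equal the summed exhumation
    between the age and the present; rates are damped towards a prior.
    """
    n_bins = int(np.ceil(max(ages_myr) / dt_myr))
    A = np.zeros((len(ages_myr), n_bins))
    z = np.asarray(closure_temps_c) / geotherm_c_per_km  # closure depths, km
    for i, age in enumerate(ages_myr):
        for j in range(n_bins):
            t0, t1 = j * dt_myr, (j + 1) * dt_myr
            A[i, j] = max(0.0, min(age, t1) - t0)  # overlap of bin with [0, age]
    # damped least squares: (A^T A + (1/var) I) e = A^T z + (1/var) prior
    lam = 1.0 / prior_var
    lhs = A.T @ A + lam * np.eye(n_bins)
    rhs = A.T @ z + lam * prior_km_per_myr * np.ones(n_bins)
    return np.linalg.solve(lhs, rhs)  # km/Myr per time bin

# roughly apatite He (~70 C), apatite FT (~110 C), zircon He (~180 C) style ages
print(invert_exhumation([2.0, 5.0, 12.0], [70.0, 110.0, 180.0]))
```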


Author(s):  
Shibnath Mukherjee ◽  
Aryya Gangopadhyay ◽  
Zhiyuan Chen

While data mining has been widely acclaimed as a technology that can bring potential benefits to organizations, such efforts may be negatively impacted by the possibility of discovering sensitive patterns, particularly in patient data. In this article the authors present an approach to identify the optimal set of transactions that, if sanitized, would result in hiding sensitive patterns while reducing the accidental hiding of legitimate patterns and the damage done to the database as much as possible. Their methodology allows the user to adjust their preference on the weights assigned to benefits in terms of the number of restrictive patterns hidden, cost in terms of the number of legitimate patterns hidden, and damage to the database in terms of the difference between marginal frequencies of items for the original and sanitized databases. Most approaches to this problem found in the literature are heuristic based, without formal treatment of optimality. While integer linear programming (ILP) has previously been used in a few works as a formal optimization approach, the novelty of this method is its extremely low cost-complexity model in contrast to the others. The authors implemented their methodology in C and C++ and ran several experiments with synthetic data generated with the IBM synthetic data generator. The experiments show excellent results when compared to those in the literature.
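As a rough illustration of the transaction-selection problem (a greedy stand-in, not the authors' ILP formulation), the following Python sketch lowers the support of a sensitive itemset below a threshold while preferring modifications that hide the fewest legitimate patterns; the item-removal rule and the cost function are simplifying assumptions.

```python
def greedy_sanitize(transactions, sensitive, min_support, legit_patterns):
    """Greedy stand-in for the transaction-selection problem: remove one item
    of the sensitive itemset from chosen transactions until its support falls
    below min_support, preferring transactions whose modification hides the
    fewest legitimate patterns (a crude proxy for the paper's cost terms)."""
    db = [set(t) for t in transactions]
    sens = set(sensitive)

    def support(itemset):
        return sum(1 for t in db if itemset <= t)

    while support(sens) >= min_support:
        # transactions that still fully support the sensitive itemset
        candidates = [i for i, t in enumerate(db) if sens <= t]
        victim = min(candidates,
                     key=lambda i: sum(1 for p in legit_patterns if set(p) <= db[i]))
        db[victim].discard(next(iter(sens)))  # drop one sensitive item
    return [sorted(t) for t in db]

db = [["a", "b", "c"], ["a", "b"], ["a", "b", "d"], ["b", "c"]]
print(greedy_sanitize(db, sensitive=["a", "b"], min_support=2,
                      legit_patterns=[["b", "c"]]))
```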


2020 ◽  
Vol 50 (2) ◽  
pp. 223-247
Author(s):  
Jaime GARBANZO-LEÓN ◽  
Alonso VEGA FERNÁNDEZ ◽  
Mauricio VARELA SÁNCHEZ ◽  
Juan Picado SALVATIERRA ◽  
Robert W. KINGDON ◽  
...  

GNSS observations are a common solution for outdoor positioning around the world for coarse and precise applications. However, GNSS produces geodetic heights, which are not physically meaningful, limiting their functionality in many engineering applications. In Costa Rica, there is no regional model of the geoid, so geodetic heights (h) cannot be converted to physically meaningful orthometric heights (H). This paper describes the computation of a geoid model using the Stokes-Helmert approach developed by the University of New Brunswick. We combined available land, marine and satellite gravity data to accurately represent Earth's high-frequency gravity field over Costa Rica. We chose the GOCO05s satellite-only global geopotential model as a reference field for our computation. With this combination of input data, we computed the 2020 Regional Stokes-Helmert Costa Rican Geoid (GCR-RSH-2020). To validate this model, we compared it with four global combined geopotential models (GCGM): EGM2008, Eigen6C-4, GECO and SGG-UM-1, finding an average difference of 5 cm. GECO and SGG-UM-1 are more similar to GCR-RSH-2020 based on the statistics of the differences between models and the shape of the histogram of differences. The computed geoid also showed a shift of 7 cm when compared to the old Costa Rican height system but presented a slightly better fit with that system than the other models when looking at the residuals. In conclusion, GCR-RSH-2020 behaves consistently with the global models and the Costa Rican height systems. Also, its lowest variance suggests a more accurate determination when the bias is removed.
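At the core of the Stokes-Helmert approach is Stokes's integral. The sketch below gives its classical kernel only, as a point of reference; Helmert condensation, kernel modification and the actual gridded integration over Costa Rica are omitted, so this is not the GCR-RSH-2020 computation itself.

```python
import numpy as np

def stokes_kernel(psi):
    """Classical Stokes function S(psi) for spherical distance psi (radians).
    It is the kernel of Stokes's integral
        N = R / (4*pi*gamma) * integral( dg * S(psi) ) dsigma,
    which converts gravity anomalies dg into geoid undulations N.
    """
    s = np.sin(psi / 2.0)
    c = np.cos(psi)
    return (1.0 / s - 6.0 * s + 1.0 - 5.0 * c
            - 3.0 * c * np.log(s + s * s))

# kernel values at a few spherical distances (degrees)
for deg in (0.5, 1.0, 5.0, 20.0):
    print(deg, stokes_kernel(np.radians(deg)))
```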


2021 ◽  
pp. 004051752110505
Author(s):  
Hao Yu ◽  
Christopher Hurren ◽  
Xin Liu ◽  
Xungai Wang

Softness is one of the key elements of textile comfort and is one of the main considerations when consumers make purchasing decisions. In the wool industry, softness can reflect the quality and value of wool fibers. There is a verifiable difference in subjective softness between Australian Soft Rolling Skin (SRS) wool and conventional Merino (CM) wool, yet the key factors responsible for this difference are not yet well understood. Fiber attributes, such as crimp (curvature), scale morphology, ortho-to-cortex (OtC) ratio and moisture regain, may have a significant influence on softness performance. This study examined these key factors for both SRS and CM wool and systematically compared the differences in these factors. There was no significant difference in crimp frequency between the two wools; however, the curvature of SRS wool was lower than that of CM wool within the same fiber diameter ranges (below 14.5 micron, 16.5–18.5 micron). This difference might be caused by the lower OtC ratio of SRS wool (approximately 0.60) compared with CM wool (approximately 0.66). The crystallinity of the two wools was similar and not affected by the change in OtC ratio. SRS wool has a higher moisture regain than CM wool by approximately 2.5%, which could reduce the stiffness of wool fibers. The surface morphology of SRS wool was also different from that of CM wool: the lower cuticle scale height of SRS wool resulted in a smoother surface than that of CM wool. This cuticle height difference was present even though both wools had similar cuticle scale frequencies.


2018 ◽  
Vol 232 ◽  
pp. 04085
Author(s):  
Pengfei Han ◽  
Tenggang Xu ◽  
Jianjun Zhu

This paper presents the design of a PLC-based control system for a device that detects the stator and rotor height difference of air conditioning compressor motors, covering both the hardware system and the control system. An XTG105 grating micrometer sensor is used to detect the height difference between the stator and rotor. A MITSUBISHI FX3U PLC controller is adopted as the system control core. System parameters are set and operation is monitored through a WEINVIEW TK6070ip color touch screen to ensure normal operation of the system. The system controls the height difference detection device to automatically complete the detection, determination, marking and other processes. The working state is stable and the detection precision is high.
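The determination step can be pictured as a simple tolerance check on the sensor readings. The sketch below is illustrative Python logic only (the real device runs ladder/structured-text logic on the FX3U PLC); the tolerance value and the averaging of repeated readings are assumptions.

```python
def judge_height_difference(readings_mm, tolerance_mm=0.05):
    """Illustrative determination step: average repeated grating-micrometer
    readings of the stator/rotor height difference and compare against an
    assumed tolerance. The returned flag would drive the marking actuator."""
    measured = sum(readings_mm) / len(readings_mm)
    return abs(measured) <= tolerance_mm

if judge_height_difference([0.031, 0.029, 0.030]):
    print("PASS: within tolerance, skip marking")
else:
    print("FAIL: out of tolerance, trigger marking output")
```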


2016 ◽  
Vol 2016 ◽  
pp. 1-17 ◽  
Author(s):  
Ramzi Idoughi ◽  
Thomas H. G. Vidal ◽  
Pierre-Yves Foucher ◽  
Marc-André Gagnon ◽  
Xavier Briottet

Hyperspectral imaging in the long-wave infrared (LWIR) is a means that is proving its worth in the characterization of gaseous effluents. Indeed, the spectral and spatial resolution of acquisition instruments is steadily becoming finer, making gas characterization in the LWIR domain increasingly easy. Most algorithms in the literature exploit the plume contribution to the radiance, that is, the difference in radiance between plume-present and plume-absent pixels. Nevertheless, the off-plume radiance is unobservable using a single image. In this paper, we propose a new method to retrieve trace gas concentrations from airborne infrared hyperspectral data. In particular, the proposed method improves the existing background radiance estimation approach to deal with heterogeneous scenes such as industrial scenes. It consists of classifying the scene and then applying a principal component analysis (PCA) based method to estimate the background radiance in each cluster resulting from the classification. In order to determine the contribution of the classification to the background radiance estimation, we compared the two approaches on synthetic data and on a Telops Fourier Transform Spectrometer (FTS) Imaging Hyper-Cam LW airborne acquisition above an ethylene release. We finally show the retrieved ethylene concentration map and estimate the flow rate of the ethylene release.
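A minimal sketch of the per-cluster background estimation idea, assuming k-means for the classification step and a small number of retained principal components per cluster (both are stand-in choices, not the paper's exact settings):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def estimate_background(cube, n_clusters=5, n_components=3):
    """Illustrative per-cluster PCA background estimate for a hyperspectral
    cube of shape (rows, cols, bands); cluster count and number of retained
    components are assumed values."""
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(pixels)
    background = np.empty_like(pixels)
    for k in range(n_clusters):
        members = labels == k
        pca = PCA(n_components=min(n_components, int(members.sum())))
        scores = pca.fit_transform(pixels[members])
        # reconstruction from the leading components approximates the
        # plume-free (background) radiance of that cluster
        background[members] = pca.inverse_transform(scores)
    return background.reshape(rows, cols, bands)

# tiny random cube just to show the call signature
cube = np.random.rand(20, 20, 30)
print(estimate_background(cube).shape)
```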


Geophysics ◽  
2017 ◽  
Vol 82 (1) ◽  
pp. G1-G21 ◽  
Author(s):  
William J. Titus ◽  
Sarah J. Titus ◽  
Joshua R. Davis

We apply a Bayesian Markov chain Monte Carlo formalism to the gravity inversion of a single localized 2D subsurface object. The object is modeled as a polygon described by five parameters: the number of vertices, a density contrast, a shape-limiting factor, and the width and depth of an encompassing container. We first constrain these parameters with an interactive forward model and explicit geologic information. Then, we generate an approximate probability distribution of polygons for a given set of parameter values. From these, we determine statistical distributions such as the variance between the observed and model fields, the area, the center of area, and the occupancy probability (the probability that a spatial point lies within the subsurface object). We introduce replica exchange to mitigate trapping in local optima and to compute model probabilities and their uncertainties. We apply our techniques to synthetic data sets and a natural data set collected across the Rio Grande Gorge Bridge in New Mexico. On the basis of our examples, we find that the occupancy probability is useful in visualizing the results, giving a “hazy” cross section of the object. We also find that the role of the container is important in making predictions about the subsurface object.
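The replica-exchange (parallel tempering) move can be sketched as follows; the misfit function here is a one-parameter placeholder rather than the polygon gravity forward model, and the temperature ladder and step size are assumed choices.

```python
import math
import random

def misfit(model):
    """Placeholder energy: misfit of a toy one-parameter model. In the paper
    this would be the gravity misfit of a candidate polygon."""
    return (model - 3.0) ** 2

def replica_exchange(n_steps=5000, betas=(1.0, 0.5, 0.25, 0.1), step=0.5):
    """Parallel tempering over a ladder of inverse temperatures (betas)."""
    states = [random.uniform(-10, 10) for _ in betas]
    for _ in range(n_steps):
        # within-chain Metropolis updates
        for i, beta in enumerate(betas):
            proposal = states[i] + random.gauss(0.0, step)
            d_e = misfit(proposal) - misfit(states[i])
            if d_e < 0 or random.random() < math.exp(-beta * d_e):
                states[i] = proposal
        # attempt one swap between a random pair of adjacent temperatures
        i = random.randrange(len(betas) - 1)
        d = (betas[i] - betas[i + 1]) * (misfit(states[i]) - misfit(states[i + 1]))
        if d >= 0 or random.random() < math.exp(d):
            states[i], states[i + 1] = states[i + 1], states[i]
    return states[0]  # state of the target (beta = 1) chain

print(replica_exchange())
```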


Geophysics ◽  
2020 ◽  
Vol 85 (6) ◽  
pp. G129-G141
Author(s):  
Diego Takahashi ◽  
Vanderlei C. Oliveira Jr. ◽  
Valéria C. F. Barbosa

We have developed an efficient and very fast equivalent-layer technique for gravity data processing by modifying an iterative method grounded on an excess mass constraint that does not require the solution of linear systems. Taking advantage of the symmetric block-Toeplitz Toeplitz-block (BTTB) structure of the sensitivity matrix that arises when regular grids of observation points and equivalent sources (point masses) are used to set up a fictitious equivalent layer, we develop an algorithm that greatly reduces the computational complexity and RAM necessary to estimate a 2D mass distribution over the equivalent layer. The symmetric BTTB matrix is fully defined by the elements of the first column of the sensitivity matrix, which, in turn, can be embedded into a symmetric block-circulant with circulant-block (BCCB) matrix. Likewise, only the first column of the BCCB matrix is needed to reconstruct the full sensitivity matrix completely. From the first column of the BCCB matrix, its eigenvalues can be calculated using the 2D fast Fourier transform (2D FFT), which can be used to readily compute the matrix-vector product of the forward modeling in the fast equivalent-layer technique. As a result, our method is efficient for processing very large data sets. Tests with synthetic data demonstrate the ability of our method to satisfactorily upward- and downward-continue gravity data. Our results show very small border effects and noise amplification compared to those produced by the classic approach in the Fourier domain. In addition, they show that, whereas the running time of our method is [Formula: see text] s for processing [Formula: see text] observations, the fast equivalent-layer technique used [Formula: see text] s with [Formula: see text]. A test with field data from the Carajás Province, Brazil, illustrates the low computational cost of our method to process a large data set composed of [Formula: see text] observations.
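Because the point-mass kernel depends only on coordinate differences on a regular grid, the sensitivity matrix is BTTB and its product with a gridded source vector reduces to a 2D convolution that can be evaluated with zero-padded FFTs. The sketch below illustrates that matrix-vector product for a simple vertical-attraction kernel; it is an assumed stand-in, not the authors' full iterative excess-mass algorithm.

```python
import numpy as np

G = 6.674e-11  # m^3 kg^-1 s^-2

def fast_forward_gravity(masses, dx, dy, height):
    """FFT-based forward modelling of an equivalent layer of point masses.

    masses : (n, m) grid of point-mass values (kg) on a regular grid
    dx, dy : grid spacings (m); height : observation height above the layer (m)

    The kernel depends only on coordinate differences, so the sensitivity
    matrix is BTTB and the matrix-vector product is a 2D convolution,
    evaluated here with zero-padded FFTs (the BCCB embedding).
    """
    n, m = masses.shape
    # kernel of vertical attraction for offsets spanning +/- the grid extent
    iy = (np.arange(2 * n) - n) * dy
    ix = (np.arange(2 * m) - m) * dx
    DY, DX = np.meshgrid(iy, ix, indexing="ij")
    kernel = G * height / (DX**2 + DY**2 + height**2) ** 1.5

    pad = np.zeros_like(kernel)
    pad[:n, :m] = masses
    full = np.real(np.fft.ifft2(np.fft.fft2(kernel) * np.fft.fft2(pad)))
    return full[n:, m:]  # gravity effect (m/s^2) at the n x m observation grid

masses = np.full((50, 60), 1.0e9)  # million-tonne point masses
data = fast_forward_gravity(masses, dx=100.0, dy=100.0, height=300.0)
print(data.shape, data.max() * 1e5, "mGal peak")
```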


The theory of the application of gravity measurements to geodetic calculations is discussed, and the errors involved in calculating deflexions of the vertical are estimated. If the gravity data are given as free-air anomalies from Jeffreys's (1948) formula, so that the second and third harmonics of gravity are assumed known, the orders of magnitude of the standard deviations of the different sources of error are the following:

Single deflexion, neglect of gravity outside 20°: 1"
Difference of deflexions, neglect of gravity outside 5°: 0"·5
Calculation of effects of gravity from 0°·05 to 5°: 0"·1
Calculation of effects of gravity within 0°·05: between 0"·1 and 0"·5

Estimates of the deflexions are made for Greenwich, Herstmonceux, Southampton and Bayeux, and the difference between Greenwich and Southampton is compared with the astronomical and geodetic amplitudes.
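For context, the deflexion components are obtained from gravity anomalies through the classical Vening Meinesz integrals (stated here for reference; the abstract itself does not give them):

$$
\xi = \frac{1}{4\pi\gamma}\iint_\sigma \Delta g\,\frac{\mathrm{d}S(\psi)}{\mathrm{d}\psi}\,\cos\alpha\,\mathrm{d}\sigma,
\qquad
\eta = \frac{1}{4\pi\gamma}\iint_\sigma \Delta g\,\frac{\mathrm{d}S(\psi)}{\mathrm{d}\psi}\,\sin\alpha\,\mathrm{d}\sigma,
$$

where $\gamma$ is normal gravity, $S(\psi)$ is the Stokes function, $\psi$ is the spherical distance and $\alpha$ is the azimuth from the computation point.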


Geophysics ◽  
2007 ◽  
Vol 72 (2) ◽  
pp. I13-I22 ◽  
Author(s):  
Fernando J. Silva Dias ◽  
Valeria C. Barbosa ◽  
João B. Silva

We present a new semiautomatic gravity interpretation method for estimating a complex interface between two media containing density heterogeneities (referred to as interfering sources) that give rise to a complex and interfering gravity field. The method combines a robust fitting procedure and the constraint that the interface is very smooth near the interfering sources, whose approximate horizontal coordinates are defined by the user. The proposed method differs from the regional-residual separation techniques by using no spectral content assumption about the anomaly produced by the interface to be estimated, i.e., the interface can produce a gravity response containing both low- and high-wavenumber features. As a result, it may be applied to map the relief of a complex interface in a geologic setting containing either shallow or deep-seated interfering sources. Tests conducted with synthetic data show that the method can be of utility in estimating the basement relief of a sedimentary basin in the presence of salt layers and domes or in the presence of mafic intrusions in the basement or in both basement and the sedimentary section. The method was applied to real gravity data from two geologic settings having different kinds of interfering sources and interfaces to be interpreted: (1) the interface between the upper and lower crusts over the Bavali shear zone of southern India and (2) the anorthosite-tonalite interface over the East Bull Lake gabbro-anorthosite complex outcrop in Ontario, Canada.
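The two ingredients named above (a robust fitting procedure and smoothness enforced locally near the interfering sources) can be sketched on a simple profile. The sketch uses Huber-weighted IRLS with a local Bouguer-slab sensitivity as a stand-in for the true prism forward model; every numerical value in it is an assumption, and it is not the authors' algorithm.

```python
import numpy as np

def robust_interface_fit(g_mgal, x, source_x, k=0.0168, c=0.1,
                         mu_bg=1e-6, mu_src=1e-2, influence=2000.0, n_iter=15):
    """Illustrative robust (IRLS) interface estimation along a profile.

    k      : assumed Bouguer-slab sensitivity (mGal per metre of relief)
    c      : Huber threshold (mGal); mu_bg / mu_src : smoothing weights,
             the latter applied near the user-picked interfering sources.
    """
    g = np.asarray(g_mgal, dtype=float)
    n = g.size
    D = np.diff(np.eye(n), axis=0)                       # roughness operator
    dist = np.min(np.abs(x[:-1, None] - np.asarray(source_x)[None, :]), axis=1)
    mu = np.where(dist < influence, mu_src, mu_bg)       # strong smoothing near sources
    w = np.ones(n)
    z = np.zeros(n)
    for _ in range(n_iter):
        lhs = (k * k) * np.diag(w) + D.T @ (mu[:, None] * D)
        rhs = k * (w * g)
        z = np.linalg.solve(lhs, rhs)
        r = g - k * z
        w = np.where(np.abs(r) <= c, 1.0, c / np.maximum(np.abs(r), 1e-12))  # Huber weights
    return z                                             # interface relief (m)

x = np.linspace(0.0, 20000.0, 81)
g = 2.0 * np.exp(-((x - 10000.0) / 3000.0) ** 2)         # toy basin anomaly (mGal)
g[16] += 1.5                                             # spike from an interfering body
relief = robust_interface_fit(g, x, source_x=[x[16]])
print(relief.max(), relief[16])
```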

