Studi Akurasi Pengukuran GNSS Jaring Makro Tahun 2016 dan 2017 pada Pemantauan Bendungan Sermo (Study of the Accuracy of the 2016 and 2017 Macro-Network GNSS Measurements in Sermo Dam Monitoring)

2017 ◽  
Vol 1 (1) ◽  
pp. 50 ◽  
Author(s):  
Muhammad Iqbal Taftazani ◽  
Yulaikhah Yulaikhah

Deformation monitoring of Sermo Dam has been widely practiced. One approach is to install monitoring points both inside the dam area, forming the micro network, and outside the dam, forming the macro network. The micro-network monitoring points are intended to monitor deformation of the dam caused by the reservoir water volume, while the macro-network points monitor the effect of an active fault beneath the dam. In recent years, monitoring of Sermo Dam has been carried out using GNSS technology. This paper presents the accuracy, represented by the standard deviation values of the macro-network measurement points, of the GNSS observations in 2016 and 2017. The objective is to compare the accuracy obtained from different GNSS processing strategies in the 2016 and 2017 observations; this evaluation of the GNSS measurements can serve as guidance for subsequent GNSS campaigns. The results show that the 2017 GNSS measurement processed with two IGS reference points (BAKO and COCO) has better standard deviation values than the 2016 measurement, with differences of 1-5 mm on the X axis, 1-9 mm on the Y axis, and 1-2 mm on the Z axis. When processed with seven IGS reference points (BAKO, COCO, KARR, DARW, GUUG, PIMO, SHAO), the 2016 measurement mostly has better standard deviations than the 2017 measurement, except at MAK5, with differences of 0-4 mm on the X axis, 1-10 mm on the Y axis, and 0-2 mm on the Z axis. The coordinates obtained in 2016 and 2017 under the two processing strategies differ, indicating movement of the macro-network monitoring points. However, the movement vectors obtained from the two strategies point in different directions. This calls for further in-depth research focused on the deformation of the Sermo Dam monitoring points.
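To make the movement check concrete, here is a minimal sketch of the epoch-to-epoch displacement significance test that such a comparison implies; the coordinates, standard deviations, and the 95% scaling factor are illustrative assumptions, not values from the study.

```python
import numpy as np

# Hypothetical coordinates (m) and standard deviations (m) for one
# macro-network point, one per campaign; values are illustrative only.
xyz_2016 = np.array([-2192345.1234, 5941234.5678, -861234.9876])
sd_2016 = np.array([0.004, 0.006, 0.002])
xyz_2017 = np.array([-2192345.1190, 5941234.5745, -861234.9861])
sd_2017 = np.array([0.003, 0.004, 0.002])

# Displacement vector between campaigns and its propagated uncertainty.
d = xyz_2017 - xyz_2016
sd_d = np.sqrt(sd_2016**2 + sd_2017**2)

# A displacement is flagged as significant when it exceeds its
# uncertainty scaled by a confidence factor (1.96 ~ 95% for normal errors).
significant = np.abs(d) > 1.96 * sd_d
print("displacement (mm):", 1000 * d)
print("significant per axis:", significant)
```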

Author(s):  
Rochmad Muryamto ◽  
Muhammad Iqbal Taftazani ◽  
Yulaikhah Yulaikhah ◽  
Bambang Kun Cahyono ◽  
Anindya Sricandra Prasidya

Since 1991, Prambanan Temple has been recognized by UNESCO as a cultural heritage site. The temple stands on an unstable, sandy soil structure not far from the Opak River. The geological map of Yogyakarta shows a fault beneath the Opak River, which caused an earthquake in 2006. Because of its position in a disaster-prone area, regular monitoring of the geometric aspects of Prambanan Temple is necessary. This research aims to establish deformation monitoring control points at Prambanan Temple. Eight control points, consisting of three existing points and five new points, were established around the temple. These eight points were then observed with GNSS for 24 hours to define their coordinates. GNSS data processing was done using GAMIT 10.70 software with two strategies: (1) processing with regional tie points, in this case the IGS BAKO and JOG2 stations, and (2) processing with global tie points, using the IGS stations COCO, DARW, KARR, POHN, PIMO, DGAR, and IISC as references. This research yields the established Prambanan Temple deformation control points, together with their coordinates and standard deviations under the two processing strategies. In the first strategy, the smallest standard deviation is 0.0787 m on the Z axis at points PR01 and PR03, and the largest is 0.1218 m on the Y axis at point PR02. In the second strategy, the smallest standard deviation is 0.0036 m on the Z axis at points PR01 and PR03, and the largest is 0.0141 m on the Y axis at point PR02.
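As an illustration of how the two strategies' precisions might be compared point by point, a small sketch follows; the point names echo the abstract, but the sigma values are invented for the example.

```python
import numpy as np

# Hypothetical per-point standard deviations (m) on the X, Y, Z axes
# for the two tie-point strategies; values are illustrative, not the
# study's actual results.
points = ["PR01", "PR02", "PR03"]
sd_regional = np.array([[0.09, 0.11, 0.08],
                        [0.10, 0.12, 0.09],
                        [0.09, 0.11, 0.08]])
sd_global = np.array([[0.005, 0.012, 0.004],
                      [0.006, 0.014, 0.005],
                      [0.005, 0.011, 0.004]])

# Improvement factor of the global strategy over the regional one,
# per point and axis (>1 means the global strategy is more precise).
improvement = sd_regional / sd_global
for name, row in zip(points, improvement):
    print(name, "improvement X/Y/Z:", np.round(row, 1))
```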


2016 ◽  
Vol 22 (3) ◽  
pp. 437-452
Author(s):  
Paulo Cesar Lima Segantine ◽  
Irineu da Silva ◽  
Mélodie Kern Sarubo Dorth Domingues

The correct processing of GNSS measurements, as well as a correct interpretation of the results, are fundamental for assessing the quality of land surveying work. In that sense, it is important to keep in mind that although the statistical data provided by most commercial GNSS processing software describe the credibility of the work, they do not carry consistent information about the reliability of the processed coordinates. Based on that assumption, this paper proposes a table for classifying the reliability of baselines obtained through GNSS data processing. As input data, GNSS measurements were performed during the years 2006 and 2008, considering different seasons of the year, geometric configurations of the RBMC stations, and baseline lengths. As demonstrated in this paper, parameters such as baseline length, ambiguity solution, PDOP value, and the precision of the horizontal and vertical coordinate components can be used as reliability parameters. The proposed classification meets the requirements of the Brazilian Law Nº 10.267/2001 of the National Institute of Colonization and Agrarian Reform (INCRA).
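A toy sketch of a reliability classifier driven by the parameters the paper names follows; the thresholds and class labels are hypothetical, since the paper's actual classification table is not reproduced here.

```python
def classify_baseline(length_km, ambiguities_fixed, pdop,
                      sigma_h_m, sigma_v_m):
    """Toy reliability classifier for a processed GNSS baseline.

    The inputs mirror the parameters named in the paper (baseline length,
    ambiguity solution, PDOP, horizontal/vertical precision), but the
    thresholds and class labels are hypothetical, for illustration only.
    """
    if not ambiguities_fixed:
        return "low"          # float solutions are the least reliable
    if pdop <= 4 and sigma_h_m <= 0.05 and sigma_v_m <= 0.10:
        return "high" if length_km <= 20 else "medium"
    return "medium" if pdop <= 6 else "low"

print(classify_baseline(12.4, True, 2.1, 0.012, 0.031))  # -> "high"
```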


Author(s):  
Muhammad Arsyad Fauzi ◽  
Leni Sophia Heliani

The movement of the deformation monitoring points of the Sangihe Islands was studied using GNSS measurement methods. One of the factors that determines the accuracy of deformation monitoring is the data processing method used. Therefore, this research compares the movement of the Sangihe Islands deformation monitoring points derived from periodic and simultaneous GNSS data processing methods. Three GNSS observation epochs were used, i.e., 2014, 2015, and 2016. The observational data were processed using GAMIT/GLOBK software, tied to ITRF2014, to produce coordinates and their accuracies. From the coordinates and their accuracies, the movement velocities and their accuracies were calculated using the periodic and the simultaneous method. With the periodic method, the velocity of point SGH1 is -1.11 ± 2.72 mm/year on the N component, 9.21 ± 4.17 mm/year on the E component, and -15.02 ± 50.64 mm/year on the U component, while the velocity of point SGH3 is -4.93 ± 1.56 mm/year on the N component, 16.50 ± 2.47 mm/year on the E component, and -6.69 ± 19.42 mm/year on the U component. With the simultaneous method, the velocity of point SGH1 is -1.56 ± 1.25 mm/year on the N component, 9.40 ± 1.55 mm/year on the E component, and -11.54 ± 5.83 mm/year on the U component, while the velocity of point SGH3 is -5.18 ± 0.88 mm/year on the N component, 16.91 ± 1.10 mm/year on the E component, and -2.84 ± 3.49 mm/year on the U component. This research supports the hypothesis that simultaneous GNSS data processing yields higher accuracy than the periodic method.
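A minimal sketch of how a velocity and its accuracy can be estimated from a short coordinate time series by weighted least squares, as the per-epoch approach implies; the epochs, positions, and sigmas are illustrative, not the study's data.

```python
import numpy as np

# Hypothetical N-component time series (mm) for one monitoring point at
# three epochs, with formal sigmas (mm); values are illustrative only.
t = np.array([2014.5, 2015.5, 2016.5])       # epochs (decimal years)
y = np.array([0.0, -1.3, -2.1])              # positions (mm)
sigma = np.array([1.5, 1.2, 1.4])            # per-epoch sigmas (mm)

# Weighted least-squares fit of y = a + v*t; the velocity v is the slope.
A = np.column_stack([np.ones_like(t), t])
W = np.diag(1.0 / sigma**2)
N = A.T @ W @ A
x = np.linalg.solve(N, A.T @ W @ y)
cov = np.linalg.inv(N)                       # parameter covariance

v, sd_v = x[1], np.sqrt(cov[1, 1])
print(f"velocity: {v:.2f} ± {sd_v:.2f} mm/year")
```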


Author(s):  
Riccardo Barzaghi ◽  
Noemi Emanuela Cazzaniga ◽  
Carlo Iapige De Gaetani ◽  
Livio Pinto ◽  
Vincenza Tornatore

GNSS receivers are nowadays commonly used in monitoring applications, e.g., in estimating crustal and infrastructure deformations. This is largely due to recent improvements in GNSS instruments and methodologies, which allow high-precision positioning, 24 h availability, and semiautomatic data processing. In this paper, GNSS-estimated deformations of a dam structure are analyzed and compared with pendulum data. The study was carried out for the Eleonora D'Arborea (Cantoniera) dam, on the island of Sardinia. Time series of pendulum and GNSS data over a time span of 2.5 years were aligned so as to be comparable. Analytical models fitting these time series were estimated and compared. The models were able to fit both the pendulum and the GNSS data properly, with standard deviations of the residuals smaller than one millimeter. These encouraging results lead to the conclusion that the GNSS technique can be profitably applied to dam monitoring, allowing a denser description of the dam displacements, both in space and time, than one based on pendulum observations alone.
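A minimal sketch of fitting an analytical model to a displacement series and checking the residual standard deviation; the series is synthetic, and the model form (offset, linear trend, annual sinusoid) is an assumption, as the abstract does not specify the paper's analytical models.

```python
import numpy as np

# Hypothetical daily displacement series (mm) over 2.5 years: a slow
# trend plus an annual thermal cycle and noise; illustrative only.
t = np.arange(0, 2.5, 1 / 365.25)            # time (years)
rng = np.random.default_rng(0)
y = 0.8 * t + 2.0 * np.sin(2 * np.pi * t) + rng.normal(0, 0.3, t.size)

# Least-squares fit of an analytical model: offset, linear trend and an
# annual sinusoid (sine and cosine terms), a common choice for dam series.
A = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
residuals = y - A @ coef

print("residual std (mm):", round(residuals.std(ddof=A.shape[1]), 3))
```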


2019 ◽  
Vol 110 ◽  
pp. 01030
Author(s):  
Boštjan Kovačič

Levelling is one of the most important geodetic tasks in construction and other interventions in space. It is mostly used for height representations of the terrain, for determining displacements, for determining the height of objects, and for various precise laboratory and scientific investigations. When determining displacements, the values are confirmed with the levelling results and deformations are calculated. There are many methods for determining height differences; which one is used depends on the complexity of the work. Recently, the GNSS method has been used most often. It offers reliable 2D positional results, whereas the vertical component is not reliably determined. For this purpose, a series of tests and GNSS measurement analyses were performed at our institution; they are presented in this article as a comparison of the GNSS measurements against results obtained with a robotic total station of 0.5″ accuracy. Prior to the experiment, a temporal analysis of the GNSS data, per individual axis and with different data processing approaches, was carried out. The planning of GNSS measurements for more demanding measuring tasks is emphasised. To improve the determination of the vertical component, the GNSS data capture rate was increased from 10 Hz to 100 Hz, which partly improved the final values, as presented in the study.
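A minimal sketch of why a higher capture rate can improve the vertical component, under the idealized assumption that sample noise is independent (real GNSS errors are time-correlated, so the actual gain is smaller than this model suggests):

```python
import numpy as np

rng = np.random.default_rng(1)
true_height = 100.000                         # metres, hypothetical

def height_std(rate_hz, window_s=10, sigma=0.01, trials=500):
    """Std of the window-averaged height at a given capture rate."""
    n = rate_hz * window_s
    means = true_height + rng.normal(0, sigma, (trials, n)).mean(axis=1)
    return means.std()

# With independent noise the std of the mean shrinks ~ 1/sqrt(n),
# so 100 Hz should beat 10 Hz by about a factor of sqrt(10) ~ 3.2.
print("10 Hz :", round(height_std(10), 5), "m")
print("100 Hz:", round(height_std(100), 5), "m")
```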


2020 ◽  
Vol 6 (2) ◽  
pp. 187-197
Author(s):  
Nurlaila Suci Rahayu Rais ◽  
Dedeh Apriyani ◽  
Gito Gardjito

Monitoring the processing of warehouse inventory data is important for companies. At PT Talaga Mulya Indah, the process is still manual, using paper media, which causes problems that affect the resulting information: errors in processing the data on incoming and outgoing goods, discrepancies between the recorded stock quantities and the physical stock, data for the same item often being entered more than once, and difficulties in searching the available data and producing reports, all of which hinder the company in monitoring its stock of goods. This study aims to create a system that provides up-to-date information to help the warehouse admin prepare inventory reports and that reduces input errors through integrated control. The authors collected data through observation, interviews, and a literature review, analysed the system using the PIECES method, and designed it using UML (Unified Modeling Language). The results of this study are expected to produce correct data in the inventory monitoring process, provide the right information, and make it easier to control the overall availability of goods.
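A minimal sketch of the kind of integrated control the abstract calls for: a stock ledger that rejects duplicate entries of the same movement document. The class and field names are hypothetical, not taken from the proposed system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Transaction:
    txn_id: str      # unique document number for the movement
    item: str
    quantity: int    # positive = incoming, negative = outgoing

class StockLedger:
    """Toy inventory ledger with a duplicate-input guard."""

    def __init__(self):
        self._seen = set()    # txn_ids already recorded
        self._stock = {}      # item -> current quantity

    def record(self, txn: Transaction) -> bool:
        if txn.txn_id in self._seen:
            return False      # same transaction entered twice: reject
        self._seen.add(txn.txn_id)
        self._stock[txn.item] = self._stock.get(txn.item, 0) + txn.quantity
        return True

    def on_hand(self, item: str) -> int:
        return self._stock.get(item, 0)

ledger = StockLedger()
ledger.record(Transaction("IN-0001", "widget", 50))
ledger.record(Transaction("IN-0001", "widget", 50))   # duplicate, ignored
print(ledger.on_hand("widget"))                        # -> 50
```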


2021 ◽  
Vol 11 (3) ◽  
pp. 1115
Author(s):  
Aleš Bezděk ◽  
Jakub Kostelecký ◽  
Josef Sebera ◽  
Thomas Hitziger

Over the last two decades, a small group of researchers repeatedly crossed the Greenland interior, skiing along a 700-km long route from east to west and acquiring precise GNSS measurements at exactly the same locations. Four such elevation profiles of the ice sheet, measured in 2002, 2006, 2010 and 2015, were differenced and used to analyze the surface elevation change. Our goal is to compare such locally measured GNSS data with independent satellite observations. First, we show an agreement in the rate of elevation change between the GNSS data and satellite radar altimetry (ERS, Envisat, CryoSat-2). Both datasets agree well (2002–2015), and both correctly display local features such as an elevation increase in the central part of the ice sheet and a sharp decline in the surface heights above Jakobshavn Glacier. Second, we processed satellite gravimetry data (GRACE) so that they are comparable with the local GNSS measurements; the agreement is demonstrated by a time series at one of the measurement sites. Finally, we provide our own satellite gravimetry (GRACE, GRACE-FO, Swarm) estimate of the Greenland mass balance: first a mild decrease (2002–2007: −210 ± 29 Gt/yr), then an accelerated mass loss (2007–2012: −335 ± 29 Gt/yr), which was noticeably reduced afterwards (2012–2017: −178 ± 72 Gt/yr) and most recently appears to have increased again (2018–2019: −278 ± 67 Gt/yr).
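A minimal sketch of the profile-differencing step: heights sampled at the same waypoints in two campaigns are differenced and divided by the time span. The waypoints and heights are invented for illustration.

```python
import numpy as np

# Hypothetical surface heights (m) sampled at the same waypoints along
# the route in two campaign years; values are illustrative only.
km_marks = np.array([0, 100, 200, 300, 400, 500, 600, 700])
h_2002 = np.array([1450, 2100, 2600, 2900, 3050, 2950, 2500, 1800])
h_2015 = np.array([1446, 2099, 2601, 2903, 3053, 2951, 2497, 1788])

# Elevation change rate (m/yr) at each waypoint over the 13-year span.
rate = (h_2015 - h_2002) / (2015 - 2002)
for km, r in zip(km_marks, rate):
    print(f"km {km:3d}: {r:+.2f} m/yr")
```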


Foods ◽  
2021 ◽  
Vol 10 (6) ◽  
pp. 1290
Author(s):  
Marthe Jordbrekk Blikra ◽  
Xinxin Wang ◽  
Philip James ◽  
Dagbjørn Skipnes

There is an increasing interest in the use of Saccharina latissima (sugar kelp) as food, but the high iodine content of raw sugar kelp limits the recommended daily intake to relatively low levels. Processing strategies for iodine reduction are therefore needed. Boiling may reduce the iodine content effectively but not predictably, since reductions of 38–94% have been reported. Thus, more information is needed on which factors affect the reduction of iodine. In this paper, sugar kelp cultivated at different depths was rinsed and boiled to assess the effect of cultivation depth on the removal efficacy of potentially toxic elements (PTEs), especially iodine, cadmium, and arsenic, during processing. Raw kelp cultivated at 9 m contained significantly more iodine than kelp cultivated at 1 m, but the difference disappeared after processing. Furthermore, the content of cadmium and arsenic was not significantly affected by cultivation depth. The average reduction during rinsing and boiling was 85% for iodine and 43% for arsenic, but no significant amount of cadmium, lead, or mercury was removed. Cultivation depth determined the relative effect of processing on the iodine content, with a higher reduction for kelp cultivated at 9 m (87%) than at 1 m (82%). If not taken into consideration, cultivation depth could mask small reductions in iodine content during rinsing or washing. Furthermore, since the final content of PTEs did not depend on cultivation depth, the type and extent of processing determine whether cultivation depth should be considered as a factor in cultivation infrastructure design and implementation, or alternatively, in product segmentation.
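A small worked sketch of the relative-reduction arithmetic behind the reported percentages; the iodine contents are invented, chosen only so the two depths reproduce the 82% and 87% reductions quoted above.

```python
# Hypothetical iodine contents (mg/kg dry weight) before and after
# rinsing and boiling, for kelp from two cultivation depths; the
# numbers are illustrative, not the study's measurements.
iodine = {"1 m": (4200.0, 756.0), "9 m": (5100.0, 663.0)}

for depth, (raw, processed) in iodine.items():
    reduction = 100.0 * (raw - processed) / raw
    print(f"{depth}: {reduction:.0f}% iodine reduction")
# With these numbers the deeper-grown kelp starts higher but ends
# comparable, mirroring the pattern described in the abstract.
```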


2019 ◽  
Vol 12 (2) ◽  
pp. 69-82
Author(s):  
Sravani Bharandev ◽  
Sapar Narayan Rao

Purpose: The purpose of this paper is to test the disposition effect at the market level and to propose an appropriate reference point for such a test.

Design/methodology/approach: This is an empirical study conducted on the 500 index stocks of the NSE500 (National Stock Exchange). Winning and losing days for each stock are identified using 52-week high and low prices as reference points. To test the disposition effect, abnormal trading volumes of stocks are regressed on their percentage of winning (losing) days. Further, using ANOVA, the difference between the mean percentage of winning (losing) days in the high and low abnormal-trading-volume deciles is tested.

Findings: Results show that a stock's abnormal trading volume is positively influenced by the percentage of winning days, whereas the percentage of losing days shows no such effect. The findings are consistent even after controlling for volatility and liquidity. The ANOVA results show a high percentage of winning days in the higher deciles of abnormal trading volume; the absence of such a pattern for losing days confirms the presence of the disposition effect. An ex post analysis further indicates that disposition-prone investors accumulate losses.

Originality/value: This is the first study to propose the use of 52-week high and low prices as reference points for testing the market-level disposition effect. The findings enhance the limited literature on the disposition effect in emerging markets by providing evidence from the Indian stock market.
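A minimal sketch of the regression step on synthetic data: abnormal trading volume regressed on the percentages of winning and losing days defined against 52-week highs and lows. All inputs are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n_stocks = 500

# Hypothetical inputs per stock: percentage of days in the past year the
# price closed at/above its 52-week high ("winning") or at/below its
# 52-week low ("losing"), and an abnormal-trading-volume measure.
pct_win = rng.uniform(0, 20, n_stocks)
pct_lose = rng.uniform(0, 20, n_stocks)
abn_volume = 0.05 * pct_win + rng.normal(0, 0.5, n_stocks)

# OLS regression of abnormal volume on both percentages; under the
# disposition effect only the winning-day coefficient should be positive.
X = np.column_stack([np.ones(n_stocks), pct_win, pct_lose])
beta, *_ = np.linalg.lstsq(X, abn_volume, rcond=None)
print("intercept, b_win, b_lose:", np.round(beta, 3))
```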


1968 ◽  
Vol 58 (3) ◽  
pp. 977-991
Author(s):  
Richard A. Haubrich

Arrays of detectors placed at discrete points are often used in problems requiring high wave-number resolution with a limited number of detectors. The resolution performance of an array depends on the detector positions as well as on the processing applied to the array output. The performance can be expressed in terms of the "spectrum window". Spectrum windows may be designed by a general least-squares fitting procedure. An alternative approach is to design the array to obtain the largest uniformly spaced coarray, the set of points which includes all the difference spacings of the array. Some designs obtained from the two methods are given and compared.
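A minimal sketch of computing a coarray, using a classic 4-element perfect-difference layout as input; the example array is illustrative, not one of the paper's designs.

```python
from itertools import product

def coarray(positions):
    """All difference spacings of a detector array (its coarray)."""
    return sorted({a - b for a, b in product(positions, repeat=2)})

# A 4-element array on a unit grid whose differences cover every integer
# lag from -6 to 6 with no gaps, so 4 detectors resolve like a 7-point
# uniformly spaced coarray.
positions = [0, 1, 4, 6]
print(coarray(positions))
# -> [-6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6]
```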

