Positional Accuracy Assessment of Google Earth in Riyadh

2014
Vol 49 (2)
pp. 101-106
Author(s):
Ashraf Farah
Dafer Algarni

ABSTRACT Google Earth is a virtual globe, map and geographical information program operated by Google. It maps the Earth by superimposing images obtained from satellite imagery and aerial photography onto a 3D globe. With millions of users around the world, Google Earth® has become a leading source of spatial data and information for private and public decision-support systems, as well as for many forms of social interaction. Many users, mostly in developing countries, also use it for surveying applications, which raises questions about the positional accuracy of the Google Earth imagery. This research presents a small-scale assessment study of the positional accuracy of Google Earth® imagery in Riyadh, capital of the Kingdom of Saudi Arabia (KSA). The results show that the RMSE of the Google Earth imagery is 2.18 m and 1.51 m for the horizontal and height coordinates, respectively.
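The paper does not reproduce its computation here, but the standard check-point RMSE it reports can be sketched as follows; this is a minimal, hedged Python illustration with invented coordinates, not the study's data:

```python
import numpy as np

# Hypothetical check-point coordinates in a projected CRS (metres):
# columns = Easting, Northing, Height; "ref" from a GPS survey, "ge" from Google Earth.
ref = np.array([[655120.4, 2735410.2, 612.3],
                [655980.1, 2736122.7, 615.8],
                [656310.9, 2735001.4, 609.2]])
ge  = np.array([[655122.1, 2735411.6, 610.9],
                [655978.6, 2736124.0, 617.1],
                [656312.5, 2735003.0, 610.4]])

d = ge - ref                                         # per-point coordinate differences
rmse_h = np.sqrt(np.mean(d[:, 0]**2 + d[:, 1]**2))   # horizontal (2D) RMSE
rmse_v = np.sqrt(np.mean(d[:, 2]**2))                # height RMSE
print(f"Horizontal RMSE: {rmse_h:.2f} m, Height RMSE: {rmse_v:.2f} m")
```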

2021
Vol 10 (1)
pp. 37
Author(s):
Goddu Pavan Sai Goud
Ashutosh Bhardwaj

The use of remote sensing for urban monitoring is a reliable and cost-effective method for studying urban expansion in the horizontal and vertical dimensions. The availability of multi-temporal spatial data with high accuracy is useful for mapping vertical aspects of urban areas such as compactness, population expansion, and urban surface geometry. This study makes use of Ice, Cloud, and land Elevation Satellite-2 (ICESat-2) ATL03 photon data for building height estimation using a sample of 30 buildings in three experimental sites. The computed heights were compared with the heights of the respective buildings derived from Google imagery and Google Earth Pro to assess the accuracy, and an RMSE of 2.04 m was obtained. Another method popularly used by planners and policymakers to map the vertical dimension of urban terrain is the Digital Elevation Model (DEM). An assessment of the openly available DEM products TanDEM-X and Cartosat-1 was carried out over urban and rural areas. TanDEM-X is a German Earth observation satellite that uses the Interferometric Synthetic Aperture Radar (InSAR) technique to acquire DEMs, while Cartosat-1 is an optical stereo acquisition satellite launched by the Indian Space Research Organization (ISRO) that uses photogrammetric techniques for DEM generation. Both DEMs were compared with ICESat-2 ATL08 elevation data as the reference, and the accuracy was evaluated using the mean error (ME), mean absolute error (MAE) and root mean square error (RMSE). For the Greater Hyderabad Municipal Corporation (GHMC) site, RMSE values of 5.29 m and 7.48 m were noted for TanDEM-X 90 and CartoDEM V3 R1, respectively, while for the second site, the rural Bellampalli Mandal area, RMSE values of 5.15 m and 5.48 m were observed for the same products. It was therefore concluded that TanDEM-X has better accuracy than CartoDEM V3 R1.
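For reference, the ME/MAE/RMSE comparison described above can be sketched in a few lines of Python; the elevation values below are illustrative only, not samples from TanDEM-X, CartoDEM, or ATL08:

```python
import numpy as np

def dem_error_stats(dem_z, ref_z):
    """Mean error, mean absolute error and RMSE of DEM elevations against
    reference heights (e.g. ICESat-2 ATL08 terrain photons)."""
    err = np.asarray(dem_z, float) - np.asarray(ref_z, float)
    return err.mean(), np.abs(err).mean(), np.sqrt((err**2).mean())

# Illustrative elevations in metres; real use would sample the DEM at ATL08 footprints.
dem_samples = [512.4, 498.1, 505.7, 520.3, 515.9]
ref_heights = [511.0, 499.5, 504.2, 518.9, 517.2]
me, mae, rmse = dem_error_stats(dem_samples, ref_heights)
print(f"ME = {me:.2f} m, MAE = {mae:.2f} m, RMSE = {rmse:.2f} m")
```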


Land
2020
Vol 9 (9)
pp. 339
Author(s):
Sami Towsif Khan
Fernando Chapa
Jochen Hack

Green Stormwater Infrastructure (GSI), a sustainable engineering design approach for managing urban stormwater runoff, has long been recommended as an alternative to conventional conveyance-based stormwater management strategies to mitigate the adverse impacts of sprawling urbanization. Hydrological and hydraulic simulations of small-scale GSI measures in densely urbanized micro-watersheds require high-resolution spatial databases of urban land use, stormwater structures, and topography. This study presents a highly resolved Storm Water Management Model developed under considerable spatial data constraints. It evaluates the cumulative effect of implementing dispersed, retrofitted, small-scale GSI measures in a heavily urbanized micro-watershed of Costa Rica. Our methodology includes a high-resolution digital elevation model based on Google Earth information, the accuracy of which was sufficient to determine flow patterns and slopes, as well as to approximate the underground stormwater structures. The model produced satisfactory results in event-based calibration and validation, which ensured the reliability of the data collection procedure. Simulating the implementation of GSI shows that dispersed, retrofitted, small-scale measures could significantly reduce impermeable surface runoff (peak runoff reduction of up to 40%) during frequent, less intense storm events and delay peak surface runoff by 5–10 min. The presented approach can benefit stormwater practitioners and modelers conducting small-scale hydrological simulations under spatial data constraints.
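As a hedged illustration of the reported effect metrics only (not the authors' SWMM workflow), the peak runoff reduction and peak delay can be computed from a baseline and a GSI-scenario hydrograph as sketched below; the hydrographs are invented:

```python
import numpy as np

def peak_metrics(t_min, q_base, q_gsi):
    """Peak runoff reduction (%) and peak delay (min) of a GSI scenario
    relative to a baseline hydrograph."""
    q_base, q_gsi = np.asarray(q_base, float), np.asarray(q_gsi, float)
    reduction = 100.0 * (q_base.max() - q_gsi.max()) / q_base.max()
    delay = t_min[q_gsi.argmax()] - t_min[q_base.argmax()]
    return reduction, delay

# Illustrative 5-minute hydrographs (m^3/s), not model output from the study.
t = np.arange(0, 120, 5)
q_base = np.interp(t, [0, 30, 120], [0.0, 1.2, 0.0])   # baseline peak at t = 30 min
q_gsi  = np.interp(t, [0, 40, 120], [0.0, 0.7, 0.0])   # GSI peak, lower and later
print(peak_metrics(t, q_base, q_gsi))                  # ~42 % reduction, 10 min delay
```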


Author(s):  
M. A. Brovelli
M. Minghini
M. E. Molinari
M. Molteni

In recent years there has been a remarkable proliferation of spatial data products released under open licenses. Researchers and professionals are extensively exploiting open geodata for many applications which, in turn, include decision-making results and other (derived) geospatial datasets among their outputs. Despite the traditional availability of metadata, a question arises about the actual quality of open geodata, as their declared quality is typically taken for granted without any systematic assessment. The present work investigates the case study of Milan Municipality (Northern Italy). A wide set of open geodata, released by national, regional and local authoritative entities, is available for this area. A comprehensive cataloguing operation was first performed, identifying 1061 geospatial open datasets from Italian providers, which differ greatly in terms of license, format, scale, content, and release date. Among the many quality parameters for geospatial data, the work focuses on positional accuracy. An example of positional accuracy assessment is described for an openly licensed orthophoto through comparison with the official, up-to-date, large-scale vector cartography of Milan. The comparison is run according to the guidelines provided by ISO and shows that the positional accuracy declared by the orthophoto provider does not correspond to reality. Similar results were found from analyses on other datasets (not presented here). The implications are twofold: raising awareness of the risks of using open geodata while taking their quality for granted, and highlighting the need for open geodata providers to introduce or refine mechanisms for data quality control.
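A hedged sketch of the kind of check described (a simplified comparison, not the full ISO procedure): a few homologous points measured on the orthophoto are compared with the reference cartography and the observed planimetric RMSE is tested against the declared accuracy. All values are invented:

```python
import numpy as np

def orthophoto_vs_reference(ortho_xy, ref_xy, declared_rmse_m):
    """Planimetric RMSE of points measured on the orthophoto against the
    reference large-scale cartography, and whether it meets the declared value."""
    d = np.asarray(ortho_xy, float) - np.asarray(ref_xy, float)
    rmse = np.sqrt(np.mean(np.sum(d**2, axis=1)))
    return rmse, rmse <= declared_rmse_m

# Hypothetical coordinates (metres) of homologous well-defined points.
ortho = [[1514230.2, 5034110.5], [1514890.7, 5034870.1], [1515320.3, 5035015.8]]
ref   = [[1514229.1, 5034112.3], [1514892.0, 5034868.4], [1515318.9, 5035017.5]]
rmse, meets_declared = orthophoto_vs_reference(ortho, ref, declared_rmse_m=1.0)
print(rmse, meets_declared)   # a False here mirrors the paper's finding
```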


2019
Vol 8 (12)
pp. 552
Author(s):
Juan José Ruiz-Lendínez
Francisco Javier Ariza-López
Manuel Antonio Ureña-Cámara

Point-based standard methodologies (PBSM) suggest using 'at least 20' check points in order to assess the positional accuracy of a given spatial dataset. However, the rationale for setting the minimum number of check points at 20 is not elaborated upon in the original documents provided by the mapping agencies that developed these methodologies. By means of theoretical analysis and experimental tests, several authors and studies have demonstrated that this limited number of points is clearly insufficient. Using the point-based methodology for the automatic positional accuracy assessment of spatial data developed in our previous study (Ruiz-Lendínez et al., 2017), and specifically a subset of check points obtained from the application of this methodology to two urban spatial datasets, the variability of National Standard for Spatial Data Accuracy (NSSDA) estimations has been analyzed as a function of sample size. The results show that the variability of NSSDA estimations decreases as the number of check points increases, and also that these estimations tend to underestimate accuracy. Finally, the graphical representation of the results can be employed to give some guidance on the recommended sample size when PBSMs are used.
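A hedged Python sketch of the effect being analyzed: the NSSDA horizontal statistic (1.7308 × RMSE_r at 95% confidence, assuming similar RMSE in x and y) is recomputed on repeated random subsets of check points of increasing size, showing how the spread of the estimate shrinks as the sample grows. The error population is simulated, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

def nssda_horizontal(dx, dy):
    """NSSDA horizontal accuracy at the 95% confidence level (RMSE_x ~= RMSE_y case)."""
    return 1.7308 * np.sqrt(np.mean(dx**2 + dy**2))

# Simulated population of check-point errors (metres).
dx = rng.normal(0.0, 0.8, 5000)
dy = rng.normal(0.0, 0.8, 5000)

for n in (20, 50, 100, 500):
    estimates = []
    for _ in range(200):                                   # 200 resamples per sample size
        idx = rng.choice(dx.size, size=n, replace=False)
        estimates.append(nssda_horizontal(dx[idx], dy[idx]))
    print(n, round(float(np.std(estimates)), 3))           # spread decreases as n grows
```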


2021
Vol 10 (5)
pp. 289
Author(s):
Juan José Ruiz-Lendínez
Francisco Javier Ariza-López
Manuel Antonio Ureña-Cámara

The continuous development of machine learning procedures and of new ways of mapping based on the integration of spatial data from heterogeneous sources has resulted in the automation of many processes associated with cartographic production, such as positional accuracy assessment (PAA). The automation of the PAA of spatial data is based on automated matching procedures between corresponding spatial objects (usually building polygons) from two geospatial databases (GDB), which in turn rely on the quantification of the similarity between these objects. Therefore, assessing the capabilities of these automated matching procedures is key to making automation a fully operational solution in PAA processes. The present study was developed in response to the need to explore the scope of these capabilities by means of a comparison with human capabilities. Thus, using a genetic algorithm (GA) and a group of human experts, two experiments were carried out: (i) comparing the similarity values between building polygons assigned by both, and (ii) comparing the matching procedures developed in both cases. The results showed that the GA–expert agreement was very high, with mean agreement percentages of 93.3% (experiment 1) and 98.8% (experiment 2). These results confirm the capability of machine-based procedures, and specifically of GAs, to carry out matching tasks.
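The similarity measures used by the GA and the experts are not detailed in the abstract; as an illustration of the kind of polygon similarity such matching procedures typically quantify, here is a minimal area-overlap (IoU-style) measure between two building footprints using shapely, with invented geometries:

```python
from shapely.geometry import Polygon

def overlap_similarity(a: Polygon, b: Polygon) -> float:
    """Area-overlap similarity (intersection over union) between two footprints, in [0, 1]."""
    union = a.union(b).area
    return a.intersection(b).area / union if union else 0.0

# Hypothetical footprints of the "same" building in two geospatial databases.
gdb1 = Polygon([(0, 0), (10, 0), (10, 8), (0, 8)])
gdb2 = Polygon([(0.5, 0.3), (10.4, 0.3), (10.4, 8.2), (0.5, 8.2)])
print(round(overlap_similarity(gdb1, gdb2), 3))   # a high value suggests a match
```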


Author(s):  
R. Hung
B. A. King
W. Chen

Mobile Mapping Systems (MMS) are increasingly applied for spatial data collection in many fields because of their efficiency and the level of detail they can provide. The Position and Orientation System (POS) conventionally employed for locating and orienting an MMS allows direct georeferencing of spatial data in real time. Since the performance of a POS depends on both the Inertial Navigation System (INS) and the Global Navigation Satellite System (GNSS), poor GNSS conditions, such as in long tunnels and underground, introduce the need for post-processing. In above-ground railways, mobile mapping technology with high-performance sensors is employed for limited applications, and has considerable potential for enhancing railway safety and management in real time. In contrast, underground railways present a challenge for a conventional POS, so alternative configurations are necessary to maintain data accuracy and alleviate the need for post-processing. This paper introduces a method of rail-bound navigation that replaces the role of GNSS for railway applications. The proposed method integrates INS and track alignment data for environment-independent navigation and reduces the demand for post-processing. The principle of rail-bound navigation is presented and its performance is verified by an experiment using a consumer-grade Inertial Measurement Unit (IMU) and a small-scale railway model. The method produced a substantial improvement in position and orientation for a poorly initialised system, reaching centimetre-level positional accuracy. The potential improvements indicated by, and the limitations of, rail-bound navigation are also considered for further development in existing railway systems.
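A hedged sketch of the core idea (not the authors' implementation): an INS-propagated position is constrained to the known track alignment by projecting it onto the centreline, so the along-track chainage and the alignment geometry take over the role of a GNSS fix. Track geometry and coordinates are invented:

```python
from shapely.geometry import LineString, Point

# Track alignment (centreline) as a 2D polyline in a local metric frame; illustrative only.
track = LineString([(0, 0), (50, 0), (100, 20), (150, 60)])

def rail_bound_fix(ins_position_xy):
    """Constrain a drifting INS position estimate to the track: project it onto the
    alignment and return the chainage plus the corrected on-track position."""
    p = Point(ins_position_xy)
    chainage = track.project(p)          # distance along the alignment (metres)
    fix = track.interpolate(chainage)    # corrected position on the centreline
    return chainage, (fix.x, fix.y)

print(rail_bound_fix((72.0, 9.5)))       # INS estimate snapped back onto the rails
```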


2018
Vol 7 (4)
pp. 173
Author(s):
Sarhat M. Adam
Abdulrahman F. Heeto

Google Earth imagery is frequently used in science, engineering, and other mapping applications. However, the company owning the tool states that the data available in its geographical products are only approximate, so their accuracy is not officially documented. Google Earth imagery in many areas around the world has been independently checked by scholars and third parties. The estimated accuracies vary widely depending on several factors, mainly the imagery source and image resolution. The positional accuracy testing methodology may also affect the assessment results; to make the comparison more reliable, many well-distributed points around the tested area should be used. In this paper, a horizontal accuracy assessment was carried out on the Google Earth imagery of Duhok city using traces collected with the GPS Real-Time Kinematic (RTK) technique. About 38 km of trajectory was collected along the two main roads in the selected area. Using a semi-automated method, the points from the RTK trajectory were compared with the corresponding points extracted from the road-network centerline in the Google Earth imagery. A nearest-neighbor method, implemented through a purpose-built algorithm, was used for the comparison between the two sets of data. The root mean square error (RMSE) and the maximum error of the horizontal positional coordinates were found to be 1.53 m and 7.76 m, respectively.
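A hedged Python sketch of a simplified version of this comparison (the study matched corresponding points semi-automatically; here every RTK point is simply matched to its nearest centerline point with a KD-tree), reporting RMSE and maximum error on invented coordinates:

```python
import numpy as np
from scipy.spatial import cKDTree

def trajectory_vs_centerline(rtk_xy, centerline_xy):
    """For each RTK trajectory point, find the nearest Google Earth centerline
    point and report the horizontal RMSE and maximum error (metres)."""
    dist, _ = cKDTree(np.asarray(centerline_xy, float)).query(np.asarray(rtk_xy, float))
    return np.sqrt(np.mean(dist**2)), dist.max()

# Illustrative projected coordinates (metres); the study used ~38 km of RTK traces.
rtk = [[3500.2, 8100.4], [3512.9, 8105.1], [3525.6, 8109.7]]
cl  = [[3500.0, 8101.5], [3513.5, 8106.0], [3526.0, 8111.2], [3540.1, 8115.0]]
rmse, max_err = trajectory_vs_centerline(rtk, cl)
print(f"RMSE = {rmse:.2f} m, max error = {max_err:.2f} m")
```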


2021
Vol 16 (2)
Author(s):
Nur Adibah Mohidem
Malina Osman
Farrah Melissa Muharam
Saliza Mohd Elias
Rafiza Shaharudin
...  

In the last few decades, public health surveillance has increasingly applied statistical methods to analyze spatial disease distributions. Nevertheless, contact tracing and follow-up control measures for tuberculosis (TB) patients remain challenging because public health officers often lack the programming skills needed to utilize the software appropriately. This study aimed to develop a more user-friendly application using the CodeIgniter framework for server development, ArcGIS JavaScript for data display, and a web application based on JavaScript and Hypertext Preprocessor (PHP) to build the server's interface, while webGIS technology was used for mapping. The performance of this approach was tested on 3325 TB cases and their sociodemographic data, such as age, gender, race, nationality, country of origin, educational level, employment status, health care worker status, income status, residency status, and smoking status, recorded between 1st January 2013 and 31st December 2017 in Gombak, Selangor, Malaysia. These data were collected from the Gombak District Health Office and the Rawang Health Clinic. The latitude and longitude of each case's location were geocoded by uploading spatial data to Google Earth, and the main output was an interactive map displaying the location of each case. Filters are available for selecting the various sociodemographic factors of interest. The application developed should help public health experts to utilize spatial data comprehensively for surveillance purposes, as well as for drafting regulations aimed at reducing mortality and morbidity and thus minimizing the public health impact of the disease.
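The study's stack is CodeIgniter/PHP with ArcGIS JavaScript; purely to illustrate the geocode-and-display idea in a different (Python) stack, a minimal sketch that plots hypothetical, anonymised case points on an interactive web map with folium:

```python
import folium

# Hypothetical, anonymised case records (lat, lon, age group); not the study's data.
cases = [
    {"lat": 3.2386, "lon": 101.6958, "age_group": "25-34"},
    {"lat": 3.2801, "lon": 101.7250, "age_group": "45-54"},
    {"lat": 3.2154, "lon": 101.6402, "age_group": "15-24"},
]

m = folium.Map(location=[3.25, 101.70], zoom_start=12)   # centred roughly on Gombak
for c in cases:
    folium.CircleMarker(
        location=[c["lat"], c["lon"]],
        radius=5,
        popup=f"Age group: {c['age_group']}",
    ).add_to(m)
m.save("tb_cases_map.html")                              # interactive HTML map output
```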

