ANALYSIS OF GOOGLE EARTH ALTITUDE ERRORS FOR USE IN GEODESIC WORKS

2021 ◽  
Vol 3 (163) ◽  
pp. 47-51
Author(s):  
I. Musiienko ◽  
L. Kazachenko ◽  
E. Zaharova

The Google Earth service is an information system with extensive functionality, available on the web, on mobile devices and on desktop computers. The system is a "virtual globe" built from mosaicked imagery with added spatial information, provided by Alphabet Inc. in the WGS 84 coordinate system and the universal transverse Mercator projection. The system allows a route line to be drawn and a longitudinal profile of this line, with elevations and slopes, to be obtained. However, the question of the accuracy of this spatial information remains open. Answering it would define the range of engineering, geodetic and design tasks that can be solved with the service. The article approaches this problem through an analysis of height errors. The accuracy of Google Earth's spatial information can be assessed by comparing it to a geodetic reference object. In this work, the adjusted design documentation for the construction of the highway bypassing Novy Bug (second stage) in the Nikolaev region serves as such a reference. In the first stage, a "reference" object was selected for which spatial data obtained by geodetic methods of known accuracy are available. In the second stage, the road route and its longitudinal profile were built in Google Earth. In the third stage, the obtained information was systematized and analyzed. In this work, the accuracy of the geodetically constructed longitudinal profile was reduced because the ground line of the profile was derived from a digital terrain model, and therefore carries the standard errors of Delaunay triangulation. Using geometric levelling data would improve the agreement between the two longitudinal profiles. With careful preparation of the source data, metre-level height accuracy can be achieved. A representation of the Earth's surface with such accuracy can be used to solve many engineering problems: variant design of linear structures, preliminary feasibility studies of design solutions and more. In future work, the horizontal errors should also be assessed.
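The comparison described above reduces, in essence, to differencing two elevation profiles sampled at the same stations and summarising the discrepancies. A minimal sketch of such a height-error summary is given below; the profile values are invented for illustration and do not come from the paper.

```python
import numpy as np

# Hypothetical elevations (metres) sampled at the same chainage points along
# the route: one profile from the geodetic design documentation, one read
# from the Google Earth longitudinal profile.
reference_h = np.array([112.4, 113.1, 114.0, 115.2, 116.0, 115.6])
google_h    = np.array([111.9, 113.8, 114.6, 114.7, 116.9, 115.0])

errors = google_h - reference_h          # signed height errors, metres
mean_error = errors.mean()               # systematic (bias) component
rmse = np.sqrt((errors ** 2).mean())     # root-mean-square error
max_abs = np.abs(errors).max()           # worst-case discrepancy

print(f"mean error: {mean_error:+.2f} m, RMSE: {rmse:.2f} m, max |error|: {max_abs:.2f} m")
```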

2021 ◽  
Vol 93 ◽  
pp. 05007
Author(s):  
Elena Sazonova ◽  
Veronica Borisova ◽  
Sergey Terentyev ◽  
Olga Kramlikh ◽  
Irina Sidorenkova

One of the topical trends in the development of agriculture in the Russian Federation is the digitalization and automation of methods for processing spatial information about various land resources. The main element of the practical implementation of this direction can be considered a three-dimensional digital terrain model. This model allows many problems in the field of land management to be solved, in particular the analysis of the terrain surface in order to determine its suitability for agricultural production. Despite a number of existing problems in this area, an automated digital land management system will enable public authorities to implement an integrated and systematic approach to management, that is, to use land resources more efficiently, influence the land market, attract investment and create the necessary conditions for the sustainable development of the territory.


Author(s):  
Dimitris Kaimaris ◽  
Petros Patias ◽  
Olga Georgoula

This project presents the interpretation of photos and the processing of Google Earth imagery that allowed the "random" discovery, as a result of non-systematic research, of numerous marks of buried constructions in the wider area of the city of Larisa (Thessaly, Greece). Additional data, such as aerial photographs taken over time, satellite images and the digital terrain model of the same area, have also been used. Of the numerous marks, the project focuses mainly on three positions where the positive marks (soilmarks and/or cropmarks), circular and/or linear, reveal to a satisfying degree covered constructions of great dimensions. The ongoing research activity of the team, together with this study, highlights the advantages of Google Earth imagery both for the "random" detection of unknown covered constructions and, within the framework of a systematic aerial and remote sensing archaeology survey, as an additional rather than exclusive source of information.


Author(s):  
J.-S. Lai ◽  
F. Tsai ◽  
S.-H. Chiang

This study implements a data-mining algorithm, the random forests classifier, with geospatial data to construct a regional, rainfall-induced landslide susceptibility model. The developed model also takes account of landslide regions (source, non-occurrence and run-out signatures) from the original landslide inventory in order to increase the reliability of the susceptibility modelling. A total of ten causative factors were collected and used in this study: aspect, curvature, elevation, slope, faults, geology, NDVI (Normalized Difference Vegetation Index), rivers, roads and soil data. The landslide inventory and the vector-based causative factors were transformed into pixel-based format so that they could be overlaid with the other raster data for constructing the random-forests-based model. This study also uses original and edited topographic data in the analysis to understand their impact on the susceptibility modelling. Experimental results demonstrate that, after identifying the run-out signatures, the overall accuracy and Kappa coefficient reach more than 85% and 0.8, respectively. In addition, correcting unreasonable topographic features of the digital terrain model also produces more reliable modelling results.
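The pixel-based workflow the abstract describes, stacking the causative-factor rasters, flattening them into a sample matrix and training a random forests classifier, can be sketched roughly as follows. The raster names, grid size and random values are placeholders, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical pixel-based causative factors, each an (n_rows, n_cols) raster
# co-registered to the same grid (aspect, slope, elevation, ...), simulated
# here with random values purely for illustration.
rows, cols, n_factors = 200, 200, 10
factors = np.random.rand(n_factors, rows, cols)

# Hypothetical landslide inventory raster: 1 = landslide pixel, 0 = non-occurrence.
labels = (np.random.rand(rows, cols) > 0.9).astype(int)

# Flatten rasters to a (n_pixels, n_factors) sample matrix, mirroring the
# conversion of vector factors into pixel-based format for raster overlay.
X = factors.reshape(n_factors, -1).T
y = labels.ravel()

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, y)

# Susceptibility map: probability of the landslide class, reshaped to the grid.
susceptibility = clf.predict_proba(X)[:, 1].reshape(rows, cols)
```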


2017 ◽  
Vol 21 (4) ◽  
pp. 197-204
Author(s):  
Maciej Góraj ◽  
Marcin Kucharski ◽  
Krzysztof Karsznia ◽  
Izabela Karsznia ◽  
Jarosław Chormański

The main objective of this study is to evaluate the changes in the hydrographic network of Słowiński National Park. The authors analysed the changes occurring in the drainage network as a result of limited maintenance in this legally protected natural area. To accomplish this task, products derived from aerial data were used: an orthophoto map from 1996, hyperspectral imagery from June 2015, and a digital terrain model based on airborne laser scanning (ALS) from June 2015. These spatial data resources enabled the digitisation of the watercourses, for which selected hydro-morphological features had been defined. The differences in these features were analysed and a quality map was produced, which was then interpreted, and the identified changes were quantified in detail.


Author(s):  
В.К. Каличкин ◽  
Р.А. Корякин ◽  
К.Ю. Максимович ◽  
Р.Р. Галимов ◽  
Н.А. Чернецкая

The process of creating sequence diagrams when describing subject domains in the formal-logical language UML is considered. The use of sequences is based on the concept of a "data source", introduced by the authors at the previous stage of conceptualising the subject domain "agroecological properties of lands", the class diagram. In the class at the start of a link, one of the attribute sets is selected; in the class at the end of the link, one of the methods (a query) corresponding to that set is selected. Applying this approach repeatedly with different values of the attributes of the central class yields an array of data (including spatial data). The attributes link the model being created with the methods, data flows and queries of the system: on the one hand, they belong to the classes involved in the scenarios of the sequence diagrams, and on the other hand, they belong to the outer shell of the model. Using as examples the flow of information required to calculate the Selyaninov hydrothermal coefficient and the degree of erosion for a working land plot, the sequence diagrams "HydrothermalIndexQuery" and "ErosionDegreeQuery" were constructed. The data for the sequence diagrams are generated with geoinformation systems (the geographic coordinates of the working plot and a digital terrain model) and the reference and information portal "Weather and Climate". The proposed approach makes it possible to build knowledge bases automatically on the basis of two conceptual notions: "data sources" and "sequences". Structuring and formalising knowledge enables the transition from a set of information to knowledge and its subsequent graphical representation. Visualisation helps to show clearly the relations between classes that may not be obvious. It also becomes possible to assess the viability of the model and to design it in combination with simulation modelling tools as well as mathematical methods for analysing and processing information. These diagrams are used to construct and verify the created subsystems in the process of forward and reverse engineering of an agricultural intelligent system.
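One of the quantities fed through the "HydrothermalIndexQuery" diagram is the Selyaninov hydrothermal coefficient, conventionally computed as the ratio of the precipitation sum to one tenth of the temperature sum over the period with mean daily temperatures above 10 °C. A minimal sketch of that calculation is shown below; the daily series are invented placeholders, not data from the paper.

```python
import numpy as np

def selyaninov_htc(daily_temp_c, daily_precip_mm, threshold=10.0):
    """Selyaninov hydrothermal coefficient for the period with mean daily
    temperature above the threshold (conventionally 10 degrees C):
    HTC = sum(precipitation) / (0.1 * sum(temperature)) over that period."""
    t = np.asarray(daily_temp_c, dtype=float)
    r = np.asarray(daily_precip_mm, dtype=float)
    warm = t > threshold                      # growing-season days only
    temp_sum = t[warm].sum()
    if temp_sum <= 0:
        raise ValueError("no days above the temperature threshold")
    return r[warm].sum() / (0.1 * temp_sum)

# Hypothetical daily series for a working land plot (values are illustrative).
temps = [8.0, 12.5, 15.0, 18.2, 21.0, 19.4, 9.5]
precip = [0.0, 3.2, 0.0, 7.1, 1.5, 0.0, 4.0]
print(f"HTC = {selyaninov_htc(temps, precip):.2f}")
```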


Geosciences ◽  
2018 ◽  
Vol 8 (12) ◽  
pp. 433 ◽  
Author(s):  
Maleika Wojciech

The paper presents an optimized method of digital terrain model (DTM) estimation based on modified kriging interpolation. Many methods are used for digital terrain model creation; the most popular are inverse distance weighting, nearest neighbour, moving average, and kriging. The latter is often considered one of the best methods for interpolation of non-uniform spatial data, but its good results with respect to model accuracy come at the price of very long computational time. In this study, the kriging method was optimized for the purpose of seabed DTM creation based on millions of measurement points obtained from a multibeam echosounder (MBES). The purpose of the optimization was to decrease computation time significantly while maintaining the highest possible accuracy of the created model. Several variants of the kriging method were analysed (depending on the search radius, the minimum number of required points, a fixed number of points, and the smoothing method used). The analysis resulted in a proposed optimization of the kriging method utilizing a new technique of neighbouring-point selection during the interpolation process (named "growing radius"). Experimental results proved the new kriging method to have significant advantages when applied to DTM estimation.
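The "growing radius" idea, enlarging the neighbourhood search radius until enough measurement points are available for the local kriging system, can be sketched roughly as below. This is a loose reading of the technique named in the abstract, not the authors' implementation; the soundings, radii and point counts are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def growing_radius_neighbours(tree, query_xy, r0, min_points, growth=1.5, max_iter=10):
    """Return indices of measurement points near query_xy, starting from an
    initial search radius r0 and enlarging it until at least min_points
    neighbours are found (a rough reading of the 'growing radius' idea;
    the paper's exact rule may differ)."""
    radius = r0
    for _ in range(max_iter):
        idx = tree.query_ball_point(query_xy, radius)
        if len(idx) >= min_points:
            return idx
        radius *= growth
    return idx  # whatever was found within the final radius

# Hypothetical MBES soundings: x, y in metres, depth in metres.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 1000, size=(100_000, 2))
depth = 20 + 0.01 * xy[:, 0] + rng.normal(scale=0.2, size=len(xy))

tree = cKDTree(xy)
neighbours = growing_radius_neighbours(tree, query_xy=(500.0, 500.0), r0=5.0, min_points=12)
# These neighbours would then feed the kriging system for the grid node at (500, 500).
print(len(neighbours), "points selected; mean depth", depth[neighbours].mean())
```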


2014 ◽  
Vol 1 (1) ◽  
pp. 52-69
Author(s):  
S.O. Ogedegbe

This study examines the effectiveness and accuracy of SPOT-5 and ASTER LiDAR satellite image data, the Global Positioning System (GPS), a Digital Terrain Model (DTM), and a Geographic Information System (GIS) in carrying out a revision of Nigerian topographic maps at the scale of 1:50,000. The data for the study were collected by extracting relevant spatial data from the 1964 topographic map, delineating and interpreting 2009 SPOT-5 data, and conducting field surveys. The landscape changes extracted from SPOT-5 were used to update the topographic base map and to determine the nature and direction of changes that have taken place in the study area. The findings revealed that changes have occurred in both cultural and relief features over time. The correlation coefficient and t-test were calculated to show that the changes in point, linear and areal features are significant. The planimetric and height accuracies of the revised map were also significant. The study shows that satellite data, especially SPOT-5, are useful for the revision of topographic maps at scales of 1:50,000 and even larger. In addition, high-resolution remote sensing at 5 m, together with ASTER data (30 m) and GPS (±1.9 m), can be used to create a digital elevation model (DEM) for the map, which is an essential dataset for a complete revision.


2021 ◽  
Vol 13 (14) ◽  
pp. 7969
Author(s):  
Grzegorz Budzik ◽  
Piotr Krajewski

In an era of significant growth in the availability of spatial data and continued advances in computing technologies, opportunities are emerging for new interpretations of, and solutions to, landscape research problems posed worldwide. This paper presents different possibilities of applying digital terrain model (DTM) data in research on various aspects of the landscape. For this purpose, two different methods were proposed. The first was to identify a set of components of the landscape character of the city of Jelenia Góra on the basis of the topographic position index and the spatial distribution of land cover, while the second was to assess the landscape of Jelenia Góra in terms of its capacity to accommodate new elements, using the authors' scenic absorptivity method. The results describe the structure of the components of the landscape character of Jelenia Góra together with their spatial distribution, which also allowed landscape units to be delineated. The scenic absorptivity analysis showed that there are isolated areas within Jelenia Góra capable of accommodating elements of significant size without adversely affecting the city landscape. In conclusion, DTM data can significantly improve research methods in landscape studies.
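The first of the two methods relies on the topographic position index, which compares each DTM cell's elevation with the mean elevation of its surrounding neighbourhood. A minimal sketch of that calculation is given below; the DTM array and neighbourhood size are illustrative assumptions, not the parameters used for Jelenia Góra.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def topographic_position_index(dtm, size=15):
    """Topographic position index: each cell's elevation minus the mean
    elevation of its square neighbourhood of the given size (in cells).
    Positive values indicate ridges or summits, negative values valleys,
    and values near zero flat areas or constant slopes."""
    dtm = np.asarray(dtm, dtype=float)
    neighbourhood_mean = uniform_filter(dtm, size=size, mode="nearest")
    return dtm - neighbourhood_mean

# Hypothetical DTM grid (metres); a real analysis would load raster data,
# e.g. with rasterio, and choose the neighbourhood size to match the terrain.
rng = np.random.default_rng(1)
dtm = 350 + np.cumsum(rng.normal(size=(200, 200)), axis=0)
tpi = topographic_position_index(dtm, size=21)
```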


Author(s):  
A. Peled

There are basically two levels of calibration and validation of digitally acquired spectral and other information collected by sensors carried on space-borne or airborne platforms. The basic level is carried out by the data producers, for example by comparison against results taken over test fields. The second level, which is more a part of a supervised classification effort, is carried out by data users and by providers of value-added spatial information to end users. The latter is quite typical for supervised classification protocols, either for establishing libraries of spectral signatures for each relevant class type or for ad-hoc classification where no previous information or specific knowledge was kept. Such methods strongly indicate the need for the basic Cal/Val step performed on the sensors by the original data providers. The paper reviews a database-driven concept that allows automatic recognition of features detected in the 2-D (for now) digital spatial realm and their identification within the 2.5-D vector information of existing large national core spatial databases that are to be updated. These databases are big enough to support the resourceful "Munchhausen" approach of pulling information out of the system's own abundance of data resources.

