CARTOMETRIC ANALYSIS OF THE ACCURACY OF PLAN OF LVIV IN 1878

2020 ◽ Vol 11 (87)
Author(s): Mariana Yurkiv, Yuliia Holubinka, Andrii Hoba, ...

The article assesses the accuracy of the 1878 plan of Lviv, published by Artaria & Co as a separate sheet of the administrative map by the Austrian cartographer and engineer Karl Ritter von Kummersberg. This cartographic work was compiled on the basis of the Second Military Topographic Survey, conducted in the Austrian Empire during 1855-1863, and occupies an important place in the study of the architecture and urban planning of Lviv in Austrian times, before the great construction changes of the XIX century. Analysing the accuracy of old plans of Lviv is an important aspect of studying these works: it makes it possible to assess their geometric features and to obtain valuable information about the methods and techniques of their creation. This, in turn, allows the cartographic, documentary and semantic value of old plans to be compared. The accuracy assessment methodology is based on the transformation and geometric analysis of sets of identical points on the old plan and on a reference map. These sets of control points are used to bring the two cartographic products into a common coordinate system; a four-parameter Helmert transformation is used for this purpose. The identical points should be distributed over the entire area, ideally evenly, so that the resulting transformation key has a global character. Based on the transformation key, multiquadratic interpolation is performed to construct a continuous surface from the discrete data. Its results make it possible to visualize the errors of the old plan graphically, as displacement vectors and as isolines of scale and rotation, which significantly speeds up and simplifies the study of old plans' accuracy. In addition, the method of least squares yields a value characterizing the positional accuracy of the old plan. All calculations and constructions were performed in the MapAnalyst software product. The presented technique can be used for similar research on other cartographic works, while the numerical results and graphical visualizations obtained can be used to compare old plans with one another.
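For readers who want to reproduce the core step of this methodology outside MapAnalyst, the following is a minimal sketch of a four-parameter Helmert (similarity) fit estimated by least squares from pairs of identical points; the function name and array layout are illustrative assumptions, not MapAnalyst's internals.

```python
import numpy as np

def helmert_4param(src, dst):
    """Estimate a four-parameter (similarity) Helmert transformation
    mapping src -> dst by linear least squares.

    src, dst: (n, 2) arrays of identical (control) points on the old
    plan and the reference map. Returns scale, rotation (rad),
    translation, and the residuals at the control points.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    n = len(src)
    # Linear model: x' = a*x - b*y + tx,  y' = b*x + a*y + ty
    A = np.zeros((2 * n, 4))
    A[0::2] = np.column_stack([src[:, 0], -src[:, 1], np.ones(n), np.zeros(n)])
    A[1::2] = np.column_stack([src[:, 1],  src[:, 0], np.zeros(n), np.ones(n)])
    L = dst.reshape(-1)
    (a, b, tx, ty), *_ = np.linalg.lstsq(A, L, rcond=None)
    residuals = (A @ np.array([a, b, tx, ty]) - L).reshape(-1, 2)
    scale = np.hypot(a, b)          # a = s*cos(theta), b = s*sin(theta)
    theta = np.arctan2(b, a)
    return scale, theta, (tx, ty), residuals
```

The root mean square of these residuals is the kind of single positional-accuracy figure the abstract refers to, and the residual vectors are what the displacement-vector visualization is built from.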

2018 ◽ Vol 7 (3.12) ◽ pp. 474
Author(s): K. S. R. Radhika, C. V. Rao, V. Kamakshi Prasad

Owing to inherent limitations of spaceborne sensors, image acquisition over a wide swath cannot achieve the best spatial resolution (SR) and temporal resolution (TR) simultaneously, yet information extraction from remotely sensed (RS) images demands both. As this is not possible onboard, suitable ground processing techniques based on advanced image processing need to be developed to meet these requirements. The proposed work deals with data from two Resourcesat-1 (RS1) sensors, the medium-swath LISS-III and the wide-swath AWiFS, combined to provide high spatial and temporal resolution at the same instant. LISS-III acquires data at 23 m with a 24-day revisit, AWiFS at 56 m with a 5-day revisit, over different swaths. Because both sensors acquire at the same time, the 140 km swath of LISS-III coincides with the exact centre line of the 740 km swath of AWiFS. If the non-overlapping AWiFS area contains the same earth-surface features as the LISS-III overlap area, this provides a way to increase the SR of AWiFS in the non-overlapping area to the SR of LISS-III. Using this knowledge, a novel processing technique, Fast One Pair Learning and Prediction (FOPLP), is developed whose running time improves on existing methods. FOPLP raises the AWiFS data in the non-overlapping area to LISS-III resolution using Single Image Super Resolution (SISR) with the Non-Subsampled Contourlet Transform (NSCT) method, and is applied to different sets of images. The proposed technique results in an image having a TR of 5 days and a 740 km swath at an SR of 23 m. Results show the strength of the proposed method in terms of computation time and prediction accuracy.
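The "learn in the overlap, predict outside it" idea can be illustrated independently of the NSCT-based pipeline. The sketch below is an assumption-laden stand-in for FOPLP (a plain ridge regression on image patches rather than SISR with NSCT): it trains on the co-registered overlap and predicts enhanced pixels for the non-overlapping area.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.feature_extraction.image import extract_patches_2d

def learn_and_predict(awifs_up_overlap, liss3_overlap, awifs_up_outside,
                      patch=5, n_train=20000, seed=0):
    """Illustrative 'learn in overlap, predict outside' step.

    awifs_up_overlap : AWiFS overlap area, upsampled to the 23 m grid
    liss3_overlap    : co-registered LISS-III overlap area (target)
    awifs_up_outside : upsampled AWiFS outside the overlap (to enhance)
    """
    rng = np.random.default_rng(seed)
    X = extract_patches_2d(awifs_up_overlap, (patch, patch))
    c = patch // 2
    # Target: the LISS-III pixel at each patch centre
    y = extract_patches_2d(liss3_overlap, (patch, patch))[:, c, c]
    idx = rng.choice(len(X), size=min(n_train, len(X)), replace=False)
    model = Ridge(alpha=1.0).fit(X[idx].reshape(len(idx), -1), y[idx])

    Xo = extract_patches_2d(awifs_up_outside, (patch, patch))
    pred = model.predict(Xo.reshape(len(Xo), -1))
    out_shape = (awifs_up_outside.shape[0] - patch + 1,
                 awifs_up_outside.shape[1] - patch + 1)
    return pred.reshape(out_shape)
```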


2017 ◽ Vol 2017 ◽ pp. 1-12
Author(s): Sakpod Tongleamnak, Masahiko Nagai

Performance of Global Navigation Satellite System (GNSS) positioning in urban environments is hindered by poor satellite availability, because the many man-made and natural objects in such environments obstruct satellite signals. To evaluate the availability of GNSS in cities, this paper presents a software simulation of GNSS availability in urban areas using a panoramic image dataset from Google Street View. Photogrammetric image processing techniques are applied to reconstruct fisheye sky-view images and detect signal obstacles. Comparisons of the simulation results with real-world observations in Bangkok and Tokyo are also presented and discussed for accuracy assessment.
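As an illustration of the final availability check (the equidistant projection convention and mask format are assumptions, not details given in the abstract), a satellite can be tested against a reconstructed fisheye sky-view mask like this:

```python
import numpy as np

def satellite_visible(sky_mask, az_deg, el_deg):
    """Check GNSS satellite visibility against a binary fisheye sky mask.

    sky_mask: square 2D bool array from a reconstructed fisheye sky view,
              True where open sky, False where buildings or trees obstruct.
              Assumes an equidistant fisheye: zenith at the centre,
              horizon at the edge, north up.
    az_deg, el_deg: satellite azimuth and elevation in degrees.
    """
    h, w = sky_mask.shape
    cx, cy, R = w / 2.0, h / 2.0, min(h, w) / 2.0
    r = R * (90.0 - el_deg) / 90.0       # zenith angle -> radius
    az = np.radians(az_deg)
    x = int(round(cx + r * np.sin(az)))  # east to the right
    y = int(round(cy - r * np.cos(az)))  # north up
    if not (0 <= x < w and 0 <= y < h):
        return False                      # below the horizon
    return bool(sky_mask[y, x])
```

Counting the satellites that pass this test at each epoch, and requiring at least four for a position fix, gives the simulated availability.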


Author(s): M. A. Brovelli, M. Minghini, M. E. Molinari, M. Molteni

Recent years have seen a remarkable flourishing of spatial data products released under open licenses. Researchers and professionals are extensively exploiting open geodata for many applications, which in turn include decision-making results and other (derived) geospatial datasets among their outputs. Although metadata have traditionally been available, a question arises about the actual quality of open geodata, as their declared quality is typically taken for granted without any systematic assessment. The present work investigates the case study of Milan Municipality (Northern Italy). A wide set of open geodata, released by national, regional and local authoritative entities, is available for this area. A comprehensive cataloguing operation is first performed, identifying 1061 open geospatial datasets from Italian providers that differ widely in license, format, scale, content, and release date. Among the many quality parameters for geospatial data, the work focuses on positional accuracy. An example of positional accuracy assessment is described for an openly licensed orthophoto through comparison with the official, up-to-date, large-scale vector cartography of Milan. The comparison is run according to the guidelines provided by ISO and shows that the positional accuracy declared by the orthophoto provider does not correspond to reality. Similar results are found in analyses of other datasets (not presented here). The implications are twofold: raising awareness of the risks of using open geodata while taking their quality for granted, and highlighting the need for open geodata providers to introduce or refine mechanisms for data quality control.
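A minimal sketch of such a checkpoint-based test, in the spirit of the ISO positional-accuracy measures (the function and array layout are illustrative, not the paper's actual procedure):

```python
import numpy as np

def positional_accuracy(measured, reference):
    """RMSE-based positional accuracy from well-defined checkpoints.

    measured:  (n, 2) E/N coordinates digitized on the tested orthophoto
    reference: (n, 2) coordinates of the same points in the reference
               large-scale vector cartography
    """
    d = np.asarray(measured, float) - np.asarray(reference, float)
    rmse_e = np.sqrt(np.mean(d[:, 0] ** 2))
    rmse_n = np.sqrt(np.mean(d[:, 1] ** 2))
    rmse_2d = np.sqrt(rmse_e ** 2 + rmse_n ** 2)
    return rmse_e, rmse_n, rmse_2d
```

Comparing `rmse_2d` against the accuracy declared in the dataset's metadata is the kind of check that revealed the mismatch reported above.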


2019 ◽ Vol 9 (2) ◽ pp. 178-185
Author(s): Raad A. Kattan, Farsat H. Abdulrahman

In this study, the geometric accuracy of four different maps for three sectors of Duhok city was assessed. The maps were produced in different periods and with different techniques. One set consisted of paper-plotted maps, which had to be geo-referenced. The other three maps were digitally plotted in the UTM/WGS-84 Zone 38 N coordinate system. A total of 51 points were identified on the reference map, the master plan of Duhok city prepared by the General Directorate of Urban Planning, Kurdistan Region, Iraq, in collaboration with the German company Ingenieurburo Vossing. This reference map, the master plan of Duhok governorate, is an official map certified and checked by the Ministry of Planning of the Kurdistan Region to have a positional accuracy of ±1.5 cm. These points were then searched for and identified on the other three maps. Discrepancies in the Eastings and Northings of these points were calculated, yielding a mean discrepancy of 2.29 m with a maximum value of 8.5 m in one instance. The maximum standard deviation in dE and dN was 3.8 m. These values are reasonably acceptable, considering that the maps were prepared using different techniques and to varying accuracy standards.
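The reported statistics (mean and maximum discrepancy, standard deviations of dE and dN) can be computed from the matched points as follows; this is a hedged illustration of the procedure, not the authors' code:

```python
import numpy as np

def discrepancy_stats(ref_pts, map_pts):
    """Per-point discrepancies between a reference map and a tested map.

    ref_pts, map_pts: (n, 2) arrays of Easting/Northing for the same
    identified points (here, the 51 points of the study).
    """
    d = np.asarray(map_pts, float) - np.asarray(ref_pts, float)
    dist = np.hypot(d[:, 0], d[:, 1])   # horizontal discrepancy per point
    return {
        "mean": dist.mean(),
        "max": dist.max(),
        "std_dE": d[:, 0].std(ddof=1),
        "std_dN": d[:, 1].std(ddof=1),
    }
```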


Geographies ◽ 2021 ◽ Vol 1 (2) ◽ pp. 143-165
Author(s): Jianyu Gu, Russell G. Congalton

Pixels, blocks (i.e., groupings of pixels), and polygons are the fundamental choices of assessment unit for validating per-pixel image classification. Previous research by the authors of this paper analysed the impact of positional accuracy when a single pixel is used for thematic accuracy assessment. The research described here provides a similar analysis, but with blocks of contiguous pixels chosen as the assessment unit for thematic validation. The goal was to assess the impact of positional errors on the thematic assessment, considering factors including block size, labeling threshold, landscape characteristics, spatial scale, and classification scheme. The results demonstrate that using blocks as the assessment unit reduces the thematic errors caused by positional errors to under 10% for most global land-cover mapping projects and most remote-sensing applications achieving half-pixel registration. The larger the block, the more the effect of positional error was reduced; however, there are practical limits to block size. More classes in a classification scheme and higher landscape heterogeneity increased the positional effect. The choice of labeling threshold depends on the spatial scale and landscape characteristics, balancing the number of abandoned units against the positional impact. This research suggests using blocks of pixels as the assessment unit in thematic accuracy assessment in future applications.
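To make the "labeling threshold" and "abandoned unit" notions concrete, here is a small sketch (names and conventions assumed) of majority-labeling one block of a classified map:

```python
import numpy as np

def block_label(class_map, i, j, size, threshold=0.5):
    """Label one block of a classified map by majority vote.

    class_map: 2D integer array of per-pixel class codes
    (i, j):    top-left pixel of the block; size: block edge in pixels
    threshold: minimum fraction the majority class must reach; below it
               the block is abandoned (returns None), mirroring the
               labeling-threshold idea in the abstract.
    """
    block = class_map[i:i + size, j:j + size]
    classes, counts = np.unique(block, return_counts=True)
    k = counts.argmax()
    if counts[k] / block.size < threshold:
        return None          # mixed block: abandoned from the assessment
    return int(classes[k])
```

A lower threshold abandons fewer blocks but admits more mixed ones, which is the trade-off the abstract describes.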


Author(s): Aleksandra Kostyleva

The primary task of this research is to elucidate the reasons why “new” immigrants were stereotyped as dangerous criminals and anarchists in United States society. The subject of this research is criminality within the immigrant environment, while its object is the immigrants from Southeast Europe and Asia who came to the United States in the second half of the XIX century and came to outnumber immigrants from Western and Northern Europe. The author analyses the social and economic situation of “new” immigrants as factors behind the rise of the crime rate in immigrant quarters. Special attention is given to organized criminal activity and radical political movements, as well as their influence on the public image of “new” immigrants. The conclusion is that representatives of the “new” immigration were involved in various unlawful actions, from minor administrative offenses and crimes against private property to murders, robberies and the creation of organized criminal communities. Important factors in the criminalization of immigrants from Southeast Europe and Asia were the social isolation of immigrant communities, difficult assimilation, and a hard economic situation caused by intense competition in the job market and high unemployment. At the same time, “new” immigrants were no more disposed to crime than local residents: the crime rate among immigrants did not exceed the national average.


2017 ◽ Vol 66 (2) ◽ pp. 347-364
Author(s): Janina Zaczek-Peplinska, Maria Kowalska

The XYZ coordinates registered in the form of a point cloud by a terrestrial laser scanner, together with the intensity values (I) assigned to them, make it possible to perform geometric and spectral analyses. Comparing point clouds registered in different periods requires converting the data to a common coordinate system, and proper data selection is necessary. Factors such as the point distribution, which depends on the distance between the scanner and the surveyed surface, the angle of incidence, the planned scan density, and the intensity values have to be taken into consideration. A prerequisite for a correct analysis of point clouds registered during periodic laser-scanner measurements is the ability to determine the quality and accuracy of the analysed data. The article presents a concept of spectral data adjustment based on geometric analysis of a surface, as well as examples of analyses integrating geometric and physical data in one point cloud: point coordinates, recorded intensity values, and thermal images of an object. The experiments described here show the many possible uses of terrestrial laser scanning data and demonstrate the need for multi-aspect, multi-source analyses in monitoring anthropogenic objects. The article presents examples of multi-source data analyses with regard to correcting intensity values for the beam's incidence angle. The measurements were performed using a Leica Nova MS50 scanning total station, a Z+F Imager 5010 scanner and the integrated Z+F T-Cam thermal camera.
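A common first-order correction of this kind, sketched below, assumes a Lambertian surface so that the recorded intensity scales with the cosine of the incidence angle; the array conventions are assumptions, not the paper's exact formulation:

```python
import numpy as np

def correct_intensity(intensity, normals, ray_dirs):
    """Incidence-angle correction of laser intensity (illustrative).

    Under a Lambertian assumption the recorded intensity scales with
    cos(alpha), where alpha is the angle between the laser beam and the
    surface normal; the correction divides that factor back out.

    intensity: (n,) raw intensity values
    normals:   (n, 3) unit surface normals estimated from the cloud
    ray_dirs:  (n, 3) unit vectors from the scanner to each point
    """
    cos_a = np.abs(np.einsum("ij,ij->i", normals, ray_dirs))
    cos_a = np.clip(cos_a, 0.05, 1.0)  # avoid blow-up at grazing angles
    return intensity / cos_a
```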


Sensors ◽ 2020 ◽ Vol 20 (18) ◽ pp. 5423
Author(s): José A. Moreno-Ruiz, José R. García-Lázaro, Manuel Arbelo, Manuel Cantón-Garbín

This paper presents an accuracy assessment of the main global-scale Burned Area (BA) products derived from daily Moderate Resolution Imaging Spectroradiometer (MODIS) images, Fire_CCI 5.1 and MCD64A1 C6, as well as the previous versions of both products (Fire_CCI 4.1 and MCD45A1 C5). The exercise was conducted on the boreal region of Alaska for the period 2000–2017. All the BA polygons registered by the Alaska Fire Service were used as reference data. Both new versions doubled the annual BA estimate of the previous versions (66% for Fire_CCI 5.1 versus 35% for v4.1, and 63% for MCD64A1 C6 versus 28% for C5), reducing the omission error (OE) by almost one half (39% versus 67% for Fire_CCI and 48% versus 74% for MCD) while slightly increasing the commission error (CE) (7.5% versus 7% for Fire_CCI and 18% versus 7% for MCD). The Fire_CCI 5.1 product (CE = 7.5%, OE = 39%) gave the best results in terms of positional accuracy relative to MCD64A1 C6 (CE = 18%, OE = 48%). These results suggest that Fire_CCI 5.1 could be suitable for users who employ standard BA products in geoinformatics analysis techniques for wildfire management, especially in boreal regions. A Pareto boundary analysis, performed on an annual basis, showed that there is still theoretical capacity to improve the MODIS sensor-based BA algorithms.
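The commission and omission errors quoted above follow the standard definitions, sketched here for area totals (variable names are illustrative):

```python
def burned_area_errors(tp, fp, fn):
    """Commission and omission errors for a burned-area product.

    tp: area burned in both the product and the reference (true positive)
    fp: area mapped as burned but absent from the reference (false positive)
    fn: reference burned area missed by the product (false negative)
    """
    ce = fp / (tp + fp)  # commission error: fraction of mapped BA that is wrong
    oe = fn / (tp + fn)  # omission error: fraction of reference BA missed
    return ce, oe
```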


2021
Author(s): Yuan-Yuan (Annie) Chang, Konrad Bogner, Massimiliano Zappa, Daniela I.V. Domeisen, Christian M. Grams

Across the globe, there has been increasing interest in improving the predictability of weekly to monthly (sub-seasonal) hydro-meteorological forecasts, as they play a valuable role in medium- to long-term planning in many sectors such as agriculture, navigation, hydro-power production, and hazard warnings. A Precipitation-Runoff-Evapotranspiration HRU model (PREVAH) has previously been set up with raw meteorological forcing of 51 ensemble members and 32 days lead time taken from the operational European Centre for Medium-Range Weather Forecasts (ECMWF) extended-range forecast. The PREVAH model is used to generate hydrological forecasts for the study area, which consists of 300 catchments covering approximately the entire area of Switzerland. The primary goal of this study is to improve the quality of the categorical forecast of whether the weekly mean total discharge in a catchment lies in the lower, normal, or upper tercile of the climatological distribution at a monthly horizon. We therefore explore post-processing the PREVAH outputs with the Gaussian process machine learning algorithm. Weather regime (WR) data, based on 500 hPa geopotential height in the Atlantic-European region, are used as an additional feature to further enhance post-processing performance.

By comparing the overall accuracy and the ranked probability skill score of the post-processed forecasts with those of the raw forecasts, we show that the proposed post-processing techniques are able to improve the forecast skill. The degree of improvement varies by catchment, lead time and variable. The benefit of the added WR data is not consistent across the study area but is most promising in high-altitude catchments with steep slopes. Among the seven types of WRs, most of the corrections are observed when either a European blocking or a Scandinavian blocking is forecast as the dominant weather regime. By applying a “best practice” to each individual catchment, i.e. the post-processing technique with the highest accuracy among those proposed, a median accuracy of 0.65 (up from 0.53 with no post-processing) can be achieved at a 4-week lead time. Given the small data size, these conclusions should be considered preliminary, but this study highlights the potential of improving the skill of sub-seasonal hydro-meteorological forecasts using weather regime data and machine learning in a real-time deployable setup.
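A minimal sketch of the Gaussian-process tercile classification idea (with placeholder data and assumed features; the abstract does not specify the predictors):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

# Predict the tercile category (0 = lower, 1 = normal, 2 = upper) of
# weekly mean discharge. Assumed predictors: ensemble mean, ensemble
# spread, and a weather-regime (WR) index.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))         # placeholder predictors
y = rng.integers(0, 3, size=200)      # placeholder tercile labels

gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0))
gpc.fit(X[:150], y[:150])
proba = gpc.predict_proba(X[150:])    # tercile probabilities per forecast
accuracy = gpc.score(X[150:], y[150:])  # the 'overall accuracy' metric
```

In practice the class probabilities would feed the ranked probability skill score, and the per-catchment "best practice" selection would compare this model's accuracy with that of the raw ensemble.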

