Accuracy Assessment of Large-Scale Topographic Feature Extraction Using High Resolution Raster Image and Artificial Intelligence Method

2021 ◽  
Vol 767 (1) ◽  
pp. 012045
Author(s):  
O Marena ◽  
M S N Fitri ◽  
O A Hisam ◽  
A Kamil ◽  
Z M Latif
2021 ◽  
Author(s):  
Andrew Griffin ◽  
Sean Griffin ◽  
Kristofer Lasko ◽  
Megan Maloney ◽  
S. Blundell ◽  
...  

Feature extraction algorithms are routinely leveraged to extract building footprints and road networks into vector format. When used in conjunction with high-resolution remotely sensed imagery, machine learning enables the automation of such feature extraction workflows. However, many of the feature extraction algorithms currently available have not been thoroughly evaluated in a scientific manner within complex terrain such as the cities of developing countries. This report details the performance of three automated feature extraction (AFE) datasets: Ecopia, Tier 1, and Tier 2, at extracting building footprints and roads from high-resolution satellite imagery, as compared to manual digitization of the same areas. To avoid environmental bias, this assessment was done in two different regions of the world: Maracay, Venezuela, and Niamey, Niger. High, medium, and low urban density sites are compared between regions. We quantify the accuracy of the three AFE datasets, and the time needed to correct them, against hand-digitized reference data across ninety tiles in each city, selected by stratified random sampling. Within each tile, the reference data were compared against the three AFE datasets, both before and after analyst editing, using the accuracy assessment metrics of Intersection over Union (IoU) and F1 score for buildings and roads, as well as Average Path Length Similarity (APLS) to measure road network connectivity. Of the three AFE datasets tested, the Ecopia data most frequently outperformed the others in accuracy and required the least editing time.
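The per-tile accuracy metrics named above can be illustrated with a minimal sketch, computing Intersection over Union and F1 score on rasterized building masks represented as sets of pixel coordinates. The example footprints are hypothetical, not data from the report.

```python
# Illustrative IoU and F1 computation on pixel-set building masks.

def iou(pred: set, ref: set) -> float:
    """Intersection over Union between predicted and reference pixels."""
    union = pred | ref
    return len(pred & ref) / len(union) if union else 1.0

def f1(pred: set, ref: set) -> float:
    """F1 score: harmonic mean of precision and recall."""
    tp = len(pred & ref)   # pixels correctly extracted
    fp = len(pred - ref)   # extracted but not in the reference
    fn = len(ref - pred)   # reference pixels that were missed
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 1.0

# Hypothetical 2x3 reference footprint vs. an extraction shifted one pixel.
reference = {(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2)}
extracted = {(0, 1), (0, 2), (0, 3), (1, 1), (1, 2), (1, 3)}

print(round(iou(extracted, reference), 3))  # 4 shared pixels / 8 in union
print(round(f1(extracted, reference), 3))
```

A pre-edit and post-edit mask could be scored the same way to quantify how much analyst editing improves each AFE dataset.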


2020 ◽  
Vol 12 (20) ◽  
pp. 8435
Author(s):  
Zitian Guo ◽  
Chunmei Wang ◽  
Xin Liu ◽  
Guowei Pang ◽  
Mengyang Zhu ◽  
...  

Land cover information plays an essential role in the study of global surface change. Multiple land cover datasets have been produced to meet various application needs. The FROM-GLC30 (Finer Resolution Observation and Monitoring of Global Land Cover) dataset is one of the latest land cover products with a resolution of 30 m, which is relatively high among global public datasets, and the accuracy of this dataset is of great concern in many related studies. The objective of this study was to calculate the accuracy of the FROM-GLC30 2017 dataset at the continental scale and to explore the spatial variation of each land type's accuracy across regions. In this study, visual interpretation land cover results at 20,936 small watershed sampling units, based on high-resolution remote sensing images, were used as the reference data covering 65 countries in Asia, Europe, and Africa. The reference data were verified by field survey in typical watersheds. On that basis, the accuracy assessment of the FROM-GLC30 2017 dataset was carried out. The results showed that (1) the area proportion of different land cover types in the FROM-GLC30 2017 dataset was generally consistent with that of the reference data; (2) the overall accuracy of the FROM-GLC30 2017 dataset was 72.78%, highest in West Asia–Northeast Africa and lowest in South Asia; and (3) among the seven land cover types, the accuracy of bareland and forest was relatively higher than that of the others, and the accuracy of shrubland was the lowest. The accuracy of each land cover type differed among regions. The results of this work can provide useful information for land cover accuracy assessment research at a large scale and promote further practical applications of open-source land cover datasets.
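An overall accuracy figure such as the 72.78% reported above is conventionally derived from a confusion matrix: cross-tabulate the mapped class against the reference class at each sample unit, then divide the diagonal (agreement) sum by the total. A minimal sketch with illustrative counts (not the study's actual confusion matrix):

```python
# Overall accuracy from (mapped, reference) class pairs per sample unit.
from collections import Counter

def overall_accuracy(pairs):
    """pairs: iterable of (mapped_class, reference_class) tuples."""
    counts = Counter(pairs)
    total = sum(counts.values())
    # Diagonal of the confusion matrix: mapped class agrees with reference.
    correct = sum(n for (mapped, ref), n in counts.items() if mapped == ref)
    return correct / total

# Hypothetical sample of 100 units over three land cover classes.
samples = (
    [("forest", "forest")] * 60 +
    [("forest", "shrubland")] * 10 +   # shrubland confused with forest
    [("bareland", "bareland")] * 25 +
    [("cropland", "bareland")] * 5
)
print(f"{overall_accuracy(samples):.2%}")  # 85 correct out of 100
```

Per-class (user's/producer's) accuracies fall out of the same counts by conditioning on the mapped or reference class, which is how the regional per-type differences in the study would be tabulated.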


2020 ◽  
Author(s):  
Anusha Ampavathi ◽  
Vijaya Saradhi T

Big data and its approaches are generally helpful for the healthcare and biomedical sectors in predicting disease. For trivial symptoms, it is difficult to meet a doctor in the hospital at any time; big data can instead provide essential information about diseases on the basis of a patient's symptoms. For many medical organizations, disease prediction is important for making the best feasible health care decisions. Conversely, the conventional medical care model offers its input as structured data, which demands more accurate and consistent prediction. This paper develops multi-disease prediction using an improved deep learning concept. Here, different datasets pertaining to "Diabetes, Hepatitis, lung cancer, liver tumor, heart disease, Parkinson's disease, and Alzheimer's disease" are gathered from the benchmark UCI repository for conducting the experiment. The proposed model involves three phases: (a) data normalization, (b) weighted normalized feature extraction, and (c) prediction. Initially, the dataset is normalized in order to bring each attribute's range to a common level. Further, weighted feature extraction is performed, in which a weight function is multiplied with each attribute value to magnify deviations between records. Here, the weight function is optimized using a combination of two meta-heuristic algorithms, termed the Jaya Algorithm-based Multi-Verse Optimization (JA-MVO) algorithm. The optimally extracted features are subjected to hybrid deep learning algorithms, namely the "Deep Belief Network (DBN) and Recurrent Neural Network (RNN)". As a modification to the hybrid deep learning architecture, the weights of both the DBN and RNN are optimized using the same hybrid optimization algorithm. Further, a comparative evaluation of the proposed predictor against existing models certifies its effectiveness through various performance measures.
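Phases (a) and (b) above can be sketched as min-max normalization of each attribute followed by multiplication with a per-attribute weight. In the paper the weights come from the JA-MVO hybrid optimizer; here they are fixed placeholders, and the two-attribute records are hypothetical.

```python
# Sketch of data normalization + weighted feature extraction.

def min_max_normalize(column):
    """Rescale one attribute's values to [0, 1]."""
    lo, hi = min(column), max(column)
    if hi == lo:
        return [0.0] * len(column)
    return [(v - lo) / (hi - lo) for v in column]

def weighted_features(rows, weights):
    """rows: record-major data; weights: one multiplier per attribute."""
    columns = list(zip(*rows))                       # attribute-major view
    normalized = [min_max_normalize(c) for c in columns]
    weighted = [[w * v for v in col] for w, col in zip(weights, normalized)]
    return [list(r) for r in zip(*weighted)]         # back to record-major

data = [[140.0, 82], [120.0, 70], [160.0, 95]]       # e.g. two vital signs
w = [0.8, 1.2]                                        # placeholder weights
print(weighted_features(data, w))
```

In the full model, an optimizer would search over `w` to maximize downstream prediction accuracy, and the resulting features would feed the DBN/RNN hybrid.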


2014 ◽  
Vol 31 (2) ◽  
Author(s):  
Mariela Gabioux ◽  
Vladimir Santos da Costa ◽  
Joao Marcos Azevedo Correia de Souza ◽  
Bruna Faria de Oliveira ◽  
Afonso De Moraes Paiva

Results of the basic model configuration of the REMO project, a Brazilian approach towards operational oceanography, are discussed. This configuration consists basically of a high-resolution, eddy-resolving, 1/12 degree model for Metarea V, nested in a medium-resolution, eddy-permitting, 1/4 degree model of the Atlantic Ocean. These simulations, performed with the HYCOM model, aim at: a) creating a basic set-up for the implementation of assimilation techniques leading to ocean prediction; b) developing hydrodynamic bases for environmental studies; c) providing boundary conditions for regional domains with increased resolution. The 1/4 degree simulation reproduced realistic equatorial and South Atlantic large-scale circulation, in both the wind-driven and the thermohaline components. The high-resolution simulation was able to generate mesoscale activity and represent well the variability pattern within the Metarea V domain. The Brazil Current (BC) mean transport values were well represented in the southwestern region (between the Vitória-Trindade seamount chain and 29S), in contrast to higher latitudes (poleward of 30S), where transport was slightly underestimated. Important issues for simulating the South Atlantic at high resolution are discussed, such as the ideal placement of the boundaries, improvements in the bathymetric representation, and the control of SST bias by introducing a weak surface relaxation. To make a preliminary assessment of the model's behavior under data assimilation, the Cooper & Haines (1996) method was used to extrapolate SSH anomaly fields to deeper layers every 7 days, with encouraging results.


Land ◽  
2019 ◽  
Vol 8 (12) ◽  
pp. 193
Author(s):  
Ali Alghamdi ◽  
Anthony R. Cummings

The implications of change for local processes have attracted significant research interest in recent times. In urban settings, green spaces and forests have received particular attention. Here, we present an assessment of change within the predominantly desert Middle Eastern city of Riyadh, an understudied setting. We utilized high-resolution SPOT 5 data and two classification techniques, maximum likelihood classification and object-oriented classification, to study the changes in Riyadh between 2004 and 2014. Imagery classification was completed with training data obtained from the SPOT 5 dataset, and an accuracy assessment was completed through a combination of field surveys and an application developed in the ESRI Survey 123 tool. The Survey 123 tool allowed residents of Riyadh to present their views on land cover for the 2004 and 2014 imagery. Our analysis showed that soil or 'desert' areas were converted to roads and buildings to accommodate Riyadh's rapidly growing population. The object-oriented classifier provided higher overall accuracy than the maximum likelihood classifier (92.36% and 90.77% vs. 74.71% and 73.79% for 2004 and 2014, respectively). Our work provides insights into the changes within a desert environment and establishes a foundation for understanding change in this understudied setting.


2020 ◽  
Vol 34 (10) ◽  
pp. 13849-13850
Author(s):  
Donghyeon Lee ◽  
Man-Je Kim ◽  
Chang Wook Ahn

In the real-time strategy (RTS) game StarCraft II, players need to know the likely consequences before making a decision in combat. We propose a combat outcome predictor which utilizes terrain information as well as squad information. To train the model, we generated a StarCraft II combat dataset by simulating diverse and large-scale combat situations. The overall accuracy of our model was 89.7%. Our predictor can be integrated into an artificial intelligence agent for RTS games as a short-term decision-making module.
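The idea of combining squad statistics with a terrain feature into a single win estimate can be illustrated with a minimal sketch. This is a hand-weighted logistic scorer, not the paper's trained model; the feature names and weights are hypothetical.

```python
# Toy combat outcome scorer: logistic function over named features.
import math

def win_probability(features, weights, bias=0.0):
    """Map a weighted sum of combat features to a probability in (0, 1)."""
    z = bias + sum(weights[k] * features[k] for k in weights)
    return 1.0 / (1.0 + math.exp(-z))

weights = {
    "ally_supply_advantage": 0.9,  # ally minus enemy army supply
    "ally_dps_advantage": 0.7,     # damage-per-second difference
    "high_ground": 1.5,            # terrain: 1 if allies hold high ground
}
combat = {
    "ally_supply_advantage": 2.0,
    "ally_dps_advantage": -0.5,
    "high_ground": 1,
}
p = win_probability(combat, weights)
print(f"predicted win probability: {p:.2f}")
```

A learned model would fit such weights (or a nonlinear equivalent) from the simulated combat dataset; the point of the sketch is only that terrain enters the prediction as an additional feature alongside squad composition.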


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Chris E. Blenkinsopp ◽  
Paul M. Bayle ◽  
Daniel C. Conley ◽  
Gerd Masselink ◽  
Emily Gulson ◽  
...  

A Correction to this paper has been published: https://doi.org/10.1038/s41597-021-00874-2.

