Urban Density Indices Using Mean Shift-Based Upsampled Elevation Data

Author(s):  
E. Charou ◽  
S. Gyftakis ◽  
E. Bratsolis ◽  
T. Tsenoglou ◽  
Th. D. Papadopoulou ◽  
...  

Urban density is an important factor in several fields, e.g. urban design, planning, and land management. Modern remote sensors deliver ample information for estimating specific urban land cover classes (2D indicators) and the heights of urban objects (3D indicators) within an Area of Interest (AOI). In this research, two such indicators, the Building Coverage Ratio (BCR) and the Floor Area Ratio (FAR), are derived numerically and automatically from high-resolution airborne RGB orthophotos and LiDAR data. In the pre-processing step, the low-resolution elevation data are fused with the high-resolution optical data through a mean-shift-based discontinuity-preserving smoothing algorithm. The outcome, an improved normalized digital surface model (nDSM), is upsampled elevation data with considerable improvement in region filling and the “straightness” of elevation discontinuities. In a subsequent step, a Multilayer Feedforward Neural Network (MFNN) classifies every pixel of the AOI as building or non-building. The total surfaces of the block and of the buildings are obtained from their pixel counts and the surface of the unit pixel. Comparisons of the automatically derived BCR and FAR indicators with manually derived ones show the applicability and effectiveness of the proposed methodology.
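The pixel-count computation described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the pixel area and the nominal storey height used to turn nDSM heights into floor counts are assumptions of this sketch.

```python
# Sketch of deriving BCR and FAR from a classified pixel grid and an nDSM.
# PIXEL_AREA and FLOOR_HEIGHT are illustrative assumptions (e.g. 0.5 m GSD,
# 3 m storeys), not values from the abstract.
PIXEL_AREA = 0.25    # m^2 of ground per pixel (assumed)
FLOOR_HEIGHT = 3.0   # m per storey (assumed)

def density_indices(block_mask, building_mask, ndsm):
    """block_mask/building_mask: 0/1 per pixel; ndsm: height (m) per pixel."""
    block_area = sum(block_mask) * PIXEL_AREA
    building_area = sum(building_mask) * PIXEL_AREA
    # FAR numerator: each building pixel contributes its estimated floor count.
    floor_area = sum(
        b * PIXEL_AREA * max(1, round(h / FLOOR_HEIGHT))
        for b, h in zip(building_mask, ndsm)
    )
    bcr = building_area / block_area
    far = floor_area / block_area
    return bcr, far
```

BCR reduces to the ratio of building pixels to block pixels, since the unit-pixel surface cancels; FAR additionally weights each building pixel by its storey estimate.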

Author(s):  
J. Fagir ◽  
A. Schubert ◽  
M. Frioud ◽  
D. Henke

The fusion of synthetic aperture radar (SAR) and optical data is a dynamic research area, but image segmentation is rarely treated. While a few studies use low-resolution nadir-view optical images, we approached the segmentation of SAR and optical images acquired from the same airborne platform, leading to an oblique view with high resolution and thus increased complexity. To overcome the geometric differences, we generated a digital surface model (DSM) from adjacent optical images and used it to project both the DSM and the SAR data into the optical camera frame, followed by segmentation with each channel. The fused segmentation algorithm was found to outperform the single-channel version.
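The projection into the optical camera frame can be illustrated with a standard pinhole model. This is a generic sketch under assumed camera parameters (focal length, principal point, rotation, translation), not the paper's actual sensor model:

```python
# Minimal pinhole projection of a 3D point (e.g. a DSM cell) into image
# coordinates. R is a 3x3 rotation (rows of lists), t the camera position;
# all parameters here are illustrative placeholders.
def project(point, f, cx, cy, R, t):
    # World -> camera coordinates: x = R @ (point - t)
    x = [sum(R[i][j] * (point[j] - t[j]) for j in range(3)) for i in range(3)]
    # Perspective division onto the image plane, then principal-point offset.
    u = f * x[0] / x[2] + cx
    v = f * x[1] / x[2] + cy
    return u, v
```

Applying this to every DSM cell (and to SAR samples geocoded via the DSM) resamples both data sources into a common oblique optical geometry, after which per-channel segmentation can proceed on aligned grids.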


2019 ◽  
Vol 11 (18) ◽  
pp. 2128 ◽  
Author(s):  
Mugiraneza ◽  
Nascetti ◽  
Ban

The emergence of high-resolution satellite data, such as WorldView-2, has opened the opportunity for urban land cover mapping at fine resolution. However, it is not straightforward to map detailed urban land cover and to detect urban deprived areas, such as informal settlements, in complex urban environments based merely on high-resolution spectral features. Thus, approaches integrating hierarchical segmentation and rule-based classification strategies can play a crucial role in producing high-quality urban land cover maps. This study aims to evaluate the potential of WorldView-2 high-resolution multispectral and panchromatic imagery for detailed urban land cover classification in Kigali, Rwanda, a complex urban area characterized by a subtropical highland climate. A multi-stage object-based classification was performed using support vector machines (SVM) and a rule-based approach to derive 12 land cover classes from WorldView-2 spectral bands, spectral indices, gray-level co-occurrence matrix (GLCM) texture measures, and a digital terrain model (DTM). In the initial classification, confusion existed among the informal settlements and the high- and low-density built-up areas, as well as between upland and lowland agriculture. To improve the classification accuracy, a framework based on a geometric ruleset and two newly defined indices (urban density and greenness density) was developed. The novel framework resulted in an overall classification accuracy of 85.36% with a kappa coefficient of 0.82. The confusion between high- and low-density built-up areas decreased significantly, while informal settlements were successfully extracted with producer's and user's accuracies of 77% and 90%, respectively. It was revealed that integrating an object-based SVM classification of WorldView-2 feature sets and the DTM with the geometric ruleset and the urban density and greenness indices resulted in better class separability, and thus higher classification accuracies, in complex urban environments.
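The exact definitions of the urban density and greenness density indices are the authors'; one plausible form, used here purely for illustration, is the fraction of built-up (or vegetated) pixels inside a square moving window:

```python
# Illustrative moving-window density: the fraction of positive pixels in a
# (2*radius+1)^2 neighbourhood around (r, c), clipped at image borders.
# This is a generic sketch, not the paper's ruleset.
def window_density(mask, r, c, radius=1):
    rows, cols = len(mask), len(mask[0])
    total = hits = 0
    for i in range(max(0, r - radius), min(rows, r + radius + 1)):
        for j in range(max(0, c - radius), min(cols, c + radius + 1)):
            total += 1
            hits += mask[i][j]
    return hits / total
```

Feeding such a density value per object into a ruleset lets a classifier separate, e.g., dense informal settlements from low-density built-up areas that look spectrally similar.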


2019 ◽  
Vol 11 (7) ◽  
pp. 752 ◽  
Author(s):  
Zhongchang Sun ◽  
Ru Xu ◽  
Wenjie Du ◽  
Lei Wang ◽  
Dengsheng Lu

Accurate and timely urban land mapping is fundamental to supporting large-area environmental and socio-economic research. Most of the available large-area urban land products are limited to a spatial resolution of 30 m. The fusion of optical and synthetic aperture radar (SAR) data for large-area high-resolution urban land mapping has not yet been widely explored. In this study, we propose a fast and effective urban land extraction method using ascending/descending orbits of Sentinel-1A SAR data and Sentinel-2 MSI (MultiSpectral Instrument, Level 1C) optical data acquired from 1 January 2015 to 30 June 2016. Potential urban land (PUL) was identified first through logical operations on yearly mean and standard deviation composites from a time series of ascending/descending orbits of SAR data. A yearly Normalized Difference Vegetation Index (NDVI) maximum composite and a modified Normalized Difference Water Index (MNDWI) mean composite were generated from Sentinel-2 imagery. The slope image derived from SRTM DEM data was used to mask mountain pixels and reduce the false positives in SAR data over these regions. We applied a region-specific threshold on PUL to extract the target urban land (TUL), and global thresholds on the MNDWI mean and slope images to extract water bodies and high-slope regions. A majority filter with a 3 × 3 window was applied to the previously extracted results, and the main processing was carried out on the Google Earth Engine (GEE) platform. China was chosen as the testing region to validate the accuracy and robustness of our proposed method through 224,000 validation points randomly selected from high-resolution Google Earth imagery. Additionally, a total of 735 blocks with a size of 900 × 900 m were randomly selected and used to compare our product's accuracy with the global human settlement layer (GHSL, 2014), GlobeLand30 (2010), and Liu (2015) products. Our method demonstrated the effectiveness of fusing optical and SAR data for large-area urban land extraction, especially in areas where optical data fail to distinguish urban land from spectrally similar objects. Results show that the average overall, producer's, and user's accuracies are 88.03%, 94.50%, and 82.22%, respectively.
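The per-pixel logical operations can be sketched as below. The threshold values are illustrative placeholders; the paper uses region-specific thresholds that this sketch does not reproduce:

```python
# Sketch of the logical thresholding pipeline: potential urban land from
# SAR temporal statistics, then vegetation, water, and slope masks from
# NDVI max, MNDWI mean, and SRTM slope. All thresholds are assumed values.
def classify_pixel(sar_mean, sar_std, ndvi_max, mndwi_mean, slope,
                   sar_thr=-10.0, std_thr=2.0, ndvi_thr=0.4,
                   mndwi_thr=0.0, slope_thr=10.0):
    # PUL: persistently strong, temporally stable backscatter.
    pul = sar_mean > sar_thr and sar_std < std_thr
    vegetated = ndvi_max > ndvi_thr      # yearly NDVI maximum
    water = mndwi_mean > mndwi_thr       # MNDWI mean composite
    steep = slope > slope_thr            # SRTM-derived slope (degrees)
    return pul and not (vegetated or water or steep)
```

On GEE the same logic would run as image-wise band math over the composites rather than per pixel, but the boolean structure is identical.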


Author(s):  
Guoyuan Li ◽  
Xinming Tang ◽  
Xiaoming Gao ◽  
Chongyang Zhang ◽  
Tao Li

ZY-3 is the first Chinese civilian high-resolution stereo mapping satellite, launched on 9 January 2012. The aim of the ZY-3 satellite is to obtain high-resolution stereo images and support 1:50,000-scale national surveying and mapping. Although ZY-3 achieves very high accuracy in direct geo-location without GCPs (Ground Control Points), some GCPs are still indispensable for high-precision stereo mapping. GLAS (the Geoscience Laser Altimeter System) is carried on ICESat (Ice, Cloud and land Elevation Satellite), the first laser altimetry satellite for Earth observation. GLAS played an important role in monitoring polar ice sheets and measuring land topography and vegetation canopy heights after its launch in 2003. Although the GLAS mission ended in 2009, the derived elevation dataset can still be used after selection by suitable criteria.

In this paper, ICESat/GLAS laser altimeter data are used as a height reference to improve the ZY-3 height accuracy. A selection method is proposed to obtain high-precision GLAS elevation data, and two strategies for improving the ZY-3 height accuracy are introduced. The first is a conventional bundle adjustment based on the RFM and a bias-compensation model, in which the GLAS footprint data serve as height control. The second corrects the DSM (Digital Surface Model) directly by a simple block adjustment, where the DSM is derived from ZY-3 stereo imagery after free adjustment and dense image matching. The experimental results demonstrate that the height accuracy of ZY-3 without other GCPs can be improved to 3.0 m after adding the GLAS elevation data. In addition, the accuracy and efficiency of the two strategies are compared for practical application.
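The "simple block adjustment" of the DSM can be illustrated in its most reduced form: estimating a constant vertical offset against the GLAS footprints and removing it. This is a deliberately minimal sketch; the actual adjustment may estimate more parameters (tilts, per-strip biases) than a single offset:

```python
# Minimal vertical block adjustment: the least-squares estimate of a
# constant height bias between DSM heights and GLAS reference heights is
# simply the mean of their differences; subtracting it corrects the DSM.
def vertical_bias(dsm_heights, glas_heights):
    diffs = [d - g for d, g in zip(dsm_heights, glas_heights)]
    return sum(diffs) / len(diffs)

def correct_dsm(dsm_heights, bias):
    return [h - bias for h in dsm_heights]
```

The same residuals (DSM minus GLAS) computed after correction give the height-accuracy figure the abstract reports.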


Author(s):  
Devrim Akca ◽  
Efstratios Stylianidis ◽  
Konstantinos Smagas ◽  
Martin Hofer ◽  
Daniela Poli ◽  
...  

Quick and economical ways of detecting planimetric and volumetric changes in forest areas are in high demand. A research platform called FORSAT (a satellite processing platform for high-resolution forest assessment) was developed for the extraction of 3D geometric information from very-high-resolution (VHR) imagery from satellite optical sensors and for automatic change detection. This 3D forest information solution was developed during a Eurostars project. FORSAT includes two main units. The first is dedicated to the geometric and radiometric processing of satellite optical imagery and 2D/3D information extraction. This includes: image radiometric pre-processing, image and ground point measurement, improvement of the geometric sensor orientation, quasi-epipolar image generation for stereo measurements, digital surface model (DSM) extraction using a precise and robust image matching approach specially designed for VHR satellite imagery, generation of orthoimages, and 3D measurements in single images using mono-plotting as well as in stereo images and triplets. FORSAT supports most of the VHR optical imagery commonly used for civil applications: IKONOS, OrbView-3, SPOT-5 HRS, SPOT-5 HRG, QuickBird, GeoEye-1, WorldView-1/2, Pléiades 1A/1B, SPOT 6/7, and sensors of similar type expected in the future. The second unit of FORSAT is dedicated to 3D surface comparison for change detection. It allows users to import digital elevation models (DEMs), align them using an advanced 3D surface matching approach, and calculate the 3D differences and volume changes between epochs. To this end, our 3D surface matching method LS3D is used. FORSAT is a single-source and flexible forest information solution with a very competitive price/quality ratio, allowing expert and non-expert remote sensing users to monitor forests in three and four dimensions from VHR optical imagery for many forest information needs. The capacity and benefits of FORSAT have been tested in six case studies located in Austria, Cyprus, Spain, Switzerland, and Turkey, using optical data from different sensors and with the purpose of monitoring forests with different geometric characteristics. The validation run on the Cyprus dataset is reported and discussed.
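Once two DEMs are aligned (the LS3D matching step itself is out of scope here), the volume change between epochs reduces to summing per-cell height differences times the cell area. A minimal sketch, assuming the DEMs are already co-registered on the same grid:

```python
# Net volume change between two co-registered DEMs sharing a grid:
# sum of (later height - earlier height) times the ground area of a cell.
# Positive values indicate growth/accumulation, negative values removal.
def volume_change(dem_before, dem_after, cell_area):
    return sum((b - a) * cell_area for a, b in zip(dem_before, dem_after))
```

Splitting the sum into its positive and negative parts would separate growth from harvesting, which is typically what a forest-change report needs.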


2021 ◽  
Author(s):  
Sébastien Saunier

In this paper, the authors describe the methodologies developed for the validation of Very High-Resolution (VHR) optical missions within the Earthnet Data Assessment Pilot (EDAP) framework. The use of surface-based, drone, airborne, and/or space-based observations to build a calibration reference plays a fundamental role in the validation process. A rigorous validation process must compare mission data products with independent reference data suitable for the satellite measurements. Consequently, one background activity within EDAP is the collection and consolidation of reference data of various natures, depending on the validation methodology.

The validation methodologies are conventionally divided into three categories: validation of the measurement, the geometry, and the image quality. Validation of the measurement requires an absolute calibration reference. The latter is built either from in situ measurements collected at RadCalNet[1] stations or from space-based observations performed with "gold" missions (Sentinel-2, Landsat-8) over Pseudo-Invariant Calibration Sites (PICS). For the geometric validation, several test sites have been set up. A test site is equipped with data from different reference sources. The full usability of a test site is not systematic; it depends on the validation metrics and the specifications of the sensor, particularly the spatial resolution and image acquisition geometry. Some existing geometric sites are equipped with Ground Control Point (GCP) sets surveyed using Global Navigation Satellite System (GNSS) devices in the field. In some cases, the GCP set supports the refinement of an image observed with drones in order to produce a raster reference, subsequently used to validate the internal geometry of the images under assessment. Moreover, a limiting factor in the usage of VHR optical ortho-rectified data is the accuracy of the Digital Surface Model (DSM) / Digital Terrain Model (DTM). In order to separate errors due to terrain elevation from errors due to the sensor itself, some test sites are also equipped with very accurate Light Detection and Ranging (LIDAR) data.

The validation of image quality addresses all aspects related to spatial resolution and is strongly linked to both the measurement and the geometry. The image quality assessments are performed with both qualitative and quantitative approaches. The quantitative approach relies on the analysis of images of artificial ground targets and leads to an estimate of the Modulation Transfer Function (MTF), together with additional image quality parameters such as the Signal-to-Noise Ratio (SNR). On the other hand, the qualitative approach assesses the interpretability of input images and leads to a rating scale[2] strongly related to the sensor's Ground Resolution Distance (GRD). This visual inspection task requires a database of very detailed images of man-made objects; within EDAP this database is considered a reference.

[1] https://www.radcalnet.org
[2] https://fas.org/irp/imint/niirs.htm
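Of the quantitative image-quality parameters mentioned, SNR has the simplest common estimator: over a radiometrically uniform target, it is the ratio of the mean signal to its standard deviation. A minimal sketch of that convention (one of several SNR definitions in use; the EDAP protocol may differ in detail):

```python
# SNR over a radiometrically uniform target: mean of the pixel values
# divided by their (population) standard deviation.
def snr(pixels):
    n = len(pixels)
    mean = sum(pixels) / n
    std = (sum((p - mean) ** 2 for p in pixels) / n) ** 0.5
    return mean / std
```

MTF estimation from artificial targets (e.g. slanted edges) is considerably more involved and is not sketched here.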


2020 ◽  
Vol 12 (22) ◽  
pp. 3797
Author(s):  
David Radke ◽  
Daniel Radke ◽  
John Radke

Measuring and monitoring the height of vegetation provides important insights into forest age and habitat quality. These are essential for the accuracy of applications that are highly reliant on up-to-date and accurate vegetation data. Current vegetation sensing practices involve ground survey, photogrammetry, synthetic aperture radar (SAR), and airborne light detection and ranging (LiDAR) sensors. While these methods provide high resolution and accuracy, their hardware and collection effort prohibit highly recurrent and widespread collection. In response to the limitations of current methods, we designed Y-NET, a novel deep learning model to generate high-resolution models of vegetation from highly recurrent multispectral aerial imagery and elevation data. Y-NET's architecture uses convolutional layers to learn correlations between different input features and vegetation height, generating an accurate vegetation surface model (VSM) at 1 × 1 m resolution. We evaluated Y-NET on 235 km² of the East San Francisco Bay Area and find that Y-NET achieves low error relative to LiDAR when tested on new locations. Y-NET also achieves an R² of 0.83, and side-by-side visual comparisons show that it effectively models complex vegetation. Furthermore, we show that Y-NET is able to identify instances of vegetation growth and mitigation by comparing aerial imagery and LiDAR collected at different times.
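The fusion idea behind a Y-shaped regressor (two input branches merged before a regression head) can be reduced to a per-pixel toy. This sketch only illustrates the data flow: the branch and head weights are arbitrary placeholders, not trained values, and the real model uses convolutional layers rather than scalar weights:

```python
# Toy Y-shaped fusion: spectral and elevation inputs pass through separate
# linear "branches", are concatenated, and a linear head maps the fused
# feature vector to a height estimate. All weights are placeholders.
def predict_height(spectral, elevation, w_spec, w_elev, w_head, bias):
    branch_spec = [s * w_spec for s in spectral]   # spectral branch
    branch_elev = [elevation * w_elev]             # elevation branch
    fused = branch_spec + branch_elev              # concatenation
    return sum(f * w for f, w in zip(fused, w_head)) + bias
```

In the actual network, each branch is a convolutional encoder and the head a decoder, so spatial context (not just per-pixel values) informs the height estimate.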


2021 ◽  
Vol 11 (13) ◽  
pp. 6072
Author(s):  
Nicla Maria Notarangelo ◽  
Arianna Mazzariello ◽  
Raffaele Albano ◽  
Aurelia Sole

Automatic building extraction from high-resolution remotely sensed data is a major area of interest for an extensive range of fields (e.g., urban planning, environmental risk management) but is challenging due to the complexity of urban morphology. Among the different methods proposed, approaches based on supervised machine learning (ML) achieve the best results. This paper investigates building footprint extraction using only high-resolution raster digital surface model (DSM) data by comparing the performance of three popular supervised ML models on a benchmark dataset. The first two methods rely on a histogram of oriented gradients (HOG) feature descriptor with either a classical ML classifier (support vector machine (SVM)) or a shallow neural network (extreme learning machine (ELM)); the third model is a fully convolutional network (FCN) based on deep learning with transfer learning. The data used were obtained from the International Society for Photogrammetry and Remote Sensing (ISPRS) and cover the urban areas of Vaihingen an der Enz, Potsdam, and Toronto. The results indicated that the performance of the models based on shallow ML (feature extraction and classifier training) is affected by the urban context investigated (F1 scores from 0.49 to 0.81), whereas the FCN-based model proved to be the most robust and best-performing method for building extraction from a high-resolution raster DSM (F1 scores from 0.80 to 0.86).
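The comparison above rests on the F1 score, the harmonic mean of precision and recall over the building class. Its standard computation from binary prediction/reference masks is:

```python
# F1 score for a binary (building / non-building) classification:
# harmonic mean of precision and recall, computed from the confusion counts.
def f1_score(pred, ref):
    tp = sum(1 for p, r in zip(pred, ref) if p and r)        # true positives
    fp = sum(1 for p, r in zip(pred, ref) if p and not r)    # false positives
    fn = sum(1 for p, r in zip(pred, ref) if not p and r)    # false negatives
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

Because F1 ignores true negatives, it is better suited than overall accuracy when buildings cover only a small fraction of the scene.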

