Photogrammetric Engineering & Remote Sensing
Latest Publications


TOTAL DOCUMENTS

1637
(FIVE YEARS 278)

H-INDEX

91
(FIVE YEARS 4)

Published By American Society For Photogrammetry And Remote Sensing

ISSN: 0099-1112

2022 ◽  
Vol 88 (1) ◽  
pp. 9-10
Author(s):  
Stefan Hinz ◽  
Andreas Braun ◽  
Martin Weinmann

2022 ◽  
Vol 88 (1) ◽  
pp. 17-28
Author(s):  
Qing Ding ◽  
Zhenfeng Shao ◽  
Xiao Huang ◽  
Orhan Altan ◽  
Yewen Fan

Taking the Futian District as the research area, this study proposed an effective urban land cover mapping framework that fuses optical and SAR data. To reduce model complexity and improve the mapping results, various feature selection methods were compared and evaluated. The results showed that feature selection can eliminate irrelevant features, slightly increase the mean correlation between features, and significantly improve classification accuracy and computational efficiency. The recursive feature elimination-support vector machine (RFE-SVM) model obtained the best results, with an overall accuracy of 89.17% and a kappa coefficient of 0.8695. In addition, this study proved that the fusion of optical and SAR data can effectively improve mapping and reduce the confusion between different land covers. The novelty of this study lies in its insight into the merits of multi-source data fusion and feature selection for land cover mapping over complex urban environments, and in its evaluation of the performance differences between feature selection methods.
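As a minimal sketch of the feature-selection scheme highlighted in the abstract, the following example applies recursive feature elimination with a linear SVM (RFE-SVM) and then evaluates a classifier with overall accuracy and the kappa coefficient. The fused optical/SAR feature matrix, labels, and the number of retained features are placeholders, not values from the study.

```python
# RFE-SVM feature selection followed by SVM classification (illustrative only).
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))      # stand-in for fused optical + SAR features
y = rng.integers(0, 6, size=500)    # stand-in for land cover class labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# RFE repeatedly fits the linear SVM and drops the lowest-weighted features.
selector = RFE(estimator=SVC(kernel="linear", C=1.0), n_features_to_select=15, step=2)
selector.fit(X_train, y_train)

X_train_sel = selector.transform(X_train)
X_test_sel = selector.transform(X_test)

clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_train_sel, y_train)
pred = clf.predict(X_test_sel)
print("overall accuracy:", accuracy_score(y_test, pred))
print("kappa:", cohen_kappa_score(y_test, pred))
```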


2022 ◽  
Vol 88 (1) ◽  
pp. 47-53
Author(s):  
Muhammad Nasar Ahmad ◽  
Zhenfeng Shao ◽  
Orhan Altan

This study identifies the effects of the locust outbreak that occurred in February 2020. It is not possible to conduct ground-based surveys to monitor such large disasters in a timely and adequate manner, so we used a combination of automatic and manual remote sensing data processing techniques to assess the aftereffects of the locust attack effectively. We processed the MODIS normalized difference vegetation index (NDVI) manually in ENVI and the Landsat 8 NDVI on the Google Earth Engine (GEE) cloud computing platform. The results showed that (a) NDVI computation on GEE is more effective, prompt, and reliable than manual NDVI computation; (b) the locust disaster had a strong effect in the northern part of Sindh, where Thul, Ghari Khairo, Garhi Yaseen, Jacobabad, and Ubauro are the most vulnerable areas; and (c) the NDVI decreased sharply in 2020, from 0.92 to 0.68 in the Landsat data and from 0.81 to 0.65 in the MODIS imagery. The results clearly indicate an abrupt decrease in vegetation in 2020 due to the locust disaster. This is a serious threat to crop yield and food production, because agriculture provides a major portion of the food supply and gross domestic product of Sindh, Pakistan.
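A minimal sketch of computing a Landsat 8 NDVI composite in the Google Earth Engine Python API, in the spirit of the GEE workflow described above. The study-area geometry, date range, cloud threshold, and choice of a top-of-atmosphere collection are illustrative assumptions, not the exact settings used in the study.

```python
# Landsat 8 NDVI composite over an assumed region using the GEE Python API.
import ee

ee.Initialize()

# Approximate bounding box over northern Sindh; coordinates are placeholders.
region = ee.Geometry.Rectangle([67.5, 27.5, 69.5, 28.8])

def add_ndvi(image):
    # NDVI = (NIR - Red) / (NIR + Red); B5 = NIR, B4 = Red for Landsat 8 TOA.
    return image.addBands(image.normalizedDifference(["B5", "B4"]).rename("NDVI"))

collection = (
    ee.ImageCollection("LANDSAT/LC08/C02/T1_TOA")
    .filterBounds(region)
    .filterDate("2020-02-01", "2020-03-31")
    .filter(ee.Filter.lt("CLOUD_COVER", 20))
    .map(add_ndvi)
)

# Median composite and mean NDVI over the region at 30 m resolution.
ndvi = collection.select("NDVI").median()
mean_ndvi = ndvi.reduceRegion(
    reducer=ee.Reducer.mean(), geometry=region, scale=30, maxPixels=1e9
)
print(mean_ndvi.getInfo())
```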


2022 ◽  
Vol 88 (1) ◽  
pp. 65-72
Author(s):  
Wanxuan Geng ◽  
Weixun Zhou ◽  
Shuanggen Jin

Traditional urban scene-classification approaches focus on images taken from either satellite or aerial view. Although single-view images achieve satisfactory results for scene classification in most situations, the complementary information provided by other image views is needed to further improve performance. Therefore, we present a complementary information-learning model (CILM) to perform multi-view scene classification of aerial and ground-level images. Specifically, the proposed CILM takes aerial and ground-level image pairs as input to learn view-specific features, which are later fused to integrate the complementary information. To train CILM, a unified loss consisting of cross-entropy and contrastive losses is exploited to make the network more robust. Once CILM is trained, the features of each view are extracted via the two proposed feature-extraction scenarios and then fused to train a support vector machine classifier. The experimental results on two publicly available benchmark data sets demonstrate that CILM achieves remarkable performance, indicating that it is an effective model for learning complementary information and thus improving urban scene classification.
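The following PyTorch sketch shows the kind of unified objective described above: a cross-entropy term on the class predictions plus a pairwise contrastive term that pulls matching aerial/ground embeddings together. The margin, loss weighting, and toy tensors are illustrative assumptions, not the authors' exact CILM configuration.

```python
# Combined cross-entropy + contrastive loss for two-view (aerial/ground) features.
import torch
import torch.nn.functional as F

def unified_loss(logits, labels, z_aerial, z_ground, pair_match, margin=1.0, alpha=0.5):
    """logits: (N, C) class scores; labels: (N,) class ids;
    z_aerial, z_ground: (N, D) view-specific embeddings;
    pair_match: (N,) 1.0 if the two views show the same scene, else 0.0."""
    ce = F.cross_entropy(logits, labels)

    # Standard contrastive loss on the Euclidean distance between the two views.
    dist = F.pairwise_distance(z_aerial, z_ground)
    contrastive = (
        pair_match * dist.pow(2)
        + (1 - pair_match) * F.relu(margin - dist).pow(2)
    ).mean()

    return ce + alpha * contrastive

# Toy usage with random tensors (7 scene classes, 128-D embeddings).
logits = torch.randn(8, 7)
labels = torch.randint(0, 7, (8,))
z_a, z_g = torch.randn(8, 128), torch.randn(8, 128)
match = torch.randint(0, 2, (8,)).float()
print(unified_loss(logits, labels, z_a, z_g, match))
```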


2022 ◽  
Vol 88 (1) ◽  
pp. 39-46
Author(s):  
Xinyu Ding ◽  
Qunming Wang

Recently, the method of spatiotemporal spectral unmixing (STSU) was developed to fully exploit multi-scale temporal information (e.g., MODIS–Landsat image pairs) for spectral unmixing of coarse time series (e.g., MODIS data). To further support timely monitoring, the real-time STSU (RSTSU) method was developed for real-time data. In RSTSU, a spatially complete MODIS–Landsat image pair is usually chosen as auxiliary data. Due to cloud contamination, the temporal distance between the required effective auxiliary data and the real-time data to be unmixed can be large, causing great land cover changes and uncertainty in the extracted unchanged pixels (i.e., training samples). In this article, to extract more reliable training samples, we propose choosing the auxiliary MODIS–Landsat data temporally closest to the prediction time. To deal with cloud contamination in the auxiliary data, we propose an augmented sample-based RSTSU (ARSTSU) method. ARSTSU selects and augments the training samples extracted from the valid (i.e., cloud-free) area to synthesize more training samples, and then trains an effective learning model to predict the proportions. ARSTSU was validated using two MODIS data sets in the experiments. ARSTSU expands the applicability of RSTSU by solving the problem of cloud contamination in temporal neighbors in actual situations.
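A rough sketch, under strong assumptions, of the sample-augmentation idea: training pairs (coarse-pixel spectra mapped to class proportions) taken from the cloud-free area are perturbed with small noise to synthesize extra samples, and a regressor is then trained to predict proportions. This is a generic illustration of the concept, not the authors' ARSTSU implementation.

```python
# Augmenting cloud-free training samples and regressing class proportions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Stand-ins: 300 cloud-free coarse pixels, 7 spectral bands, 4 land cover classes.
spectra = rng.normal(size=(300, 7))
proportions = rng.dirichlet(np.ones(4), size=300)   # fractions sum to 1

def augment(X, Y, copies=3, noise=0.02, rng=rng):
    """Synthesize extra samples by jittering the spectra of valid pixels."""
    X_aug = [X] + [X + rng.normal(scale=noise, size=X.shape) for _ in range(copies)]
    Y_aug = [Y] * (copies + 1)
    return np.vstack(X_aug), np.vstack(Y_aug)

X_train, Y_train = augment(spectra, proportions)

# Multi-output regression from coarse spectra to class proportions.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, Y_train)

pred = model.predict(rng.normal(size=(5, 7)))
pred = np.clip(pred, 0, None)
pred /= pred.sum(axis=1, keepdims=True)              # renormalize to valid fractions
print(pred)
```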


2022 ◽  
Vol 88 (1) ◽  
pp. 29-38
Author(s):  
Clement E. Akumu ◽  
Eze O. Amadi

The mapping of southern yellow pines (loblolly, shortleaf, and Virginia pines) is important to supporting forest inventory and the management of forest resources. The overall aim of this study was to examine the integration of Landsat Operational Land Imager (OLI) optical data with Sentinel-1 microwave C-band satellite data and vegetation indices in mapping the canopy cover of southern yellow pines. Specifically, this study assessed the overall mapping accuracies of the canopy cover classification of southern yellow pines derived using four data-integration scenarios: 1) Landsat OLI alone; 2) Landsat OLI and Sentinel-1; 3) Landsat OLI with vegetation indices derived from satellite data (normalized difference vegetation index, soil-adjusted vegetation index, modified soil-adjusted vegetation index, transformed soil-adjusted vegetation index, and infrared percentage vegetation index); and 4) Landsat OLI with Sentinel-1 and vegetation indices. The results showed that the integration of Landsat OLI reflectance bands with Sentinel-1 backscattering coefficients and vegetation indices yielded the best overall classification accuracy, about 77%, whereas standalone Landsat OLI yielded the weakest accuracy, approximately 67%. The findings in this study demonstrate that the addition of backscattering coefficients from Sentinel-1 and vegetation indices positively contributed to the mapping of southern yellow pines.
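As a minimal sketch, the example below computes three of the vegetation indices listed above (NDVI, SAVI, and MSAVI) from Landsat OLI red and near-infrared reflectance arrays. The toy arrays and the SAVI soil factor L = 0.5 are common defaults assumed here, not values taken from the study.

```python
# Vegetation indices from red (band 4) and NIR (band 5) reflectance arrays.
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-10)

def savi(nir, red, L=0.5):
    return (1 + L) * (nir - red) / (nir + red + L + 1e-10)

def msavi(nir, red):
    # Modified SAVI with a self-adjusting soil factor.
    return (2 * nir + 1 - np.sqrt((2 * nir + 1) ** 2 - 8 * (nir - red))) / 2

# Toy reflectance arrays standing in for Landsat OLI bands 4 (red) and 5 (NIR).
red = np.random.default_rng(0).uniform(0.02, 0.3, size=(4, 4))
nir = np.random.default_rng(1).uniform(0.1, 0.6, size=(4, 4))
print(ndvi(nir, red).mean(), savi(nir, red).mean(), msavi(nir, red).mean())
```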


2022 ◽  
Vol 88 (1) ◽  
pp. 55-64
Author(s):  
Raechel A. Portelli ◽  
Paul Pope

Human experts are integral to the success of computational earth observation. They perform various visual decision-making tasks, from selecting data and training machine-learning algorithms to interpreting accuracy and credibility. Research concerning the various human factors which affect performance has a long history within the fields of earth observation and the military. Shifts in the analytical environment from analog to digital workspaces necessitate continued research, focusing on human-in-the-loop processing. This article reviews the history of human-factors research within the field of remote sensing and suggests a framework for refocusing the discipline's efforts to understand the role that humans play in earth observation.


2021 ◽  
Vol 87 (12) ◽  
pp. 913-922
Author(s):  
Ningning Zhu ◽  
Bisheng Yang ◽  
Zhen Dong ◽  
Chi Chen ◽  
Xia Huang ◽  
...  

To register mobile mapping system (MMS) lidar points and panoramic-image sequences, a relative orientation model of panoramic images (PROM) is proposed. The PROM is suitable for cases in which attitude or orientation parameters are unknown in the panoramic-image sequence. First, feature points are extracted and matched from panoramic-image pairs using the SURF algorithm. Second, these matched feature points are used to solve the relative attitude parameters in the PROM. Then, combining the PROM with the absolute position and attitude parameters of the initial panoramic image, the MMS lidar points and panoramic-image sequence are registered. Finally, the registration accuracy of the PROM method is assessed using corresponding points manually selected from the MMS lidar points and panoramic-image sequence. The results show that three types of MMS data sources are registered accurately based on the proposed registration method. Our method transforms the registration of panoramic images and lidar points into image feature-point matching, which is suitable for diverse road scenes compared with existing methods.
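A minimal sketch of the feature-point matching step described above, using OpenCV. SURF lives in the contrib "nonfree" module and is not available in every OpenCV build, so this is an assumption about tooling rather than the authors' exact implementation; the image file names are placeholders, and ORB could be substituted where SURF is unavailable.

```python
# SURF feature matching between a panoramic-image pair (requires opencv-contrib).
import cv2

img1 = cv2.imread("pano_t0.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder file names
img2 = cv2.imread("pano_t1.jpg", cv2.IMREAD_GRAYSCALE)

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp1, des1 = surf.detectAndCompute(img1, None)
kp2, des2 = surf.detectAndCompute(img2, None)

# Brute-force matching with Lowe's ratio test to keep reliable correspondences.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]

# Matched pixel coordinates; these would feed the relative-orientation solution.
pts1 = [kp1[m.queryIdx].pt for m in good]
pts2 = [kp2[m.trainIdx].pt for m in good]
print(len(good), "matches")
```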

