A Unique Airborne Multi-Angular Data Set for Different Applications in Remote Sensing

Author(s):  
Charles Gatebe ◽  
Rajesh Poudyal
Keyword(s):  
Data Set ◽  


2020 ◽  
Vol 38 (4A) ◽  
pp. 510-514
Author(s):  
Tay H. Shihab ◽  
Amjed N. Al-Hameedawi ◽  
Ammar M. Hamza

This paper exploits the complementary potential of spatial data for land use/land cover (LULC) mapping using Landsat 8 OLI images acquired in 2019. The images were rectified, enhanced, and then classified with the Random Forest (RF) and artificial neural network (ANN) methods. Optical remote sensing imagery was used to obtain information on the LULC status and extraction details, and the classification of the satellite imagery was used to extract features and analyse the LULC of the study area. The required image processing for LULC mapping, including geometric correction and image enhancement, was applied to the optical remote sensing data. The results show that the artificial neural network outperforms the random forest method: for the training data set the ANN achieved an overall accuracy of 0.91 and a kappa coefficient of 0.89, while for the test data set the overall accuracy and kappa coefficient were 0.89 and 0.87, respectively.
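A minimal sketch (not the authors' code) of how RF and ANN classifiers can be compared on labelled Landsat 8 OLI pixels and scored with overall accuracy and kappa; the band values, class labels and train/test split below are hypothetical placeholders.

# Hedged sketch: compare Random Forest vs. ANN on placeholder Landsat 8 OLI pixel samples.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)
X = rng.random((1000, 7))          # placeholder reflectance for 7 OLI reflective bands
y = rng.integers(0, 4, size=1000)  # placeholder LULC labels (e.g. water, vegetation, bare soil, built-up)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("RF", RandomForestClassifier(n_estimators=200, random_state=0)),
                  ("ANN", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0))]:
    clf.fit(X_train, y_train)
    pred = clf.predict(X_test)
    print(name,
          "overall accuracy:", round(accuracy_score(y_test, pred), 2),
          "kappa:", round(cohen_kappa_score(y_test, pred), 2))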


2012 ◽  
Vol 518-523 ◽  
pp. 5697-5703
Author(s):  
Zhao Yan Liu ◽  
Ling Ling Ma ◽  
Ling Li Tang ◽  
Yong Gang Qian

The aim of this study is to assess the capability of estimating Leaf Area Index (LAI) from high spatial resolution multi-angular Vis-NIR remote sensing data of the WiDAS (Wide-Angle Infrared Dual-mode Line/Area Array Scanner) imaging system by inverting the coupled radiative transfer models PROSPECT-SAILH. Based on simulations from the SAILH canopy reflectance model and the PROSPECT leaf optical properties model, a look-up table (LUT) describing the relationship between multi-angular canopy reflectance and LAI was produced. LAI is then retrieved from the LUT by directly matching the canopy reflectance of six view directions and four spectral bands against the table. The inversion results are validated with field data. Comparing retrievals from single-angular and multi-angular remote sensing data shows that the view angle has an obvious impact on LAI retrieval from single-angular data, and that highly accurate LAI can be obtained from high-resolution multi-angular remote sensing.
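A minimal sketch of look-up-table (LUT) inversion of the kind described above: simulate reflectance for candidate LAI values and retrieve LAI by nearest-match of an observed multi-angular, multi-band spectrum. The toy forward model below is only a stand-in for PROSPECT-SAILH; all numbers are illustrative assumptions.

# Hedged sketch: LUT inversion for LAI with a toy forward model (stand-in for PROSPECT-SAILH).
import numpy as np

N_VIEWS, N_BANDS = 6, 4  # six view directions, four spectral bands as in the study

def toy_canopy_reflectance(lai, rng=None):
    """Placeholder forward model: reflectance per (view, band) as a function of LAI."""
    base = 0.05 + 0.4 * (1.0 - np.exp(-0.5 * lai))           # saturating response to LAI
    angle_effect = np.linspace(0.9, 1.1, N_VIEWS)[:, None]   # crude view-angle dependence
    band_effect = np.array([0.6, 0.8, 1.0, 1.2])[None, :]    # crude spectral dependence
    refl = base * angle_effect * band_effect
    if rng is not None:                                       # optional noise for "observations"
        refl = refl + rng.normal(0.0, 0.005, refl.shape)
    return refl

# 1. Build the LUT: candidate LAI values and their simulated multi-angular reflectance.
lai_grid = np.linspace(0.1, 8.0, 400)
lut = np.stack([toy_canopy_reflectance(l).ravel() for l in lai_grid])

# 2. Invert: match an observed spectrum against the LUT by minimum RMSE.
rng = np.random.default_rng(1)
observed = toy_canopy_reflectance(3.2, rng).ravel()
cost = np.sqrt(np.mean((lut - observed) ** 2, axis=1))
print(f"retrieved LAI = {lai_grid[np.argmin(cost)]:.2f}")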


2014 ◽  
Vol 7 (9) ◽  
pp. 3095-3112 ◽  
Author(s):  
P. Sawamura ◽  
D. Müller ◽  
R. M. Hoff ◽  
C. A. Hostetler ◽  
R. A. Ferrare ◽  
...  

Abstract. Retrievals of aerosol microphysical properties (effective radius, volume and surface-area concentrations) and aerosol optical properties (complex index of refraction and single-scattering albedo) were obtained from a hybrid multiwavelength lidar data set for the first time. In July 2011, in the Baltimore–Washington DC region, synergistic profiling of optical and microphysical properties of aerosols with both airborne (in situ and remote sensing) and ground-based remote sensing systems was performed during the first deployment of DISCOVER-AQ. The hybrid multiwavelength lidar data set combines ground-based elastic backscatter lidar measurements at 355 nm with airborne High Spectral Resolution Lidar (HSRL) measurements at 532 nm and elastic backscatter lidar measurements at 1064 nm that were obtained less than 5 km apart from each other. This was the first study in which optical and microphysical retrievals from lidar were obtained during the day and directly compared to AERONET and in situ measurements for 11 cases. Good agreement was observed between the lidar and AERONET retrievals. Larger discrepancies were observed between the lidar retrievals and the in situ measurements obtained by the aircraft; aerosol hygroscopic effects are believed to be the main factor behind these discrepancies.


Author(s):  
Gordana Kaplan ◽  
Ugur Avdan

The benefits of wetlands can be summarized as, but are not limited to, their ability to store floodwaters and improve water quality, to provide habitats for wildlife and support biodiversity, and their aesthetic value. Over the past few decades, remote sensing and geographical information technologies have proven to be useful and frequently applied tools for monitoring and mapping wetlands. Combining optical and microwave satellite data can provide significant information about the biophysical characteristics of wetlands and wetland vegetation, and fusing data from different sensors, such as radar and optical remote sensing data, can increase wetland classification accuracy. In this paper we investigate the ability of fusing two fine-spatial-resolution satellite data sets, Sentinel-2 and the Synthetic Aperture Radar satellite Sentinel-1, for mapping wetlands. The Balikdami wetland, located in the Anatolian part of Turkey, was selected as the study area. Both Sentinel-1 and Sentinel-2 images require pre-processing before use. After pre-processing, several vegetation indices calculated from the Sentinel-2 bands were included in the data set, and an object-based classification was performed. For the accuracy assessment of the results, a number of random points were distributed over the study area. In addition, the results were compared with data from an Unmanned Aerial Vehicle (UAV) collected on the same date as the Sentinel-2 overpass and three days before the Sentinel-1 overpass. The accuracy assessment showed that the results were significant and satisfactory for wetland classification using both multispectral and microwave data. The fusion of the optical and radar data yielded high wetland mapping accuracy, with an overall classification accuracy of approximately 90% in the object-based classification. Compared with the high-resolution UAV data, the classification results are promising for mapping and monitoring not only the wetland but also the sub-classes of the study area. For future research, multi-temporal image use and terrain data collection are recommended.
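A minimal sketch of the fusion idea: stack Sentinel-1 backscatter with Sentinel-2 bands and vegetation indices into one feature set and classify it. The arrays are random placeholders for co-registered, pre-processed rasters, and the pixel-based Random Forest below is only a stand-in for the object-based classifier used in the study.

# Hedged sketch: fuse Sentinel-1 and Sentinel-2 features for a wetland classification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rows, cols = 100, 100
rng = np.random.default_rng(0)
s1_vv = rng.random((rows, cols))   # Sentinel-1 VV backscatter (placeholder)
s1_vh = rng.random((rows, cols))   # Sentinel-1 VH backscatter (placeholder)
green = rng.random((rows, cols))   # Sentinel-2 B3 (placeholder)
red   = rng.random((rows, cols))   # Sentinel-2 B4 (placeholder)
nir   = rng.random((rows, cols))   # Sentinel-2 B8 (placeholder)

ndvi = (nir - red) / (nir + red + 1e-9)      # vegetation index from the optical bands
ndwi = (green - nir) / (green + nir + 1e-9)  # water index from the optical bands

features = np.stack([s1_vv, s1_vh, green, red, nir, ndvi, ndwi], axis=-1).reshape(-1, 7)
labels = rng.integers(0, 3, size=rows * cols)  # placeholder training labels (e.g. water, marsh, upland)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(features, labels)
wetland_map = clf.predict(features).reshape(rows, cols)  # classified map from the fused stack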


1987 ◽  
Vol 9 ◽  
pp. 45-49 ◽  
Author(s):  
M.J. Clark ◽  
A.M. Gurnell ◽  
P.J. Hancock

Remote-sensing research in glacial and pro-glacial environments raises several methodological problems relating to the handling of ground and satellite radiometric data. An evaluation is undertaken of the use of ground radiometry to elucidate properties of relevant surface types in order to interpret satellite imagery. It identifies the influence that geometric correction and re-sampling have on the radiometric purity of the resulting data set. Methodological problems inherent in deriving catchment terrain characteristics are discussed with reference to currently glacierized and pro-glacial zones of south-western Switzerland.


2021 ◽  
Vol 13 (19) ◽  
pp. 3956
Author(s):  
Shan He ◽  
Huaiyong Shao ◽  
Wei Xian ◽  
Shuhui Zhang ◽  
Jialong Zhong ◽  
...  

Hilly areas are important parts of the world’s landscape. A marginal phenomenon can be observed in some hilly areas, leading to serious land abandonment. Extracting the spatio-temporal distribution of abandoned land in such hilly areas can protect food security, improve people’s livelihoods, and serve as a tool for rational land planning. However, mapping the distribution of abandoned land from a single type of remote sensing image remains challenging because of the fragmented terrain of such hilly areas and severe cloud contamination. In this study, a new approach integrating Linear stretch (Ls), Maximum Value Composite (MVC), and Flexible Spatiotemporal DAta Fusion (FSDAF) was proposed to analyze time-series changes and extract the spatial distribution of abandoned land. MOD09GA, MOD13Q1, and Sentinel-2 were selected as the source remote sensing images to fuse a monthly 10 m spatio-temporal data set. Three vegetation indices (VIs: NDVI, SAVI, NDWI) were used as measures to identify abandoned land. A multiple spatio-temporal scale sample database was established, and a Support Vector Machine (SVM) was used to separate abandoned land from cultivated land and woodland. The best extraction result, with an overall accuracy of 88.1%, was achieved by integrating Ls, MVC, and FSDAF with the assistance of an SVM classifier. The fused VI image set exceeded the single-source method (Sentinel-2) in accuracy by a margin of 10.8–23.6% for abandoned land extraction. The VIs also contributed positively to separating abandoned land from cultivated land and woodland. This study not only provides technical guidance for the rapid mapping of abandoned land in hilly areas, but also provides strong data support for connecting targeted poverty alleviation with rural revitalization.
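A minimal sketch of the Maximum Value Composite (MVC) step, one of the three components combined in the study: for each pixel, keep the maximum vegetation-index value over a month, which suppresses cloud-contaminated (low-VI) observations. The daily NDVI stack and cloud simulation below are placeholders.

# Hedged sketch: monthly Maximum Value Composite of a daily NDVI stack.
import numpy as np

days, rows, cols = 30, 200, 200
rng = np.random.default_rng(0)
daily_ndvi = rng.uniform(-0.2, 0.9, size=(days, rows, cols))  # placeholder daily NDVI layers

# Simulate cloud contamination: clouded observations get strongly depressed NDVI.
cloud_mask = rng.random((days, rows, cols)) < 0.3
daily_ndvi[cloud_mask] = rng.uniform(-0.2, 0.1, size=cloud_mask.sum())

monthly_mvc = np.nanmax(daily_ndvi, axis=0)  # per-pixel maximum over the month
print(monthly_mvc.shape)                     # (200, 200) monthly composite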


2021 ◽  
Author(s):  
Meng Chen ◽  
Jianjun Wu ◽  
Feng Tian

Automatically extracting buildings from remote sensing imagery (RSI) plays an important role in urban planning, population estimation, disaster emergency response, etc. With the development of deep learning, convolutional neural networks (CNNs), which perform better than traditional methods, have been widely used for extracting buildings from RSI, but some problems remain. First, the low-level features extracted by shallow layers and the abstract features extracted by deep layers of the network are not fully fused, which makes building extraction inaccurate, especially for buildings with complex structures, irregular shapes and small sizes. Second, a network contains many parameters that need to be trained, which occupies substantial computing resources and consumes considerable time during training. By analyzing the structure of CNNs, we found that the abstract features extracted by deep layers with low spatial resolution contain more semantic information; these abstract features help determine the category of pixels but are not sensitive to building boundaries. Since the stride of convolution kernels and pooling operations reduces the spatial resolution of feature maps, this paper proposes a simple and effective strategy: reduce the stride of the convolution kernel in one of the layers and reduce the number of convolution kernels, to alleviate the two bottlenecks above. This strategy was applied to DeepLabv3+, and experiments were carried out on both the WHU Building Dataset and the Massachusetts Building Dataset. Compared with the original DeepLabv3+, the results show that the strategy performs better: on the WHU Building Dataset, the Intersection over Union (IoU) increased by 1.4% and the F1 score by 0.9%; on the Massachusetts Building Dataset, IoU increased by 3.31% and the F1 score by 2.3%.
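A minimal PyTorch sketch of the idea described above: reducing a convolution's stride keeps the feature map at higher spatial resolution, and reducing the number of kernels cuts parameters. This is an illustration of the general effect, not the authors' actual modification of DeepLabv3+; the layer sizes are assumed for demonstration.

# Hedged sketch: effect of reducing stride and kernel count in a convolution layer.
import torch
import torch.nn as nn

x = torch.randn(1, 64, 128, 128)  # placeholder batch of feature maps

original = nn.Conv2d(64, 256, kernel_size=3, stride=2, padding=1)  # downsamples by 2
modified = nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1)  # keeps resolution, fewer kernels

print(original(x).shape)  # torch.Size([1, 256, 64, 64])
print(modified(x).shape)  # torch.Size([1, 128, 128, 128])

n_params = lambda m: sum(p.numel() for p in m.parameters())
print(n_params(original), "vs", n_params(modified), "parameters")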


2014 ◽  
Vol 7 (8) ◽  
pp. 2757-2773 ◽  
Author(s):  
M. Costa-Surós ◽  
J. Calbó ◽  
J. A. González ◽  
C. N. Long

Abstract. The cloud vertical distribution and especially the cloud base height, which is linked to cloud type, are important characteristics in order to describe the impact of clouds on climate. In this work, several methods for estimating the cloud vertical structure (CVS) based on atmospheric sounding profiles are compared, considering the number and position of cloud layers, with a ground-based system that is taken as a reference: the Active Remote Sensing of Clouds (ARSCL). All methods establish some conditions on the relative humidity, and differ in the use of other variables, the thresholds applied, or the vertical resolution of the profile. In this study, these methods are applied to 193 radiosonde profiles acquired at the Atmospheric Radiation Measurement (ARM) Southern Great Plains site during all seasons of the year 2009 and endorsed by Geostationary Operational Environmental Satellite (GOES) images, to confirm that the cloudiness conditions are homogeneous enough across their trajectory. The perfect agreement (i.e., when the whole CVS is estimated correctly) for the methods ranges between 26 and 64%; the methods show additional approximate agreement (i.e., when at least one cloud layer is assessed correctly) from 15 to 41%. Further tests and improvements are applied to one of these methods. In addition, we attempt to make this method suitable for low-resolution vertical profiles, like those from the outputs of reanalysis methods or from the World Meteorological Organization's (WMO) Global Telecommunication System. The perfect agreement, even when using low-resolution profiles, can be improved by up to 67% (plus 25% of the approximate agreement) if the thresholds for a moist layer to become a cloud layer are modified to minimize false negatives with the current data set, thus improving overall agreement.
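A minimal sketch of the common idea behind the sounding-based methods compared above: scan the relative-humidity profile and flag contiguous layers where RH exceeds a threshold as cloud layers. The single 95% threshold and the synthetic profile are illustrative assumptions only; the methods in the paper use more elaborate, height-dependent rules.

# Hedged sketch: simple RH-threshold cloud-layer detection on a sounding profile.
import numpy as np

def cloud_layers_from_rh(heights_m, rh_percent, threshold=95.0):
    """Return (base, top) heights of contiguous layers where RH >= threshold."""
    cloudy = rh_percent >= threshold
    layers, base = [], None
    for i, flag in enumerate(cloudy):
        if flag and base is None:
            base = heights_m[i]
        elif not flag and base is not None:
            layers.append((base, heights_m[i - 1]))
            base = None
    if base is not None:
        layers.append((base, heights_m[-1]))
    return layers

heights = np.arange(0, 12000, 250)                        # 250 m vertical resolution (placeholder)
rh = 60 + 40 * np.exp(-((heights - 3000) / 800.0) ** 2)   # synthetic moist layer around 3 km
print(cloud_layers_from_rh(heights, rh))                  # one detected layer near 3 km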


2020 ◽  
Vol 12 (19) ◽  
pp. 3190
Author(s):  
Xiaolong Li ◽  
Hong Zheng ◽  
Chuanzhao Han ◽  
Haibo Wang ◽  
Kaihan Dong ◽  
...  

Cloud pixels massively reduce the utilization of optical remote sensing images, which makes cloud detection important. In the current remote sensing literature, methods such as thresholding, statistical methods and deep learning (DL) have been applied to cloud detection. Because some cloud areas are translucent, the areas blurred by these clouds still retain some ground feature information, which blurs their spectral and spatial characteristics and makes accurate detection by existing methods difficult. To address this problem, this study presents a cloud detection method based on genetic reinforcement learning. First, the factors that directly affect the classification of pixels in remote sensing images are analyzed, and the concept of a pixel environmental state (PES) is proposed. Then, PES information and the algorithm's marking action are integrated into a "PES-action" data set. Subsequently, a reward–penalty rule is introduced and the "PES-action" strategy with the highest cumulative return is learned by a genetic algorithm (GA). Clouds can then be detected accurately through the learned "PES-action" strategy. By virtue of the strong adaptability of reinforcement learning (RL) to the environment and the global optimization ability of the GA, cloud regions are detected accurately. In the experiment, multi-spectral remote sensing images from SuperView-1 were collected to build the data set, on which clouds were accurately detected. The overall accuracy (OA) of the proposed method on the test set reached 97.15%, and satisfactory cloud masks were obtained. Compared with the best published DL method and the random forest (RF) method, the proposed method is superior in precision, recall, false positive rate (FPR) and OA for cloud detection. This study aims to improve the detection of cloud regions and provides a reference for researchers interested in cloud detection in remote sensing images.
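A minimal sketch of the reward–penalty idea: a genetic algorithm evolves a labelling rule scored by rewards for correct labels and penalties for errors. The single-threshold rule and the synthetic brightness data are placeholders; the paper's pixel environmental state (PES) features and action set are far richer than this.

# Hedged sketch: a GA evolving a threshold labelling rule under a reward/penalty fitness.
import numpy as np

rng = np.random.default_rng(0)
brightness = np.concatenate([rng.normal(0.3, 0.1, 500),   # placeholder clear pixels
                             rng.normal(0.7, 0.1, 500)])  # placeholder cloudy pixels
labels = np.concatenate([np.zeros(500), np.ones(500)])

def fitness(threshold):
    pred = (brightness > threshold).astype(int)
    reward = np.sum(pred == labels)   # +1 per correctly labelled pixel
    penalty = np.sum(pred != labels)  # -1 per wrongly labelled pixel
    return reward - penalty

pop = rng.uniform(0.0, 1.0, size=30)  # population of candidate thresholds
for _ in range(50):                   # generations
    scores = np.array([fitness(t) for t in pop])
    parents = pop[np.argsort(scores)[-10:]]                            # selection: keep the best 10
    children = rng.choice(parents, size=20) + rng.normal(0, 0.02, 20)  # mutated offspring
    pop = np.concatenate([parents, children])

best = pop[np.argmax([fitness(t) for t in pop])]
print(f"learned threshold ~ {best:.2f}")  # should settle near 0.5 for this synthetic data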

