Active 3D Imaging of Vegetation Based on Multi-Wavelength Fluorescence LiDAR

Sensors, 2020, Vol. 20 (3), pp. 935
Author(s):  
Xingmin Zhao ◽  
Shuo Shi ◽  
Jian Yang ◽  
Wei Gong ◽  
Jia Sun ◽  
...  

Comprehensive and accurate vegetation monitoring is required in forestry and agricultural applications, and optical remote sensing offers a possible solution. However, traditional light detection and ranging (LiDAR) scans a surface to create point clouds and provides only 3D structural information, while active laser-induced fluorescence (LIF) measures only the photosynthetic and biochemical status of vegetation and lacks information about spatial structure. In this work, we present a new Multi-Wavelength Fluorescence LiDAR (MWFL) system. The system extends the multi-channel fluorescence detection of LIF on the basis of the LiDAR scanning and ranging mechanism. Based on a principle prototype of the MWFL system, we carried out vegetation-monitoring experiments in the laboratory. The results showed that MWFL simultaneously acquires the 3D spatial structure and physiological state needed for precision vegetation monitoring. Laboratory experiments on indoor scenes verified the system's performance. Fluorescence point cloud classification results were evaluated at four wavelengths, and by comparison with normal vectors, to assess the MWFL system's capabilities. The overall classification accuracy and Kappa coefficient increased from 70.7% and 0.17 at a single wavelength to 88.9% and 0.75 at four wavelengths, and improved from 76.2% and 0.29 using normal vectors alone to 92.5% and 0.84 using normal vectors combined with the four wavelengths. The study demonstrates that active 3D fluorescence imaging of vegetation based on the MWFL system has great application potential in remote sensing detection and vegetation monitoring.
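The accuracy gains above are reported as overall accuracy and a Kappa coefficient, both of which can be derived from a classification confusion matrix. A minimal sketch of that computation (the 2-class matrix below is hypothetical, not data from the paper):

```python
import numpy as np

def overall_accuracy_and_kappa(confusion):
    """Overall accuracy (p_o) and Cohen's kappa from a confusion matrix
    whose rows are reference classes and columns are predicted classes."""
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    p_o = np.trace(confusion) / total  # observed agreement
    # chance agreement: dot product of row and column marginal proportions
    p_e = (confusion.sum(axis=1) @ confusion.sum(axis=0)) / total**2
    return p_o, (p_o - p_e) / (1.0 - p_e)

# Hypothetical 2-class example (rows: reference, columns: predicted)
oa, kappa = overall_accuracy_and_kappa([[40, 10],
                                        [5, 45]])
print(round(oa, 2), round(kappa, 2))  # 0.85 0.7
```

Kappa discounts the agreement expected by chance, which is why it rises more sharply than overall accuracy when extra wavelengths genuinely separate the classes.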

2013, Vol. 726-731, pp. 4664-4667
Author(s):  
Rui Ying Hao ◽  
Rui Hong Yu

Changes in the spatial patterns of hydrophytes in the Wuliangsuhai wetland since 1986 are analyzed based on remote sensing principles. Wetland information is extracted using computer-automated classification combined with visual interpretation. Overall accuracy and the Kappa coefficient are then applied to assess classification accuracy. The results show that macrophytes are the dominant vegetation type in the Wuliangsuhai wetland.


2020
Author(s):  
Lei Wang ◽  
Haoran Sun ◽  
Wenjun Li ◽  
Liang Zhou

Crop planting structure is of great significance to the quantitative management of agricultural water and the accurate estimation of crop yield. With the increasing spatial and temporal resolution of optical and SAR (Synthetic Aperture Radar) remote sensing images, efficient crop mapping over large areas becomes possible and accuracy improves. In this study, the Qingyijiang Irrigation District in southwest China, which has heterogeneous terrain and a complex crop structure, was selected to compare crop identification methods. Multi-temporal optical (Sentinel-2) and SAR (Sentinel-1) data were used to calculate NDVI and the backscattering coefficient as the main classification indexes. The multi-spectral and SAR data showed significant change across the stages of the crop growth period and varied with crop type. Spatial distribution and texture analyses were also made. Classification using different combinations of indexes was performed with neural network, support vector machine, and random forest methods. The results showed that multi-temporal optical and SAR data from the key growing periods of the main crops can both provide satisfactory classification accuracy: overall accuracy was greater than 82% and the Kappa coefficient greater than 0.8. SAR data showed high accuracy and great potential in rice identification, whereas optical data were more accurate for upland crop classification. In addition, classification accuracy was effectively improved by combining classification indexes from optical and SAR data, raising overall accuracy to 91.47%. The random forest method was superior to the other two methods in terms of both overall accuracy and Kappa coefficient.
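The classification inputs described above can be sketched as follows: per-pixel NDVI from the optical bands and backscatter converted to decibels from SAR, stacked into one multi-temporal feature vector per pixel. This is an illustrative sketch with invented values, not the study's processing chain:

```python
import numpy as np

def ndvi(nir, red):
    """NDVI from near-infrared and red surface reflectance."""
    return (nir - red) / (nir + red)

def to_db(sigma0_linear):
    """Backscattering coefficient, linear power to decibels."""
    return 10.0 * np.log10(sigma0_linear)

def stack_features(ndvi_series, vv_db, vh_db):
    """Per-pixel feature vector: multi-temporal NDVI plus VV/VH backscatter,
    ready for a neural network, SVM, or random forest classifier."""
    return np.concatenate([ndvi_series, vv_db, vh_db], axis=-1)

# One pixel observed on three hypothetical acquisition dates
features = stack_features(
    ndvi(np.array([0.5, 0.6, 0.7]), np.array([0.1, 0.1, 0.1])),
    to_db(np.array([0.10, 0.12, 0.15])),   # VV polarization
    to_db(np.array([0.02, 0.03, 0.05])),   # VH polarization
)
print(features.shape)  # (9,)
```

Stacking optical and SAR indexes into one vector is what allows a single classifier to exploit both sources, which is the combination the abstract credits for the 91.47% overall accuracy.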


2021, Vol. 13 (9), pp. 1699
Author(s):  
Lingxiao Gu ◽  
Yanmin Shuai ◽  
Congying Shao ◽  
Donghui Xie ◽  
Qingling Zhang ◽  
...  

Optical remote sensing indices play an important role in vegetation information extraction and have long served the ecology, agriculture and forestry, urban monitoring, and other communities. Remote sensing indices are constructed from individual bands, chosen for their spectral characteristics, to enhance typical spectral features for the identification or distinction of surface land covers. With the development of quantitative remote sensing, there is a rapidly increasing requirement for accurate data processing and modeling. It is well known that the geometry-induced variation observed in surface reflectance is not negligible, but the uncertainty it introduces into these indices still needs detailed understanding. We adopted ground-based multi-angle hyperspectral measurements and the spectral response functions (SRFs) of the Thematic Mapper (TM), Enhanced Thematic Mapper Plus (ETM+), Operational Land Imager (OLI), Moderate-Resolution Imaging Spectroradiometer (MODIS), and Multi-Spectral Instrument (MSI) optical sensors to simulate sensor-like spectral reflectance; we then investigated the potential angle-effect uncertainty in optical indices frequently used in vegetation monitoring and examined the forward/backward effect at both the ground level and over an actual Landsat TM/ETM+ overlap region. Our results for the discussed indices and sensors show the following: (1) Identifiable angle effects exist, with a larger influence than that introduced by band differences among sensors; (2) The absolute forward-backward difference can reach -0.03 to 0.1 within bands of the TM/ETM+ overlap region; (3) The ground-level investigation indicates that the angle effect transmits differently to each remote sensing index. For crop canopies at various growth phases, most of the discussed indices show more than a 20% relative difference in nadir value, except the Normalized Difference Vegetation Index (NDVI) and Normalized Difference Water Index (NDWI), at less than 10%, and the Normalized Burn Ratio (NBR), at less than 16%. For the wax maturity stage, the relative difference in nadir value of the Enhanced Vegetation Index (EVI), Soil-Adjusted Vegetation Index (SAVI), Ratio Vegetation Index (RVI), Char Soil Index (CSI), NBR, Normalized Difference Moisture Index (NDMI), and SWIR2/NIR exceeded 50%, among which the values for NBR and NDMI reached 115.8% and 206.7%, respectively; (4) Different schemes of index construction propagate the angle-effect uncertainty differently: "difference" indices can partially suppress the directional influence, while "ratio" indices show a high potential to amplify the angle effect. This study reveals that the angle-induced uncertainty of these indices is greater than that induced by the spectral mismatch among sensors, especially in the senescence period. Based on this work, indices that suppress the angle effect are recommended for vegetation monitoring and information retrieval to avoid unexpected errors.
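The contrast between "difference" and "ratio" index constructions can be illustrated numerically. In this sketch the red/NIR reflectance values are invented for illustration, not measured data; it shows how the forward/backward difference relative to the nadir value stays small for a normalized difference (NDVI) but is amplified by a plain band ratio (RVI):

```python
def ndvi(nir, red):
    """A "difference"-type index: normalized band difference."""
    return (nir - red) / (nir + red)

def rvi(nir, red):
    """A "ratio"-type index: simple band ratio."""
    return nir / red

# Hypothetical red/NIR reflectance at three viewing geometries
nadir    = dict(nir=0.45, red=0.05)
forward  = dict(nir=0.40, red=0.04)
backward = dict(nir=0.50, red=0.07)

def rel_diff(index):
    """Forward-backward index difference relative to its nadir value."""
    return (index(**forward) - index(**backward)) / index(**nadir)

print(round(rel_diff(ndvi), 3), round(rel_diff(rvi), 3))  # 0.08 0.317
```

Normalization cancels part of the common directional signal in the numerator and denominator, which is why the "difference" construction partially suppresses the angle effect while the "ratio" construction passes it through.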


2021, Vol. 13 (13), pp. 2516
Author(s):  
Zhuangwei Jing ◽  
Haiyan Guan ◽  
Peiran Zhao ◽  
Dilong Li ◽  
Yongtao Yu ◽  
...  

A multispectral light detection and ranging (LiDAR) system, which simultaneously collects spatial geometric data and multi-wavelength intensity information, opens the door to three-dimensional (3-D) point cloud classification and object recognition. Because of the irregular distribution of point clouds and the massive data volume, point cloud classification directly from multispectral LiDAR data remains challenging. In this paper, a point-wise multispectral LiDAR point cloud classification architecture termed SE-PointNet++ is proposed by integrating a Squeeze-and-Excitation (SE) block with an improved PointNet++ semantic segmentation network. PointNet++ extracts local features from unevenly sampled points and represents local geometrical relationships among the points through multi-scale grouping. The SE block is embedded into PointNet++ to strengthen important channels and increase feature saliency for better point cloud classification. Our SE-PointNet++ architecture was evaluated on the Titan multispectral LiDAR test datasets and achieved an overall accuracy, a mean Intersection over Union (mIoU), an F1-score, and a Kappa coefficient of 91.16%, 60.15%, 73.14%, and 0.86, respectively. Comparative studies with five established deep learning models confirmed that SE-PointNet++ achieves promising performance in multispectral LiDAR point cloud classification tasks.
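The channel-reweighting mechanism of an SE block can be sketched in plain NumPy. This is a simplified, dependency-free illustration of the idea (squeeze by global pooling, excite through a small bottleneck, rescale channels), not the authors' implementation:

```python
import numpy as np

def se_block(point_features, w1, w2):
    """Squeeze-and-Excitation over the channels of a point feature map.

    point_features: (N, C) features for N points with C channels
    w1: (C, C // r) bottleneck weights; w2: (C // r, C) expansion weights
    """
    squeeze = point_features.mean(axis=0)          # (C,) global channel descriptor
    hidden = np.maximum(squeeze @ w1, 0.0)         # ReLU bottleneck (excitation step 1)
    scale = 1.0 / (1.0 + np.exp(-(hidden @ w2)))   # sigmoid channel weights in (0, 1)
    return point_features * scale                  # reweight every channel

# Toy example: 5 points, 4 channels, reduction ratio r = 2, random weights
rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 4))
out = se_block(feats, rng.normal(size=(4, 2)), rng.normal(size=(2, 4)))
print(out.shape)  # (5, 4)
```

Because the learned scale factors sit in (0, 1) per channel, the block can only attenuate uninformative wavelength channels relative to salient ones, which is the "strengthen important channels" behavior the abstract describes.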


Author(s):  
G. Kaplan ◽  
U. Avdan

Wetlands provide a number of environmental and socio-economic benefits, such as their ability to store floodwaters and improve water quality, their provision of habitats for wildlife and support for biodiversity, and their aesthetic value. Remote sensing technology has proven a useful and frequent tool for monitoring and mapping wetlands. Combining optical and microwave satellite data can help map and monitor the biophysical characteristics of wetlands and wetland vegetation, and fusing radar and optical remote sensing data can increase wetland classification accuracy. In this paper, data from the fine-spatial-resolution optical satellite Sentinel-2 and the Synthetic Aperture Radar satellite Sentinel-1 were fused for mapping wetlands. Both Sentinel-1 and Sentinel-2 images were pre-processed; vegetation indices were then calculated from the Sentinel-2 bands and the results were included in the fusion data set. For the classification of the fused data, three different classification approaches were used and compared. The results showed significant improvement in wetland classification using both multispectral and microwave data. The presence of the red edge bands and the vegetation indices in the data set also significantly improved the discrimination between wetlands and other vegetated areas. The statistical results of fusing the optical and radar data showed high wetland mapping accuracy, with an overall classification accuracy of approximately 90% for the object-based classification method. For future research, we recommend multi-temporal image use and terrain data collection, as well as a comparison of this method with traditional image fusion techniques.


Author(s):  
M. Khoshboresh Masouleh ◽  
M. R. Saradjian

Abstract. Building footprint extraction (BFE) from multi-sensor data, such as optical images and light detection and ranging (LiDAR) point clouds, is widely used in various remote sensing applications. However, it is still a challenging research topic because building extraction techniques remain relatively inefficient across the variety of complex scenes found in multi-sensor data. In this study, we develop and evaluate a deep competition network (DCN) that fuses very high spatial resolution optical remote sensing images with LiDAR data for robust BFE. DCN is a deep superpixel-wise convolutional encoder-decoder architecture using encoder vector quantization with a classified structure, and consists of five encoding-decoding blocks with convolutional weights for robust binary representation (superpixel) learning. DCN was trained and tested on a large multi-sensor dataset with multiple building scenes obtained from the state of Indiana in the United States. Comparison of the accuracy assessments showed that DCN has competitive BFE performance relative to other deep semantic binary segmentation architectures. We therefore conclude that the proposed model is a suitable solution for robust BFE from large multi-sensor data.

