Extraction of Spectral Information from Airborne 3D Data for Assessment of Tree Species Proportions

2021 ◽  
Vol 13 (4) ◽  
pp. 720
Author(s):  
Jonas Bohlin ◽  
Jörgen Wallerman ◽  
Johan E. S. Fransson

With the rapid development of photogrammetric software and accessible camera technology, land surveys and other mapping organizations now provide various point cloud and digital surface model products from aerial images, often including spectral information. In this study, methods for colouring the point cloud and the importance of different metrics were compared for tree species-specific estimates at a coniferous hemi-boreal test site in southern Sweden. A total of three different data sets of aerial image-based products and one multi-spectral lidar data set were used to estimate tree species-specific proportion and stem volume using an area-based approach. Metrics were calculated for 156 field plots (10 m radius) from point cloud data and used in a Random Forest analysis. Plot-level accuracy was evaluated using leave-one-out cross-validation. The results showed small differences in estimation accuracy of species-specific variables between the colouring methods. Simple averages of the spectral metrics had the highest importance, and using spectral data from two seasons improved species prediction, especially the deciduous proportion. The best tree species-specific proportions were estimated using multi-spectral lidar, with a root mean square error (RMSE) of 0.22 for pine, 0.22 for spruce and 0.16 for deciduous. The corresponding RMSE for aerial images was 0.24, 0.23 and 0.20 for pine, spruce and deciduous, respectively. For the species-specific stem volume at plot level using image data, the RMSE in percent of the surveyed mean was 129% for pine, 60% for spruce and 118% for deciduous.
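The area-based workflow the abstract describes can be sketched as follows: plot-level metrics feed a Random Forest, and accuracy is assessed with leave-one-out cross-validation. All data here are synthetic and the metric count is arbitrary; this is an illustration of the evaluation scheme, not the authors' exact model.

```python
# Sketch of an area-based approach with leave-one-out cross-validation.
# Plot metrics and species proportions are simulated (hypothetical data);
# note the Random Forest does not constrain predictions to sum to one.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.random((156, 10))            # point-cloud/spectral metrics per plot
y = rng.dirichlet(np.ones(3), 156)   # pine/spruce/deciduous proportions

preds = np.zeros_like(y)
for i in range(len(X)):              # leave-one-out: hold out one plot at a time
    mask = np.arange(len(X)) != i
    rf = RandomForestRegressor(n_estimators=50, random_state=0)
    rf.fit(X[mask], y[mask])
    preds[i] = rf.predict(X[i:i + 1])

rmse = np.sqrt(((preds - y) ** 2).mean(axis=0))  # per-species RMSE
```

With real field-plot metrics in `X` and surveyed proportions in `y`, the same loop yields species-specific RMSE values comparable to those reported in the abstract.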

2012 ◽  
Vol 11 ◽  
pp. 7-13
Author(s):  
Dilli Raj Bhandari

The automatic extraction of objects from airborne laser scanner data and aerial images has been a topic of research for decades. Airborne laser scanner data are a very efficient source for the detection of buildings. Half of the world's population lives in urban or suburban areas, so detailed, accurate and up-to-date building information is of great importance to residents, government agencies, and private companies. The main objective of this paper is to extract features for building detection using airborne laser scanner data and aerial images. To achieve this objective, a method integrating LiDAR and aerial images has been explored, so that the advantages of both data sets are utilized to derive buildings with high accuracy. Airborne laser scanner data contain accurate, high-resolution elevation information, a very important feature for detecting elevated objects such as buildings, while the aerial image contributes spectral information, an appropriate feature for separating buildings from trees. Planar region-growing segmentation of the LiDAR point cloud has been performed, and a normalized digital surface model (nDSM) is obtained by subtracting the DTM from the DSM. Integration of the nDSM, the aerial images and the segmented polygon features from the LiDAR point cloud has been carried out, and the optimal features for building detection have been extracted from the integration result. The mean height of the nDSM, the normalized difference vegetation index (NDVI) and the standard deviation of the nDSM are the most effective features. An accuracy assessment of the classification results obtained using the calculated attributes yielded an accuracy of almost 92%, indicating that the features extracted by integrating the two data sets are, to a large extent, effective for the automatic detection of buildings.
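The two complementary features the paper combines can be written down directly: the nDSM (DSM minus DTM) supplies object heights, and NDVI from the image's near-infrared and red bands separates vegetation. A minimal sketch on toy rasters, with illustrative thresholds that are not the paper's:

```python
# nDSM and NDVI as building-detection cues (toy 2x2 rasters, hypothetical values).
import numpy as np

dsm = np.array([[12.0, 18.0], [10.5, 25.0]])   # digital surface model (m)
dtm = np.full((2, 2), 10.0)                    # digital terrain model (m)
ndsm = dsm - dtm                               # object heights above ground

nir = np.array([[0.6, 0.8], [0.3, 0.2]])       # near-infrared reflectance
red = np.array([[0.2, 0.1], [0.3, 0.5]])       # red reflectance
ndvi = (nir - red) / (nir + red + 1e-9)        # vegetation index

# Elevated (high nDSM) but non-vegetated (low NDVI) pixels are building-like;
# elevated, high-NDVI pixels are trees.
building_mask = (ndsm > 2.0) & (ndvi < 0.3)
```

Here the tall, high-NDVI pixel at (0, 1) is rejected as a tree, while the tall, low-NDVI pixel at (1, 1) survives as a building candidate.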


2019 ◽  
Vol 11 (10) ◽  
pp. 1157 ◽  
Author(s):  
Jorge Fuentes-Pacheco ◽  
Juan Torres-Olivares ◽  
Edgar Roman-Rangel ◽  
Salvador Cervantes ◽  
Porfirio Juarez-Lopez ◽  
...  

Crop segmentation is an important task in Precision Agriculture, where the use of aerial robots with an on-board camera has contributed to the development of new solution alternatives. We address the problem of fig plant segmentation in top-view RGB (Red-Green-Blue) images of a crop grown in difficult open-field conditions, with complex lighting and non-ideal crop maintenance practices defined by local farmers. We present a Convolutional Neural Network (CNN) with an encoder-decoder architecture that classifies each pixel as crop or non-crop using only raw colour images as input. Our approach achieves a mean accuracy of 93.85% despite the complexity of the background and the highly variable visual appearance of the leaves. We make our CNN code available to the research community, as well as the aerial image data set and a hand-made, pixel-precise ground truth segmentation, to facilitate comparison among different algorithms.
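The reported mean accuracy is typically the per-class pixel accuracy averaged over the two classes (crop and non-crop). A minimal sketch of that metric on toy 4x4 masks (hypothetical data, not the paper's):

```python
# Per-class pixel accuracy averaged over classes, from a predicted
# segmentation mask and a hand-made ground-truth mask (toy example).
import numpy as np

gt   = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 1, 1]])      # ground truth: 1 = crop, 0 = non-crop
pred = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 1, 1, 1]])      # hypothetical CNN output

accs = []
for cls in (0, 1):
    cls_pixels = gt == cls
    accs.append((pred[cls_pixels] == cls).mean())  # recall of this class
mean_accuracy = float(np.mean(accs))
```

Averaging per-class accuracies keeps a dominant background class from masking poor crop-pixel performance, which matters when crop pixels are the minority.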


2019 ◽  
Vol 11 (18) ◽  
pp. 2176 ◽  
Author(s):  
Chen ◽  
Zhong ◽  
Tan

Detecting objects in aerial images is a challenging task due to the multiple orientations and relatively small size of the objects. Although many traditional detection models have demonstrated acceptable performance by using an image pyramid and multiple templates in a sliding-window manner, such techniques are inefficient and costly. Recently, convolutional neural networks (CNNs) have successfully been used for object detection and have demonstrated considerably superior performance to that of traditional detection methods; however, this success has not been extended to aerial images. To overcome these problems, we propose a detection model based on two CNNs. One CNN is designed to propose many object-like regions, generated from feature maps of multiple scales and hierarchies together with orientation information. With this design, the positioning of small objects becomes more accurate, and the generated regions with orientation information are better suited to objects arranged at arbitrary orientations. The other CNN is designed for object recognition; it first extracts the features of each generated region and then makes the final decisions. The results of extensive experiments on the vehicle detection in aerial imagery (VEDAI) and overhead imagery research data set (OIRDS) datasets indicate that the proposed model performs well in terms of both detection accuracy and detection speed.
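Detection accuracy on benchmarks such as VEDAI and OIRDS is conventionally scored by intersection-over-union (IoU) between predicted and ground-truth boxes. A minimal sketch for axis-aligned boxes (the oriented case generalizes this with rotated-polygon intersection); values are illustrative:

```python
# Intersection-over-union for axis-aligned boxes (x1, y1, x2, y2).
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])  # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # overlap area 1, union 7
```

A detection is usually counted as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5.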


2020 ◽  
Vol 12 (21) ◽  
pp. 3630
Author(s):  
Jin Liu ◽  
Haokun Zheng

Object detection and recognition in aerial and remote sensing images has become a hot topic in the field of computer vision in recent years. As these images are usually taken from a bird's-eye view, the targets often have varied shapes and are densely arranged, so using an oriented bounding box to mark a target is a mainstream choice. However, general detectors are designed around horizontal box annotation, while improved methods that detect oriented bounding boxes have high computational complexity. In this paper, we propose a method called ellipse field network (EFN) to organically integrate semantic segmentation and object detection. It predicts the probability distribution of the target and obtains accurate oriented bounding boxes through a post-processing step. We tested our method on the HRSC2016 and DOTA data sets, achieving mAP values of 0.863 and 0.701, respectively. We also tested the performance of EFN on natural images, obtaining an mAP of 84.7 on the VOC2012 data set. These extensive experiments demonstrate that EFN achieves state-of-the-art results on aerial images and obtains a good score on natural images.
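One common way to recover an oriented box from a thresholded probability field is to take the principal axes of the foreground pixels. The sketch below illustrates that post-processing idea with PCA on a synthetic diagonal blob; it is an illustration of the concept, not EFN's exact procedure.

```python
# Recover the orientation of a binary blob via PCA of its pixel coordinates.
import numpy as np

mask = np.zeros((20, 20), bool)
for i in range(12):                      # a thin diagonal "ship-like" blob
    mask[4 + i, 4 + i] = True
    mask[5 + i, 4 + i] = True

ys, xs = np.nonzero(mask)
pts = np.stack([xs, ys], axis=1).astype(float)
center = pts.mean(axis=0)
cov = np.cov((pts - center).T)           # 2x2 covariance of pixel coordinates
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
major = eigvecs[:, 1]                    # long-axis direction of the blob
angle = np.degrees(np.arctan2(major[1], major[0])) % 180.0  # orientation mod 180
```

Projecting the pixels onto `major` and its perpendicular then gives the extents of an oriented bounding box around the blob.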


Author(s):  
D. Frommholz ◽  
M. Linkiewicz ◽  
H. Meissner ◽  
D. Dahlke

This paper proposes a two-stage method for the reconstruction of city buildings with discontinuities and roof overhangs from oriented nadir and oblique aerial images. To model the structures, the input data is transformed into a dense point cloud, segmented and filtered with a modified marching cubes algorithm to reduce the positional noise. Assuming a monolithic building, the remaining vertices are initially projected onto a 2D grid and passed to RANSAC-based regression and topology analysis to geometrically determine finite wall, ground and roof planes. If this fails due to the presence of discontinuities, the regression is repeated on a 3D level by traversing voxels within the regularly subdivided bounding box of the building point set. For each cube a planar piece of the current surface is approximated and expanded. The resulting segments are mutually intersected, yielding both topological and geometrical nodes and edges. These entities are eliminated if their distance-based affiliation to the defining point sets is violated, leaving a consistent building hull including its structural breaks. To add the roof overhangs, the computed polygonal meshes are projected onto the digital surface model derived from the point cloud. Their shapes are offset equally along the edge normals with subpixel accuracy by detecting the zero-crossings of the second-order directional derivative in the gradient direction of the height bitmap, and translated back into world space to become a component of the building. Once the reconstructed objects are finished, the aerial images are further used to generate a compact texture atlas for visualization purposes. An optimized atlas bitmap is generated that allows perspective-correct multi-source texture mapping without prior rectification, involving a partially parallel placement algorithm.
Moreover, the texture atlases undergo object-based image analysis (OBIA) to detect window areas which get reintegrated into the building models. To evaluate the performance of the proposed method a proof-of-concept test on sample structures obtained from real-world data of Heligoland/Germany has been conducted. It revealed good reconstruction accuracy in comparison to the cadastral map, a speed-up in texture atlas optimization and visually attractive render results.


Author(s):  
C. Chen ◽  
W. Gong ◽  
Y. Hu ◽  
Y. Chen ◽  
Y. Ding

Automated building detection in aerial images is a fundamental problem in aerial and satellite image analysis. Recently, thanks to advances in feature description, the Region-based CNN model (R-CNN) for object detection has been receiving increasing attention. Despite its excellent performance in object detection, it is problematic to directly leverage the features of the R-CNN model for building detection in a single aerial image. A single aerial image is taken in a vertical view, and buildings possess a significant directional feature; however, the R-CNN model ignores the direction of the building and represents detection results as horizontal rectangles. For this reason, detection results with horizontal rectangles cannot describe buildings precisely. To address this problem, in this paper we propose a novel model with a key feature related to orientation, namely, Oriented R-CNN (OR-CNN). Our contributions are mainly in the following two aspects: 1) introducing a new oriented layer network for detecting the rotation angle of a building, on the basis of the successful VGG-net R-CNN model; 2) proposing the oriented rectangle to leverage the powerful R-CNN for remote-sensing building detection. In experiments, we establish a complete and brand-new data set for training our Oriented R-CNN model and comprehensively evaluate the proposed method on a publicly available building detection data set. We demonstrate state-of-the-art results compared with previous baseline methods.


Author(s):  
Z. Hussnain ◽  
S. Oude Elberink ◽  
G. Vosselman

Abstract. In this paper, a method is presented to improve the MLS platform's trajectory in GNSS-denied areas. The method comprises two major steps. The first step is based on a 2D image registration technique described in our previous publication. Internally, this registration technique first performs aerial-to-aerial image matching, which yields correspondences that enable the computation of 3D tie points by multiview triangulation. Similarly, it registers the rasterized Mobile Laser Scanning Point Cloud (MLSPC) patches with the multiple related aerial image patches. The latter registration provides the correspondence between the aerial-to-aerial tie points and the MLSPC's 3D points. In the second step, which is described in this paper, a procedure utilizes three kinds of observations to improve the MLS platform's trajectory. The first type of observation is the set of 3D tie points computed automatically in the previous step (and already available), the second is based on IMU readings, and the third is a soft constraint over related pose parameters. The 3D tie points are considered accurate and precise observations, since they provide both locally and globally strict constraints, whereas the IMU observations and soft constraints only provide locally precise constraints. For the 6DOF trajectory representation, the pose [R, t] parameters are first converted to six B-spline functions over time. Then, for the trajectory adjustment, the coefficients of the B-splines are updated from the established observations. We tested our method on an MLS data set acquired in a test area in Rotterdam, and verified the trajectory improvement by evaluation with independently and manually measured GCPs.
After the adjustment, the trajectory achieved an accuracy of RMSE X = 9 cm, Y = 14 cm and Z = 14 cm. Analysis of the error in the updated trajectory suggests that our procedure is effective at adjusting the 6DOF trajectory and regenerating a reliable MLSPC product.
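The trajectory representation described above can be sketched concretely: one pose parameter is modeled as a uniform cubic B-spline in time, and the adjustment updates the spline coefficients from observations. Below, a single parameter is fitted by linear least squares to simulated observations; the real method adjusts six such splines jointly with IMU and soft constraints, which this sketch omits.

```python
# One pose parameter as a uniform cubic B-spline; coefficients are updated
# by least squares from (synthetic) observations of the parameter over time.
import numpy as np

def basis(u):
    # uniform cubic B-spline basis functions at local parameter u in [0, 1)
    return np.array([(1 - u)**3,
                     3*u**3 - 6*u**2 + 4,
                     -3*u**3 + 3*u**2 + 3*u + 1,
                     u**3]) / 6.0

n_coef = 8                                     # control coefficients
ts = np.linspace(0.0, n_coef - 3 - 1e-9, 40)   # observation times in [0, 5)
obs = np.sin(ts)                               # simulated observed X positions

A = np.zeros((len(ts), n_coef))                # design matrix: 4 active bases per row
for r, t in enumerate(ts):
    i = min(int(t), n_coef - 4)                # spline segment index
    A[r, i:i + 4] = basis(t - i)

coef, *_ = np.linalg.lstsq(A, obs, rcond=None) # adjusted spline coefficients
fit = A @ coef
rmse = np.sqrt(np.mean((fit - obs)**2))
```

Because the spline is linear in its coefficients, tie-point, IMU and soft-constraint observations can all enter the same least-squares system as additional weighted rows.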


Author(s):  
C. Yao ◽  
X. Zhang ◽  
H. Liu

The application of LiDAR data in forestry initially focused on mapping forest communities, primarily intended for large-scale forest management and planning. With smaller-footprint, higher-sampling-density LiDAR data available, detecting individual overstory trees, estimating crown parameters and identifying tree species have been demonstrated to be practicable. This paper proposes a section-based protocol for tree species identification, taking the palm tree as an example. The section-based method detects objects through profiles along different directions, basically along the X-axis or Y-axis, and improves the utilization of spatial information to generate accurate results. Firstly, the tree points are separated from man-made-object points by decision-tree-based rules, and a Crown Height Model (CHM) is created by subtracting the Digital Terrain Model (DTM) from the digital surface model (DSM). Then, key points are calculated and extracted to locate individual trees, from which specific tree parameters related to species information are estimated, such as crown height, crown radius, and cross point. Finally, with these parameters certain tree species can be identified. Compared to species information measured on the ground, the proportion of correctly identified trees across all plots reached up to 90.65%. The identification result in this research demonstrates the ability to distinguish palm trees using LiDAR point clouds. Furthermore, with more prior knowledge, the section-based method enables the process to classify trees into different classes.
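Two of the pipeline steps are simple enough to sketch directly: deriving the CHM as DSM minus DTM, and locating individual tree tops as local maxima of the CHM. The toy 5x5 raster and the 2 m height threshold below are illustrative, not the paper's values.

```python
# CHM derivation and local-maximum tree-top detection on a toy raster.
import numpy as np

dsm = np.full((5, 5), 10.0)           # digital surface model (m)
dsm[1, 1] = 18.0                      # first tree apex
dsm[3, 3] = 16.0                      # second tree apex
dtm = np.full((5, 5), 10.0)           # digital terrain model (m)
chm = dsm - dtm                       # canopy heights above ground

tops = []
for r in range(1, chm.shape[0] - 1):
    for c in range(1, chm.shape[1] - 1):
        window = chm[r - 1:r + 2, c - 1:c + 2]
        # a tree top: tall enough and the maximum of its 3x3 neighbourhood
        if chm[r, c] >= 2.0 and chm[r, c] == window.max():
            tops.append((r, c))
```

The section-based profiles described in the abstract would then be cut through each detected top along the X- and Y-axes to estimate crown height and radius.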


2013 ◽  
Vol 164 (4) ◽  
pp. 95-103 ◽  
Author(s):  
Lars T. Waser

Status and perspectives of country-wide tree species classification based on digital aerial images. There is increasing interest in area-wide, high-resolution data on forest composition. In Switzerland, tree species distribution will be assessed periodically by the Swiss National Forest Inventory (NFI), but these demands are only partly met by the existing forest type maps, which are relatively poor in terms of spatial accuracy, currency, and reproducibility. Providing consistent, reproducible and up-to-date information on various forest parameters is the main advantage of using the latest remote sensing data and methods. New possibilities are offered by the airborne digital sensor ADS80, which records the entire country during the vegetation season every six years. This paper presents a robust methodology for classifying tree species in different study areas. The obtained accuracies for beech, ash, Norway spruce, Scots pine, larch, willow and silver fir average 71–85%, but are lower for other deciduous tree species, mainly less dominant species within a study area such as maple and birch. A small sample data set and the shadows of neighboring trees seem to be the main reasons for this. Based on the experience gained in this study, a country-wide classification of tree species has become more feasible. The use of airborne digital sensor ADS80 data, combined with the high degree of automation of the developed methods, will enable the generation of country-wide products distinguishing coniferous and deciduous tree species by 2015.


2022 ◽  
Vol 14 (1) ◽  
pp. 238
Author(s):  
Binhan Luo ◽  
Jian Yang ◽  
Shalei Song ◽  
Shuo Shi ◽  
Wei Gong ◽  
...  

With rapid modernization, many remote-sensing sensors have been developed for classifying urban land and for environmental monitoring. Multispectral LiDAR, a comparatively new technology, has exhibited potential for remote-sensing monitoring due to the synchronous acquisition of a three-dimensional point cloud and spectral information. This study confirmed the potential of multispectral LiDAR for complex urban land cover classification through three comparative methods. Firstly, the Optech Titan LiDAR point cloud was pre-processed and ground filtered. Then, three methods were analyzed: (1) Channel 1 of the Titan data, to simulate the classification achievable with a single-band LiDAR; (2) three-channel information and the digital surface model (DSM); and (3) three-channel information and the DSM combined with three calculated normalized difference vegetation indices (NDVIs). A decision tree was subsequently used for classification based on the combination of intensity, elevation, and spectral information. The overall classification accuracies of the point cloud using the single-channel classification and the multispectral LiDAR were 64.66% and 93.82%, respectively. The results show that multispectral LiDAR has excellent potential for classifying land use in complex urban areas due to the availability of spectral information, and that adding elevation information to the classification process can boost classification accuracy.
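Since the Titan sensor records three channels, three pairwise normalized difference indices can be formed per point and combined with nDSM height in simple decision-tree-style rules. The sketch below shows the idea for one point; the channel pairing follows the sensor's 1550 nm / 1064 nm / 532 nm wavelengths, but the intensity values and thresholds are illustrative, not the paper's learned tree.

```python
# Three pairwise normalized difference indices from Titan channel intensities,
# combined with height in a hand-set decision rule (hypothetical values).
c1550, c1064, c532 = 0.25, 0.60, 0.10   # per-point channel intensities
height = 6.0                            # height above ground from the nDSM (m)

def ndi(a, b):
    # normalized difference index between two channel intensities
    return (a - b) / (a + b)

ndvis = {"1064/532": ndi(c1064, c532),
         "1550/532": ndi(c1550, c532),
         "1064/1550": ndi(c1064, c1550)}

# vegetation-like spectra split by height; the rest split into building/ground
if ndvis["1064/532"] > 0.4:
    label = "tree" if height > 2.0 else "grass"
else:
    label = "building" if height > 2.0 else "ground"
```

A learned decision tree would choose the split features and thresholds from training data rather than by hand, but the per-point feature vector has the same shape: channel intensities, NDVIs, and elevation.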

