Multispectral Mapping on 3D Models and Multi-Temporal Monitoring for Individual Characterization of Olive Trees

2020
Vol 12 (7)
pp. 1106
Author(s):
J. M. Jurado
L. Ortega
J. J. Cubillas
F. R. Feito

Observing and characterizing the 3D structure of plants to obtain comprehensive knowledge of plant status still poses a challenge in Precision Agriculture (PA). The complex branching and self-occluding geometry of the plant canopy are among the existing problems for the 3D reconstruction of vegetation. In this paper, we propose a novel application for the fusion of multispectral images and high-resolution point clouds of an olive orchard. Our methodology is based on a multi-temporal approach to study the evolution of olive trees. The process is fully automated, and no human intervention is required to characterize the point cloud with the reflectance captured in multiple multispectral images. The objective of this work is twofold: (1) the mapping of multispectral images onto a high-resolution point cloud and (2) the multi-temporal analysis of morphological and spectral traits across two flight campaigns. Initially, the study area is modeled by taking multiple overlapping RGB images with a high-resolution camera mounted on an unmanned aerial vehicle (UAV). In addition, a UAV-based multispectral sensor is used to capture the reflectance in several narrow bands (green, near-infrared, red, and red-edge). Then, the RGB point cloud, with its highly detailed geometry of the olive trees, is enriched by mapping the reflectance maps generated for every multispectral image. Thus, each 3D point is related to the corresponding pixel of the multispectral image in which it is visible. As a result, the 3D models of olive trees are characterized by the reflectance observed in the plant canopy. These reflectance values are also combined to calculate several vegetation indices (NDVI, RVI, GRVI, and NDRE). According to the spectral and spatial relationships in the olive plantation, individual olive trees are segmented. On the one hand, plant morphology is studied by a voxel-based decomposition of the 3D structure to estimate height and volume. On the other hand, plant health is studied by detecting meaningful spectral traits of the olive trees. Moreover, the proposed methodology also supports the processing of multi-temporal data to study the variability of these features. Consequently, relevant changes are detected, and the development of each olive tree is analyzed by a visual and statistical approach. The interactive visualization and analysis of the enriched 3D plant structure with different spectral layers is an innovative method to inspect plant health and ensure adequate plantation sustainability.
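Once each 3D point carries reflectance in the four narrow bands, the vegetation indices named above follow from simple per-point band arithmetic. A minimal sketch under assumed array shapes (the band layout and the epsilon guard against division by zero are illustrative choices, not from the paper):

```python
import numpy as np

def vegetation_indices(green, red, red_edge, nir):
    """Compute per-point vegetation indices from narrow-band reflectance.

    Each argument is a 1-D array of reflectance values, one entry per 3D point.
    Standard index definitions are used; eps avoids division by zero.
    """
    eps = 1e-9
    ndvi = (nir - red) / (nir + red + eps)            # Normalized Difference Vegetation Index
    rvi  = nir / (red + eps)                          # Ratio Vegetation Index
    grvi = (green - red) / (green + red + eps)        # Green-Red Vegetation Index
    ndre = (nir - red_edge) / (nir + red_edge + eps)  # Normalized Difference Red Edge
    return ndvi, rvi, grvi, ndre

# Example: reflectance for three points
g  = np.array([0.10, 0.12, 0.08])
r  = np.array([0.05, 0.06, 0.20])
re = np.array([0.25, 0.28, 0.22])
n  = np.array([0.45, 0.50, 0.30])
ndvi, rvi, grvi, ndre = vegetation_indices(g, r, re, n)
```

Each index array can then be stored as an extra per-point attribute of the enriched point cloud, alongside the raw band reflectance.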

Sensors
2020
Vol 20 (8)
pp. 2244
Author(s):
J. M. Jurado
J. L. Cárdenas
C. J. Ogayar
L. Ortega
F. R. Feito

The characterization of natural spaces through precise observation of their material properties is in high demand in remote sensing and computer vision. The production of novel sensors enables the collection of heterogeneous data to obtain comprehensive knowledge of the living and non-living entities in an ecosystem. The high resolution of consumer-grade RGB cameras is frequently exploited for the geometric reconstruction of many types of environments. Nevertheless, the understanding of natural spaces remains challenging. The automatic segmentation of homogeneous materials in nature is a complex task because many overlapping structures and indirect illumination make object recognition difficult. In this paper, we propose a method based on fusing spatial and multispectral characteristics for the unsupervised classification of natural materials in a point cloud. A high-resolution camera and a multispectral sensor are mounted on a custom camera rig to simultaneously capture RGB and multispectral images. Our method is tested in a controlled scenario in which different natural objects coexist. Initially, the input RGB images are processed to generate a point cloud with the structure-from-motion (SfM) algorithm. Then, the multispectral images are mapped onto the three-dimensional model to characterize the geometry with the reflectance captured in four narrow bands (green, red, red-edge, and near-infrared). The reflectance, the visible colour, and the spatial component are combined to extract the key differences among the existing materials. For this purpose, a hierarchical cluster analysis is applied to pool the point cloud and identify the feature pattern of every material. As a result, the tree trunk, the leaves, different species of low plants, the ground, and rocks can be clearly recognized in the scene. These results demonstrate the feasibility of performing a semantic segmentation by considering multispectral and spatial features with an unknown number of clusters to be detected in the point cloud. Moreover, our solution is compared to another method, based on supervised learning, to assess the improvement offered by the proposed approach.
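The clustering step described above can be illustrated with a toy sketch: spatial coordinates and narrow-band reflectance are stacked per point, standardized, and fed to an agglomerative (hierarchical) clustering whose dendrogram is cut by a distance threshold rather than a preset cluster count. The synthetic data, the feature scaling, and the Ward linkage are assumptions for illustration, not the paper's exact configuration:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy point cloud: each row is (x, y, z, green, red, red_edge, nir).
rng = np.random.default_rng(0)
ground = np.hstack([rng.uniform(0, 1, (20, 3)) * [1, 1, 0.05],
                    rng.normal([0.2, 0.30, 0.3, 0.3], 0.01, (20, 4))])
leaves = np.hstack([rng.uniform(0, 1, (20, 3)) * [1, 1, 0.05] + [0, 0, 1.0],
                    rng.normal([0.1, 0.05, 0.3, 0.6], 0.01, (20, 4))])
points = np.vstack([ground, leaves])

# Standardize so spatial and spectral features contribute comparably,
# then build a dendrogram and cut it with a distance threshold, so the
# number of clusters is not supplied in advance.
feats = (points - points.mean(axis=0)) / points.std(axis=0)
Z = linkage(feats, method="ward")
labels = fcluster(Z, t=0.5 * Z[:, 2].max(), criterion="distance")
```

Cutting by distance rather than by cluster count mirrors the abstract's point that the number of materials in the scene is unknown beforehand.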


Author(s):  
F. Dadras Javan ◽  
M. Savadkouhi

Abstract. In the last few years, Unmanned Aerial Vehicles (UAVs) have frequently been used to acquire high-resolution photogrammetric images and, consequently, to produce Digital Surface Models (DSMs) and orthophotos in a photogrammetric procedure for topography and surface-processing applications. Thermal imaging sensors are mostly used for interpretation and monitoring purposes because of their lower geometric resolution. Nevertheless, thermal mapping is becoming more important in civil applications, as thermal sensors can operate in conditions in which visible sensors cannot, such as foggy weather or at night. However, the low geometric quality and resolution of thermal images is the main drawback that 3D thermal modelling encounters. This study aims to offer a solution to this problem by generating a thermal 3D model with higher spatial resolution based on the integration of thermal and visible point clouds. This integration produces a more accurate thermal point cloud and a denser, higher-resolution DEM, which is appropriate for 3D thermal modelling. The main steps of this study are: generating the thermal and RGB point clouds separately, registering them at coarse and fine levels, and finally adding thermal information to the high-resolution RGB point cloud by interpolation. Experimental results are presented as a mesh with more faces (by a factor of 23), which leads to a higher-resolution textured mesh with thermal information.
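The final step, transferring thermal values to the denser RGB cloud by interpolation, can be sketched as a k-nearest-neighbour inverse-distance scheme over already co-registered clouds. The weighting scheme, the choice of k, and the toy geometry are assumptions, not the authors' exact interpolation:

```python
import numpy as np
from scipy.spatial import cKDTree

def transfer_thermal(rgb_xyz, thermal_xyz, thermal_vals, k=3):
    """Attach thermal values to a denser RGB point cloud by inverse-distance
    interpolation over the k nearest thermal points (clouds co-registered)."""
    tree = cKDTree(thermal_xyz)
    dist, idx = tree.query(rgb_xyz, k=k)
    w = 1.0 / np.maximum(dist, 1e-12)       # inverse-distance weights
    w /= w.sum(axis=1, keepdims=True)       # normalize per RGB point
    return (w * thermal_vals[idx]).sum(axis=1)

# Toy example: sparse 5x5 thermal grid, dense RGB cloud on the same plane.
tx, ty = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
thermal_xyz = np.column_stack([tx.ravel(), ty.ravel(), np.zeros(25)])
thermal_vals = 20.0 + 10.0 * thermal_xyz[:, 0]   # temperature ramp along x
rgb_xyz = np.column_stack([np.random.default_rng(1).uniform(0, 1, (200, 2)),
                           np.zeros(200)])
temps = transfer_thermal(rgb_xyz, thermal_xyz, thermal_vals)
```

Because each interpolated value is a convex combination of its neighbours, the transferred temperatures stay within the range of the sparse thermal samples.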


2021
Vol 974 (8)
pp. 36-44
Author(s):  
R.V. Permyakov

Stereopairs of very-high-resolution satellite imagery constitute one of the key highly accurate sources of height data, and the stereophotogrammetric technique is the key method for processing them. Although the number of spacecraft gathering very-high-resolution imagery in stereo mode is constantly increasing, the area of the Earth regularly covered by such data and stored in the archives of RSD operators remains relatively small and is, as a rule, limited to large urban agglomerations. A new collection may not suit the customer for several reasons. Firstly, the materials of a new stereo collection are more expensive than those of an archived one. Secondly, due to unfavourable weather conditions and the busy schedules of the satellites, the completion of a new collection may exceed the deadline specified by the customer. Well-known and brand-new criteria for forming multi-temporal stereopairs are analyzed. The specifics of the photogrammetric processing of multi-temporal stereopairs are demonstrated. Applications of multi-temporal stereopairs are described. In conclusion, it is confirmed that 3D models and highly accurate DTMs can be generated from stereo models of multi-temporal satellite imagery when the aforementioned data are absent.


2018
Vol 7 (8)
pp. 315
Author(s):
Rossana Gini
Giovanna Sona
Giulia Ronchetti
Daniele Passoni
Livio Pinto

This paper focuses on the use of ultra-high-resolution Unmanned Aircraft Systems (UAS) imagery to classify tree species. Multispectral surveys were performed on a plant nursery to produce Digital Surface Models and orthophotos with a ground sample distance of 0.01 m. Different combinations of multispectral images, multi-temporal data, and texture measures were employed to improve classification. The Grey Level Co-occurrence Matrix was used to generate texture images with different window sizes, and procedures for selecting optimal texture features and window sizes were investigated. The study evaluates how methods used in Remote Sensing can be applied to ultra-high-resolution UAS images. Combinations of original and derived bands were classified with the Maximum Likelihood algorithm, and Principal Component Analysis was conducted to understand the correlation between bands. The study proves that the use of texture features produces a significant increase in Overall Accuracy, whose value rises from 58% to 78% or 87%, depending on the component reduction applied. The improvement given by the introduction of texture measures is evident even in terms of User's and Producer's Accuracy. For classification purposes, the inclusion of texture can compensate for the difficulty of performing multi-temporal surveys.
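A GLCM texture measure of the kind used above can be sketched directly in NumPy: quantize the image, count co-occurring grey-level pairs at a fixed pixel offset, and derive a statistic such as contrast. The quantization depth, the offset, and the test patterns are illustrative choices, not the study's configuration:

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Symmetric, normalized Grey Level Co-occurrence Matrix for one pixel
    offset (dx, dy), after quantizing an 8-bit image to `levels` grey levels."""
    q = np.clip(np.floor(image.astype(float) / 256.0 * levels).astype(int),
                0, levels - 1)
    h, w = q.shape
    # Pair each pixel with its neighbour at offset (dx, dy).
    a = q[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    b = q[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    g = np.zeros((levels, levels))
    np.add.at(g, (a.ravel(), b.ravel()), 1)
    g = g + g.T                      # make the co-occurrence symmetric
    return g / g.sum()

def contrast(g):
    """GLCM contrast: expected squared grey-level difference of pairs."""
    i, j = np.indices(g.shape)
    return (g * (i - j) ** 2).sum()

# A uniform patch has zero contrast; a checkerboard has high contrast.
flat = np.full((16, 16), 128, dtype=np.uint8)
checker = ((np.indices((16, 16)).sum(axis=0) % 2) * 255).astype(np.uint8)
```

In a windowed texture image, `contrast` (or another GLCM statistic) would be evaluated on a sliding window around each pixel and stacked as an extra classification band.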


Author(s):  
Asma Abdolahpoor
Peyman Kabiri

Image fusion is an important concept in remote sensing. Earth observation satellites provide both high-resolution panchromatic and low-resolution multispectral images. Pansharpening aims to fuse a low-resolution multispectral image with a high-resolution panchromatic image; this fusion generates a multispectral image with high spatial and spectral resolution. This paper reports a new method to improve the spatial resolution of the final multispectral image. The reported work proposes an image fusion method using the wavelet packet transform (WPT) and principal component analysis (PCA), guided by the textures of the panchromatic image. Initially, adaptive PCA (APCA) is applied to both the multispectral and panchromatic images. Subsequently, WPT is used to decompose the first principal component of the multispectral and panchromatic images. Using WPT, the high-frequency details of both images are extracted. In areas with similar texture, the spatial details extracted from the panchromatic image are injected into the multispectral image. Experimental results show that the proposed method provides promising results in fusing multispectral images with a high-spatial-resolution panchromatic image. Moreover, the results show that the proposed method can successfully improve the spectral features of the multispectral image.
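As a rough illustration of the component-substitution family this method belongs to, the sketch below shows plain PCA pansharpening: project the (already upsampled) multispectral bands onto principal components, replace the first component with a variance-matched panchromatic band, and invert the transform. This is a classical baseline for comparison, not the authors' WPT/APCA pipeline:

```python
import numpy as np

def pca_pansharpen(ms, pan):
    """Classical PCA component-substitution pansharpening (baseline sketch).

    ms:  (H, W, B) multispectral image, upsampled to the panchromatic grid.
    pan: (H, W) panchromatic image.
    """
    h, w, b = ms.shape
    X = ms.reshape(-1, b).astype(float)
    mu = X.mean(axis=0)
    Xc = X - mu
    # PCA via eigendecomposition of the band covariance matrix.
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    vecs = vecs[:, np.argsort(vals)[::-1]]   # sort by decreasing variance
    pcs = Xc @ vecs
    # Match the pan band's mean/std to PC1, then substitute it.
    p = pan.ravel().astype(float)
    p = (p - p.mean()) / (p.std() + 1e-12) * pcs[:, 0].std() + pcs[:, 0].mean()
    pcs[:, 0] = p
    return (pcs @ vecs.T + mu).reshape(h, w, b)

rng = np.random.default_rng(0)
ms = rng.uniform(0, 1, (16, 16, 4))
pan = rng.uniform(0, 1, (16, 16))
sharp = pca_pansharpen(ms, pan)
```

The paper's contribution replaces this global substitution with texture-adaptive detail injection via WPT, which is designed to reduce the spectral distortion that plain component substitution can introduce.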


GigaScience
2021
Vol 10 (5)
Author(s):
Teng Miao
Weiliang Wen
Yinglun Li
Sheng Wu
Chao Zhu
...  

Abstract
Background: The 3D point cloud is the most direct and effective data form for studying plant structure and morphology. In point cloud studies, the segmentation of individual plants into organs directly determines the accuracy of organ-level phenotype estimation and the reliability of 3D plant reconstruction. However, highly accurate, automatic, and robust point cloud segmentation approaches for plants are unavailable; thus, the high-throughput segmentation of many shoots is challenging. Although deep learning could feasibly solve this issue, software tools for annotating 3D point clouds to construct the training dataset are lacking.
Results: We propose a top-down point cloud segmentation algorithm for maize shoots based on optimal transportation distance. We apply our point cloud annotation toolkit for maize shoots, Label3DMaize, to achieve semi-automatic point cloud segmentation and annotation of maize shoots at different growth stages through a series of operations, including stem segmentation, coarse segmentation, fine segmentation, and sample-based segmentation. The toolkit takes ∼4–10 minutes to segment a maize shoot and consumes only 10–20% of that time if coarse segmentation alone is required. Fine segmentation is more detailed than coarse segmentation, especially at the organ connection regions, and the accuracy of coarse segmentation can reach 97.2% of that of fine segmentation.
Conclusion: Label3DMaize integrates point cloud segmentation algorithms with manual interactive operations, realizing semi-automatic point cloud segmentation of maize shoots at different growth stages. The toolkit provides a practical data annotation tool for further segmentation research based on deep learning and is expected to promote the automatic point cloud processing of various plants.
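The optimal transportation distance underlying the segmentation algorithm can be illustrated in its simplest form: for two equal-weight 1-D empirical distributions, the Wasserstein-1 distance reduces to the mean absolute difference of sorted samples. This toy sketch only conveys the concept; it is not the toolkit's actual algorithm:

```python
import numpy as np

def wasserstein_1d(a, b):
    """Wasserstein-1 (optimal transport) distance between two equal-size,
    equal-weight 1-D point sets: mean absolute difference of sorted samples."""
    a = np.sort(np.asarray(a, dtype=float))
    b = np.sort(np.asarray(b, dtype=float))
    assert a.shape == b.shape, "sketch assumes equal-size samples"
    return np.abs(a - b).mean()

# Height samples from two toy organ segments (hypothetical values):
stem = np.array([0.0, 0.1, 0.2, 0.3])
leaf = np.array([1.0, 1.1, 1.2, 1.3])
```

A distance of this kind lets the algorithm compare point distributions of candidate organ segments rather than individual points, which is what makes a top-down decomposition tractable.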


Author(s):  
Dioline Sara
Ajay Kumar Mandava
Arun Kumar
Shiny Duela
Anitha Jude
