Nondestructive Determination of Nitrogen, Phosphorus and Potassium Contents in Greenhouse Tomato Plants Based on Multispectral Three-Dimensional Imaging

Sensors ◽  
2019 ◽  
Vol 19 (23) ◽  
pp. 5295 ◽  
Author(s):  
Guoxiang Sun ◽  
Yongqian Ding ◽  
Xiaochan Wang ◽  
Wei Lu ◽  
Ye Sun ◽  
...  

Measurement of plant nitrogen (N), phosphorus (P), and potassium (K) levels is important for determining precise fertilization management approaches for crops cultivated in greenhouses. To measure the NPK levels in tomato plants accurately, rapidly, stably, and nondestructively, a determination method based on multispectral three-dimensional (3D) imaging was proposed. Multiview RGB-D images and multispectral images were collected synchronously, and the plant multispectral reflectance was registered to the depth coordinates according to Fourier transform principles. Based on Kinect sensor pose estimation and self-calibration, the multiview point cloud coordinate systems were transformed into a unified system. Finally, the iterative closest point (ICP) algorithm was used for the precise registration of multiview point clouds and the reconstruction of plant multispectral 3D point cloud models. The accuracy of the reconstructed multispectral 3D point clouds was quantitatively evaluated using the normalized grayscale similarity coefficient, the degree of spectral overlap, and the Hausdorff distance set; the average values were 0.9116, 0.9343, and 0.41 cm, respectively. The results indicated that the multispectral reflectance could be registered to the Kinect depth coordinates accurately based on Fourier transform principles and that the reconstruction accuracy of the multispectral 3D point cloud model met the model reconstruction needs of tomato plants. Using back-propagation artificial neural network (BPANN), support vector machine regression (SVMR), and Gaussian process regression (GPR) methods, determination models for the NPK contents in tomato plants were constructed separately based on the reflectance characteristics of the plant multispectral 3D point cloud models. The relative errors (REs) of the N content predicted by the BPANN, SVMR, and GPR models were 2.27%, 7.46%, and 4.03%, respectively; for the P content, 3.32%, 8.92%, and 8.41%; and for the K content, 3.27%, 5.73%, and 3.32%. These models provided highly efficient and accurate measurements of the NPK contents in tomato plants, and their determination performance was more stable than that of single-view models.
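To make the multiview registration step concrete, here is a minimal sketch of pairwise alignment with point-to-plane ICP using Open3D; the file paths, voxel size, and distance threshold are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: pairwise ICP alignment of two neighboring views,
# assuming the clouds are already roughly aligned by the Kinect pose estimate.
import numpy as np
import open3d as o3d

def register_views(source_path, target_path, voxel=0.005, max_dist=0.02):
    source = o3d.io.read_point_cloud(source_path)
    target = o3d.io.read_point_cloud(target_path)
    # Downsample for speed; normals are required for point-to-plane ICP.
    src = source.voxel_down_sample(voxel)
    tgt = target.voxel_down_sample(voxel)
    for pc in (src, tgt):
        pc.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=4 * voxel, max_nn=30))
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation  # 4x4 rigid transform, source -> target
```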

2021 ◽  
Author(s):  
Ernest Berney ◽  
Naveen Ganesh ◽  
Andrew Ward ◽  
J. Newman ◽  
John Rushing

The ability to remotely assess road and airfield pavement condition is critical to dynamic basing, contingency deployment, convoy entry and sustainment, and post-attack reconnaissance. Current Army processes to evaluate surface condition are time-consuming and require Soldier presence. Recent developments in photogrammetry and light detection and ranging (LiDAR) enable rapid generation of three-dimensional point cloud models of the pavement surface. Point clouds were generated from data collected on a series of asphalt, concrete, and unsurfaced pavements using ground- and aerial-based sensors. ERDC-developed algorithms automatically discretize the pavement surface into cross- and grid-based sections to identify physical surface distresses such as depressions, ruts, and cracks. Depressions can be sized from the point-to-point distances bounding each depression, and surface roughness is determined from the point heights along a given cross section. Noted distresses are exported to a distress map file containing only the distress points and their locations, for later visualization and quality control along with classification and quantification. Further research into the automation of point cloud analysis is ongoing, with the goal of enabling Soldiers with limited training to rapidly assess pavement surface condition from a remote platform.
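As a rough illustration of the cross-section analysis (not the ERDC algorithm itself), the sketch below estimates depression depth and roughness from point heights along a single cross section; the linear detrending is an assumption.

```python
# Illustrative sketch: depression depth and roughness from one cross section.
import numpy as np

def cross_section_stats(y, z, fit_deg=1):
    """y: lateral offsets (m); z: surface heights (m) along one cross section."""
    # Fit a reference line to the section and measure departures from it.
    coeffs = np.polyfit(y, z, fit_deg)
    residual = z - np.polyval(coeffs, y)
    rut_depth = -residual.min()   # deepest depression below the reference line
    roughness = residual.std()    # RMS departure from the reference line
    return rut_depth, roughness
```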


Sensors ◽  
2021 ◽  
Vol 21 (22) ◽  
pp. 7558
Author(s):  
Linyan Cui ◽  
Guolong Zhang ◽  
Jinshen Wang

For the engineering application of a manipulator grasping objects, mechanical arm occlusion and the limited imaging angle produce various holes in the reconstructed 3D point clouds of the objects. Acquiring a complete point cloud model of the grasped object plays a very important role in the subsequent task planning of the manipulator. This paper proposes a method to automatically detect and repair the holes in the 3D point cloud model of a symmetrical object grasped by the manipulator. Using an established virtual camera coordinate system together with hole boundary detection and classification, the closed boundaries of the nested holes are detected and classified into two kinds: mechanical claw holes caused by mechanical arm occlusion, and missing surface produced by the limited imaging angle. These two kinds of holes are repaired based on surface reconstruction and object symmetry. Experiments on simulated and real point cloud models demonstrate that our approach outperforms other state-of-the-art 3D point cloud hole repair algorithms.
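The symmetry-based repair can be sketched as follows, assuming the symmetry plane (a point p0 and unit normal n) has already been estimated: mirroring the observed points across that plane and merging the reflection fills the self-occluded regions.

```python
# Hedged sketch of the symmetry idea; plane estimation is out of scope here.
import numpy as np

def mirror_fill(points, p0, n):
    """points: (N, 3) array; p0: point on symmetry plane; n: unit normal."""
    d = (points - p0) @ n                  # signed distance to the plane
    mirrored = points - 2.0 * d[:, None] * n
    return np.vstack([points, mirrored])   # original points + reflected copy
```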


2014 ◽  
Vol 2014 ◽  
pp. 1-9 ◽  
Author(s):  
Seoungjae Cho ◽  
Jonghyun Kim ◽  
Warda Ikram ◽  
Kyungeun Cho ◽  
Young-Sik Jeong ◽  
...  

A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously performing autonomous driving. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D) point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the terrain using light detection and ranging (LiDAR) sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels) and overlapping data are eliminated. We reduce nonoverlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We achieve real-time ground segmentation by proposing an approach that minimizes the comparisons between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed in about 19.31 ms per frame.
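A minimal sketch of the quantization and heightmap steps, assuming a hypothetical voxel size: points are snapped to voxel indices, duplicates are dropped, and only the lowermost occupied voxel per (x, y) column is kept.

```python
# Rough sketch of voxel quantization plus a lowermost heightmap.
import numpy as np

def lowermost_heightmap(points, voxel=0.2):
    idx = np.floor(points / voxel).astype(np.int64)   # quantize to voxel indices
    idx = np.unique(idx, axis=0)                      # eliminate overlapping data
    heightmap = {}
    for x, y, z in idx:
        key = (x, y)
        if key not in heightmap or z < heightmap[key]:
            heightmap[key] = z                        # keep the lowermost voxel
    return heightmap  # {(x, y): lowest occupied z index}
```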


Buildings ◽  
2019 ◽  
Vol 9 (3) ◽  
pp. 70 ◽  
Author(s):  
Hadi Mahami ◽  
Farnad Nasirzadeh ◽  
Ali Hosseininaveh Ahmadabadian ◽  
Saeid Nahavandi

This research presents a novel method for automated construction progress monitoring. Using the proposed method, an accurate and complete 3D point cloud is generated for automatic outdoor and indoor progress monitoring throughout the project duration. In this method, Structure-from-Motion (SfM) and Multi-View Stereo (MVS) algorithms, coupled with photogrammetric principles for coded-target detection, are exploited to generate as-built 3D point clouds. The coded targets are utilized to automatically resolve the scale and increase the accuracy of the point cloud generated by the SfM and MVS methods. Having generated the point cloud, a CAD model is derived from the as-built point cloud and compared with the as-planned model. Finally, the quantity of performed work is determined in two real case-study projects. The proposed method is compared to the SfM/Clustering Views for Multi-View Stereo (CMVS)/Patch-based Multi-View Stereo (PMVS) pipeline, a common method for generating 3D point cloud models. The proposed photogrammetric Multi-View Stereo method achieves an accuracy of around 99 percent and generates less noise than the SfM/CMVS/PMVS pipeline, extensively improving the accuracy of the generated point cloud. It is believed that the proposed method may present a novel and robust tool for automated progress monitoring in construction projects.
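The as-built versus as-planned comparison can be sketched with Open3D's cloud-to-cloud distance, assuming the as-planned model has already been sampled into a point cloud and both clouds share a coordinate system; the 2 cm tolerance is an illustrative value, not the paper's.

```python
# Hedged sketch of the progress-comparison step under stated assumptions.
import numpy as np
import open3d as o3d

def deviation_mask(as_built, as_planned, tol=0.02):
    """as_built, as_planned: open3d.geometry.PointCloud; tol in metres."""
    dists = np.asarray(as_built.compute_point_cloud_distance(as_planned))
    return dists > tol  # True where an as-built point deviates from the plan
```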


Sensors ◽  
2020 ◽  
Vol 20 (24) ◽  
pp. 7235
Author(s):  
Jongdae Baek

The hyperloop transportation system has emerged as an innovative next-generation transportation system. In this system, a capsule-type vehicle inside a sealed near-vacuum tube moves at 1000 km/h or more. Not only must this transport tube span long distances, but it must also be clear of potential hazards to vehicles traveling at high speeds inside it. Therefore, an automated infrastructure anomaly detection system is essential. This study sought to confirm the applicability of advanced sensing technology such as Light Detection and Ranging (LiDAR) to the automatic anomaly detection of next-generation transportation infrastructure such as hyperloops. To this end, a prototype two-dimensional LiDAR sensor was constructed and used to generate three-dimensional (3D) point cloud models of a tube facility. Abnormal conditions and obstacles in the facility were detected by comparing models generated at different times and identifying the changes between them. The design and development process of the 3D safety monitoring system based on 3D point cloud models and the analytical results of experimental data obtained with this system are presented. Tests of the developed system demonstrated that anomalies such as a 25 mm change in position were accurately detected. Thus, we confirm the applicability of the developed system to next-generation transportation infrastructure.
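A hedged sketch of the change-detection idea: compare a new scan of the tube against a baseline scan with a k-d tree and flag points displaced beyond the 25 mm tolerance mentioned above. The array layout and the single-threshold rule are assumptions.

```python
# Sketch: flag scan points farther than a tolerance from the baseline model.
import numpy as np
from scipy.spatial import cKDTree

def detect_anomalies(baseline, scan, threshold=0.025):
    """baseline, scan: (N, 3) arrays of points in metres."""
    tree = cKDTree(baseline)
    dist, _ = tree.query(scan, k=1)    # distance to nearest baseline point
    return scan[dist > threshold]      # candidate anomaly points
```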


Author(s):  
M. Zaboli ◽  
H. Rastiveis ◽  
A. Shams ◽  
B. Hosseiny ◽  
W. A. Sarasua

Abstract. Automated analysis of three-dimensional (3D) point clouds has become a boon in Photogrammetry, Remote Sensing, Computer Vision, and Robotics. The aim of this paper is to compare classification algorithms on an urban-area point cloud acquired by a Mobile Terrestrial Laser Scanning (MTLS) system. The algorithms were tested on local geometrical and radiometric descriptors. In this study, local descriptors such as linearity, planarity, and intensity are initially extracted for each point by observing its neighboring points. These features are then fed to a classification algorithm to automatically label each point. Here, five powerful classification algorithms, including k-Nearest Neighbors (k-NN), Gaussian Naive Bayes (GNB), Support Vector Machine (SVM), Multilayer Perceptron (MLP) Neural Network, and Random Forest (RF), are tested. Eight semantic classes are considered for each method under equal conditions. The best overall accuracy, 90%, was achieved with the RF algorithm. The results proved the reliability of the applied descriptors and the RF classifier for MTLS point cloud classification.
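As an illustration of the descriptor-plus-classifier pipeline, the sketch below computes linearity and planarity from the eigenvalues of each point's neighborhood covariance and feeds them to a scikit-learn random forest; the neighborhood size and the reduced two-feature set are assumptions.

```python
# Illustrative sketch: eigenvalue-based local descriptors + random forest.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.ensemble import RandomForestClassifier

def geometric_features(points, k=20):
    tree = cKDTree(points)
    _, nbrs = tree.query(points, k=k)              # k nearest neighbors per point
    feats = np.zeros((len(points), 2))
    for i, idx in enumerate(nbrs):
        cov = np.cov(points[idx].T)                # 3x3 neighborhood covariance
        e = np.sort(np.linalg.eigvalsh(cov))[::-1] # e1 >= e2 >= e3
        feats[i] = [(e[0] - e[1]) / e[0],          # linearity
                    (e[1] - e[2]) / e[0]]          # planarity
    return feats

# Usage sketch: train_pts (M, 3) and labels (M,) are assumed to exist.
# clf = RandomForestClassifier(n_estimators=100)
# clf.fit(geometric_features(train_pts), labels)
```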


Author(s):  
Y. Ding ◽  
X. Zheng ◽  
H. Xiong ◽  
Y. Zhang

Abstract. With the rapid development of new indoor sensors and acquisition techniques, the number of indoor three-dimensional (3D) point cloud models has increased significantly. However, these massive "blind" point clouds can hardly satisfy the demands of many location-based indoor applications and GIS analyses, and the robust semantic segmentation of 3D point clouds remains a challenge. In this paper, a segmentation with layout estimation network (SLENet)-based 2D–3D semantic transfer method is proposed for robust segmentation of image-based indoor 3D point clouds. Firstly, a SLENet is devised to simultaneously obtain semantic labels and an indoor spatial layout estimate from 2D images. A pixel labeling pool is then constructed that incorporates a visual graphical model to realize efficient 2D–3D semantic transfer for 3D point clouds, avoiding time-consuming pixel-wise label transfer and reprojection error. Finally, a 3D contextual refinement, which exploits extra-image consistency under 3D constraints, is developed to suppress the labeling contradictions caused by multi-superpixel aggregation. The experiments were conducted on an open dataset (the NYUDv2 indoor dataset) and a local dataset. In comparison with state-of-the-art 2D semantic segmentation methods, SLENet learns features discriminative enough for inter-class segmentation while preserving clear boundaries for intra-class segmentation. Building on SLENet, the final 3D semantic segmentation tested on the point cloud created from the local image dataset reached a total accuracy of 89.97%, with both object semantics and indoor structural information expressed.
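For context, the straightforward pixel-wise transfer that SLENet's labeling pool is designed to avoid can be sketched as a pinhole projection that copies each pixel's semantic label to the 3D points landing on it; the intrinsic matrix K and the camera-frame assumption are illustrative, not from the paper.

```python
# Hedged sketch of naive pixel-wise 2D-to-3D label transfer.
import numpy as np

def transfer_labels(points_cam, label_image, K):
    """points_cam: (N, 3) points in the camera frame; K: 3x3 intrinsics."""
    uv = (K @ points_cam.T).T              # project to homogeneous pixel coords
    uv = uv[:, :2] / uv[:, 2:3]            # perspective divide
    u = np.clip(uv[:, 0].astype(int), 0, label_image.shape[1] - 1)
    v = np.clip(uv[:, 1].astype(int), 0, label_image.shape[0] - 1)
    return label_image[v, u]               # one semantic label per 3D point
```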


2021 ◽  
Vol 13 (8) ◽  
pp. 1565
Author(s):  
Jeonghoon Kwak ◽  
Yunsick Sung

Three-dimensional virtual environments can be configured as test environments for autonomous things, and remote sensing with 3D point clouds collected by light detection and ranging (LiDAR) can be used to detect virtual human objects by segmenting the collected 3D point clouds in a virtual environment. A traditional encoder-decoder model, such as DeepLabV3, improves the quality of the low-density 3D point clouds of human objects, where the density is limited by the measurement gap between the LiDAR lasers. However, when a human object together with its surrounding environment in a 3D point cloud is given to a traditional encoder-decoder model, it is difficult to increase the density so that it fits the human object. This paper proposes a DeepLabV3-Refiner model that refines the fit to human objects whose density has been increased through DeepLabV3. An RGB image containing a segmented human object is defined as a dense segmented image. DeepLabV3 is used to predict dense segmented images and 3D point clouds for human objects in 3D point clouds. The Refiner model then refines the DeepLabV3 results so that a dense segmented image fitted to the human object is predicted. The dense 3D point cloud is calculated from the dense segmented image provided by the DeepLabV3-Refiner model. It was verified experimentally that the 3D point clouds analyzed by the DeepLabV3-Refiner model had a 4-fold increase in density. The proposed method achieved a 0.6% increase in density accuracy compared to DeepLabV3 and a 2.8-fold increase in the density corresponding to the human object, providing a 3D point cloud whose density fits the human object. The proposed method can thus be used to provide an accurate 3D virtual environment based on the improved 3D point clouds.
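This is not the authors' Refiner stage, but a minimal sketch of obtaining a DeepLabV3 segmentation mask with torchvision, as the base stage of such a pipeline might; the input file name is a placeholder.

```python
# Sketch: per-pixel class prediction with a pretrained torchvision DeepLabV3.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.segmentation.deeplabv3_resnet50(weights="DEFAULT").eval()
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("scene.png").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    out = model(img)["out"]        # (1, num_classes, H, W) logits
mask = out.argmax(1).squeeze(0)    # per-pixel class index (15 = person, VOC set)
```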


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4423 ◽  
Author(s):  
Hu ◽  
Yang ◽  
Li

Environment perception is critical for feasible path planning and safe driving for autonomous vehicles. Perception devices, such as cameras, LiDAR (Light Detection and Ranging), and IMUs (Inertial Measurement Units), only provide raw sensing data with no identification of vital objects, which is insufficient for autonomous vehicles to perform safe and efficient self-driving operations. This study proposes an improved edge-oriented segmentation-based method to detect objects in the sensed three-dimensional (3D) point cloud. The method consists of three main steps. First, the bounding areas of objects are identified by edge detection and stixel estimation in the corresponding two-dimensional (2D) images taken by a stereo camera. Second, sparse 3D point clouds of the objects are reconstructed within the bounding areas. Finally, the dense point clouds of the objects are segmented by matching the sparse 3D object point clouds with the whole-scene point cloud. Compared with existing segmentation methods, the experimental results demonstrate that the proposed edge-oriented segmentation method improves the precision of 3D point cloud segmentation and that objects can be segmented accurately. Meanwhile, the visualization of output data in advanced driving assistance systems (ADAS) is greatly facilitated by the decreases in computational time and in the number of points in the object's point cloud.
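The first step can be sketched with OpenCV's Canny detector and contour bounding boxes as candidate object areas in one image of the stereo pair; the thresholds are assumptions, not the paper's values.

```python
# Illustrative sketch: edge-based candidate bounding areas in a 2D image.
import cv2

def bounding_areas(image_path, lo=50, hi=150):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, lo, hi)                  # edge map of the scene
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]   # (x, y, w, h) boxes
```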


Author(s):  
M. Kawato ◽  
L. Li ◽  
K. Hasegawa ◽  
M. Adachi ◽  
H. Yamaguchi ◽  
...  

Abstract. Three-dimensional point clouds are becoming popular representations for digital archives of cultural heritage sites. The Borobudur Temple, located in Central Java, Indonesia, was built in the 8th century. Borobudur is considered one of the greatest Buddhist monuments in the world and was listed as a UNESCO World Heritage site. We are developing a virtual reality system as a digital archive of the Borobudur Temple. This research is a collaboration between Ritsumeikan University, Japan, the Indonesian Institute of Sciences (LIPI), and the Borobudur Conservation Office, Indonesia. In our VR system, the following three data sources are integrated to form a 3D point cloud: (1) a 3D point cloud of the overall shape of the temple acquired by photogrammetry using a camera carried by a UAV, (2) a 3D point cloud obtained from precise photogrammetric measurements of selected parts of the temple building, and (3) 3D data of the hidden relief panels recovered from the archived 2D monocular photos using deep learning. Our VR system supports both the first-person view and the bird’s eye view. The first-person view allows immersive observation and appreciation of the cultural heritage. The bird’s eye view is useful for understanding the whole picture. A user can easily switch between the two views by using a user-friendly VR user interface constructed by a 3D game engine.

