Towards Next Best View Planning for Time-Variant Scenes

Author(s): Embla Morast, Patric Jensfelt
2020 · Vol 53 (2) · pp. 15501-15507
Author(s): Guillaume Hardouin, Fabio Morbidi, Julien Moras, Julien Marzat, El Mustapha Mouaddib
2020 · Vol 12 (13) · pp. 2169
Author(s): Samuel Arce, Cory A. Vernon, Joshua Hammond, Valerie Newell, Joseph Janson, ...

Unsupervised machine learning algorithms (clustering, genetic algorithms, and principal component analysis) automate Unmanned Aerial Vehicle (UAV) missions as well as the creation and refinement of iterative 3D photogrammetric models with a next-best-view (NBV) approach. The approach uses Structure-from-Motion (SfM) to converge to a specified orthomosaic resolution by identifying edges in the point cloud and planning camera poses that “view” the holes those edges delineate, without requiring an initial model. This iterative UAV photogrammetric method runs successfully in various Microsoft AirSim environments. The simulated ground sampling distance (GSD) of the models reaches as low as 3.4 cm per pixel, and successive iterations generally improve resolution. Beyond the simulated environments, a field study of a retired municipal water tank illustrates the practical application and advantages of automated, iterative UAV infrastructure inspection: the method used 63% fewer photographs than a comparable manual flight while producing point clouds of similar density and a GSD of less than 3 cm per pixel. Each iteration increases resolution following a logarithmic trend, reduces holes in the models, and adds detail to model edges.
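The hole-driven planning loop described in this abstract can be sketched compactly. The Python snippet below is only an illustration under simplifying assumptions, not the authors' pipeline: boundary points are detected with a one-sided-neighbourhood heuristic, grouped with plain k-means, and one candidate camera is placed at a fixed nadir standoff above each group. Every function name, threshold, and the nadir-view assumption are invented for the demo.

```python
# Illustrative sketch (not the authors' code): edge-driven candidate view
# planning for an iterative SfM loop.
import numpy as np
from scipy.spatial import cKDTree


def edge_points(cloud: np.ndarray, radius: float = 1.5, thresh: float = 0.3) -> np.ndarray:
    """Indices of points whose local neighbourhood is strongly one-sided
    (a simple boundary / hole-edge heuristic)."""
    tree = cKDTree(cloud)
    idx = []
    for i, p in enumerate(cloud):
        nbrs = tree.query_ball_point(p, radius)
        if len(nbrs) < 3:
            idx.append(i)                      # isolated points count as edges
            continue
        offset = cloud[nbrs].mean(axis=0) - p
        if np.linalg.norm(offset) > thresh * radius:
            idx.append(i)                      # neighbours lie mostly to one side
    return np.array(idx, dtype=int)


def plan_views(cloud: np.ndarray, standoff: float = 5.0, k: int = 4,
               radius: float = 1.5) -> np.ndarray:
    """Cluster edge points with plain k-means and place one candidate camera
    above each cluster centre (assumes roughly nadir imaging; a real planner
    would reason about hole geometry and occlusion)."""
    edges = cloud[edge_points(cloud, radius)]
    if len(edges) == 0:
        return np.empty((0, 3))
    rng = np.random.default_rng(0)
    centers = edges[rng.choice(len(edges), size=min(k, len(edges)), replace=False)]
    for _ in range(20):                        # Lloyd's algorithm, fixed iterations
        labels = np.argmin(np.linalg.norm(edges[:, None] - centers[None], axis=2), axis=1)
        centers = np.array([edges[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(len(centers))])
    return centers + np.array([0.0, 0.0, standoff])


if __name__ == "__main__":
    # toy scene: a flat 20 x 20 patch (unit spacing) with a square hole cut out
    g = np.mgrid[0:20, 0:20].reshape(2, -1).T.astype(float)
    keep = ~((g[:, 0] > 7) & (g[:, 0] < 13) & (g[:, 1] > 7) & (g[:, 1] < 13))
    cloud = np.c_[g[keep], np.zeros(keep.sum())]
    print(plan_views(cloud, standoff=5.0, k=2))
```

In the toy scene the detected edge points ring both the outer border and the cut-out hole, so the planned cameras end up hovering over the regions the model is missing, which is the intuition behind the iterative refinement described above.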


Author(s):  
Liangzhi Li ◽  
Nanfeng Xiao

Purpose – This paper aims to propose a new view-planning method that computes the next-best-view (NBV) for multiple manipulators simultaneously and to build an automated three-dimensional (3D) object reconstruction system that is based on the proposed method and can adapt to various industrial applications. Design/methodology/approach – The entire 3D space is encoded with an octree, which marks voxels with different tags. A set of candidate viewpoints is generated, filtered and evaluated, and the viewpoint with the highest score is selected as the NBV. Findings – The proposed method enables multiple manipulators equipped with “eye-in-hand” RGB-D sensors to work together to accelerate the object reconstruction process. Originality/value – Compared with existing approaches, the proposed method is fast, computationally efficient, has a low memory cost and can be used in real industrial production settings where multiple different manipulators exist. More notably, a new algorithm is designed to speed up the generation and filtering of candidate viewpoints, guaranteeing both speed and quality.
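The generate-filter-score loop in this abstract can likewise be sketched. The snippet below is an illustration only, not the paper's implementation: a plain dict-based voxel map stands in for the octree, candidates are generated on a ring around the object, occlusion and manipulator reachability are ignored, and all parameter values are invented for the demo.

```python
# Illustrative sketch only: score candidate viewpoints against a tagged voxel
# map and pick the one revealing the most unknown space.
import numpy as np

UNKNOWN, FREE, OCCUPIED = 0, 1, 2


def candidate_views(center: np.ndarray, radius: float, n: int = 32) -> np.ndarray:
    """Viewpoints on a horizontal ring around the object (a crude generator;
    a real system would also filter candidates for reachability)."""
    ang = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.c_[center[0] + radius * np.cos(ang),
                 center[1] + radius * np.sin(ang),
                 np.full(n, center[2])]


def score_view(view: np.ndarray, voxels: dict, center: np.ndarray,
               fov_deg: float = 45.0, max_range: float = 2.0) -> int:
    """Count UNKNOWN voxels inside a simple view cone aimed at the object
    centre (no occlusion test, for brevity)."""
    axis = center - view
    axis /= np.linalg.norm(axis)
    cos_fov = np.cos(np.radians(fov_deg))
    score = 0
    for key, tag in voxels.items():
        if tag != UNKNOWN:
            continue
        d = np.asarray(key, dtype=float) - view
        dist = np.linalg.norm(d)
        if dist < max_range and d @ axis / (dist + 1e-9) > cos_fov:
            score += 1
    return score


def next_best_view(voxels: dict, center: np.ndarray, radius: float) -> np.ndarray:
    views = candidate_views(center, radius)
    scores = [score_view(v, voxels, center) for v in views]
    return views[int(np.argmax(scores))]


if __name__ == "__main__":
    # toy map: a 5x5x5 voxel block around the origin, half already observed
    voxels = {(x, y, z): (FREE if x < 0 else UNKNOWN)
              for x in range(-2, 3) for y in range(-2, 3) for z in range(-2, 3)}
    print(next_best_view(voxels, center=np.zeros(3), radius=1.5))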

