Semantic segmentation of large-scale point clouds based on dilated nearest neighbors graph

Author(s):  
Lei Wang ◽  
Jiaji Wu ◽  
Xunyu Liu ◽  
Xiaoliang Ma ◽  
Jun Cheng

Abstract
Three-dimensional (3D) semantic segmentation of point clouds is important in many scenarios, such as autonomous driving and robot navigation, where edge computing is indispensable on the devices. Deep learning methods based on point sampling have proven computation- and memory-efficient for tackling large-scale point clouds (e.g., millions of points). However, some local features may be discarded during sampling. In this paper, we present an end-to-end 3D semantic segmentation framework based on dilated nearest neighbor encoding. Instead of down-sampling the point cloud directly, we propose a dilated nearest neighbor encoding module that broadens the network's receptive field to learn more 3D geometric information. Without increasing the number of network parameters, our method is computation- and memory-efficient for large-scale point clouds. We have evaluated the dilated nearest neighbor encoding in two different networks: the first uses random sampling with local feature aggregation; the second is the Point Transformer. We have evaluated the quality of the semantic segmentation on the benchmark 3D dataset S3DIS and demonstrate that the proposed dilated nearest neighbor encoding exhibits stable advantages over baseline and competing methods.
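The core idea of the abstract, widening a point's neighborhood without adding parameters, can be sketched as a dilated k-nearest-neighbor query: gather the k×d nearest points and keep every d-th one. This is a minimal illustrative sketch, not the authors' implementation; all names and the brute-force search are assumptions.

```python
import numpy as np

def dilated_knn(points, query, k, d):
    """Indices of k dilated neighbors of `query` in `points` (N, 3).

    d is the dilation rate; d = 1 reduces to ordinary kNN.
    """
    dists = np.linalg.norm(points - query, axis=1)
    # Indices of the k*d nearest points, sorted by distance.
    nearest = np.argsort(dists)[: k * d]
    # Keep every d-th neighbor: same k, wider spatial coverage.
    return nearest[::d]

rng = np.random.default_rng(0)
cloud = rng.random((1000, 3))
idx_plain = dilated_knn(cloud, cloud[0], k=8, d=1)    # ordinary kNN
idx_dilated = dilated_knn(cloud, cloud[0], k=8, d=4)  # dilated kNN
```

Both queries return eight neighbors, but the dilated set reaches farther from the query point, which is the receptive-field broadening the paper exploits.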

2022 ◽  
Vol 193 ◽  
pp. 106653
Author(s):  
Hejun Wei ◽  
Enyong Xu ◽  
Jinlai Zhang ◽  
Yanmei Meng ◽  
Jin Wei ◽  
...  

2019 ◽  
Vol 8 (9) ◽  
pp. 425
Author(s):  
Weite Li ◽  
Kenya Shigeta ◽  
Kyoko Hasegawa ◽  
Liang Li ◽  
Keiji Yano ◽  
...  

In this paper, we propose a method to visualize large-scale colliding point clouds by highlighting their collision areas, and we apply the method to the visualization of collision simulations. Our method builds on our recent work on precise three-dimensional see-through imaging, i.e., transparent visualization, of large-scale point clouds acquired via laser scanning of three-dimensional objects. We apply the proposed collision visualization method to two applications: (1) the revival of the festival float procession of the Gion Festival in Kyoto, Japan. The city government plans to revive the original procession route, which is narrow and not used at present. For the revival, it is important to know whether the festival floats would collide with houses, billboards, electric wires, or other objects along the original route. (2) Plant simulations based on laser-scanned datasets of existing and new facilities. The advantageous features of our method are: (1) transparent visualization with a correct sense of depth, which helps to robustly determine the collision areas; (2) the ability to visualize both high-collision-risk areas and actual collision areas; and (3) the ability to highlight target visualized areas by increasing the corresponding point densities.
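The collision-area detection this abstract describes can be reduced to a simple criterion: a point of one cloud is flagged when it lies within a distance threshold of some point of the other cloud. The sketch below uses a brute-force pairwise search for clarity; the function name and threshold are assumptions, and real laser-scanned data would need a spatial index such as a KD-tree.

```python
import numpy as np

def collision_mask(cloud_a, cloud_b, threshold):
    """Boolean mask over cloud_a: True where a point lies within
    `threshold` of any point in cloud_b (brute force, O(N*M))."""
    # Pairwise differences between every point of A and every point of B.
    diff = cloud_a[:, None, :] - cloud_b[None, :, :]
    dists = np.linalg.norm(diff, axis=2)
    return dists.min(axis=1) <= threshold

a = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0]])
b = np.array([[0.1, 0.0, 0.0]])
mask = collision_mask(a, b, threshold=0.5)
```

Flagged points could then be rendered opaque or in a highlight color, while the rest of the cloud is drawn transparently, matching the highlighting described above.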


Author(s):  
Kunping Yan ◽  
Qingyong Hu ◽  
Hanyun Wang ◽  
Xiaohong Huang ◽  
Li Li ◽  
...  

2019 ◽  
Vol 8 (8) ◽  
pp. 343 ◽  
Author(s):  
Li ◽  
Hasegawa ◽  
Nii ◽  
Tanaka

Digital archiving of three-dimensional cultural heritage assets has increased the demand for visualization of large-scale point clouds of such assets acquired by laser scanning. We propose a fused transparent visualization method that visualizes a point cloud of a cultural heritage asset in its environment, using a photographic image as the background. We also propose lightness adjustment and color enhancement methods to deal with the reduced visibility caused by the fused visualization. We applied the proposed method to a laser-scanned point cloud of a highly valued festival float with complex inner and outer structures. Experimental results demonstrate that the proposed method enables high-quality transparent visualization of the cultural asset in its surrounding environment.


Sensors ◽  
2018 ◽  
Vol 18 (8) ◽  
pp. 2571 ◽  
Author(s):  
Jongmin Jeong ◽  
Tae Yoon ◽  
Jin Park

Semantic 3D maps are required for various applications, including robot navigation and surveying, and their importance has increased significantly. Most existing studies on semantic mapping are camera-based approaches that cannot operate in large-scale environments owing to their computational burden. Recently, methods combining a 3D Lidar with a camera were introduced to address this problem, and we likewise utilize a 3D Lidar and a camera for semantic 3D mapping. Our algorithm consists of two stages: semantic mapping and map refinement. In the semantic mapping stage, a GPS and an IMU are integrated to estimate the odometry of the system, and the point clouds measured by the 3D Lidar are registered using this information. Furthermore, we use a state-of-the-art CNN-based semantic segmentation network to obtain semantic information about the surrounding environment. To integrate the point clouds with semantic information, we developed incremental semantic labeling, which comprises coordinate alignment, error minimization, and semantic information fusion. Additionally, to improve the quality of the generated semantic map, map refinement is processed in batch; it enhances the spatial distribution of labels and effectively removes traces produced by moving vehicles. We conduct experiments on challenging sequences to demonstrate that our algorithm outperforms state-of-the-art methods in terms of accuracy and intersection over union.
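The "semantic information fusion" step of such incremental labeling is often realized as per-voxel voting: labels observed for the same map cell across scans are fused by majority vote. The sketch below illustrates that idea under that assumption; the voxel keys, labels, and function name are hypothetical and not taken from the paper.

```python
from collections import Counter, defaultdict

def fuse_labels(observations):
    """observations: iterable of (voxel_key, label) pairs from many scans.
    Returns {voxel_key: most frequently observed label}."""
    votes = defaultdict(Counter)
    for voxel, label in observations:
        votes[voxel][label] += 1
    # Majority vote per voxel; ties resolve to the first-counted label.
    return {v: c.most_common(1)[0][0] for v, c in votes.items()}

obs = [((0, 0, 0), "road"), ((0, 0, 0), "road"), ((0, 0, 0), "car"),
       ((1, 0, 0), "building")]
fused = fuse_labels(obs)
```

A batch refinement pass, as in the abstract, could then re-run such voting over the whole map so that transient labels left by moving vehicles are outvoted by the static majority.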


Author(s):  
Qingyong Hu ◽  
Bo Yang ◽  
Linhai Xie ◽  
Stefano Rosa ◽  
Yulan Guo ◽  
...  
