A REMOTE SHARING METHOD USING MIXED REALITY FOR 3D PHYSICAL OBJECTS THAT ENABLES HIGH-SPEED POINT CLOUD SEGMENTATION AND RECEIVER'S OBJECT MANIPULATION

2020 ◽  
Vol 85 (778) ◽  
pp. 1017-1026
Author(s):  
Daichi ISHIKAWA ◽  
Tomohiro FUKUDA ◽  
Nobuyoshi YABUKI
2021 ◽  
Vol 13 (20) ◽  
pp. 4110
Author(s):  
Siping Liu ◽  
Xiaohan Tu ◽  
Cheng Xu ◽  
Lipei Chen ◽  
Shuai Lin ◽  
...  

As vital infrastructure, high-speed railways support the development of transportation. To maintain the punctuality and safety of railway systems, researchers have employed manual and computer vision methods to monitor overhead contact systems (OCSs), but these have low efficiency. Investigators have also used light detection and ranging (LiDAR) to generate point clouds by emitting laser beams. The point cloud is segmented for automatic OCS recognition, which improves recognition efficiency. However, existing LiDAR point cloud segmentation methods have high computational and model complexity and high latency. In addition, they cannot adapt to embedded devices with different architectures. To overcome these issues, this article presents a lightweight neural network, EffNet, consisting of three modules: ExtractA, AttenA, and AttenB. ExtractA extracts features from the disordered and irregular point clouds of an OCS. AttenA keeps information flowing in EffNet while extracting useful features. AttenB uses channel- and spatial-wise statistics to efficiently enhance important features and suppress unimportant ones. To further speed up EffNet and match it to diverse architectures, we optimized it with a tensor-program generation framework and deployed it on embedded systems with different architectures. Extensive experiments demonstrated that EffNet achieves at least 0.57% higher mean accuracy for OCS recognition than other methods, with 25.00% lower computational complexity and 9.30% lower model complexity. The optimized EffNet can be adapted to different architectures: its latency decreased by 51.97%, 56.47%, 63.63%, 82.58%, 85.85%, and 91.97% on the NVIDIA Nano CPU, TX2 CPU, UP Board CPU, Nano GPU, TX2 GPU, and RTX 2080 Ti GPU, respectively.
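The abstract does not give AttenB's exact formulation, but attention from channel- and spatial-wise statistics can be sketched generically: pool the feature map along each axis, squash the statistics into (0, 1) weights, and reweight the features. The function name and the sigmoid gating below are illustrative assumptions, not EffNet's actual layers.

```python
import numpy as np

def channel_spatial_attention(features):
    """Reweight point-cloud features using channel- and spatial-wise statistics.

    `features` has shape (num_points, num_channels). This is a generic
    attention sketch in the spirit of the abstract; the real AttenB module
    is not specified here.
    """
    # Channel attention: per-channel mean over all points, squashed to (0, 1).
    channel_stats = features.mean(axis=0)                   # (C,)
    channel_weights = 1.0 / (1.0 + np.exp(-channel_stats))  # sigmoid gate
    # Spatial attention: per-point mean over all channels, squashed to (0, 1).
    spatial_stats = features.mean(axis=1)                   # (N,)
    spatial_weights = 1.0 / (1.0 + np.exp(-spatial_stats))
    # Enhance important channels/points, suppress unimportant ones.
    return features * channel_weights[None, :] * spatial_weights[:, None]

points = np.random.rand(128, 16)  # 128 points, 16 feature channels
out = channel_spatial_attention(points)
```

Because both gates lie in (0, 1), the module can only attenuate features, never amplify them; a learned variant would replace the raw means with trainable pooling and projection weights.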


GigaScience ◽  
2021 ◽  
Vol 10 (5) ◽  
Author(s):  
Teng Miao ◽  
Weiliang Wen ◽  
Yinglun Li ◽  
Sheng Wu ◽  
Chao Zhu ◽  
...  

Abstract Background: The 3D point cloud is the most direct and effective data form for studying plant structure and morphology. In point cloud studies, the segmentation of individual plants into organs directly determines the accuracy of organ-level phenotype estimation and the reliability of 3D plant reconstruction. However, highly accurate, automatic, and robust point cloud segmentation approaches for plants are unavailable, so the high-throughput segmentation of many shoots remains challenging. Although deep learning could feasibly solve this issue, software tools for annotating 3D point clouds to construct training datasets are lacking. Results: We propose a top-down point cloud segmentation algorithm for maize shoots using the optimal transportation distance. We apply our point cloud annotation toolkit for maize shoots, Label3DMaize, to achieve semi-automatic point cloud segmentation and annotation of maize shoots at different growth stages through a series of operations: stem segmentation, coarse segmentation, fine segmentation, and sample-based segmentation. The toolkit takes ∼4–10 minutes to segment a maize shoot, and only 10–20% of that time if coarse segmentation alone is required. Fine segmentation is more detailed than coarse segmentation, especially at organ connection regions, and the accuracy of coarse segmentation can reach 97.2% of that of fine segmentation. Conclusion: Label3DMaize integrates point cloud segmentation algorithms with manual interactive operations, realizing semi-automatic point cloud segmentation of maize shoots at different growth stages. The toolkit provides a practical data annotation tool for further online segmentation research based on deep learning and is expected to promote automatic point cloud processing of various plants.


2021 ◽  
Vol 176 ◽  
pp. 237-249
Author(s):  
Aoran Xiao ◽  
Xiaofei Yang ◽  
Shijian Lu ◽  
Dayan Guan ◽  
Jiaxing Huang

2021 ◽  
Vol 437 ◽  
pp. 227-237
Author(s):  
Hongyan Li ◽  
Zhengxing Sun ◽  
Yunjie Wu ◽  
Youcheng Song

Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6387 ◽  
Author(s):  
Xiaohan Tu ◽  
Cheng Xu ◽  
Siping Liu ◽  
Shuai Lin ◽  
Lipei Chen ◽  
...  

As the overhead contact (OC) system is an essential part of the power supply in high-speed railways, its components must be regularly inspected and abnormal ones repaired. Relative to manual inspection, applying LiDAR (light detection and ranging) to OC inspection can improve efficiency, accuracy, and safety, but it remains challenging to efficiently and effectively segment LiDAR point cloud data and identify catenary components. Recent deep learning-based recognition methods are rarely employed to recognize OC components because of their high computational complexity and still-insufficient accuracy. To tackle these problems, we first propose a lightweight model, RobotNet, which uses depthwise and pointwise convolutions and an attention module to recognize the point cloud. Second, we optimize RobotNet with an existing compilation tool to accelerate its recognition speed on embedded devices. Third, we design software to facilitate the visualization of point cloud data; it can not only display large amounts of point cloud data but also visualize the details of OC components. Extensive experiments demonstrate that RobotNet recognizes OC components more accurately and efficiently than other methods, with lower computational complexity, and the inference speed of the optimized RobotNet increases by an order of magnitude. The visualization results also show that our recognition method is effective.
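The depthwise-plus-pointwise factorization that makes RobotNet lightweight can be sketched in NumPy for a 1D feature sequence: each channel is filtered independently (depthwise), then a 1×1 convolution mixes channels (pointwise). Per position this costs k·C + C·C' multiplies instead of k·C·C' for a full convolution. This is a generic sketch of the building block named in the abstract, not RobotNet itself.

```python
import numpy as np

def depthwise_separable_conv(x, depthwise_k, pointwise_w):
    """Depthwise-then-pointwise convolution over a 1D feature sequence.

    x: (length, in_channels); depthwise_k: (kernel, in_channels);
    pointwise_w: (in_channels, out_channels).
    """
    k, c = depthwise_k.shape
    # Depthwise: filter each channel independently with its own kernel.
    # Reversing the kernel turns np.convolve into cross-correlation.
    dw = np.stack(
        [np.convolve(x[:, ch], depthwise_k[::-1, ch], mode="valid")
         for ch in range(c)],
        axis=1,
    )
    # Pointwise: a 1x1 convolution mixing channels at each position.
    return dw @ pointwise_w

x = np.random.rand(10, 4)
dw_k = np.zeros((3, 4))
dw_k[1, :] = 1.0           # delta kernel: depthwise stage passes input through
pw = np.eye(4)             # identity pointwise mixing
out = depthwise_separable_conv(x, dw_k, pw)
```

With the delta kernel and identity mixing above, the output is just the input cropped to the valid region, which makes the factorization easy to sanity-check.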


2021 ◽  
Vol 13 (5) ◽  
pp. 1003
Author(s):  
Nan Luo ◽  
Hongquan Yu ◽  
Zhenfeng Huo ◽  
Jinhui Liu ◽  
Quan Wang ◽  
...  

Semantic segmentation of sensed point cloud data plays a significant role in scene understanding and reconstruction, robot navigation, and related applications. This work presents a Graph Convolutional Network integrating K-Nearest Neighbor (KNN) searching and the Vector of Locally Aggregated Descriptors (VLAD). KNN searching is used to construct a topological graph over each point and its neighbors. We then perform convolution on the edges of the constructed graph to extract representative local features with multiple Multilayer Perceptrons (MLPs). Afterwards, a trainable VLAD layer, NetVLAD, is embedded in the feature encoder to aggregate local and global contextual features. The feature encoder is repeated multiple times, and the extracted features are concatenated in a skip-connection style to strengthen their distinctiveness and thereby improve segmentation. Experimental results on two datasets show that the proposed method addresses the shortcoming of insufficient local feature extraction and improves the accuracy of semantic segmentation (mIoU 60.9% and oAcc 87.4% on S3DIS) compared to existing models.
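The first stage of such a pipeline — building the KNN graph and the per-edge inputs that an edge convolution would consume — can be sketched directly in NumPy. The function below finds each point's k nearest neighbors and forms relative-offset edge features; the MLP edge convolution and the NetVLAD aggregation are omitted, and the exact edge featurization is an assumption, not the paper's specification.

```python
import numpy as np

def knn_edge_features(points, k=3):
    """Build a k-nearest-neighbor graph and per-edge offset features.

    points: (N, dims). Returns neighbor indices (N, k) and edge features
    (N, k, dims), where each edge feature is (neighbor - point).
    """
    diff = points[:, None, :] - points[None, :, :]  # (N, N, dims) offsets
    dist = np.linalg.norm(diff, axis=2)             # (N, N) distances
    np.fill_diagonal(dist, np.inf)                  # exclude self-loops
    idx = np.argsort(dist, axis=1)[:, :k]           # k nearest per point
    edges = points[idx] - points[:, None, :]        # relative offsets
    return idx, edges
```

An edge convolution would then apply a shared MLP to each (point, offset) pair and max-pool over the k neighbors; the O(N²) distance matrix here is fine for small clouds, while large scenes would use a KD-tree instead.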

