ROBUST DETECTION OF SURFACE ANOMALY USING LIDAR POINT CLOUD WITH INTENSITY

Author(s):  
Y. Ono ◽  
A. Tsuji ◽  
J. Abe ◽  
H. Noguchi

Abstract. We have developed an automatic detection method for metallic corrosion in facilities by using a LiDAR point cloud. While visual inspections for monitoring facilities are widely conducted, the inspection result depends on human skill, and there is currently a shortage of inspectors. While automatic detection methods using an RGB image have been developed, such methods cannot be applied to inspections at night. Therefore, we propose a robust detection method that utilizes both 3D shapes and intensities in a LiDAR point cloud instead of RGB information. The proposed method segments the point cloud into basic building materials by using the 3D shape and then recognizes points with abnormal intensity within each material as corrosion areas. We demonstrate through experiments that the proposed method can robustly detect corrosion spots in aging facilities, whether the scan is conducted during the day or at night.
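
A minimal sketch of the core idea described above, assuming the point cloud has already been segmented into material classes by its 3D shape; the function name, the robust MAD-based scoring, and the threshold are illustrative choices, not the authors' implementation:

```python
# Illustrative sketch (not the authors' code): flag per-material intensity
# outliers in a LiDAR point cloud as candidate corrosion points.
import numpy as np

def detect_intensity_anomalies(points, intensities, material_labels, z_thresh=3.0):
    """Return a boolean mask marking points whose intensity is anomalous
    relative to robust statistics of their own material segment."""
    anomaly = np.zeros(len(points), dtype=bool)
    for label in np.unique(material_labels):
        idx = np.where(material_labels == label)[0]
        seg_int = intensities[idx]
        # Median and MAD are less sensitive to the corrosion points we seek.
        med = np.median(seg_int)
        mad = np.median(np.abs(seg_int - med)) + 1e-6
        score = np.abs(seg_int - med) / (1.4826 * mad)
        anomaly[idx] = score > z_thresh
    return anomaly

# Synthetic example: one "steel" segment with a few weakly reflecting points.
rng = np.random.default_rng(0)
pts = rng.normal(size=(1000, 3))
inten = rng.normal(0.6, 0.05, size=1000)
inten[:20] = 0.15                      # corroded spots reflect weakly
labels = np.zeros(1000, dtype=int)     # a single material for simplicity
print(detect_intensity_anomalies(pts, inten, labels)[:25])
```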

Author(s):  
Jun Han ◽  
Guodong Chen ◽  
Tao Liu ◽  
Qian Yang

Due to the deformation of the tunnel and abnormal protrusions of internal facilities, existing railway tunnel lines need to be inspected regularly. However, existing detection methods have shortcomings such as large measurement interference, low efficiency, discontinuity between sections, and independence from the track structure. Therefore, an automatic detection method for tunnel space clearance based on point cloud data is proposed. By fitting the central axis of the tunnel, cross-sections can be extracted at any position along the tunnel. A coordinate system for tunnel gauge detection based on the rail top surface is established, and different types of tunnel gauge frames are introduced. An improved ray algorithm is used to automatically detect and analyze various tunnel types. Field experiments on existing railway tunnels show that the method can accurately obtain the clearance limit points and dimensions of the tunnel and identify the cross-sections that violate the clearance. It meets the accuracy requirements of tunnel detection and has great practicability in tunnel defect detection.
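
A hedged sketch of a ray-based clearance check in the spirit of the method above: in a cross-section centered on the track, each ray's measured distance to the tunnel wall is compared with its distance to an assumed clearance gauge polygon. The function names, binning scheme, and intersection math are illustrative, not the paper's implementation:

```python
import numpy as np

def ray_distance_to_polygon(origin, direction, polygon):
    """Distance from origin along direction to the first intersection with a
    closed 2D polygon (vertices given in order); np.inf if there is no hit."""
    best = np.inf
    n = len(polygon)
    for i in range(n):
        p, q = polygon[i], polygon[(i + 1) % n]
        e = q - p
        denom = direction[0] * (-e[1]) - direction[1] * (-e[0])
        if abs(denom) < 1e-12:
            continue                      # ray parallel to this edge
        rhs = p - origin
        t = (rhs[0] * (-e[1]) - rhs[1] * (-e[0])) / denom
        u = (direction[0] * rhs[1] - direction[1] * rhs[0]) / denom
        if t > 0.0 and 0.0 <= u <= 1.0:
            best = min(best, t)
    return best

def clearance_violations(section_points, gauge_polygon, origin, n_rays=360):
    """Flag ray directions where the measured wall lies inside the gauge frame."""
    violations = []
    rel = section_points - origin
    ang_pts = np.arctan2(rel[:, 1], rel[:, 0]) % (2 * np.pi)
    dist_pts = np.linalg.norm(rel, axis=1)
    for k in range(n_rays):
        ang = 2 * np.pi * k / n_rays
        d = np.array([np.cos(ang), np.sin(ang)])
        # nearest measured point within a small angular bin around this ray
        diff = np.abs(((ang_pts - ang + np.pi) % (2 * np.pi)) - np.pi)
        in_bin = diff < np.pi / n_rays
        if not np.any(in_bin):
            continue
        measured = dist_pts[in_bin].min()
        gauge = ray_distance_to_polygon(origin, d, gauge_polygon)
        if np.isfinite(gauge) and measured < gauge:
            violations.append((ang, gauge - measured))   # intrusion depth
    return violations
```

The origin is assumed to lie at the track center inside the gauge polygon, so every ray hits the gauge frame at a finite distance.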


Author(s):  
Yutong Feng ◽  
Yifan Feng ◽  
Haoxuan You ◽  
Xibin Zhao ◽  
Yue Gao

Mesh is an important and powerful type of data for 3D shapes and is widely studied in the fields of computer vision and computer graphics. Regarding the task of 3D shape representation, there have been extensive research efforts concentrating on how to represent 3D shapes well using volumetric grids, multi-view images, and point clouds. However, little effort has been devoted to mesh data in recent years, due to its complexity and irregularity. In this paper, we propose a mesh neural network, named MeshNet, to learn 3D shape representation from mesh data. In this method, face-unit representation and feature splitting are introduced, and a general architecture with effective building blocks is proposed. In this way, MeshNet is able to handle the complexity and irregularity of mesh data and represent 3D shapes well. We have applied the proposed MeshNet method to the applications of 3D shape classification and retrieval. Experimental results and comparisons with state-of-the-art methods demonstrate that the proposed MeshNet can achieve satisfactory 3D shape classification and retrieval performance, which indicates the effectiveness of the proposed method for 3D shape representation.
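
A simplified sketch of the face-unit idea with feature splitting: each triangular face is described by a spatial part (its center) and a structural part (corner vectors and normal), which a per-face network could then process. The function and the exact feature layout are assumptions for illustration, not MeshNet's actual definition:

```python
import numpy as np

def face_units(vertices, faces):
    """vertices: (V, 3) float array; faces: (F, 3) int array of vertex indices.
    Returns per-face center (spatial), corner vectors (structural), and normal."""
    tri = vertices[faces]                       # (F, 3, 3)
    center = tri.mean(axis=1)                   # spatial feature
    corners = tri - center[:, None, :]          # structural: corner vectors
    normal = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    normal /= np.linalg.norm(normal, axis=1, keepdims=True) + 1e-12
    return center, corners.reshape(len(faces), 9), normal

# Tiny example: a single triangle in the xy-plane.
V = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
F = np.array([[0, 1, 2]])
c, corn, n = face_units(V, F)
print(c, n)   # center at (1/3, 1/3, 0), normal along +z
```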


Entropy ◽  
2020 ◽  
Vol 22 (8) ◽  
pp. 806
Author(s):  
Seong Hyun Kim ◽  
Ju Yong Chang

Although the performance of 3D human shape reconstruction methods has improved considerably in recent years, most methods focus on a single person, reconstruct a root-relative 3D shape, and rely on ground-truth information about the absolute depth to convert the reconstruction result to the camera coordinate system. In this paper, we propose an end-to-end learning-based model for single-shot, 3D, multi-person shape reconstruction in the camera coordinate system from a single RGB image. Our network produces output tensors divided into grid cells to reconstruct the 3D shapes of multiple persons in a single-shot manner, where each grid cell contains information about the subject. Moreover, our network predicts the absolute position of the root joint while reconstructing the root-relative 3D shape, which enables reconstructing the 3D shapes of multiple persons in the camera coordinate system. The proposed network can be trained in an end-to-end manner and processes images at about 37 fps, performing the 3D multi-person shape reconstruction task in real time.
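
A hedged sketch of decoding such a grid-cell output tensor: each confident cell yields one person with an absolute root position and a root-relative shape code. The channel layout, threshold, and function name are assumptions made for illustration, not the authors' network definition:

```python
import numpy as np

def decode_grid(output, conf_thresh=0.5):
    """output: (H, W, C) array with an assumed channel layout of
    [confidence, root_x, root_y, root_z, shape...]."""
    people = []
    H, W, _ = output.shape
    for i in range(H):
        for j in range(W):
            cell = output[i, j]
            if cell[0] < conf_thresh:
                continue
            root_cam = cell[1:4]        # absolute root joint in camera coordinates
            shape_code = cell[4:]       # root-relative shape parameters
            people.append({"cell": (i, j), "root": root_cam, "shape": shape_code})
    return people

# Example: a 4x4 grid with one confident cell about 3.2 m in front of the camera.
out = np.zeros((4, 4, 10))
out[2, 1, 0] = 0.9
out[2, 1, 1:4] = [0.3, -0.1, 3.2]
print(decode_grid(out))
```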


2021 ◽  
Vol 18 (6) ◽  
pp. 172988142110555
Author(s):  
Jie Wang ◽  
Shuxiao Li

Accurately detecting appropriate grasp configurations is the central task for a robot grasping an object. Existing grasp detection methods usually overlook the depth image or only regard it as a two-dimensional distance image, which makes it difficult to capture the three-dimensional structural characteristics of the target object. In this article, we transform the depth image into a point cloud and propose a two-stage grasp detection method based on candidate grasp detection from the RGB image and spatial feature rescoring from the point cloud. Specifically, we first adapt R3Det, a recently proposed high-performance rotated object detection method for aerial images, to the grasp detection task, obtaining candidate grasp boxes and their appearance scores. Then, the point cloud within each candidate grasp box is normalized and evaluated to get a point cloud quality score, which is fused with the established point cloud quantity scoring model to obtain a spatial score. Finally, appearance scores and their corresponding spatial scores are combined to output high-quality grasp detection results. The proposed method effectively fuses three types of grasp scoring modules and is thus called Score Fusion Grasp Net. Besides, we propose and adopt a top-k grasp metric to better reflect the success rate of the algorithm in actual grasp execution. Score Fusion Grasp Net obtains 98.5% image-wise accuracy and 98.1% object-wise accuracy on the Cornell Grasp Dataset, exceeding the performance of state-of-the-art methods. We also use a robotic arm to conduct physical grasp experiments on 15 kinds of household objects and 11 kinds of adversarial objects. The results show that the proposed method still has a high success rate when facing new objects.
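
A minimal sketch of the score-fusion and top-k ideas, assuming simple placeholder definitions for the spatial scores and a weighted sum for fusion; the actual scoring models and weights in the paper may differ:

```python
import numpy as np

def spatial_scores(points_in_box, expected_points=200):
    """Quality: how tightly points cluster around the box's depth.
    Quantity: how well-populated the box is, saturating at expected_points."""
    if len(points_in_box) == 0:
        return 0.0, 0.0
    depth = points_in_box[:, 2]
    quality = 1.0 / (1.0 + np.std(depth))
    quantity = min(1.0, len(points_in_box) / expected_points)
    return quality, quantity

def fuse(appearance, quality, quantity, w=(0.5, 0.3, 0.2)):
    """Combine the 2D appearance score with the two spatial scores."""
    return w[0] * appearance + w[1] * quality + w[2] * quantity

def top_k_success(fused_scores, is_good_grasp, k=3):
    """Top-k grasp metric: success if any of the k highest-scoring
    candidates is a feasible grasp."""
    order = np.argsort(fused_scores)[::-1][:k]
    return bool(np.any(is_good_grasp[order]))
```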


2019 ◽  
Author(s):  
Jinxiong Zhao ◽  
Bo Zhao ◽  
Yanbin Zhang ◽  
Zhiru Li ◽  
Hui Yuan ◽  
...  

Electronics ◽  
2020 ◽  
Vol 9 (11) ◽  
pp. 1894
Author(s):  
Chun Guo ◽  
Zihua Song ◽  
Yuan Ping ◽  
Guowei Shen ◽  
Yuhei Cui ◽  
...  

Remote Access Trojan (RAT) is one of the most serious security threats that organizations face today. At present, the two major RAT detection approaches are host-based and network-based detection. To combine their strengths, this article proposes a phased RAT detection method based on double-side features (PRATD). In PRATD, both host-side and network-side features are combined to build detection models, which helps distinguish RATs from benign programs, because RATs not only generate traffic on the network but also leave traces on the host at run time. In addition, PRATD trains two different detection models for the two runtime states of RATs to improve the True Positive Rate (TPR). Experiments on network and host records collected from five kinds of benign programs and 20 well-known RATs show that PRATD can effectively detect RATs, achieving a TPR as high as 93.609% with a False Positive Rate (FPR) as low as 0.407% for known RATs, and a TPR of 81.928% with an FPR of 0.185% for unknown RATs, which suggests it is a competitive candidate for RAT detection.
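
A hedged sketch of the double-side, phased idea: host-side and network-side features for the same samples are concatenated, and a separate classifier is trained per runtime state. The feature sets, the state names, and the use of scikit-learn's random forest are illustrative assumptions, not PRATD's actual models:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def combine_features(host_feats, net_feats):
    """host_feats: (N, Dh) host-side features; net_feats: (N, Dn) network-side
    features for the same N samples. Returns the combined (N, Dh + Dn) matrix."""
    return np.hstack([host_feats, net_feats])

# One model per assumed runtime state (phase), trained and applied separately.
models = {state: RandomForestClassifier(n_estimators=100, random_state=0)
          for state in ("phase1", "phase2")}

def train(state, host_feats, net_feats, labels):
    models[state].fit(combine_features(host_feats, net_feats), labels)

def predict(state, host_feats, net_feats):
    return models[state].predict(combine_features(host_feats, net_feats))
```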


2021 ◽  
Vol 13 (14) ◽  
pp. 2770
Author(s):  
Shengjing Tian ◽  
Xiuping Liu ◽  
Meng Liu ◽  
Yuhao Bian ◽  
Junbin Gao ◽  
...  

Object tracking from LiDAR point clouds, which are always incomplete, sparse, and unstructured, plays a crucial role in urban navigation. Some existing methods rely solely on a learned similarity network to locate the target, which limits further advances in tracking accuracy. In this study, we leveraged a powerful target discriminator and an accurate state estimator to robustly track target objects in challenging point cloud scenarios. Considering the complex nature of estimating the state, we extended the traditional Lucas and Kanade (LK) algorithm to 3D point cloud tracking. Specifically, we propose a state estimation subnetwork that aims to learn the incremental warp for updating the coarse target state. Moreover, to obtain a coarse state, we present a simple yet efficient discrimination subnetwork. It can project 3D shapes into a more discriminative latent space by integrating the global feature into each point-wise feature. Experiments on the KITTI and PandaSet datasets showed that, compared with state-of-the-art methods, our proposed method achieves significant improvements, in particular up to 13.68% on KITTI.
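
An illustrative sketch of two ingredients mentioned above, simplified far beyond the paper's networks: integrating a pooled global feature into each point-wise feature, and an LK-style loop that iteratively refines a coarse state with incremental updates (here reduced to a translation-only toy):

```python
import numpy as np

def pointwise_with_global(point_feats):
    """point_feats: (N, D) per-point features.
    Returns (N, 2D) features where each point also carries the global context."""
    global_feat = point_feats.max(axis=0)                  # (D,) max pooling
    tiled = np.tile(global_feat, (point_feats.shape[0], 1))
    return np.concatenate([point_feats, tiled], axis=1)

def refine_translation(template_pts, search_pts, iters=5):
    """Toy LK-style refinement: predict an incremental update, apply it,
    and repeat. For a pure translation this converges in one step."""
    t = np.zeros(3)
    for _ in range(iters):
        delta = search_pts.mean(axis=0) - (template_pts + t).mean(axis=0)
        t += delta                                         # incremental warp
    return t
```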

