Research on Active Intelligent Perception Technology of Vessel Situation Based on Multisensor Fusion

2020 ◽  
Vol 2020 ◽  
pp. 1-9
Author(s):  
Ruixin Ma ◽  
Yong Yin ◽  
Zilong Li ◽  
Jing Chen ◽  
Kexin Bao

In this paper, we focus on the safety supervision of inland vessels, specifically on vessel target detection and dynamic tracking algorithms based on computer vision and on multisensor target fusion. For vessel video target detection and tracking, the paper reviews the methods and theories in current widespread use. In view of the application scenarios and characteristics of inland vessels, a comprehensive vessel video target detection algorithm is then proposed, which combines a three-frame difference method based on Canny edge detection with a background subtraction method based on Gaussian mixture background modeling. For multisensor target fusion, the processing of laser point cloud data and automatic identification system (AIS) data is analyzed. Drawing on fuzzy mathematics, the paper proposes a method for calculating the fuzzy correlation matrix with a normal membership function, which fuses the vessel track features of laser point cloud data and AIS data under dynamic video correction. Finally, an active intelligent vessel situation perception system based on multisensor fusion was developed with this method. Experiments show that the method offers better environmental applicability and detection accuracy than traditional manual inspection or any single monitoring method.
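Below is a minimal sketch of the video detection stage described above, assuming OpenCV and a generic video file; the way the two motion cues are fused (a bitwise OR of the masks), the file name, and all thresholds are illustrative choices, not the paper's exact formulation.

```python
import cv2

# Illustrative sketch: combine a Canny-based three-frame difference with
# Gaussian mixture (MOG2) background subtraction to obtain a vessel motion mask.
cap = cv2.VideoCapture("inland_waterway.mp4")          # hypothetical video file
mog2 = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

ok, f1 = cap.read()
ok, f2 = cap.read()
while True:
    ok, f3 = cap.read()
    if not ok:
        break
    g1, g2, g3 = (cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in (f1, f2, f3))

    # Three-frame difference on Canny edge maps: suppresses slowly changing
    # water texture while keeping moving vessel contours.
    e1, e2, e3 = (cv2.Canny(g, 50, 150) for g in (g1, g2, g3))
    frame_diff_mask = cv2.bitwise_and(cv2.absdiff(e2, e1), cv2.absdiff(e3, e2))

    # Background subtraction with a Gaussian mixture model; drop shadow pixels
    # (MOG2 marks shadows with value 127).
    bg_mask = mog2.apply(f3)
    _, bg_mask = cv2.threshold(bg_mask, 127, 255, cv2.THRESH_BINARY)

    # Fuse the two cues; contours of the fused mask are vessel candidates.
    fused = cv2.bitwise_or(frame_diff_mask, bg_mask)
    contours, _ = cv2.findContours(fused, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:                   # area threshold is illustrative
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(f3, (x, y), (x + w, y + h), (0, 255, 0), 2)

    f1, f2 = f2, f3
```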

2021 ◽  
Vol 861 (2) ◽  
pp. 022048
Author(s):  
Yangjun Long ◽  
Qian Huang ◽  
Faquan Wu ◽  
Hongqing Yi ◽  
Shenggong Guan ◽  
...  

2010 ◽  
Author(s):  
Li-Der Chang ◽  
K. Clint Slatton ◽  
Vivek Anand ◽  
Pang-Wei Liu ◽  
Heezin Lee ◽  
...  

Author(s):  
Jun Han ◽  
Guodong Chen ◽  
Tao Liu ◽  
Qian Yang

Because of tunnel deformation and the abnormal protrusion of internal facilities, existing railway tunnel lines need to be inspected regularly. However, existing detection methods suffer from drawbacks such as strong measurement interference, low efficiency, discontinuous cross-sections, and a lack of reference to the track structure. Therefore, an automatic detection method for tunnel space clearance based on point cloud data is proposed. By fitting the central axis of the tunnel, cross-sections can be extracted at any position along the tunnel. A tunnel gauge detection coordinate system based on the rail top surface is established, and different types of tunnel gauge frames are introduced. An improved ray algorithm is then used to automatically detect and analyze various tunnel types. Field experiments on existing railway tunnels show that the method accurately obtains the clearance limit points and dimensions of the tunnel and identifies the cross-sections where the clearance is violated. It meets the accuracy requirements of tunnel detection and is highly practical for tunnel defect detection.
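Below is a minimal sketch of a ray-casting clearance check on a single cross-section, assuming the tunnel points have already been projected into a 2D coordinate frame referenced to the rail top surface; the gauge polygon and the sample section are placeholder values, not a real gauge frame or measured data.

```python
import numpy as np

def point_in_polygon(pt, polygon):
    """Standard ray-casting test: cast a horizontal ray from pt and count
    how many polygon edges it crosses (odd count = inside)."""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

def clearance_violations(section_points, gauge_polygon):
    """Return tunnel-surface points of one cross-section that intrude into
    the clearance gauge. Coordinates are assumed to be expressed in a 2D
    frame centred on the track, with y measured from the rail top surface."""
    return [p for p in section_points if point_in_polygon(p, gauge_polygon)]

# Placeholder rectangular gauge frame (half-width 2.2 m, height 5.0 m);
# a real gauge frame would follow the applicable railway standard.
gauge = [(-2.2, 0.0), (2.2, 0.0), (2.2, 5.0), (-2.2, 5.0)]

# Hypothetical cross-section points extracted from the point cloud (metres).
section = np.array([[-2.5, 1.0], [-2.1, 3.2], [0.0, 5.4], [2.0, 4.8]])

print(clearance_violations(section.tolist(), gauge))   # points violating the gauge
```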


2021 ◽  
Vol 38 (2) ◽  
pp. 315-320
Author(s):  
Fuchun Jiang ◽  
Hongyi Zhang ◽  
Chen Zhu

Current three-dimensional (3D) target detection models have low accuracy, because a two-dimensional (2D) image detector can represent the surface information of a target only partially. To address this problem, this paper studies 3D target detection in RGB-D data of indoor scenes and modifies frustum PointNet (F-PointNet), a model well suited to point cloud processing, to detect indoor targets such as sofas, chairs, and beds. The 2D image detector of F-PointNet was replaced with you only look once (YOLO) v3 and with a faster region-based convolutional neural network (Faster R-CNN), respectively, and the two resulting F-PointNet models were compared on the SUN RGB-D dataset. The results show that the model with YOLO v3 performed better in target detection, with a clear advantage in mean average precision (>6.27).
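Below is a minimal sketch of the frustum-cropping step that links a 2D detection box to the point cloud branch, assuming depth-derived points already expressed in the camera frame and a pinhole intrinsic matrix; the intrinsics, the box, and the toy cloud are hypothetical, and the PointNet stages that follow are not shown.

```python
import numpy as np

def frustum_points(points_cam, K, box_2d):
    """Keep the 3D points whose image projections fall inside a 2D box.

    points_cam : (N, 3) points in the camera frame (metres), z forward.
    K          : (3, 3) pinhole intrinsic matrix.
    box_2d     : (xmin, ymin, xmax, ymax) from the 2D detector (pixels).
    """
    xmin, ymin, xmax, ymax = box_2d
    in_front = points_cam[:, 2] > 0                 # only points in front of the camera
    uv = (K @ points_cam.T).T                       # perspective projection
    u = uv[:, 0] / uv[:, 2]
    v = uv[:, 1] / uv[:, 2]
    inside = in_front & (u >= xmin) & (u <= xmax) & (v >= ymin) & (v <= ymax)
    return points_cam[inside]

# Hypothetical pinhole intrinsics and a 2D detection box for a "chair".
K = np.array([[529.5, 0.0, 365.0],
              [0.0, 529.5, 265.0],
              [0.0, 0.0, 1.0]])
box = (210.0, 140.0, 330.0, 300.0)
cloud = np.random.rand(10000, 3) * np.array([4.0, 3.0, 6.0])  # toy point cloud
print(frustum_points(cloud, K, box).shape)
```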


Author(s):  
Jiayong Yu ◽  
Longchen Ma ◽  
Maoyi Tian ◽  
Xiushan Lu

The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used in geomatics owing to its efficient data acquisition and convenient operation. However, because of a UAV's limited carrying capacity, the sensors integrated in a ULS must be small and lightweight, which reduces the density of the collected scanning points and in turn degrades the registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds that converts the registration of point cloud data and image data into the matching of feature points between two images. First, a point cloud is selected to produce an intensity image. Next, the corresponding feature points of the intensity image and the optical image are matched, and the exterior orientation parameters are solved with a collinearity equation based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical images, to generate a true-color point cloud. The experimental results show that the proposed method achieves high registration accuracy and fusion speed, demonstrating its effectiveness.
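Below is a minimal sketch of the matching-and-orientation step, assuming an intensity image rendered from the point cloud together with a per-pixel lookup of the 3D point that produced each pixel; OpenCV's SIFT matching and solvePnPRansac stand in for the feature matching and collinearity-equation solution described in the abstract, and are not the authors' exact implementation.

```python
import cv2
import numpy as np

def register_image_to_cloud(intensity_img, optical_img, pixel_to_xyz, K):
    """Match features between a LiDAR intensity image and an optical image,
    then recover the optical image's exterior orientation.

    pixel_to_xyz : dict mapping (row, col) of the intensity image to the
                   3D point that produced that pixel (an assumed lookup
                   built while rendering the intensity image).
    K            : (3, 3) camera intrinsic matrix of the optical image.
    """
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(intensity_img, None)
    kp2, des2 = sift.detectAndCompute(optical_img, None)

    # Ratio-test matching of intensity-image features to optical-image features.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])

    obj_pts, img_pts = [], []
    for m in good:
        r = int(round(kp1[m.queryIdx].pt[1]))        # keypoint row in intensity image
        c = int(round(kp1[m.queryIdx].pt[0]))        # keypoint column
        if (r, c) in pixel_to_xyz:
            obj_pts.append(pixel_to_xyz[(r, c)])     # 3D point from the cloud
            img_pts.append(kp2[m.trainIdx].pt)       # matched optical pixel

    # Exterior orientation (rotation + translation) of the optical image;
    # this plays the role of the collinearity-equation solution (needs >= 4 matches).
    ok, rvec, tvec, _ = cv2.solvePnPRansac(
        np.array(obj_pts, dtype=np.float32),
        np.array(img_pts, dtype=np.float32),
        K, None)
    return rvec, tvec
```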


Author(s):  
Keisuke YOSHIDA ◽  
Shiro MAENO ◽  
Syuhei OGAWA ◽  
Sadayuki ISEKI ◽  
Ryosuke AKOH
