A Robust Detector for Automated Welding Seam Tracking System

2021 ◽  
Vol 143 (7) ◽  
Author(s):  
Yanbiao Zou ◽  
Mingquan Zhu ◽  
Xiangzhi Chen

Abstract Accurate location of the weld seam under strong noise is the biggest challenge in automated welding. In this paper, we construct a robust seam detector within the framework of a deep learning object detection algorithm. A representative object detection algorithm, the single shot multibox detector (SSD), is studied to establish the seam detector framework, and an improved SSD is applied to seam detection. Under the SSD object detection framework, and combined with the characteristics of the seam detection task, the multifeature combination network (MFCN) is proposed. The network comprehensively exploits the local and global information carried by multilayer features to achieve rapid and accurate detection of the weld seam. To address the failure of single-frame seam image detection under continuous super-strong noise, the sequence image multifeature combination network (SMFCN) is proposed based on the MFCN detector; a recurrent neural network (RNN) learns the temporal context of the convolutional features to accurately detect the seam under such noise. Experimental results show that the proposed seam detectors are extremely robust, and the SMFCN maintains very high detection accuracy under continuous super-strong noise. Welding results show that a laser vision seam tracking system using the SMFCN keeps the welding precision within industrial requirements under a welding current of 150 A.
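A minimal sketch of the two ideas the abstract names is given below, assuming a PyTorch-style implementation: multi-layer feature fusion (local detail plus global context) and a recurrent head over per-frame features for temporal context. The module names, channel sizes, and the GRU-based head are illustrative assumptions, not the authors' MFCN/SMFCN architecture.

```python
# Hypothetical sketch (not the authors' code): fusing multi-layer CNN features
# and passing a per-frame feature sequence through a GRU, in the spirit of
# combining local/global features with temporal context.
import torch
import torch.nn as nn

class SeamFeatureFusion(nn.Module):
    """Project two feature maps of different scales to a common width and fuse them."""
    def __init__(self, c_low=128, c_high=256, c_out=128):
        super().__init__()
        self.reduce_low = nn.Conv2d(c_low, c_out, kernel_size=1)
        self.reduce_high = nn.Conv2d(c_high, c_out, kernel_size=1)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)

    def forward(self, f_low, f_high):
        # f_low: (B, c_low, H, W) local detail; f_high: (B, c_high, H/2, W/2) context
        return self.reduce_low(f_low) + self.up(self.reduce_high(f_high))

class TemporalSeamHead(nn.Module):
    """GRU over globally pooled fused features of consecutive frames."""
    def __init__(self, c_feat=128, hidden=64):
        super().__init__()
        self.gru = nn.GRU(c_feat, hidden, batch_first=True)
        self.loc = nn.Linear(hidden, 4)  # (x, y, w, h) of the seam region, illustrative

    def forward(self, fused_seq):
        # fused_seq: (B, T, c_feat, H, W) -> pool spatially -> (B, T, c_feat)
        pooled = fused_seq.mean(dim=(-2, -1))
        out, _ = self.gru(pooled)
        return self.loc(out)  # per-frame box prediction

# Minimal usage with random tensors standing in for backbone outputs of 4 frames.
B, T = 2, 4
f_low = torch.randn(B * T, 128, 40, 40)
f_high = torch.randn(B * T, 256, 20, 20)
fused = SeamFeatureFusion()(f_low, f_high).view(B, T, 128, 40, 40)
boxes = TemporalSeamHead()(fused)
print(boxes.shape)  # torch.Size([2, 4, 4])
```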

2022 ◽  
Vol 2022 ◽  
pp. 1-11
Author(s):  
Cong Lin ◽  
Yongbin Zheng ◽  
Xiuchun Xiao ◽  
Jialun Lin

The workload of radiologists has increased dramatically during the COVID-19 pandemic, leading to misdiagnoses and missed diagnoses. Artificial intelligence can assist doctors in locating and identifying lesions in medical images. To improve the accuracy of disease diagnosis in medical imaging, this paper proposes a lung disease detection network that outperforms current mainstream object detection models. By combining the advantages of the RepVGG block and the Resblock in information fusion and information extraction, we design a backbone, RRNet, with few parameters and strong feature extraction capability. We then propose a structure called Information Reuse, which addresses the low utilization of the original network's output features by connecting the normalized features back into the network. Combining RRNet with an improved RefineDet, we obtain the overall network, called CXR-RefineDet. Extensive experiments on the largest public chest radiograph detection dataset, VinDr-CXR, show that CXR-RefineDet reaches 0.1686 mAP at 6.8 fps, outperforming two-stage object detection algorithms with strong backbones such as ResNet-50 and ResNet-101. In addition, the fast inference speed of CXR-RefineDet makes practical deployment in a computer-aided diagnosis system feasible.
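As a concrete reference point for the RepVGG block mentioned above, here is a minimal training-time sketch of such a block (parallel 3x3, 1x1 and identity branches whose outputs are summed); the channel configuration is an assumption and this is not the RRNet code.

```python
# Illustrative sketch only: a RepVGG-style training-time block, the kind of
# building block the abstract says RRNet combines with residual connections.
import torch
import torch.nn as nn

class RepVGGStyleBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.branch3x3 = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.branch1x1 = nn.Sequential(
            nn.Conv2d(channels, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.branch_id = nn.BatchNorm2d(channels)  # identity branch
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # The three parallel branches are summed, then activated.
        return self.act(self.branch3x3(x) + self.branch1x1(x) + self.branch_id(x))

x = torch.randn(1, 64, 32, 32)
print(RepVGGStyleBlock(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```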


2011 ◽  
Vol 55-57 ◽  
pp. 1759-1763
Author(s):  
Zhong Hu Yuan ◽  
Shuo Jun Yu ◽  
Xiao Wei Han

In weld seam tracking, controllers based on the traditional mathematical models of classical and modern control theory struggle to meet high-performance requirements. This article builds on the embedded digital signal processor TMS320F2812 for industrial automation control. Fuzzy control is applied to a real-time welding seam-tracking system: exploiting the F2812's real-time multitasking resource scheduling, a Fuzzy-PI control system with real-time adjustment of the control value is designed. The resulting DSP-based real-time fuzzy control system exploits the F2812's powerful control and signal-processing capability and fully meets the requirements of high-speed, high-precision control.
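To make the Fuzzy-PI idea concrete, below is a minimal illustrative sketch in Python (the paper targets a TMS320F2812 DSP, typically programmed in C): the tracking error and its rate pass through triangular memberships and a small rule table that adjusts the proportional gain before a standard PI law is applied. The membership boundaries, rule values, and gains are made-up assumptions, not the authors' controller.

```python
# Illustrative Fuzzy-PI sketch: fuzzy adjustment of Kp from (error, error rate),
# followed by a standard PI update. All numeric choices are assumptions.
def fuzzify(value, neg=-1.0, pos=1.0):
    """Map a crisp value to (Negative, Zero, Positive) triangular memberships."""
    n = max(0.0, min(1.0, value / neg)) if value < 0 else 0.0
    p = max(0.0, min(1.0, value / pos)) if value > 0 else 0.0
    return {"N": n, "Z": max(0.0, 1.0 - n - p), "P": p}

# Rule table: (e label, de label) -> increment applied to Kp (illustrative values).
RULES_KP = {("N", "N"): 0.4, ("N", "Z"): 0.2, ("N", "P"): 0.0,
            ("Z", "N"): 0.2, ("Z", "Z"): 0.0, ("Z", "P"): -0.2,
            ("P", "N"): 0.0, ("P", "Z"): -0.2, ("P", "P"): -0.4}

def fuzzy_pi_step(e, de, state, kp0=1.0, ki0=0.1, dt=0.01):
    """One control update: fuzzy tuning of Kp, then a PI law on the seam error."""
    mu_e, mu_de = fuzzify(e), fuzzify(de)
    num = sum(mu_e[a] * mu_de[b] * dkp for (a, b), dkp in RULES_KP.items())
    den = sum(mu_e[a] * mu_de[b] for (a, b) in RULES_KP) or 1.0
    kp = kp0 + num / den                      # weighted-average defuzzification
    state["integral"] += e * dt
    return kp * e + ki0 * state["integral"]   # control output (cross-seam correction)

state = {"integral": 0.0}
print(fuzzy_pi_step(e=0.5, de=-0.1, state=state))
```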


2021 ◽  
Vol 11 (13) ◽  
pp. 6016
Author(s):  
Jinsoo Kim ◽  
Jeongho Cho

For autonomous vehicles, awareness of the driving environment is critical to avoid collisions and drive safely. The recent evolution of convolutional neural networks has significantly accelerated the development of object detection techniques that enable autonomous vehicles to handle rapid changes in diverse driving environments. However, collisions can still occur due to undetected obstacles and perception problems, particularly occlusion. We therefore propose a robust object detection algorithm for environments in which objects are truncated or occluded, employing both RGB images and light detection and ranging (LiDAR) bird’s eye view (BEV) representations. The structure combines independent detection results obtained in parallel from “you only look once” (YOLO) networks run on an RGB image and on a height map converted from the BEV representation of LiDAR point cloud data (PCD). The region proposal for an object is determined via non-maximum suppression, which suppresses the bounding boxes of adjacent regions. The proposed scheme was evaluated on the KITTI vision benchmark suite. The results demonstrate that detection accuracy with the integrated PCD BEV representations is superior to that of an RGB camera alone, and that robustness improves markedly: detection accuracy is significantly higher even when target objects are partially occluded from the front view, showing that the proposed algorithm outperforms the conventional RGB-based model.
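A minimal sketch of the late-fusion step is given below, assuming both detectors' boxes have already been projected into a common image frame: detections from the RGB and BEV branches are pooled and filtered with a plain IoU-based non-maximum suppression. The box format, threshold, and scores are illustrative, not the paper's configuration.

```python
# Hypothetical sketch (not the paper's implementation): merging detections from
# two parallel detectors (RGB and a LiDAR-BEV height map) with IoU-based NMS.
def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def fuse_detections(dets_rgb, dets_bev, iou_thr=0.5):
    """Pool both detectors' boxes; keep the highest-scoring box per overlapping group."""
    dets = sorted(dets_rgb + dets_bev, key=lambda d: d["score"], reverse=True)
    kept = []
    for d in dets:
        if all(iou(d["box"], k["box"]) < iou_thr for k in kept):
            kept.append(d)
    return kept

rgb = [{"box": (100, 100, 200, 220), "score": 0.82, "src": "rgb"}]
bev = [{"box": (105, 98, 205, 225), "score": 0.74, "src": "bev"},
       {"box": (400, 150, 460, 260), "score": 0.61, "src": "bev"}]  # occluded in RGB
print(fuse_detections(rgb, bev))  # keeps the RGB box plus the BEV-only detection
```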


Author(s):  
Taewook Kim ◽  
Seungbeom Lee ◽  
Seunghwan Baek ◽  
Kwangsuck Boo

Author(s):  
Chao Liu ◽  
Hui Wang ◽  
Yu Huang ◽  
Youmin Rong ◽  
Jie Meng ◽  
...  

Abstract A mobile welding robot with adaptive seam tracking capability can greatly improve welding efficiency and quality and has therefore been studied extensively. To further improve automation in multi-station welding, a novel intelligent mobile welding robot consisting of a four-wheeled mobile platform and a collaborative manipulator is developed. Supported by simultaneous localization and mapping (SLAM) technology, the robot automatically navigates to different stations to perform welding operations. To detect the welding seam automatically, a composite sensor system comprising an RGB-D camera and a laser vision sensor is applied. Based on this sensor system, a multi-layer sensing strategy is adopted to ensure that the welding seam can be detected and tracked with high precision. By applying a hybrid filter to the RGB-D camera measurements, the initial welding seam is effectively extracted, and a novel welding start point detection method is proposed. Meanwhile, to guarantee tracking quality, a robust welding seam tracking algorithm based on the laser vision sensor is presented to eliminate the tracking discrepancy caused by platform parking error, allowing the tracking trajectory to be corrected in real time. Experimental results show that the robot can autonomously detect and track the welding seam effectively at different stations, improving multi-station welding efficiency while guaranteeing quality.
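As a simple illustration of the real-time correction step, the sketch below shifts a nominal seam trajectory by laser-measured offsets with a smoothing gain; frames, units, and the gain are assumptions rather than the authors' algorithm.

```python
# Illustrative sketch only: correcting a nominal seam trajectory in real time
# from laser-vision offsets, in the spirit of compensating platform parking error.
import numpy as np

def correct_trajectory(nominal_points, laser_offsets, gain=0.8):
    """
    nominal_points : (N, 3) planned seam points in the robot base frame [mm]
    laser_offsets  : (N, 3) measured deviation of the actual seam from the plan [mm]
    gain           : fraction of the measured offset applied per update (simple smoothing)
    Returns the corrected trajectory.
    """
    nominal_points = np.asarray(nominal_points, dtype=float)
    laser_offsets = np.asarray(laser_offsets, dtype=float)
    return nominal_points + gain * laser_offsets

# Example: a straight 3-point seam shifted ~2 mm sideways by a parking error.
nominal = [[0, 0, 50], [50, 0, 50], [100, 0, 50]]
offsets = [[0, 2.1, 0], [0, 2.0, 0], [0, 1.9, 0]]
print(correct_trajectory(nominal, offsets))
```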


Electronics ◽  
2020 ◽  
Vol 9 (8) ◽  
pp. 1235
Author(s):  
Yang Yang ◽  
Hongmin Deng

To make the classification and regression of single-stage detectors more accurate, this paper proposes an object detection algorithm named Global Context You-Only-Look-Once v3 (GC-YOLOv3), based on You-Only-Look-Once (YOLO). First, a cascading model with learnable semantic fusion between the feature extraction network and the feature pyramid network is designed to improve detection accuracy using a global context block. Second, the information to be retained is screened by combining feature maps at three different scales. Finally, a global self-attention mechanism highlights the useful information in the feature maps while suppressing irrelevant information. Experiments show that GC-YOLOv3 reaches a maximum mean Average Precision (mAP)@0.5 of 55.5 on Common Objects in Context (COCO) 2017 test-dev, and that its mAP is 5.1% higher than that of YOLOv3 on the Pascal Visual Object Classes (PASCAL VOC) 2007 test set. The proposed GC-YOLOv3 model thus performs strongly on both the PASCAL VOC and COCO datasets.
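For reference, here is a minimal sketch of a global-context-style attention block of the kind described above (in the spirit of GCNet's GC block): softmax attention pooling produces a single context vector that is transformed and added back to every spatial position. Channel sizes and the reduction ratio are assumptions, not the paper's exact configuration.

```python
# Illustrative sketch only: a global-context-style attention block.
import torch
import torch.nn as nn

class GlobalContextBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.context_mask = nn.Conv2d(channels, 1, kernel_size=1)   # attention pooling weights
        self.transform = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.LayerNorm([channels // reduction, 1, 1]),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        # Global context: softmax-weighted sum over all spatial positions.
        weights = self.context_mask(x).view(b, 1, h * w).softmax(dim=-1)   # (B, 1, HW)
        context = torch.bmm(x.view(b, c, h * w), weights.transpose(1, 2))  # (B, C, 1)
        context = context.view(b, c, 1, 1)
        # Broadcast the transformed context back onto every position.
        return x + self.transform(context)

x = torch.randn(2, 256, 13, 13)
print(GlobalContextBlock(256)(x).shape)  # torch.Size([2, 256, 13, 13])
```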


2019 ◽  
Vol 9 (9) ◽  
pp. 1829 ◽  
Author(s):  
Jie Jiang ◽  
Hui Xu ◽  
Shichang Zhang ◽  
Yujie Fang

This study proposes a multiheaded attention-based object detection algorithm referred to as MANet. The main purpose is to integrate feature layers at different scales via the attention mechanism and to strengthen contextual connections. To achieve this, we first replaced the feed-forward base network of the single-shot detector with ResNet-101 (inspired by the Deconvolutional Single-Shot Detector) and then applied linear interpolation and the attention mechanism, fusing feature-layer information at different scales to improve target detection accuracy. The primary contributions of this study are (a) a fusion attention mechanism and (b) a multiheaded attention fusion method. The final MANet detector effectively unifies feature information across feature layers at different scales, enabling it to detect objects of different sizes with higher precision. With a 512 × 512 input and a ResNet-101 backbone, MANet obtains a mean average precision of 82.7% on the PASCAL Visual Object Classes (VOC) 2007 test set. These results demonstrate that the proposed method yields better accuracy than the conventional single-shot detector (SSD) and other advanced detectors.
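A minimal sketch of attention-weighted fusion between two feature layers at different scales is given below: the deeper map is upsampled by bilinear interpolation, and a learned per-pixel gate blends it with the shallower map. The gating design and channel counts are assumptions, not the MANet architecture.

```python
# Illustrative sketch only: attention-weighted fusion of two detector feature maps.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFusion(nn.Module):
    def __init__(self, channels=256):
        super().__init__()
        # A 1x1 conv over the concatenated maps predicts a per-pixel mixing weight.
        self.gate = nn.Conv2d(2 * channels, 1, kernel_size=1)

    def forward(self, shallow, deep):
        # shallow: (B, C, H, W); deep: (B, C, H/2, W/2) -> upsample to H x W
        deep_up = F.interpolate(deep, size=shallow.shape[-2:], mode="bilinear",
                                align_corners=False)
        alpha = torch.sigmoid(self.gate(torch.cat([shallow, deep_up], dim=1)))
        return alpha * shallow + (1.0 - alpha) * deep_up  # convex per-pixel blend

shallow = torch.randn(1, 256, 64, 64)
deep = torch.randn(1, 256, 32, 32)
print(AttentionFusion()(shallow, deep).shape)  # torch.Size([1, 256, 64, 64])
```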

