Real-time recognition of weld defects based on visible spectral image and machine learning

2022, Vol 355, pp. 03014
Author(s): Sujie Zhang, Ming Deng, Xiaoyuan Xie

The quality of tungsten inert gas (TIG) welding depends on human supervision, which is unsuitable for automation. This study designed a model for assessing TIG welding quality with the potential for real-time application. The model uses the K-Nearest Neighbours (KNN) algorithm paired with visible-spectrum images captured by a high-dynamic-range camera. First, the weld-defect images in the training set were projected into a two-dimensional space using multidimensional scaling (MDS); images of the same defect type aggregated into blocks with a scattered distribution, while the clusters of different defect types partially overlapped. Second, KNN, CNN, SVM, CART and Naive Bayes (NB) classifiers were built to classify and recognize the weld-defect images. The results show that the KNN model performs best, with a recognition accuracy of 98% and an average recognition time of 33 ms per image, making it suitable for common hardware devices. It can be applied in the image recognition system of an automatic welding robot to improve the robot's level of intelligence.
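The two steps named in the abstract, MDS projection of the training images followed by KNN classification, can be sketched with scikit-learn. The feature matrix below is synthetic stand-in data, not the authors' weld-defect dataset, and the hyperparameters are illustrative assumptions:

```python
# Sketch of the abstract's pipeline: MDS visualisation + KNN classification.
# The dataset, feature dimensionality, and k value are hypothetical.
import numpy as np
from sklearn.manifold import MDS
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Stand-in for flattened weld-defect image features: 3 defect classes,
# 60 samples each, 16 features per sample.
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(60, 16)) for c in (0, 3, 6)])
y = np.repeat([0, 1, 2], 60)

# Step 1: project the training images into 2-D with MDS; same-class
# samples should aggregate into blocks, as observed in the paper.
X2d = MDS(n_components=2, random_state=0).fit_transform(X)

# Step 2: train and evaluate a KNN classifier (the paper's best model).
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print(f"accuracy: {knn.score(X_te, y_te):.2f}")
```

KNN's low per-query cost on small feature vectors is what makes the reported 33 ms per image plausible on commodity hardware.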

Author(s): Naoya Wada, Shingo Yoshizawa, Yoshikazu Miyanaga

This paper introduces the extraction of speech features that are robust to noise for speech recognition, and describes in detail two speech-analysis techniques, RSF (Running Spectrum Filtering) and DRA (Dynamic Range Adjustment). New phrase-recognition experiments were carried out using 40 male and female speakers for training and 5 other male and female speakers for recognition. For example, the recognition rate improved from 17% to 63% under car noise at -10 dB SNR, demonstrating the high noise robustness of the proposed system. In addition, a new parallel/pipelined LSI design of the system is proposed, which considerably reduces the computation time; with this architecture, real-time speech recognition can be realized. Both a full-custom LSI design and an FPGA design of this system are presented.
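A minimal sketch of the two noise-robustness steps named above, applied to a matrix of cepstral features (frames x coefficients). The modulation band-pass range and the normalisation rule are illustrative assumptions, not the authors' exact design:

```python
# Hedged sketch of RSF and DRA on a (frames x coefficients) feature matrix.
import numpy as np

def running_spectrum_filtering(feats, low=1.0, high=16.0, frame_rate=100.0):
    """RSF: band-pass filter each coefficient's trajectory over time,
    keeping modulation-spectrum components between `low` and `high` Hz
    (speech modulation energy concentrates in roughly this band)."""
    n = feats.shape[0]
    spec = np.fft.rfft(feats, axis=0)
    freqs = np.fft.rfftfreq(n, d=1.0 / frame_rate)
    mask = (freqs >= low) & (freqs <= high)
    return np.fft.irfft(spec * mask[:, None], n=n, axis=0)

def dynamic_range_adjustment(feats, eps=1e-8):
    """DRA: scale each coefficient's trajectory to a common dynamic range
    by dividing by its maximum absolute value."""
    peak = np.abs(feats).max(axis=0)
    return feats / (peak + eps)

# Toy feature matrix: 200 frames of 12 cepstral coefficients.
rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 12))
robust = dynamic_range_adjustment(running_spectrum_filtering(feats))
print(robust.shape)
```

Both operations act independently on each coefficient trajectory, which is what makes the parallel/pipelined LSI mapping mentioned in the abstract natural.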


2021, Vol 11 (11), pp. 4758
Author(s): Ana Malta, Mateus Mendes, Torres Farinha

Maintenance professionals and other technical staff regularly need to learn to identify new parts in car engines and other equipment. The present work proposes a model of a task assistant based on a deep-learning neural network. A YOLOv5 network is used to recognize some of the constituent parts of an automobile. A dataset of car-engine images was created and eight car parts were annotated in the images; the neural network was then trained to detect each part. The results show that YOLOv5s successfully detects the parts in real-time video streams with high accuracy, making it useful as an aid for training professionals learning to handle new equipment through augmented reality. The architecture of an object recognition system using augmented-reality glasses is also designed.
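The post-processing step of such a detector can be sketched without the model itself. In practice the detections would come from a trained YOLOv5s model (e.g. loaded via `torch.hub.load('ultralytics/yolov5', 'custom', path=...)`, with each row of `results.xyxy[0]` giving `(x1, y1, x2, y2, conf, class_id)`); here a mock array stands in for that output, and the class names and values are hypothetical:

```python
# Hedged sketch of filtering and labelling YOLOv5-style detections.
import numpy as np

CLASS_NAMES = ['alternator', 'battery', 'radiator']  # illustrative subset

def draw_labels(detections, conf_thresh=0.5):
    """Return (box, label) pairs for detections above the confidence
    threshold. Each detection row is (x1, y1, x2, y2, conf, class_id)."""
    out = []
    for x1, y1, x2, y2, conf, cls in detections:
        if conf >= conf_thresh:
            out.append(((int(x1), int(y1), int(x2), int(y2)),
                        f"{CLASS_NAMES[int(cls)]} {conf:.2f}"))
    return out

# Mock model output: the middle detection is below threshold and dropped.
mock = np.array([[ 10,  20, 110, 220, 0.91, 0],
                 [200,  50, 300, 180, 0.30, 1],
                 [ 50,  60, 150, 160, 0.77, 2]])
print(draw_labels(mock))
```

In the augmented-reality assistant, these labelled boxes would be overlaid on the live video frame shown in the glasses.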


Sensors, 2021, Vol 21 (2), pp. 405
Author(s): Marcos Lupión, Javier Medina-Quero, Juan F. Sanjuan, Pilar M. Ortigosa

Activity Recognition (AR) is an active research topic focused on detecting human actions and behaviours in smart environments. In this work, we present DOLARS (Distributed On-line Activity Recognition System), an on-line activity recognition platform in which data from heterogeneous sensors, including binary, wearable and location sensors, are evaluated in real time. Different descriptors and metrics derived from the heterogeneous sensor data are integrated into a common feature vector, which is extracted by a sliding-window approach under real-time conditions. DOLARS provides a distributed architecture in which: (i) the stages for processing AR data are deployed on distributed nodes; (ii) temporal cache modules compute metrics that aggregate sensor data for computing feature vectors efficiently; (iii) publish-subscribe models are integrated both to spread data from the sensors and to orchestrate the nodes (communication and replication) for computing AR; and (iv) machine learning algorithms are used to classify and recognize the activities. A successful case study of daily-activity recognition developed in the Smart Lab of the University of Almería (UAL) is presented in this paper. The results show encouraging performance in recognizing sequences of activities and demonstrate the need for distributed architectures to achieve real-time recognition.
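The sliding-window feature extraction described above can be sketched as follows. The window length, the per-sensor metrics (count and mean), and the event format are illustrative assumptions, not the DOLARS implementation:

```python
# Hedged sketch of sliding-window feature extraction over heterogeneous
# sensor events, each event a (timestamp, sensor_id, value) triple.
from collections import deque

class SlidingWindow:
    """Keep the last `width_s` seconds of sensor events and summarise
    them into a fixed-length feature vector for a classifier."""
    def __init__(self, width_s=30.0, sensors=('door', 'wrist_accel', 'beacon')):
        self.width_s = width_s
        self.sensors = sensors
        self.events = deque()

    def push(self, t, sensor_id, value):
        self.events.append((t, sensor_id, value))
        # Evict events that have fallen out of the time window.
        while self.events and self.events[0][0] < t - self.width_s:
            self.events.popleft()

    def feature_vector(self):
        # Per sensor: event count and mean value inside the window.
        vec = []
        for s in self.sensors:
            vals = [v for (_, sid, v) in self.events if sid == s]
            vec.append(len(vals))
            vec.append(sum(vals) / len(vals) if vals else 0.0)
        return vec

w = SlidingWindow()
w.push(0.0, 'door', 1.0)
w.push(5.0, 'wrist_accel', 0.2)
w.push(40.0, 'wrist_accel', 0.6)  # evicts the two events before t=10
print(w.feature_vector())
```

In a distributed deployment such as the one described, each node's temporal cache would maintain a structure like this so that feature vectors are aggregated incrementally rather than recomputed from raw sensor streams.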

