3D Robot Vision System through 2D Shape Based Matching Using Gaussian Smoothing for Gluing Application

2018 ◽  
Vol 7 (4.33) ◽  
pp. 487
Author(s):  
Mohamad Haniff Harun ◽  
Mohd Shahrieel Mohd Aras ◽  
Mohd Firdaus Mohd Ab Halim ◽  
Khalil Azha Mohd Annuar ◽  
Arman Hadi Azahar ◽  
...  

This investigation focuses on adapting a vision system algorithm to classify process states and to regulate the decision making related to task execution and defect recognition. The idea centres on a new vision algorithm that uses shape matching properties to classify defects occurring on the product. Previously, the system had to process a large amount of data acquired from the object, which reduced speed and efficiency. A defect detection approach that combines a Region of Interest, Gaussian smoothing, correlation, and template matching is therefore introduced. This approach provides high computational savings and achieves a recognition rate of about 95.14%. Each detected defect is described by its height (the z-coordinate), length (the y-coordinate), and width (the x-coordinate). These data are gathered by the proposed system using a dual camera set-up that performs the three-dimensional transformation.
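A minimal sketch of the kind of pipeline this abstract describes (ROI cropping, Gaussian smoothing, and correlation-based template matching), assuming OpenCV in Python; the file names, ROI coordinates, and decision threshold are illustrative placeholders, not the authors' implementation.

```python
import cv2

# Illustrative file names; not the authors' actual data.
reference = cv2.imread("reference_glue_path.png", cv2.IMREAD_GRAYSCALE)
inspected = cv2.imread("inspected_glue_path.png", cv2.IMREAD_GRAYSCALE)

# Restrict processing to a region of interest around the glue bead (hypothetical ROI).
x, y, w, h = 100, 50, 400, 120
ref_roi = reference[y:y + h, x:x + w]
ins_roi = inspected[y:y + h, x:x + w]

# Gaussian smoothing suppresses sensor noise before matching.
ref_roi = cv2.GaussianBlur(ref_roi, (5, 5), 0)
ins_roi = cv2.GaussianBlur(ins_roi, (5, 5), 0)

# Normalised cross-correlation between the reference template and the test ROI.
score = cv2.matchTemplate(ins_roi, ref_roi, cv2.TM_CCOEFF_NORMED).max()

# A low correlation score is treated as a defect; 0.9 is an arbitrary threshold.
print("defect detected" if score < 0.9 else "glue path OK", f"(score={score:.3f})")
```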

Author(s):  
M. H. Harun ◽  
M. F. Yaakub ◽  
A. F. Z. Abidin ◽  
A. H. Azahar ◽  
M. S. M. Aras ◽  
...  

This paper investigates various approaches for automated inspection of the gluing process using a shape-based matching application. A new supervised defect detection approach for a class of defects in gluing applications is proposed. The creation of a region of interest around the important region of the object is discussed, Gaussian smoothing is applied to improve image processing, and template matching is used to differentiate between the reference and tested images. This scheme provides high computational savings and a high defect detection rate. The defects are broadly classified into three classes: 1) gap defects; 2) bumper defects; 3) bubble defects. The system not only reduces execution time but also achieves high precision in defect detection, with the proposed framework reaching a 95.77% recognition rate for gluing defects.
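A rough sketch of the reference-versus-tested comparison step described above, assuming OpenCV; the difference threshold and minimum-area rule used to isolate defect regions are illustrative only and do not reproduce the paper's classifier for gap, bumper, and bubble defects.

```python
import cv2

def defect_regions(reference, tested, diff_thresh=40, min_area=50):
    """Return bounding boxes of regions where the tested image departs from the
    reference; thresholds here are illustrative, not the paper's values."""
    ref = cv2.GaussianBlur(reference, (5, 5), 0)
    tst = cv2.GaussianBlur(tested, (5, 5), 0)
    diff = cv2.absdiff(ref, tst)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```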


2020 ◽  
Vol 17 (4) ◽  
pp. 172988142094237
Author(s):  
Yu He ◽  
Shengyong Chen

The developing time-of-flight (TOF) camera is an attractive device for a robot vision system to capture real-time three-dimensional (3D) images, but the sensor suffers from low image resolution and precision. This article proposes an approach for automatically generating an imaging model in 3D space for error correction. From observation data, an initial coarse model of the depth image can be obtained for each TOF camera. Its accuracy is then improved by an optimization method. Experiments are carried out using three TOF cameras. Results show that the accuracy is dramatically improved by the spatial correction model.
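The correction idea can be illustrated with a simple per-camera depth model refined by optimization; the polynomial form and the calibration numbers below are assumptions for illustration, not the model or data from the article.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical calibration data: raw TOF depths and ground-truth depths (metres).
raw_depth = np.array([0.52, 1.03, 1.49, 2.06, 2.55, 3.02])
true_depth = np.array([0.50, 1.00, 1.50, 2.00, 2.50, 3.00])

def residuals(params, d_raw, d_true):
    a, b, c = params
    # Simple per-camera polynomial correction: d_corr = a*d + b*d**2 + c.
    return a * d_raw + b * d_raw**2 + c - d_true

# Coarse initial model (identity mapping), then refined by optimisation.
fit = least_squares(residuals, x0=[1.0, 0.0, 0.0], args=(raw_depth, true_depth))
a, b, c = fit.x
corrected = a * raw_depth + b * raw_depth**2 + c
print("RMS error before:", np.sqrt(np.mean((raw_depth - true_depth) ** 2)))
print("RMS error after: ", np.sqrt(np.mean((corrected - true_depth) ** 2)))
```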


Author(s):  
Yun Ji ◽  
Rajeev Kumar ◽  
Daljeet Singh ◽  
Maninder Singh

In this paper, an agricultural robot vision system is proposed for two typical environments, farmland and orchard, combined with weeding between crops. The system includes orchard production monitoring and prediction tasks, the target information recognition approach, and visual servo decision making. The results obtained from the proposed system show that, by using the region combination features of the image 2D histogram as the decision-making basis, crop seedlings can be identified and positioned accurately, rapidly, and indirectly while skipping the complex process of accurately distinguishing crops from weeds. The algorithm performs reasonably well: the target recognition time in the prototype system is less than 16 ms, and an average recognition accuracy of 97.43% is achieved. The benefits of the proposed system are continuous improvement in the quality of agricultural products, higher production efficiency, and increased economic benefit.
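One hedged way to read "region combination features of the image 2D histogram" is a hue-saturation histogram used to score pixels against a crop sample; the back-projection sketch below, with made-up file names and thresholds, only illustrates that idea and is not the authors' algorithm.

```python
import cv2
import numpy as np

# Hypothetical images: a small sample patch of crop foliage and a full field image.
sample = cv2.imread("crop_sample.png")
field = cv2.imread("field_row.png")

sample_hsv = cv2.cvtColor(sample, cv2.COLOR_BGR2HSV)
field_hsv = cv2.cvtColor(field, cv2.COLOR_BGR2HSV)

# 2D hue-saturation histogram of the crop sample acts as the region feature.
hist = cv2.calcHist([sample_hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

# Back-projection scores every pixel against the histogram; high scores mark seedlings.
backproj = cv2.calcBackProject([field_hsv], [0, 1], hist, [0, 180, 0, 256], 1)
_, seedling_mask = cv2.threshold(backproj, 50, 255, cv2.THRESH_BINARY)

# Column sums of the mask give approximate seedling positions along the row.
column_profile = seedling_mask.sum(axis=0)
print("strongest seedling column:", int(np.argmax(column_profile)))
```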


2012 ◽  
Vol 462 ◽  
pp. 603-608
Author(s):  
Yong Gang Xie ◽  
Zhong Min Wang ◽  
Shi Tao Su

Timeliness and accuracy are key issues to be resolved in robot binocular measurement. In this paper, a robot vision projection model is fully established. The principle of binocular ranging is analysed in three respects, which makes the calculation concise and easy to understand and expands the effective measuring range. For binocular image processing, a grey-scale computation is proposed that first generates a characteristic area, then executes template matching within that area, and finally extracts feature points and matches them against the templates. This ensures a degree of robustness to noise spots and largely avoids mismatches. The experiments show that the robot vision system achieves good accuracy with low time complexity, so the robot can react in real time.
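A rough sketch of template-matching-based binocular ranging on a rectified stereo pair; the camera parameters are illustrative, and the paper's specific grey-scale computation and feature-point matching are not reproduced.

```python
import cv2

def depth_from_patch(left, right, x, y, patch=15, focal_px=800.0, baseline_m=0.12):
    """Estimate depth of point (x, y) in the left image by matching a small template
    along the same row of the right image (rectified grayscale pair assumed).
    Camera parameters here are illustrative, not from the paper."""
    half = patch // 2
    template = left[y - half:y + half + 1, x - half:x + half + 1]
    strip = right[y - half:y + half + 1, :]            # epipolar strip
    result = cv2.matchTemplate(strip, template, cv2.TM_CCOEFF_NORMED)
    best_x = int(cv2.minMaxLoc(result)[3][0]) + half   # column of best match
    disparity = x - best_x
    if disparity <= 0:
        return None                                    # mismatch or point at infinity
    return focal_px * baseline_m / disparity           # Z = f * B / d
```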


2020 ◽  
Vol 12 (24) ◽  
pp. 4192
Author(s):  
Gang Tang ◽  
Shibo Liu ◽  
Iwao Fujino ◽  
Christophe Claramunt ◽  
Yide Wang ◽  
...  

Ship detection from high-resolution optical satellite images is still an important task that deserves optimal solutions. This paper introduces a novel high-resolution image network-based approach based on the pre-selection of regions of interest (RoI). This pre-selection network first identifies and extracts regions of interest from the input images. In order to efficiently match ship candidates, the principle of the approach is to distinguish suspected areas in the images based on hue, saturation, value (HSV) differences between ships and the background. The whole approach is evaluated on a large ship dataset consisting of Google Earth images and the HRSC2016 dataset. The experiment shows that the H-YOLO network, which uses the same weight training from a set of remote sensing images, has a 19.01% higher recognition rate and a 16.19% higher accuracy than applying the you only look once (YOLO) network alone. After image preprocessing, the value of the intersection over union (IoU) is also greatly improved.
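A speculative sketch of HSV-based pre-selection of suspected ship regions before a detector such as YOLO; the HSV bounds and area threshold are placeholders, not the values used for H-YOLO.

```python
import cv2

def preselect_ship_rois(image_bgr, min_area=400):
    """Crude HSV pre-selection of candidate ship regions; the hue/saturation/value
    bounds below are illustrative, not the thresholds from the paper."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    # Sea water is assumed bluish and of limited brightness here; ships differ in H, S, V.
    sea_mask = cv2.inRange(hsv, (90, 20, 20), (130, 255, 200))
    candidate_mask = cv2.bitwise_not(sea_mask)
    contours, _ = cv2.findContours(candidate_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    rois = []
    for c in contours:
        if cv2.contourArea(c) >= min_area:
            x, y, w, h = cv2.boundingRect(c)
            rois.append(image_bgr[y:y + h, x:x + w])   # crop fed to the detector
    return rois
```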


Sensors ◽  
2020 ◽  
Vol 20 (24) ◽  
pp. 7121
Author(s):  
Yongchao Luo ◽  
Shipeng Li ◽  
Di Li

Robot control based on visual information perception is a hot topic in the industrial robot domain and makes robots capable of doing more things in a complex environment. However, a complex visual background in an industrial environment brings great difficulties in recognizing the target image, especially when a target is small or far from the sensor. Therefore, target recognition is the first problem that should be addressed in a visual servo system. This paper considers common complex constraints in industrial environments and proposes a You Only Look Once Version 2 Region of Interest (YOLO-v2-ROI) neural network image processing algorithm based on machine learning. The proposed algorithm combines the advantages of YOLO (You Only Look Once) rapid detection with the effective identification of the ROI (Region of Interest) pooling structure, which can quickly locate and identify different objects in different fields of view. This method can also lead the robot vision system to recognize and classify a target object automatically, improve robot vision system efficiency, avoid blind movement, and reduce the calculation load. The proposed algorithm is verified by experiments. The experimental result shows that the learning algorithm constructed in this paper has real-time image-detection speed and demonstrates strong adaptability and recognition ability when processing images with complex backgrounds, lighting, or perspectives. In addition, this algorithm can also effectively identify and locate visual targets, which improves the environmental adaptability of a visual servo system.
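For reference, the ROI pooling structure mentioned above can be sketched as max-pooling each region of a feature map onto a fixed grid; this NumPy toy version only illustrates the idea and is not the YOLO-v2-ROI implementation.

```python
import numpy as np

def roi_max_pool(feature_map, roi, output_size=(7, 7)):
    """Max-pool a rectangular ROI of a feature map onto a fixed grid (illustrative).
    feature_map: (H, W, C) array; roi: (x0, y0, x1, y1) in feature-map coordinates."""
    x0, y0, x1, y1 = roi
    region = feature_map[y0:y1, x0:x1, :]
    out_h, out_w = output_size
    ys = np.linspace(0, region.shape[0], out_h + 1, dtype=int)
    xs = np.linspace(0, region.shape[1], out_w + 1, dtype=int)
    pooled = np.zeros((out_h, out_w, feature_map.shape[2]), dtype=feature_map.dtype)
    for i in range(out_h):
        for j in range(out_w):
            # Each output cell takes the maximum over its (non-empty) sub-window.
            cell = region[ys[i]:max(ys[i + 1], ys[i] + 1),
                          xs[j]:max(xs[j + 1], xs[j] + 1), :]
            pooled[i, j, :] = cell.max(axis=(0, 1))
    return pooled
```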


2021 ◽  
Vol 2107 (1) ◽  
pp. 012037
Author(s):  
K S Tan ◽  
M N Ayob ◽  
H B Hassrizal ◽  
A H Ismail ◽  
M S Muhamad Azmi ◽  
...  

Abstract A vision-aided pick-and-place Cartesian robot combines a machine vision system and a robotic system that communicate with each other to perform object sorting. In this project, a machine vision algorithm for object sorting is proposed to solve sorting failures caused by imperfect image edges and different colours. The image is acquired by a camera and then calibrated. Image pre-processing is performed using HSI colour space transformation, a Gaussian filter for image filtering, Otsu's method for image binarization, and Canny edge detection. LabVIEW edge-based geometric matching is selected for template matching. After the vision application has analysed the image, an electrical signal is sent to the robotic arm for object sorting if the acquired image matches the template image. The proposed machine vision algorithm yields an accurate template matching score from 800 to 1000 under different disturbances and conditions. It provides more customizable parameters for each method and improves the accuracy of template matching.
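A minimal OpenCV sketch of the pre-processing chain described above; OpenCV's HSV conversion stands in for the HSI transform, and all kernel sizes and thresholds are illustrative rather than the project's settings.

```python
import cv2

def preprocess_for_matching(image_bgr):
    """Pre-processing chain along the lines described above; HSV is used here as a
    stand-in for HSI, and all parameters are illustrative."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    intensity = hsv[:, :, 2]                            # value/intensity channel
    smoothed = cv2.GaussianBlur(intensity, (5, 5), 0)   # Gaussian filtering
    _, binary = cv2.threshold(smoothed, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # Otsu binarization
    edges = cv2.Canny(binary, 50, 150)                  # Canny edge detection
    return edges

# The resulting edge maps of the template and acquired images could then be compared,
# e.g. with cv2.matchShapes or cv2.matchTemplate, as a rough stand-in for LabVIEW's
# edge-based geometric matching.
```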


1987 ◽  
Vol 31 (11) ◽  
pp. 1281-1285
Author(s):  
John G. Kreifeldt ◽  
Ming C. Chuang

A novel and very speculative approach to new research directions for human vision, with application to robotic vision, is described. The goal of the approach is to propose a plausible, implementable spatial perception model for human vision and to apply this model to a stereo robot vision system. The model is based on computer algorithms variously called "Multidimensional Scaling", well known in psychology and sociology but relatively unknown in engineering. These algorithms can reconstruct a spatially accurate model, to a high level of metric precision, of a "configuration of points" from low-quality, error-prone, non-metric data about the configuration. ALSCAL, a general-purpose computer package adaptable for this purpose, is presently being evaluated. This is a departure from typical engineering approaches, which are directed toward gathering a low volume of highly precise referenced data about the positions of selected points in the visual scene; instead, a high volume of very low precision relative data about the interpoint spacings is gathered. It would seem that the latter approach is the one actually used by the human vision system. The results are highly encouraging: two- and three-dimensional test configurations of points are reconstructed very faithfully from as few as 10 points, using only rank-ordered (i.e. non-metric) information about interpoint spacings. The reconstructions are remarkably robust even under human-like "fuzzy" imprecision in visual measurements.
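As an illustration of reconstructing a configuration from rank-ordered interpoint data, the sketch below uses scikit-learn's non-metric (SMACOF-based) MDS as a stand-in for the ALSCAL package named in the abstract; the configuration and the rank degradation are synthetic.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical 3D test configuration of 10 points.
rng = np.random.default_rng(0)
points = rng.uniform(-1.0, 1.0, size=(10, 3))

# Pairwise distances, degraded to rank order only (non-metric information).
diff = points[:, None, :] - points[None, :, :]
dist = np.sqrt((diff ** 2).sum(axis=-1))
ranks = dist.argsort(axis=None).argsort().reshape(dist.shape).astype(float)
ranks = (ranks + ranks.T) / 2.0          # keep the dissimilarity matrix symmetric
np.fill_diagonal(ranks, 0.0)

# Non-metric MDS recovers a configuration from the ranked dissimilarities.
mds = MDS(n_components=3, metric=False, dissimilarity="precomputed", random_state=0)
reconstruction = mds.fit_transform(ranks)
print("final stress:", mds.stress_)
```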


2013 ◽  
Vol 561 ◽  
pp. 515-520
Author(s):  
Yu Xia Cui ◽  
Yang Li ◽  
Hua Jie Wang ◽  
Xian Lun Wang

A template matching method based on a vision system is proposed to locate terminal blocks. Gaussian pyramid decomposition is used to downsample the source and template images, which reduces the matching time and thus meets the real-time requirement. The positioning screws used as mark points are extracted by binarization, erosion, and dilation, and their locations are then computed by the centre-of-gravity method. Finally, the location and rotation angle of the terminal block are obtained with the least-squares method. Experimental results show that the method is convenient to operate, with a high recognition rate and efficiency.
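A hedged sketch of the described pipeline (Gaussian pyramid downsampling, binarization, erosion and dilation, centre-of-gravity of the screw marks, and a least-squares line fit for the rotation angle), assuming OpenCV; thresholds and kernel sizes are illustrative, not the paper's values.

```python
import cv2
import numpy as np

def locate_terminal_block(image_gray, pyramid_levels=2):
    """Return the mark-point centres and an estimated rotation angle (degrees);
    a sketch only, with illustrative parameters."""
    small = image_gray
    for _ in range(pyramid_levels):                      # Gaussian pyramid decomposition
        small = cv2.pyrDown(small)
    _, binary = cv2.threshold(small, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = np.ones((3, 3), np.uint8)
    binary = cv2.dilate(cv2.erode(binary, kernel), kernel)   # erosion then dilation
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centres = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:                                 # centre of gravity of each mark
            centres.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    if len(centres) < 2:
        return None
    xs, ys = np.array(centres).T
    slope, intercept = np.polyfit(xs, ys, 1)             # least-squares line through marks
    angle = np.degrees(np.arctan(slope))                 # rotation angle of the block
    return centres, angle
```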

