A 3D Vision-Based Solution for Product Picking in Industrial Applications

Author(s):  
Mirko Sgarbi ◽  
Valentina Colla ◽  
Gianluca Bioli

Computer vision is nowadays a key factor in many manufacturing processes. Among its many applications, such as quality control, assembly verification and component tracking, robot guidance for pick-and-place operations can play an important role in increasing the automation level of production lines. While 3D vision systems are now emerging as valid solutions for bin-picking applications, where objects are randomly placed inside a box, 2D vision systems are widely and successfully adopted when objects lie on a conveyor belt and the robot manipulator can grasp them using 2D information alone. There are, however, many real-world applications in which the picking system requires the third dimension: for example, the objects may differ in height, or they may be manually presented to the camera without any constraint on the object-to-camera distance. Although a full 3D vision system is a possible solution, such systems are more complex, more expensive and less compact than 2D ones. This chapter describes a monocular system suitable for picking applications: it estimates the 3D position of a single marker attached to the target object, assuming that the object's orientation is approximately known.
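A minimal sketch of the geometry such a monocular approach relies on: under an ideal pinhole model, a marker of known physical size fixes its own depth through its apparent size in the image, and its image position then fixes the remaining two coordinates. The function names and numbers below are illustrative assumptions, not the chapter's implementation.

```python
# Sketch of single-marker 3D localization with a monocular pinhole camera.
# Assumes an ideal pinhole model with focal length in pixels and a marker
# of known physical size; all values are illustrative.

def marker_depth(focal_px: float, marker_size_mm: float, apparent_px: float) -> float:
    """Distance from camera to marker along the optical axis (mm)."""
    return focal_px * marker_size_mm / apparent_px

def pixel_to_camera(u: float, v: float, cx: float, cy: float,
                    focal_px: float, z_mm: float) -> tuple:
    """Back-project the marker centre (u, v) to 3D camera coordinates."""
    x = (u - cx) * z_mm / focal_px
    y = (v - cy) * z_mm / focal_px
    return (x, y, z_mm)

# Example: a 40 mm marker imaged 80 px wide by a camera with f = 1000 px
z = marker_depth(1000.0, 40.0, 80.0)                      # 500.0 mm
xyz = pixel_to_camera(700.0, 500.0, 640.0, 480.0, 1000.0, z)
```

Because a single marker gives no orientation cue, the assumption stated in the abstract (approximately known object orientation) is what makes this depth-from-size scheme sufficient for grasping.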

Author(s):  
Jianhua Su ◽  
Zhi-Yong Liu ◽  
Hong Qiao ◽  
Chuankai Liu

Purpose – Picking up pistons in arbitrary poses is an important step on a car engine assembly line. A vision system is usually used to estimate the pose of the pistons and then guide a stable grasp. However, a piston in some poses, e.g. with its mouth facing forward, can hardly be grasped directly by the gripper. The piston must therefore be reoriented to a desired pose, i.e. with its mouth facing upward, before grasping. Design/methodology/approach – This paper presents a vision-based picking system that can grasp pistons in arbitrary poses. The picking process is divided into two stages. At the localization stage, a hierarchical approach is proposed to estimate the piston's pose from images that typically contain both heavy noise and edge distortions. At the grasping stage, multi-step robotic manipulations are designed so that the piston follows a nominal trajectory to the minimum of the distance between the piston's center and the support plane; under the designed input, the piston is thus pushed into the desired orientation. Findings – With the proposed method, a target piston in an arbitrary pose can be picked from the conveyor belt by the gripper. Practical implications – The designed vision-based robotic bin-picking system offers an advantage in terms of flexibility in the automobile manufacturing industry. Originality/value – The authors develop a methodology that uses a pneumatic gripper and 2D vision information to pick up multiple pistons in arbitrary poses. The rough poses of the parts are detected with a hierarchical approach for detecting multiple ellipses in environments that typically involve edge distortions. The pose uncertainties of the piston are eliminated by multi-step robotic manipulations.
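One geometric fact underlying ellipse-based pose estimation of a piston: its mouth is a circle, and a circle tilted by an angle θ relative to the image plane projects (under a weak-perspective assumption) to an ellipse whose minor/major axis ratio equals cos θ. The sketch below illustrates only this relation, not the paper's hierarchical multi-ellipse detector.

```python
import math

# Sketch: approximate tilt of the piston mouth from the ellipse it
# projects to. Weak-perspective assumption: minor/major = cos(tilt).
# Illustrative only; not the paper's detection pipeline.

def mouth_tilt_deg(major_px: float, minor_px: float) -> float:
    ratio = min(minor_px / major_px, 1.0)  # guard against noisy ratios > 1
    return math.degrees(math.acos(ratio))

# A mouth seen face-on projects with equal axes (tilt 0); a mouth whose
# minor axis appears half the major axis is tilted about 60 degrees.
upright = mouth_tilt_deg(100.0, 100.0)
tilted = mouth_tilt_deg(100.0, 50.0)
```

This is the kind of cue that tells the system whether the mouth faces upward (graspable) or forward (needs the multi-step reorientation described above).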


2021 ◽  
Vol 12 (1) ◽  
pp. 286
Author(s):  
Radovan Holubek ◽  
Marek Vagaš

In advanced manufacturing technologies (including complex automated processes) and the industries that use them, perception and evaluation of object parameters are among the most critical factors. Many production machines and workplaces are now equipped as standard with high-quality sensing devices based on vision systems to detect these parameters. This article focuses on designing an affordable and fully functional vision system based on two standard CCD cameras, with emphasis on the RS-232C communication interface between the two sites (vision and robotic systems). To this end, we combine the principles of a 1D photogrammetric calibration method using two known points in a stable point field with the packages available inside the processing unit of the vision system (filtering, enhancing and extracting edges, weak and robust smoothing, etc.). The correlation factor of the camera system (for reliable recognition of the sensed object) was set between 84 and 100%. Pilot communication between the two systems was then proposed and tested through CREAD/CWRITE commands according to the 3964R protocol (used for data transfer), and the system was validated by the successful transfer of the data into the robotic system. Since research gaps in this field still exist and many vision systems rely on PC processing or intelligent cameras, our research aims to provide a price-performance solution for those who cannot regularly invest in the newest vision technology but still need to stay competitive.
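The core of a 1D calibration from two known points can be sketched very simply: two reference points whose real-world separation is known fix the mm-per-pixel scale, which then converts measured pixel coordinates into workpiece dimensions for the robot. Function names and numbers below are illustrative assumptions, not the article's calibration procedure.

```python
# Sketch of the two-point scale calibration idea. Two points in a stable
# point field with a known physical separation yield the mm-per-pixel
# factor; measurements in pixels are then converted to millimetres.

def mm_per_pixel(p1_px, p2_px, known_dist_mm):
    """Scale factor from two image points with a known real separation."""
    dx = p2_px[0] - p1_px[0]
    dy = p2_px[1] - p1_px[1]
    pixel_dist = (dx * dx + dy * dy) ** 0.5
    return known_dist_mm / pixel_dist

# Two reference points 100 mm apart, imaged 400 px apart:
scale = mm_per_pixel((100, 100), (500, 100), 100.0)   # mm per pixel
object_width_mm = 320 * scale                         # a 320 px wide object
```

A single scale factor like this is only valid along the calibrated direction and at the calibrated working distance, which is why the article pairs it with a fixed, stable point field.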


2020 ◽  
Vol 1 (1) ◽  
pp. 47-53
Author(s):  
S. HORIASHCHENKO ◽  
K. HORIASHCHENKO

The article presents a variant of a technical vision system that recognizes cylindrical objects. The vision system is based on artificial intelligence and detects circles in the image; the coordinates of the detected circle are necessary for the exact positioning of the robot manipulator. Gradient calculation and threshold separation are used to detect discontinuities in the intensity of the object image; these methods identify pixels lying on the border between the object and the background. The subsequent processing connects contour segments separated by small gaps and merges separate short segments, so the contour detection algorithms are accompanied by procedures for constructing object boundaries from the corresponding pixel sequences. The resulting image contains sufficient information for the artificial-intelligence analysis to detect the circle. The software was developed and experimentally tested in operation, namely in the capture of a cylindrical object. The coordinates of the circle, which are necessary for the exact positioning of the robot manipulator, were determined by the artificial intelligence in 41 milliseconds. The obtained coordinates were transmitted to the microprocessor to adjust the position of the manipulator, and the robot accurately grasped the cylindrical object.
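The gradient-and-threshold step described above can be sketched in a few lines: pixels whose intensity gradient magnitude exceeds a threshold are marked as candidate boundary pixels between object and background. This is an illustration of that single step only; the article's system adds contour linking and circle detection on top.

```python
# Pure-Python sketch of gradient-magnitude edge detection by thresholding.
# img is a 2D list of intensities; central differences approximate the
# gradient, and pixels above the threshold are kept as boundary candidates.

def edge_pixels(img, threshold):
    """Return the set of (row, col) positions with strong gradients."""
    edges = set()
    rows, cols = len(img), len(img[0])
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            gx = img[r][c + 1] - img[r][c - 1]   # horizontal difference
            gy = img[r + 1][c] - img[r - 1][c]   # vertical difference
            if (gx * gx + gy * gy) ** 0.5 >= threshold:
                edges.add((r, c))
    return edges

# A bright square on a dark background yields edges along its border
# while the square's interior stays unmarked:
img = [[255 if 2 <= r <= 5 and 2 <= c <= 5 else 0 for c in range(8)]
       for r in range(8)]
border = edge_pixels(img, 100)
```

Linking the resulting pixel set into closed contours (bridging small gaps, merging short segments) is then what turns these isolated border pixels into usable object boundaries.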


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Qian-Bing Zhu ◽  
Bo Li ◽  
Dan-Dan Yang ◽  
Chi Liu ◽  
Shun Feng ◽  
...  

The challenges of developing neuromorphic vision systems inspired by the human eye come not only from how to recreate the flexibility, sophistication, and adaptability of animal systems, but also how to do so with computational efficiency and elegance. Similar to biological systems, these neuromorphic circuits integrate the functions of image sensing, memory and processing into the device, and process continuous analog brightness signals in real time. High integration, flexibility and ultra-sensitivity are essential for practical artificial vision systems that attempt to emulate biological processing. Here, we present a flexible optoelectronic sensor array of 1024 pixels using a combination of carbon nanotubes and perovskite quantum dots as active materials for an efficient neuromorphic vision system. The device has an extraordinary sensitivity to light, with a responsivity of 5.1 × 107 A/W and a specific detectivity of 2 × 1016 Jones, and demonstrates neuromorphic reinforcement learning by training the sensor array with a weak light pulse of 1 μW/cm2.
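A back-of-the-envelope check puts the reported responsivity in context: multiplying it by the incident optical power on one pixel gives the photocurrent the weak training pulse would induce. The pixel area below is an assumed illustrative value, not taken from the paper.

```python
# Illustrative arithmetic linking the reported figures: photocurrent of
# one pixel under the 1 uW/cm^2 training light, given the responsivity.
# PIXEL_AREA_CM2 is an assumption for illustration only.

RESPONSIVITY_A_PER_W = 5.1e7      # reported responsivity
IRRADIANCE_W_PER_CM2 = 1e-6       # 1 uW/cm^2 training pulse
PIXEL_AREA_CM2 = 1e-4             # assumed 100 um x 100 um pixel

incident_power_w = IRRADIANCE_W_PER_CM2 * PIXEL_AREA_CM2
photocurrent_a = RESPONSIVITY_A_PER_W * incident_power_w
```

Even a sub-nanowatt of incident light thus yields a readily measurable current, which is what makes training with such a weak pulse feasible.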


Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4515
Author(s):  
Rinku Roy ◽  
Manjunatha Mahadevappa ◽  
Kianoush Nazarpour

Humans typically fixate on an object before moving their arm to grasp it. Patients with ALS can still select an object with their intact eye movements, but are unable to move their limbs due to the loss of voluntary muscle control. Although several studies have already succeeded in generating the correct grasp type from brain measurements, fine control over an object with a grasp-assistive device (orthosis/exoskeleton/robotic arm) remains an open problem. Object orientation and object width are two important parameters for controlling the wrist angle and the grasp aperture of the assistive device so as to replicate a human-like stable grasp. Vision systems have already evolved to measure the geometrical attributes of an object in order to control grasping with a prosthetic hand. However, most existing vision systems are integrated with electromyography and require some voluntary muscle movement to control the vision system; for that reason, they are not suitable for users of brain-controlled assistive devices. Here, we implemented a vision system that can be controlled through human gaze. We measured the vertical and horizontal electrooculogram signals and controlled the pan and tilt of a cap-mounted webcam to keep the object of interest in focus and at the centre of the picture. A simple 'signature' extraction procedure was also used to reduce algorithmic complexity and system storage requirements. The developed device was tested with ten healthy participants. We approximated the object's orientation and size and determined an appropriate wrist orientation angle and grasp aperture within 22 ms; the combined accuracy exceeded 75%. Integrating the proposed system with a brain-controlled grasp-assistive device and increasing the number of supported grasps can offer ALS patients more natural manoeuvring in grasping.
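The gaze-to-camera coupling described above can be sketched as a proportional mapping from the horizontal and vertical EOG amplitudes onto pan and tilt servo angles, clamped to the servo range. The gain and angle limits below are illustrative assumptions, not the paper's calibrated values.

```python
# Sketch of EOG-driven pan/tilt control for a cap-mounted webcam.
# Gains and servo limits are assumed illustrative values.

def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def eog_to_pan_tilt(h_uv, v_uv, gain_deg_per_uv=0.3,
                    pan_range=(-60.0, 60.0), tilt_range=(-30.0, 30.0)):
    """Map horizontal/vertical EOG amplitudes (microvolts) to servo angles."""
    pan = clamp(h_uv * gain_deg_per_uv, *pan_range)
    tilt = clamp(v_uv * gain_deg_per_uv, *tilt_range)
    return pan, tilt

# Centred gaze keeps the webcam centred; a strong leftward saccade
# saturates the pan servo at its mechanical limit.
centre = eog_to_pan_tilt(0.0, 0.0)
hard_left = eog_to_pan_tilt(-500.0, 0.0)
```

In practice the EOG channels would first be band-pass filtered and baseline-corrected; this sketch shows only the final amplitude-to-angle mapping.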


Forests ◽  
2018 ◽  
Vol 9 (1) ◽  
pp. 30 ◽  
Author(s):  
Andrzej Sioma ◽  
Jarosław Socha ◽  
Anna Klamerus-Iwan

1997 ◽  
Vol 119 (2) ◽  
pp. 151-160 ◽  
Author(s):  
Y. M. Zhang ◽  
R. Kovacevic

Seam tracking and weld penetration control are two fundamental issues in automated welding. Although the seam tracking technique has matured, the latter still remains a unique unsolved problem. It was found that the full penetration status during GTA welding can be determined with sufficient accuracy using the sag depression. To achieve a new full penetration sensing technique, a structured-light 3D vision system is developed to extract the sag geometry behind the pool. The laser stripe, which is the intersection of the structured-light and weldment, is thinned and then used to acquire the sag geometry. To reduce possible control delay, a small distance is selected between the pool rear and laser stripe. An adaptive dynamic search for rapid thinning of the stripe and the maximum principle of slope difference for unbiased recognition of sag border were proposed to develop an effective real-time image processing algorithm for sag geometry acquisition. Experiments have shown that the proposed sensor and image algorithm can provide reliable feedback information of sag geometry for the full penetration control system.
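The slope-difference principle for locating the sag border can be illustrated on a 1D stripe profile: at each sample, compare the mean slope of a short window on the left with that of a window on the right, and take the sample where the slope change peaks. The window size and the synthetic profile below are illustrative assumptions, not the paper's parameters.

```python
# Sketch of sag-border localization by maximum slope difference on a
# thinned laser-stripe height profile. Window size w is an assumption.

def sag_border_index(profile, w=3):
    """Return the index where the left/right slope difference is largest."""
    best_i, best_diff = None, float("-inf")
    for i in range(w, len(profile) - w):
        left = (profile[i] - profile[i - w]) / w    # slope entering i
        right = (profile[i + w] - profile[i]) / w   # slope leaving i
        diff = abs(right - left)
        if diff > best_diff:
            best_i, best_diff = i, diff
    return best_i

# Flat plate with a sag depression starting at index 7: the slope
# changes most sharply at the corner where plate meets depression.
profile = [0.0] * 8 + [-1.0, -2.0, -3.0, -3.0, -3.0, -3.0, -3.0, -3.0]
border = sag_border_index(profile)
```

Working on slope differences rather than raw heights is what makes the border estimate insensitive to a constant offset or tilt of the weldment, consistent with the "unbiased recognition" the paper aims for.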


2012 ◽  
Vol 11 (3) ◽  
pp. 9-17 ◽  
Author(s):  
Sébastien Kuntz ◽  
Ján Cíger

Many professionals, as well as hobbyists at home, would like to create their own immersive virtual reality systems cheaply and in little space. We offer two examples of such "home-made" systems that use the cheapest possible hardware while maintaining a good level of immersion: the first system is based on a projector (VRKit-Wall) and costs around $1000, while the second is based on a head-mounted display (VRKit-HMD) and costs between €600 and €1000. We also propose a standardization of these systems in order to enable simple application sharing. Finally, we describe a method to calibrate the stereoscopy of an NVIDIA 3D Vision system.

