Optical scanner assisted robotic assembly

2017 ◽  
Vol 37 (4) ◽  
pp. 434-441 ◽  
Author(s):  
Jakub Wojciechowski ◽  
Marcin Suszynski

Purpose This paper aims to propose a method for the automatic robotic assembly of two or more parts placed on a pallet without fixing or positioning instrumentation. Design/methodology/approach Assembly tasks performed by industrial robots are usually based on a fixed program, extensive tooling, objects fixed in a given place and a relatively limited sensory system. In this study, a different approach is presented: the industrial robot program is adjusted to the location of the parts for assembly in the workspace. This leads to a transition from a strictly defined assembly sequence realized by the industrial robot to one in which the order of the assembly operations can be determined by the mutual position of the parts to be assembled. Findings The method presented in this study combines many already known algorithms. The contribution of the authors is to test and select the appropriate combination of methods capable of supporting a robotic assembly process based on data from optical 3D scanners. The sequence of operations, from scanning to placing the parts in the assembly position by an industrial robot, is developed. A set of parameters for the selected methods is presented. The result is a universal procedure that determines the position of preset models in partial measurements performed at a fixed relative position of the sensor and the measurement object. Originality/value The developed procedure for determining the position of the parts is essential for developing a flexible robotic assembly system. Such a system will be able to perform the assembly task on the basis of appropriate search algorithms, taking into account the selected assembly sequence and the position and orientation of the parts, particularly a base unit freely placed on an assembly pallet. It is also the basis of a system for testing different algorithms to optimize flexible robotic assembly.
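The core step of such a procedure, determining the rigid pose of a known part model inside a partial 3D scan, can be sketched with a plain iterative-closest-point (ICP) loop. This is an illustrative NumPy-only reconstruction, not the authors' implementation; a production system would use a scanner SDK and a dedicated point-cloud library:

```python
# Illustrative sketch: estimating the pose of a known part model in a scan via ICP.
import numpy as np

def best_rigid_transform(src, dst):
    """Kabsch algorithm: rigid (R, t) minimizing ||(src @ R.T + t) - dst||."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def icp(model, scan, iters=30):
    """Align model points to scan points; returns the accumulated (R, t)."""
    R_tot, t_tot = np.eye(3), np.zeros(3)
    pts = model.copy()
    for _ in range(iters):
        # nearest-neighbour correspondences (brute force, for clarity only)
        d2 = ((pts[:, None, :] - scan[None, :, :]) ** 2).sum(-1)
        matched = scan[d2.argmin(axis=1)]
        R, t = best_rigid_transform(pts, matched)
        pts = pts @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t    # compose with previous steps
    return R_tot, t_tot
```

The recovered (R, t) is what the robot program would use to adjust grasp and placement poses to the actual part location on the pallet.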

Author(s):  
Yi Liu ◽  
Ming Cong ◽  
Hang Dong ◽  
Dong Liu

Purpose The purpose of this paper is to propose a new method based on three-dimensional (3D) vision technologies and human skill integrated deep learning to solve assembly positioning tasks such as peg-in-hole. Design/methodology/approach A hybrid camera configuration was used to provide global and local views. Eye-in-hand mode guided the peg into contact with the hole plate using 3D vision in the global view. When the peg was in contact with the workpiece surface, eye-to-hand mode provided the local view to accomplish peg-hole positioning based on a trained convolutional neural network (CNN). Findings The results of the assembly positioning experiments proved that the proposed method successfully distinguished the target hole from other holes of the same size using the CNN. The robot planned its motion according to the depth images and a human skill guideline. The final positioning precision was good enough for the robot to carry out force-controlled assembly. Practical implications The developed framework can have an important impact on the robotic assembly positioning process and can be combined with existing force-guidance assembly technology to build a complete autonomous assembly technology. Originality/value This paper proposed a new approach to robotic assembly positioning based on 3D vision technologies and human skill integrated deep learning. A dual-camera swapping mode was used to provide visual feedback for the entire assembly motion planning process. The proposed workpiece positioning method provided effective disturbance rejection, autonomous motion planning and increased overall performance with depth-image feedback. The proposed peg-hole positioning method with integrated human skill provided the capability of avoiding target perceptual aliasing and of making successive motion decisions for the robotic assembly manipulation.
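The local-view positioning idea can be illustrated without the paper's trained CNN: here a much simpler normalized sum-of-squared-differences template search over a synthetic depth image stands in for the learned hole detector. All geometry and depth values are invented for the example.

```python
# Hypothetical sketch: finding the target hole in a depth image by template
# matching. The paper uses a trained CNN; this toy SSD search only illustrates
# the role of depth feedback in the local view.
import numpy as np

def make_depth_map(size, holes, radius, depths):
    """Synthetic depth image: flat plate (0) with circular holes of given depths."""
    img = np.zeros((size, size))
    yy, xx = np.mgrid[0:size, 0:size]
    for (cy, cx), d in zip(holes, depths):
        img[(yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2] = d
    return img

def locate(img, template):
    """Return (row, col) of the window most similar to the template (min SSD)."""
    th, tw = template.shape
    best, pos = np.inf, (0, 0)
    for r in range(img.shape[0] - th + 1):
        for c in range(img.shape[1] - tw + 1):
            ssd = ((img[r:r + th, c:c + tw] - template) ** 2).sum()
            if ssd < best:
                best, pos = ssd, (r, c)
    return pos
```

A CNN replaces this exhaustive search precisely because real depth images are noisy and the distractor holes look locally identical, which is the perceptual-aliasing problem the paper addresses.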


Author(s):  
LianZheng Ge ◽  
Jian Chen ◽  
Ruifeng Li ◽  
Peidong Liang

Purpose The global performance of industrial robots partly depends on the properties of the drive system, consisting of motor inertia, gearbox inertia, etc. This paper aims to deal with the problem of optimizing the global dynamic performance of a robotic drive system selected from available components. Design/methodology/approach Considering the performance specifications of the drive system, an optimization model whose objective function is composed of the working efficiency and natural frequency of the robot is proposed. Meanwhile, constraints including the rated and peak torque of the motor, the lifetime of the gearbox and light weight were taken into account. Furthermore, the mapping relationship between the discrete optimal design variables and the component properties of the drive system is presented. The optimization problem with mixed integer variables was solved by a mixed-integer Laplace crossover power mutation algorithm. Findings The optimization results show that the optimization model and methods are applicable, and the performance is greatly improved without violating any constraints of the drive system. Besides, the model fits the overall performance well with respect to light-weight ratio, safety, cost reduction and others. Practical implications The proposed drive system optimization method has been used for a 4-DOF palletizing robot, which is manufactured in large volumes in a factory. Originality/value This paper focuses on how simulation-based optimization can be used to generate trade-offs between cost, performance and lifetime when designing a robotic drive system. An applicable optimization model and method are proposed to handle the dynamic performance optimization problem of a drive system for an industrial robot.
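The structure of such a discrete selection problem can be shown on a toy scale. The catalog values, objective weights and two-inertia frequency model below are all invented for illustration; the paper solves a much larger mixed-integer problem with an evolutionary algorithm, whereas plain enumeration suffices here to expose the efficiency-versus-frequency trade-off under a torque constraint.

```python
# Illustrative sketch (catalog numbers invented): choosing a motor/gearbox pair
# from discrete catalogs by exhaustive search, maximizing a weighted objective
# of drive efficiency and first natural frequency subject to a torque constraint.

motors = [  # (name, rated_torque_Nm, inertia_kgm2, efficiency)
    ("M1", 2.0, 1.0e-4, 0.90),
    ("M2", 4.0, 2.5e-4, 0.88),
    ("M3", 8.0, 6.0e-4, 0.85),
]
gearboxes = [  # (name, ratio, inertia_kgm2, efficiency, stiffness_Nm_per_rad)
    ("G1", 50, 0.5e-4, 0.80, 6000.0),
    ("G2", 100, 0.8e-4, 0.75, 9000.0),
]

def natural_frequency(stiffness, motor_inertia, gear_inertia, ratio):
    """First drive-train frequency of a simple two-inertia model (rad/s)."""
    j = motor_inertia + gear_inertia            # inertia on the motor side
    return (stiffness / (j * ratio ** 2)) ** 0.5

def select_drive(load_torque, w_eff=0.5, w_freq=0.5, freq_ref=300.0):
    """Enumerate all feasible pairs and keep the best-scoring one."""
    best, best_score = None, float("-inf")
    for mname, m_tau, m_j, m_eff in motors:
        for gname, ratio, g_j, g_eff, k in gearboxes:
            if m_tau * ratio * g_eff < load_torque:    # rated-torque constraint
                continue
            eff = m_eff * g_eff
            freq = natural_frequency(k, m_j, g_j, ratio)
            score = w_eff * eff + w_freq * min(freq / freq_ref, 1.0)
            if score > best_score:
                best, best_score = (mname, gname), score
    return best, best_score
```

On real catalogs with several joints, lifetime models and peak-torque checks, the search space explodes, which is why the paper resorts to a mixed-integer evolutionary method.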


Author(s):  
Guanghui Liu ◽  
Qiang Li ◽  
Lijin Fang ◽  
Bing Han ◽  
Hualiang Zhang

Purpose The purpose of this paper is to propose a new joint friction model, which can accurately model the real friction, especially in cases with sudden changes in the motion direction. Identification and sensor-less control algorithms are investigated to verify the validity of this model. Design/methodology/approach The proposed friction model is nonlinear and considers the angular displacement and angular velocity of the joint as a secondary compensation for identification. In the present study, the authors design a pipeline including a manually designed excitation trajectory, a weighted least squares algorithm for identifying the dynamic parameters and a hand-guiding controller for the arm's direct teaching. Findings Compared with the conventional joint friction model, the proposed method can effectively predict friction during the dynamic motion of the arm. The friction parameters are quantitatively obtained and the proposed friction model is compared with the conventional one. It is found that the average root mean square error of the predictions for the six joints decreases by more than 54% with the proposed method. The arm's force control with full torque compensation using the estimated dynamic parameters is qualitatively studied, and it is concluded that a light-weight industrial robot can be dragged smoothly by hand guiding. Practical implications In the present study, a systematic pipeline is proposed for identifying and controlling an industrial arm. The whole procedure has been verified on a commercial six-DOF industrial arm. Based on the conducted experiments, it is found that the proposed approach is more accurate than conventional methods. A hand-guiding demo also illustrates that the proposed approach can provide the industrial arm with full torque compensation, an essential functionality widely required in applications such as kinaesthetic teaching.
Originality/value First, a new friction model is proposed. Based on this model, dynamic parameter identification is carried out to obtain a set of model parameters of an industrial arm. Finally, smooth hand-guiding control is demonstrated based on the proposed dynamic model.
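The weighted least squares step can be illustrated on a deliberately simplified one-joint friction model (viscous plus Coulomb terms on synthetic data); the paper's model and regressor are richer, but the identification mechanics are the same.

```python
# Minimal sketch (synthetic data, simplified model, not the authors' model):
# identify viscous (Fv) and Coulomb (Fc) friction coefficients of one joint by
# weighted least squares on tau = Fv*qd + Fc*sign(qd) + noise.
import numpy as np

rng = np.random.default_rng(1)
qd = np.linspace(-2.0, 2.0, 200)            # joint velocity samples (rad/s)
fv_true, fc_true = 0.8, 1.5                 # "unknown" friction parameters
tau = fv_true * qd + fc_true * np.sign(qd) + rng.normal(0.0, 0.05, qd.size)

# Regressor matrix: each row is [qd_i, sign(qd_i)]
A = np.column_stack([qd, np.sign(qd)])

# Weights: down-weight samples near zero velocity, where sign() is unreliable
w = np.clip(np.abs(qd), 0.1, None)
fv_hat, fc_hat = np.linalg.lstsq(A * w[:, None], tau * w, rcond=None)[0]
```

In the full pipeline the regressor also contains the rigid-body dynamic terms excited by the designed trajectory, and the same solve returns inertial and friction parameters together.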


Author(s):  
Yang Chuangui ◽  
Liu Xingbao ◽  
Yue Xiaobin ◽  
Mi Liang ◽  
Wang Junwen ◽  
...  

Purpose This paper aims to solve the nonlinear problem in the uncertainty evaluation of the measurement of the positioning repeatability (RP) of industrial robots and provide guidance for restricting the uncertainty of measurement of RP (uRP). Design/methodology/approach Firstly, some uncertainty sources existing in the measurement procedure of RP are identified. Secondly, the probability distribution function (PDF) of every source is established on the basis of its measurements; some spatial combined normal distributions are adopted. Then, a method based on the Monte Carlo method (MCM) and the established measurement model is developed for the estimation of uRP. Thirdly, some tests are developed for the identification and validation of the selected PDFs of the uncertainty sources. Afterwards, the proposed method is applied for the evaluation and validation of the uRP. Finally, influence analyses of some key factors are proposed for the quantification of their relative contributions to uRP. Findings Results show that the proposed method can reasonably and objectively estimate the uRP of the selected industrial robot, and that changes of the industrial robot's position and the laser tracker's measurement are correlated. Additionally, the uRP of the selected industrial robot can be restricted by using the results of its key factors on uRP. Originality/value This paper proposes the spatial combined normal distribution to model the uncertainty of the repeatability of the laser tracker and the industrial robot. Meanwhile, the proposed method and influence analyses can be used to estimate and restrict the uRP and are thus useful in determining whether the RP of a tested industrial robot meets its requirements.
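The Monte Carlo idea can be sketched in a few lines: draw the uncertainty sources from their assumed PDFs, run the RP computation on each virtual measurement, and read uRP off the spread of the results. The distributions, sigmas and the ISO 9283-style RP formula below are illustrative stand-ins for the paper's spatial combined normal model.

```python
# Illustrative Monte Carlo sketch (distributions and sigmas invented):
# propagating robot scatter and laser-tracker noise into the positioning
# repeatability RP, computed ISO 9283 style as mean distance to the
# barycentre plus three standard deviations.
import numpy as np

def positioning_repeatability(points):
    """RP of a cloud of attained positions (n x 3 array)."""
    d = np.linalg.norm(points - points.mean(axis=0), axis=1)
    return d.mean() + 3.0 * d.std(ddof=1)

def mc_urp(n_trials=2000, n_poses=30, robot_sigma=0.02,
           tracker_sigma=0.005, seed=0):
    """Repeat the virtual RP measurement; the spread of the RP estimates is uRP."""
    rng = np.random.default_rng(seed)
    rps = np.empty(n_trials)
    for i in range(n_trials):
        robot = rng.normal(0.0, robot_sigma, (n_poses, 3))      # robot scatter
        tracker = rng.normal(0.0, tracker_sigma, (n_poses, 3))  # tracker noise
        rps[i] = positioning_repeatability(robot + tracker)
    return rps.mean(), rps.std(ddof=1)   # (mean RP, standard uncertainty uRP)
```

Rerunning `mc_urp` with one source's sigma perturbed at a time is the simplest form of the influence analysis the paper uses to rank the contributions to uRP.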


Author(s):  
Mustafa Can Bingol ◽  
Omur Aydogmus

Purpose Because of the increased use of robots in industry, it has become inevitable for humans and robots to work together. Human safety has therefore become the primary non-negotiable factor of joint human and robot operations. For this reason, the purpose of this study was to develop safe human-robot interaction software based on vision and touch. Design/methodology/approach The software consists of three modules. Firstly, the vision module has two tasks: to determine whether a human is present and to measure the distance between the robot and the human within the robot's working space, using convolutional neural networks (CNNs) and depth sensors. Secondly, the touch detection module perceives whether or not a human physically touches the robot within the same work environment, using the robot's axis torques, a wavelet packet decomposition algorithm and a CNN. Lastly, the control module adjusts the robot's operating speed according to the hazard levels reported by the vision and touch modules. Findings The developed software was tested with an industrial robot manipulator and successful results were obtained with minimal error. Practical implications The success of the developed algorithm was demonstrated in the current study, and the algorithm can be used in other industrial robots for safety. Originality/value In this study, a new and practical safety algorithm is proposed that safeguards the health of people working with industrial robots.
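The control module's speed adjustment reduces to a small decision function over the two modules' outputs. The thresholds and the linear ramp below are invented for illustration; the study's actual hazard-level mapping is not specified in the abstract.

```python
# Hypothetical sketch of the speed-adjustment logic: the vision module reports
# human presence and distance, the touch module reports contact, and the
# control module scales the robot's speed. Thresholds are illustrative only.

def speed_scale(human_present, distance_m, touch_detected,
                stop_dist=0.5, slow_dist=1.5):
    """Return a speed factor in [0, 1] from the two safety modules' outputs."""
    if touch_detected:
        return 0.0                    # physical contact: stop immediately
    if not human_present:
        return 1.0                    # empty workspace: full speed
    if distance_m <= stop_dist:
        return 0.0                    # human too close: stop
    if distance_m >= slow_dist:
        return 1.0
    # linear ramp between the stop and slow-down distances
    return (distance_m - stop_dist) / (slow_dist - stop_dist)
```

Keeping this mapping monotone in distance and overriding everything on touch is what makes the behaviour easy to validate against a safety requirement.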


Author(s):  
Gilbert Tang ◽  
Seemal Asif ◽  
Phil Webb

Purpose – The purpose of this paper is to describe the integration of a gesture control system for an industrial collaborative robot. Human-robot collaborative systems can be a viable manufacturing solution, but efficient control and communication are required for operations to be carried out effectively and safely. Design/methodology/approach – The integrated system consists of facial recognition, static pose recognition and dynamic hand motion tracking. Each sub-system was tested in isolation before integration and demonstration on a sample task. Findings – It is demonstrated that the combination of multiple gesture control methods can broaden the potential applications of industrial robots. Originality/value – The novelty of the system is the combination of a dual gesture control method which allows operators to command an industrial robot by posing hand gestures as well as to control the robot's motion by moving one of their hands in front of the sensor. A facial verification system is integrated to improve the robustness, reliability and security of the control system, and also allows the assignment of permission levels to different users.


Author(s):  
J.F. Aviles-Viñas ◽  
I. Lopez-Juarez ◽  
R. Rios-Cabrera

Purpose – The purpose of this paper was to propose a method, based on an artificial neural network and a real-time vision algorithm, for learning welding skills in industrial robotics. Design/methodology/approach – Using an optical camera to measure the bead geometry (width and height), the authors propose a real-time computer vision algorithm to extract training patterns and enable an industrial robot to acquire and learn the welding skill autonomously. To test the approach, an industrial KUKA robot and a gas metal arc welding machine were used in a manufacturing cell. Findings – Several data analyses are described, showing empirically that industrial robots can acquire the skill even if the specific welding parameters are unknown. Research limitations/implications – The approach considers only stringer beads. Weave beads and bead penetration are not considered. Practical implications – With the proposed approach, it is possible to learn specific welding parameters regardless of the material, type of robot or welding machine. This is because the feedback system produces automatic measurements that are labelled prior to the learning process. Originality/value – The main contribution is that the complex learning process is reduced to an input-process-output system, in which the process part is learnt automatically, without human supervision, by registering the patterns with an automatically calibrated vision system.
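The self-labelling idea, vision measurements of bead geometry becoming training targets for a model that maps desired geometry back to welding parameters, can be shown with a linear fit on synthetic data. The coefficients, parameter ranges and the linear model itself are invented stand-ins for the paper's neural network.

```python
# Conceptual sketch (synthetic data, linear model instead of the paper's ANN):
# learn the mapping from desired bead geometry (width, height) to welding
# parameters (voltage, travel speed) from automatically labelled patterns.
import numpy as np

rng = np.random.default_rng(2)
n = 100
params = rng.uniform([18.0, 3.0], [26.0, 9.0], (n, 2))   # (voltage V, speed mm/s)
C = np.array([[0.30, -0.40],    # width  = 0.30*V - 0.40*s + 4.0  (invented)
              [0.08, -0.05]])   # height = 0.08*V - 0.05*s + 1.5  (invented)
geometry = params @ C.T + np.array([4.0, 1.5]) + rng.normal(0.0, 0.02, (n, 2))

# Fit the inverse model geometry -> params by least squares (with a bias term)
X = np.column_stack([geometry, np.ones(n)])
W, *_ = np.linalg.lstsq(X, params, rcond=None)

def suggest_parameters(width, height):
    """Predict the (voltage, speed) expected to produce the requested bead."""
    return np.array([width, height, 1.0]) @ W
```

Because the camera produces the labels, the whole loop runs without human supervision, which is exactly the input-process-output reduction claimed in the abstract.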


Author(s):  
Shuyang Chen ◽  
Yuan-Chi Peng ◽  
John Wason ◽  
Jinda Cui ◽  
Glenn Saunders ◽  
...  

This paper presents the design and initial results of a project involving the robotic assembly of a large segmented structure. The project aims to develop an operator-guided semi-autonomous assembly process using industrial robots integrated with multiple sensors. The goal is to demonstrate the potential of robotic technology to reduce cycle time, enhance assembly quality and improve worker ergonomics compared with current manual or fixture-based approaches. The focus is primarily on the software framework, which is composed of a collection of commercial and customized components for robot positioning, motion planning, low-latency teleoperation, visualization and simulation. A foundational step of the implementation is safe teleoperation, which allows the user to operate the robot without risk of collisions or joint-limit violations. The concept has been demonstrated in RobotStudio, the simulation environment for ABB robots, and on a physical ABB robot. While some of the software is specific to the ABB industrial robot used in the project, the framework is readily adapted to other industrial robots that allow externally commanded motion.


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Megha G. Krishnan ◽  
Abhilash T. Vijayan ◽  
Ashok S.

Purpose Real-time implementation of sophisticated algorithms on robotic systems demands a seamless interface between hardware and software components. Individual robot manufacturers have dedicated controllers and languages, so robot operation would otherwise require either knowledge of additional software or expensive add-on installations for effective communication between the robot controller and the computation software. This paper aims to present a novel method of interfacing commercial robot controllers with a widely used simulation platform, MATLAB, in real time, with a demonstration of a visual predictive controller. Design/methodology/approach A remote personal computer (PC) running MATLAB is connected with the IRC5 controller of an ABB robotic arm through the File Transfer Protocol (FTP). The FTP server on the IRC5 responds to requests from an FTP client (MATLAB) on the remote computer. MATLAB provides the basic platform for programming and control algorithm development. The controller output is transferred to the robot controller as files through the Ethernet port; thereby, the proposed scheme ensures connection and control of the robot using control algorithms developed by researchers, without the additional cost of buying add-on packages or mastering vendor-specific programming languages. Findings New control strategies and contrivances can be developed with numerous conditions and constraints in simulation platforms. When the results are to be implemented in real-time systems, the proposed method helps to establish simple, fast and cost-effective communication with commercial robot controllers for validating the real-time performance of the developed control algorithm. Practical implications The proposed method is used for the real-time implementation of visual servo control with a predictive controller for an accurate pick-and-place application with different initial conditions.
The same strategy has proven effective in supervisory control using two cameras and in artificial neural network-based visual control of robotic manipulators. Originality/value This paper elaborates a real-time example using visual servoing for researchers working with industrial robots, enabling them to understand and explore the possibilities of robot communication.
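The file-based channel itself is just an FTP upload per control cycle. In the paper the client is MATLAB; the sketch below uses Python's standard `ftplib` to show the same pattern. The host, credentials, file name and the one-line command format are invented placeholders, not the IRC5's actual file interface.

```python
# Hypothetical sketch of the file-based control channel (MATLAB plays the FTP
# client role in the paper; Python's ftplib stands in here).
import io
from ftplib import FTP

def format_setpoint(x, y, z):
    """Serialize a Cartesian setpoint as a one-line text command file."""
    return f"MOVE {x:.3f} {y:.3f} {z:.3f}\n"

def send_setpoint(host, user, password, x, y, z, remote_name="cmd.txt"):
    """Upload the controller output as a file to the controller's FTP server."""
    payload = format_setpoint(x, y, z).encode("ascii")
    with FTP(host) as ftp:            # e.g. the IRC5 controller's IP address
        ftp.login(user, password)
        ftp.storbinary(f"STOR {remote_name}", io.BytesIO(payload))
```

On the controller side, a RAPID task would poll and parse the uploaded file; the round-trip latency of this polling loop is what bounds the achievable control rate.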


2020 ◽  
Vol 10 (13) ◽  
pp. 4619 ◽  
Author(s):  
Matteo Bottin ◽  
Silvio Cocuzza ◽  
Nicola Comand ◽  
Alberto Doria

The stiffness properties of industrial robots are very important for many industrial applications, such as automatic robotic assembly and material removal processes (e.g., machining and deburring). On the one hand, in robotic assembly, joint compliance can be useful for compensating dimensional errors in the parts to be assembled; on the other hand, in material removal processes, a high Cartesian stiffness of the end-effector is required. Moreover, low-frequency chatter vibrations can be induced when low-stiffness robots are used, impairing the quality of the machined surface. In this paper, a compliant-joint dynamic model of an industrial robot is developed, in which the joint stiffness is experimentally identified using a modal approach. First, a novel method for selecting the test configurations is developed, such that in each configuration the mode of vibration that chiefly involves only one joint is excited. Then, experimental tests are carried out in the selected configurations to identify the joint stiffness. Finally, the developed dynamic model of the robot is used to predict the variation of the natural frequencies across the workspace.
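Once the joint stiffnesses are identified, the natural frequencies in any configuration follow from the generalized eigenvalue problem K v = ω² M v of the compliant-joint model. The two-joint stiffness and inertia numbers below are invented for illustration; only the computation pattern reflects the modal approach.

```python
# Illustrative sketch (numbers invented): natural frequencies of a
# compliant-joint model from identified joint stiffness K and the
# configuration-dependent joint-space inertia matrix M.
import numpy as np

def natural_frequencies(K, M):
    """Natural frequencies (Hz) from stiffness and inertia matrices."""
    # Eigenvalues of M^{-1} K are the squared angular frequencies
    w2 = np.linalg.eigvals(np.linalg.solve(M, K))
    return np.sort(np.sqrt(np.real(w2))) / (2.0 * np.pi)

# Two-joint example: diagonal identified joint stiffness,
# inertia matrix evaluated in one robot configuration
K = np.diag([5.0e5, 2.0e5])          # identified joint stiffness (Nm/rad)
M = np.array([[12.0, 3.0],           # joint-space inertia (kg m^2)
              [3.0, 4.0]])
freqs = natural_frequencies(K, M)
```

Sweeping M over the workspace while K stays fixed is what lets the identified model predict how the chatter-prone frequencies shift with robot posture.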

