Towards Autonomous Robotic Assembly: Using Combined Visual and Tactile Sensing for Adaptive Task Execution

2021
Vol 101 (3)
Author(s):
Korbinian Nottensteiner
Arne Sachtler
Alin Albu-Schäffer

Abstract: Robotic assembly tasks are typically implemented in static settings in which parts are kept at fixed locations by making use of part holders. Very few works deal with the problem of moving parts in industrial assembly applications. However, having autonomous robots that are able to execute assembly tasks in dynamic environments could lead to more flexible facilities with reduced implementation efforts for individual products. In this paper, we present a general approach towards autonomous robotic assembly that combines visual and intrinsic tactile sensing to continuously track parts within a single Bayesian framework. Based on this, it is possible to implement object-centric assembly skills that are guided by the estimated poses of the parts, including cases where occlusions block the vision system. In particular, we investigate the application of this approach for peg-in-hole assembly. A tilt-and-align strategy is implemented using a Cartesian impedance controller, and combined with an adaptive path executor. Experimental results with multiple part combinations are provided and analyzed in detail.
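For readers unfamiliar with fusing visual and intrinsic tactile measurements in a single Bayesian framework, the following minimal sketch illustrates the idea with a plain Gaussian (Kalman-style) update on a planar part pose. The state, measurement models, and noise values are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation): fusing visual and tactile
# pose measurements of a part in one Gaussian/Bayesian filter. Assumes a static
# part, a 3-DoF planar pose (x, y, theta), and additive-noise measurement
# models; all matrices below are illustrative.
import numpy as np

class PartPoseFilter:
    def __init__(self, pose0, cov0):
        self.x = np.asarray(pose0, dtype=float)   # mean pose estimate
        self.P = np.asarray(cov0, dtype=float)    # pose covariance

    def update(self, z, R):
        """Standard linear Bayesian (Kalman) update with measurement z ~ N(x, R)."""
        H = np.eye(3)                              # direct pose observation
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - H @ self.x)
        self.P = (np.eye(3) - K @ H) @ self.P

    def visual_update(self, pose_from_camera):
        # Vision: accurate in-plane position, noisier orientation (illustrative values).
        self.update(pose_from_camera, R=np.diag([1e-4, 1e-4, 1e-2]))

    def tactile_update(self, pose_from_contact):
        # Intrinsic tactile sensing: useful when the camera view is occluded,
        # e.g. a contact constraint converted into an equivalent pose observation.
        self.update(pose_from_contact, R=np.diag([1e-3, 1e-3, 5e-3]))

f = PartPoseFilter(pose0=[0.40, 0.10, 0.0], cov0=np.eye(3) * 1e-2)
f.visual_update(np.array([0.402, 0.098, 0.05]))
f.tactile_update(np.array([0.401, 0.099, 0.02]))   # tracking continues under occlusion
print(f.x, np.diag(f.P))
```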

Author(s):  
Brian J. Slaboch
Philip Voglewede

This paper introduces the Underactuated Part Alignment System (UPAS) as a cost-effective and flexible approach to aligning parts in the vertical plane prior to an industrial robotic assembly task. The advantage of the UPAS is that it utilizes the degrees of freedom (DOFs) of a SCARA (Selective Compliant Assembly Robot Arm) type robot in conjunction with an external fixed post to achieve the desired part alignment. Three path planning techniques will be presented that can be used with the UPAS to achieve the proper part rotation.
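As a rough illustration of rotating a grasped part about an external fixed post using only in-plane motion, the sketch below generates end-effector waypoints for a pure rotation about a pivot point. The pivot location, step count, and interface are hypothetical and do not reproduce any of the three UPAS path planning techniques.

```python
# Illustrative sketch only: generating SCARA end-effector waypoints that rotate
# a grasped part about a fixed external post in the vertical plane. The pivot
# location and step count are assumptions, not the UPAS method itself.
import math

def rotate_about_pivot(ee_xy, pivot_xy, total_angle, steps=10):
    """Yield (x, y, extra_rotation) waypoints rotating the end effector about pivot_xy."""
    px, py = pivot_xy
    x0, y0 = ee_xy
    for i in range(1, steps + 1):
        a = total_angle * i / steps
        c, s = math.cos(a), math.sin(a)
        x = px + c * (x0 - px) - s * (y0 - py)
        y = py + s * (x0 - px) + c * (y0 - py)
        yield x, y, a   # the wrist must also rotate by 'a' to keep the grasp fixed

for wp in rotate_about_pivot(ee_xy=(0.30, 0.20), pivot_xy=(0.25, 0.15),
                             total_angle=math.radians(90)):
    print(wp)
```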


1995
Vol 117 (3)
pp. 384-393
Author(s):
B. J. McCarragher
H. Asada

A new approach to process modeling, task synthesis, and motion control for robotic assembly is presented. Assembly is modeled as a discrete event dynamic system using Petri nets, incorporating both discrete and continuous aspects of the process. The discrete event modeling facilitates a new, task-level approach to the control of robotic assembly. To accomplish a desired trajectory, a discrete event controller is developed. The controller issues velocity commands that direct the system toward the next desired contact state, while maintaining currently desired contacts and avoiding unwanted transitions. Experimental results are given for a dual peg-in-the-hole example. The experimental results not only demonstrate highly successful insertion along the optimal trajectory, but also demonstrate the ability to detect, recognize, and recover from errors and unwanted situations.
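The following toy sketch illustrates the general idea of a Petri-net discrete event model driving a task-level controller: places stand for contact states, transitions for contact-state changes, and each desired transition is associated with a velocity command. The states, transitions, and commands are invented for illustration and do not reproduce the authors' model.

```python
# Minimal Petri-net sketch (illustrative, not the authors' model): places are
# contact states, transitions are contact-state changes, and a task-level
# controller issues the command associated with the next desired transition.
class PetriNet:
    def __init__(self, places, transitions):
        self.marking = dict(places)                 # place -> token count
        self.transitions = transitions              # name -> (input places, output places)

    def enabled(self, t):
        ins, _ = self.transitions[t]
        return all(self.marking[p] >= 1 for p in ins)

    def fire(self, t):
        ins, outs = self.transitions[t]
        if not self.enabled(t):
            raise RuntimeError(f"transition {t} not enabled")
        for p in ins:
            self.marking[p] -= 1
        for p in outs:
            self.marking[p] += 1

# Hypothetical dual peg-in-hole contact states and transitions.
net = PetriNet(
    places={"no_contact": 1, "one_peg_contact": 0, "both_pegs_seated": 0},
    transitions={
        "make_first_contact": (["no_contact"], ["one_peg_contact"]),
        "seat_both_pegs": (["one_peg_contact"], ["both_pegs_seated"]),
    },
)

velocity_commands = {            # command that drives the system toward each transition
    "make_first_contact": (0.0, 0.0, -0.01),
    "seat_both_pegs": (0.005, 0.0, -0.005),
}

desired_sequence = ["make_first_contact", "seat_both_pegs"]
for t in desired_sequence:
    print("command:", velocity_commands[t])        # issue velocity toward next contact state
    net.fire(t)                                    # observed contact-state change
print(net.marking)
```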


Machines
2021
Vol 9 (3)
pp. 59
Author(s):
Martin Dahl
Kristofer Bengtsson
Petter Falkman

Future automation systems are likely to include devices with a varying degree of autonomy, as well as advanced algorithms for perception and control. Human operators will be expected to work side by side with both collaborative robots performing assembly tasks and roaming robots that handle material transport. To maintain the flexibility provided by human operators when introducing such robots, these autonomous robots need to be intelligently coordinated, i.e., they need to be supported by an intelligent automation system. One challenge in developing intelligent automation systems is handling the large number of possible error situations that can arise due to the volatile and sometimes unpredictable nature of the environment. Sequence Planner is a control framework that supports the development of intelligent automation systems. This paper describes Sequence Planner and tests its ability to handle errors that arise during execution of an intelligent automation system. An automation system, developed using Sequence Planner, is subjected to a number of scenarios where errors occur. The error scenarios and experimental results are presented along with a discussion of the experience gained in trying to achieve robust intelligent automation.
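To make the coordination and error-handling idea concrete, the sketch below shows a toy operation model with preconditions and effects, plus a control loop that plans an operation sequence and can re-plan from the current state after a failure. It illustrates the general concept only and is not the Sequence Planner API; all names and the mini-scenario are assumptions.

```python
# Illustrative sketch of the re-planning idea behind control frameworks such as
# Sequence Planner (not its actual API): operations declare preconditions and
# effects over a shared state, and the control loop re-plans whenever the state
# differs from what was expected (e.g. after an error).
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional

State = Dict[str, bool]

@dataclass
class Operation:
    name: str
    pre: Callable[[State], bool]        # precondition (guard) on the shared state
    post: Callable[[State], None]       # nominal effect if the operation succeeds

def plan(ops: List[Operation], state: State, goal: Callable[[State], bool],
         depth: int = 8) -> Optional[List[Operation]]:
    """Naive depth-bounded forward search for an operation sequence reaching the goal."""
    if goal(state):
        return []
    if depth == 0:
        return None
    for op in ops:
        if op.pre(state):
            nxt = dict(state)
            op.post(nxt)
            rest = plan(ops, nxt, goal, depth - 1)
            if rest is not None:
                return [op] + rest
    return None

# Hypothetical mini-scenario: a transport robot delivers a part, a robot mounts it.
ops = [
    Operation("deliver_part", lambda s: not s["part_at_station"],
              lambda s: s.update(part_at_station=True)),
    Operation("mount_part", lambda s: s["part_at_station"] and not s["part_mounted"],
              lambda s: s.update(part_mounted=True)),
]
state = {"part_at_station": False, "part_mounted": False}
goal = lambda s: s["part_mounted"]

while not goal(state):
    seq = plan(ops, state, goal)
    for op in seq:
        # A real system would execute op here and observe success or failure;
        # on failure the outer loop simply re-plans from the updated state.
        op.post(state)
print(state)
```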


Author(s):  
Kensuke Harada
Weiwei Wan
Tokuo Tsuji
Kohei Kikuchi
Kazuyuki Nagata
...  

Purpose: This paper aims to automate the picking task needed in robotic assembly. Parts supplied to an assembly process are usually randomly stacked in a box. If randomized bin-picking is introduced to a production process, no part-feeding machines or human workers are needed to arrange the objects before a robot picks them. The authors introduce a learning-based method for randomized bin-picking.

Design/methodology/approach: The authors combine the learning-based approach to randomized bin-picking (Harada et al., 2014b) with iterative visual recognition (Harada et al., 2016a) and show additional experimental results. For learning, a random forest is used that explicitly considers the contact between a finger and a neighboring object. The iterative visual recognition method repeatedly captures point clouds with a 3D depth sensor attached at the wrist to obtain a more complete point cloud of the piled objects.

Findings: Compared with the authors' previous research (Harada et al., 2014b; Harada et al., 2016a), the new findings are as follows: by using a random forest, the amount of required training data becomes extremely small. By adding a penalty for occluded areas, the learning-based method predicts success once a point cloud with less occlusion has been obtained. The calculation time of the iterative visual recognition is analyzed, and the cases in which a finger contacts neighboring objects are made clear.

Originality/value: The originality lies in combining the learning-based approach with iterative visual recognition and in supplying additional experimental results. After obtaining a more complete point cloud of the piled objects, prediction becomes effective.
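A minimal sketch of the learning step follows, assuming hypothetical grasp-candidate features (finger clearance to neighboring objects, visible fraction of the target, and an occlusion penalty) and synthetic labels; it shows how a random forest could predict picking success, not the authors' actual feature set or data.

```python
# Hedged sketch: predicting randomized bin-picking success with a random forest,
# roughly in the spirit of the abstract. The feature names, the occlusion
# penalty, and the synthetic data are assumptions for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 200
# Hypothetical per-grasp-candidate features: finger clearance to neighboring
# objects, fraction of the target visible in the merged point cloud, and a
# penalty that grows with the occluded area around the grasp.
clearance = rng.uniform(0.0, 0.02, n)
visible_fraction = rng.uniform(0.3, 1.0, n)
occlusion_penalty = 1.0 - visible_fraction
X = np.column_stack([clearance, visible_fraction, occlusion_penalty])
# Synthetic labels: grasps with more clearance and less occlusion succeed more often.
y = (clearance > 0.005) & (visible_fraction > 0.6)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
candidate = np.array([[0.008, 0.85, 0.15]])
print("predicted success probability:", clf.predict_proba(candidate)[0, 1])
```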


2019
Vol 2019
pp. 1-11
Author(s):
Pyung-Han Kim
Eun-Jun Yoon
Kwan-Woo Ryu
Ki-Hyun Jung

Data hiding is a technique that hides the existence of secret data from malicious attackers. In this paper, we propose a new data-hiding scheme using multidirectional pixel-value differencing, which can embed secret data in two directions or three directions on colour images. The cover colour image is divided into nonoverlapping blocks, and the pixels of each block are decomposed into R, G, and B channels. The pixels of each block are regrouped, and then the minimum pixel value within each block is selected. The secret data can be embedded into two directions or three directions based on the minimum pixel value by using the difference value for the block. The pixel pairs with the embedded secret data are put separately into two stego images for secret data extraction on the receiver side. In the extraction process, the secret data can be extracted using the difference value of the two stego images. Experimental results show that the proposed scheme has the highest embedding capacity when the secret data are embedded into three directions. Experimental results also show that the proposed scheme has a high embedding capacity while maintaining a level of distortion that cannot be perceived by the human vision system when embedding in two directions.
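For orientation, the sketch below shows classic single-pair pixel-value differencing (PVD) embedding and extraction; the proposed multidirectional scheme with two stego images builds on this idea but is more involved. The range table is a common choice used here only as an assumption, and boundary overflow handling is omitted.

```python
# Illustrative sketch of classic pixel-value differencing (PVD) embedding on one
# pixel pair; not the paper's multidirectional, two-stego-image scheme.
RANGES = [(0, 7), (8, 15), (16, 31), (32, 63), (64, 127), (128, 255)]

def embed_pair(p1, p2, bits):
    """Embed a bit string into the difference of a pixel pair, returning new pixels."""
    d = abs(p2 - p1)
    lo, hi = next(r for r in RANGES if r[0] <= d <= r[1])
    capacity = (hi - lo + 1).bit_length() - 1          # bits this pair can carry
    payload = int(bits[:capacity].ljust(capacity, "0"), 2)
    d_new = lo + payload
    m = d_new - d
    # Spread the change over both pixels, preserving which pixel is larger.
    # (Pixel overflow below 0 or above 255 is not handled in this sketch.)
    if p1 >= p2:
        q1, q2 = p1 + (m + 1) // 2, p2 - m // 2
    else:
        q1, q2 = p1 - m // 2, p2 + (m + 1) // 2
    return q1, q2, capacity

def extract_pair(q1, q2, capacity):
    d = abs(q2 - q1)
    lo, _ = next(r for r in RANGES if r[0] <= d <= r[1])
    return format(d - lo, f"0{capacity}b")

q1, q2, k = embed_pair(120, 128, "1011")
print(q1, q2, extract_pair(q1, q2, k))                 # recovers the embedded bits
```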


2017
Vol 90
pp. 4-14
Author(s):
Francesco Amigoni
Matteo Luperto
Viola Schiaffonati
