Automated Robotic Assembly of 3D Mesostructure via Guided Mechanical Buckling

Author(s):
Ying Cai, Zhonghao Han, Trey Cranney, Hangbo Zhao, Satyandra K. Gupta

Author(s):
Varun Kumar, Lakshya Gaur, Arvind Rehalia

In this paper, the authors describe the development of a robotic vehicle that operates autonomously and is not controlled by the user, except for the selection of modes. The vehicle's modes are line following, object following, and object avoidance with alternate trajectory determination. The complete robotic assembly is mounted on a chassis comprising an Arduino Uno, servo motors, an HC-SR04 ultrasonic sensor, geared DC motors, an L293D motor driver, IR proximity sensors, and a voltage regulator, along with a castor wheel and two standard wheels.
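
The paper centers on this hardware stack rather than on published code; the sketch below is a minimal Arduino (C++) illustration of the object-avoidance mode only. The pin assignments, the 20 cm threshold, and the turn-in-place behavior are illustrative assumptions, not taken from the paper.

// Minimal sketch of the object-avoidance mode described above.
// Pin numbers and thresholds are illustrative assumptions.
const int TRIG_PIN = 9;      // HC-SR04 trigger
const int ECHO_PIN = 10;     // HC-SR04 echo
const int IN1 = 4, IN2 = 5;  // L293D inputs, left DC motor
const int IN3 = 6, IN4 = 7;  // L293D inputs, right DC motor

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  pinMode(IN1, OUTPUT); pinMode(IN2, OUTPUT);
  pinMode(IN3, OUTPUT); pinMode(IN4, OUTPUT);
}

long readDistanceCm() {
  // Standard HC-SR04 ranging: 10 us trigger pulse, then time the echo.
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long duration = pulseIn(ECHO_PIN, HIGH, 30000);  // 30 ms timeout
  // 0 on timeout is conservatively treated as an obstacle by the caller.
  return (long)(duration * 0.034 / 2);  // sound ~0.034 cm/us, round trip
}

void loop() {
  if (readDistanceCm() > 20) {         // path clear: drive forward
    digitalWrite(IN1, HIGH); digitalWrite(IN2, LOW);
    digitalWrite(IN3, HIGH); digitalWrite(IN4, LOW);
  } else {                             // obstacle: turn in place to
    digitalWrite(IN1, HIGH); digitalWrite(IN2, LOW);   // seek an
    digitalWrite(IN3, LOW);  digitalWrite(IN4, HIGH);  // alternate path
  }
  delay(50);
}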


2016, Vol 26 (17), pp. 2909-2918
Author(s):
Yuan Liu, Zheng Yan, Qing Lin, Xuelin Guo, Mengdi Han, ...

Author(s):
Juan Martinez-Moritz, Ismael Rodriguez, Korbinian Nottensteiner, Jean-Pascal Lutze, Peter Lehner, ...

2021, Vol 101 (3)
Author(s):
Korbinian Nottensteiner, Arne Sachtler, Alin Albu-Schäffer

Robotic assembly tasks are typically implemented in static settings in which parts are kept at fixed locations by part holders. Very few works deal with the problem of moving parts in industrial assembly applications. However, autonomous robots able to execute assembly tasks in dynamic environments could lead to more flexible facilities with reduced implementation effort for individual products. In this paper, we present a general approach towards autonomous robotic assembly that combines visual and intrinsic tactile sensing to continuously track parts within a single Bayesian framework. Based on this, it is possible to implement object-centric assembly skills that are guided by the estimated poses of the parts, including cases where occlusions block the vision system. In particular, we investigate the application of this approach to peg-in-hole assembly. A tilt-and-align strategy is implemented using a Cartesian impedance controller and combined with an adaptive path executor. Experimental results with multiple part combinations are provided and analyzed in detail.
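
The abstract describes fusing visual and intrinsic tactile sensing within a single Bayesian framework; as a rough illustration of that idea (not the authors' implementation), the following C++ sketch fuses a coarse vision measurement and a precise tactile measurement of one pose coordinate with a scalar Kalman filter. All noise magnitudes are illustrative assumptions.

#include <cstdio>

// Minimal scalar Kalman filter illustrating fusion of two sensor
// streams (vision and intrinsic tactile) in one Bayesian posterior.
// All variances below are illustrative assumptions.
struct PoseFilter {
  double x = 0.0;   // estimated pose coordinate (e.g., hole x, in mm)
  double p = 1e3;   // estimate variance (large = uninformed prior)

  void predict(double processVar) { p += processVar; }

  // Generic measurement update; vision and tactile differ only in noise.
  void update(double z, double measVar) {
    double k = p / (p + measVar);  // Kalman gain
    x += k * (z - x);
    p *= (1.0 - k);
  }
};

int main() {
  PoseFilter f;
  f.predict(0.01);
  f.update(12.4, 1.0);   // camera measurement: dense but coarse
  f.update(12.1, 0.05);  // tactile measurement: sparse but precise
  std::printf("fused estimate: %.2f mm (var %.3f)\n", f.x, f.p);
  return 0;
}

Because both sensors update the same posterior, the tactile channel naturally carries the estimate through phases where occlusion makes vision updates unavailable.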


Author(s):  
Yi Liu ◽  
Ming Cong ◽  
Hang Dong ◽  
Dong Liu

Purpose
This paper proposes a new method based on three-dimensional (3D) vision technologies and human-skill-integrated deep learning to solve assembly positioning tasks such as peg-in-hole.

Design/methodology/approach
A hybrid camera configuration provides global and local views. In the global view, eye-in-hand mode uses 3D vision to guide the peg into contact with the hole plate. Once the peg contacts the workpiece surface, eye-to-hand mode provides the local view to accomplish peg-hole positioning based on a trained CNN.

Findings
Assembly positioning experiments showed that the proposed method successfully distinguishes the target hole from other holes of the same size using the CNN. The robot plans its motion according to the depth images and the human-skill guideline, and the final positioning precision is sufficient for the robot to carry out force-controlled assembly.

Practical implications
The developed framework can have an important impact on the robotic assembly positioning process; combined with existing force-guidance assembly technology, it forms a complete autonomous assembly solution.

Originality/value
This paper proposes a new approach to robotic assembly positioning based on 3D vision technologies and human-skill-integrated deep learning. A dual-camera swapping mode provides visual feedback throughout the assembly motion-planning process. The proposed workpiece positioning method offers effective disturbance rejection, autonomous motion planning, and improved overall performance through depth-image feedback. The peg-hole positioning method with integrated human skill avoids target perceptual aliasing and enables successive motion decisions during robotic assembly manipulation.
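
As a rough sketch of the two-stage control flow described above (not the authors' code), the following C++ outline approaches under the global eye-in-hand view until contact, then refines alignment with eye-to-hand CNN feedback. The interface functions getGlobalDepthTarget, contactDetected, cnnOffset, and movePeg are hypothetical stubs, given dummy bodies only so the sketch compiles.

#include <array>
#include <cstdio>

using Vec3 = std::array<double, 3>;

// Hypothetical robot/sensor stubs standing in for interfaces the
// paper does not name; dummy bodies make the sketch self-contained.
static int step = 0;
Vec3 getGlobalDepthTarget() { return {0.0, 0.0, -1.0}; }  // descend toward plate
bool contactDetected()      { return step >= 5; }         // pretend contact at step 5
Vec3 cnnOffset()            { double e = 1.0 / (1 + step); return {e, e, 0.0}; }
void movePeg(const Vec3&)   { ++step; }

// Two-stage flow: eye-in-hand global approach until contact, then
// eye-to-hand CNN refinement until within tolerance.
int main() {
  while (!contactDetected())
    movePeg(getGlobalDepthTarget());        // stage 1: global 3D vision

  const double tol = 0.1;                   // mm, illustrative tolerance
  for (Vec3 d = cnnOffset(); d[0]*d[0] + d[1]*d[1] > tol*tol; d = cnnOffset())
    movePeg(d);                             // stage 2: CNN-guided correction

  std::printf("positioned after %d moves; hand over to force control\n", step);
  return 0;
}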


Author(s):
Surjit Sharma, Bibhuti Bhusan Biswal, Parameswar Dash, Bibhuti Bhusan Choudhury

2012, Vol 45 (7), pp. 1192-1198
Author(s):
Qin Liu, Hai-Chao Han
