Near-Minimum Time Visual Servoing of an Underactuated Robotic Manipulator

2013 ◽  
Vol 373-375 ◽  
pp. 217-220
Author(s):  
Yacine Benbelkacem ◽  
Rosmiwati Mohd-Mokhtar

The rate of convergence to the desired grasping pose under visual guidance can be critical in some applications, such as a pick-and-place routine in assembly where the time between two stops of the conveyor is very short. The visually guided robot must move fast if vision is to bring the sought benefits to industrial setups. In this paper, the three most widely used visual servoing techniques, namely image-based, position-based, and hybrid visual servoing, are evaluated in terms of their speed of convergence to the grasping pose in a pick-and-place task on a momentarily motionless target. An alternative open-loop near-minimum-time approach is also presented and tested on a 5-DOF under-actuated robotic arm. Its performance is compared against the aforementioned techniques, and the results show a significant reduction in convergence time.
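
The image-based scheme compared above is conventionally implemented as the classic IBVS law v = -λ L̂⁺ e, which drives image-feature errors to zero. The paper does not include its implementation; the following is a minimal NumPy sketch of that textbook law for point features, where the function names and gain value are our own illustrative choices.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix for one normalized point feature
    at (x, y) with estimated depth Z."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,     -(1 + x**2),  y],
        [0.0,      -1.0 / Z, y / Z, 1 + y**2,  -x * y,      -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Classic IBVS camera velocity screw v = -gain * pinv(L) @ e.

    features, desired: (N, 2) arrays of current/desired normalized points;
    depths: length-N estimated depths. Returns [vx, vy, vz, wx, wy, wz].
    """
    e = (features - desired).reshape(-1)            # stacked feature error
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    return -gain * np.linalg.pinv(L) @ e
```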

2015 ◽  
Vol 2015 (0) ◽  
pp. _2P1-W07_1-_2P1-W07_2
Author(s):  
Miyako TACHIBANA ◽  
Soichiro YAMATE ◽  
Akihiro KAWAMURA ◽  
Sadao KAWAMURA

2021 ◽  
Vol 15 ◽  
Author(s):  
Fan Zhu ◽  
Liangliang Wang ◽  
Yilin Wen ◽  
Lei Yang ◽  
Jia Pan ◽  
...  

The success of a robotic pick-and-place task depends on the success of the entire procedure: from the grasp planning phase, to the grasp establishment phase, then the lifting and moving phase, and finally the releasing and placing phase. Being able to detect and recover from grasping failures throughout this process is therefore a critical requirement for both the robotic manipulator and the gripper, especially since the gripper itself almost inevitably occludes the object during the task. With the rapid rise of soft grippers, which rely heavily on their under-actuated bodies and compliant, open-loop control, less information is available from the gripper for effective overall system control. To improve the effectiveness of robotic grasping, this work proposes a hybrid policy that combines visual cues with the proprioception of our gripper for failure detection and recovery, using a self-developed proprioceptive soft robotic gripper capable of contact sensing. Addressing failure handling in robotic pick-and-place tasks, we propose (1) more accurate pose estimation of a known object by considering an edge-based cost in addition to the image-based cost; (2) robust object tracking that works even when the object is partially occluded, achieving mean overlap precision of up to 80%; (3) detection of contact and contact loss between the object and the gripper by analyzing the gripper's internal pressure signals; and (4) robust failure handling that combines visual cues under partial occlusion with proprioceptive cues from the soft gripper to detect and recover from different accidental grasping failures. The proposed system was experimentally validated with the proprioceptive soft gripper mounted on a collaborative manipulator and observed by a consumer-grade RGB camera, showing that combining visual cues and proprioception from the soft gripper effectively improves detection of and recovery from the major grasping failures at different stages, enabling compliant and robust grasping.
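
The paper does not publish its pressure-analysis details, but a simple way to realize item (3), detecting contact and contact loss from a soft gripper's internal pressure, is to threshold the smoothed deviation from a free-space baseline, with hysteresis to avoid chattering. A hedged sketch, in which the class name, threshold, and smoothing window are all illustrative assumptions rather than the authors' values:

```python
import numpy as np

class PressureContactDetector:
    """Flags contact and contact loss from a soft gripper's internal
    pressure signal by comparing it against a free-space baseline."""

    def __init__(self, baseline, threshold=2.0, window=5):
        self.baseline = baseline      # calibrated free-space pressure (kPa)
        self.threshold = threshold    # deviation (kPa) that signals contact
        self.window = window          # samples used for smoothing
        self.history = []
        self.in_contact = False

    def update(self, pressure):
        """Feed one pressure sample; return 'contact', 'loss', or None."""
        self.history.append(pressure)
        if len(self.history) > self.window:
            self.history.pop(0)
        deviation = abs(np.mean(self.history) - self.baseline)
        if not self.in_contact and deviation > self.threshold:
            self.in_contact = True
            return "contact"
        if self.in_contact and deviation < self.threshold / 2:  # hysteresis
            self.in_contact = False
            return "loss"
        return None
```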


2019 ◽  
Vol 13 (3) ◽  
pp. 211-216
Author(s):  
Paweł Kołosowski ◽  
Adam Wolniakowski ◽  
Mariusz Bogdan

Abstract With the ever-increasing number of robotic system applications in industry, robust and fast visual recognition and pose estimation of workpieces are of utmost importance. One of the ubiquitous tasks in industrial settings is pick-and-place, where object recognition is often required. In this paper, we present a new implementation of a workpiece sorting system that uses template matching to recognize and estimate the position of planar workpieces with sparse visual features. The proposed framework distinguishes between the object types presented by the user and controls a serial manipulator equipped with a parallel-finger gripper to grasp and sort them automatically. The system is furthermore enhanced with a feature that reduces visual processing time by automatically adjusting the template scales. We test the proposed system in a real-world setup with a UR5 manipulator and provide experimental results documenting the performance of our approach.
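
The paper does not disclose its matching code; a common way to realize template matching with adjustable template scales is to sweep a scale range and keep the best normalized cross-correlation score. A minimal OpenCV sketch under that assumption (the function name and scale range are ours, not the authors'):

```python
import cv2
import numpy as np

def match_over_scales(image, template, scales=np.linspace(0.5, 1.5, 11)):
    """Run normalized cross-correlation template matching over several
    template scales; return the best (score, top-left location, scale)."""
    best = (-1.0, None, None)
    for s in scales:
        t = cv2.resize(template, None, fx=s, fy=s)
        if t.shape[0] > image.shape[0] or t.shape[1] > image.shape[1]:
            continue                              # template exceeds image
        result = cv2.matchTemplate(image, t, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        if max_val > best[0]:
            best = (max_val, max_loc, s)
    return best
```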


2014 ◽  
Vol 931-932 ◽  
pp. 1417-1421
Author(s):  
Sujin Wanchat ◽  
Supattra Plermkamon ◽  
Danaipong Chetchotsak

Since pick-and-place tasks play an important role in automated processes, machine vision is normally required to locate objects for grasping. This paper presents a practicable method for visually guided grasping of a group of small screws (1.1 mm in diameter) using an inexpensive webcam with a resolution of 640 × 480. A basic feedforward neural network is used to fit a mapping from the camera's pixel coordinates to the robot's physical coordinates, with linear least squares applied in parallel for comparison. With the feedforward neural network, all fifty screws were successfully picked from the tray after their physical coordinates were loaded into the robot, whereas the linear least-squares method failed on two of the samples.
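
A minimal sketch of the two calibration approaches compared above, using scikit-learn's MLPRegressor as a stand-in for the paper's feedforward network and an affine least-squares fit for the linear baseline; the variable names, network size, and iteration budget are our assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# pixel_xy: (N, 2) detected screw centers in the image;
# robot_xy: (N, 2) measured robot coordinates of the same points
# (calibration pairs gathered beforehand; the data layout is ours).

def fit_affine_lstsq(pixel_xy, robot_xy):
    """Linear least-squares affine map from pixel to robot coordinates."""
    A = np.hstack([pixel_xy, np.ones((len(pixel_xy), 1))])   # [u, v, 1]
    M, *_ = np.linalg.lstsq(A, robot_xy, rcond=None)
    return lambda p: np.hstack([p, np.ones((len(p), 1))]) @ M

def fit_mlp(pixel_xy, robot_xy):
    """Small feedforward network for the same mapping; unlike the affine
    model, it can absorb lens distortion and other nonlinearities."""
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000)
    net.fit(pixel_xy, robot_xy)
    return net.predict
```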


2020 ◽  
Vol 4 (4) ◽  
pp. 78
Author(s):  
Andoni Rivera Pinto ◽  
Johan Kildal ◽  
Elena Lazkano

In industrial production, a worker who wants to program a robot using the hand-guidance technique needs the robot to be available for programming and not in operation, which means that production with that robot stops during that time. A way around this constraint is to perform the same manual guidance steps on a holographic representation of the robot's digital twin, using augmented reality technologies. However, this approach suffers from the lack of tangibility of the visual holograms that the user tries to grab. We present an interface in which some of that tangibility is provided through ultrasound-based mid-air haptic actuation. We report a user study evaluating the impact of such haptic feedback on a pick-and-place task performed on the wrist of a holographic robot arm, and we found the feedback to be beneficial.


2017 ◽  
Vol 372 (1717) ◽  
pp. 20160077 ◽  
Author(s):  
Anna Honkanen ◽  
Esa-Ville Immonen ◽  
Iikka Salmela ◽  
Kyösti Heimonen ◽  
Matti Weckström

Night vision is ultimately about extracting information from a noisy visual input. Several species of nocturnal insects exhibit complex visually guided behaviour in conditions where most animals are practically blind. The compound eyes of nocturnal insects produce strong responses to single photons and process them into meaningful neural signals, which are amplified by specialized neuroanatomical structures. While a lot is known about the light responses and the anatomical structures that promote pooling of responses to increase sensitivity, there is still a dearth of knowledge on the physiology of night vision. Retinal photoreceptors form the first bottleneck for the transfer of visual information. In this review, we cover the basics of what is known about physiological adaptations of insect photoreceptors for low-light vision. We will also discuss major enigmas of some of the functional properties of nocturnal photoreceptors, and describe recent advances in methodologies that may help to solve them and broaden the field of insect vision research to new model animals. This article is part of the themed issue ‘Vision in dim light’.


2018 ◽  
Vol 12 (2) ◽  
pp. JAMDSM0061-JAMDSM0061
Author(s):  
Yanjiang HUANG ◽  
Ryosuke CHIBA ◽  
Tamio ARAI ◽  
Tsuyoshi UEYAMA ◽  
Xianmin ZHANG ◽  
...  

Author(s):  
Mostafa Bagheri ◽  
Miroslav Krstić ◽  
Peiman Naseradinmousavi

In this paper, a predictor-based controller for a 7-DOF Baxter manipulator is formulated to compensate for a time-invariant input delay during a pick-and-place task. Robot manipulators are extensively employed for their reliable, fast, and precise motions, yet, like many engineering systems, they are subject to large time delays. Such delays can prevent the required high precision from being achieved and can even cause catastrophic instability. Applying common control approaches to such delayed systems can yield poor performance, and uncompensated input delays can create hazards in engineering applications; destabilizing time delays therefore need to be accounted for in the control design. First, the delay-free dynamic equations are derived using the Lagrangian method. Then, we formulate a predictor-based controller for the 7-DOF Baxter manipulator, in the presence of input delay, to track desired trajectories. Finally, the results are evaluated experimentally.
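
The paper derives its predictor for the nonlinear Lagrangian dynamics of the Baxter arm; the core idea is easier to see on a linear discrete-time system, where the controller feeds back a state prediction obtained by replaying the inputs still "in flight" through the delay line. A minimal sketch under that linear simplification (all names are ours, not the authors'):

```python
import numpy as np
from collections import deque

class PredictorFeedback:
    """Discrete-time predictor feedback u(t) = K x̂(t+d) for a linear
    system x(t+1) = A x(t) + B u(t-d) with a known input delay of d steps;
    a linear stand-in for the manipulator dynamics in the paper."""

    def __init__(self, A, B, K, delay):
        self.A, self.B, self.K, self.d = A, B, K, delay
        # Inputs already issued but not yet applied, oldest first.
        self.u_hist = deque([np.zeros(B.shape[1])] * delay, maxlen=delay)

    def control(self, x):
        # x̂(t+d) = A^d x(t) + sum_k A^(d-1-k) B u(t-d+k): propagate the
        # current state forward through the inputs in the delay line.
        x_pred = np.linalg.matrix_power(self.A, self.d) @ x
        for k, u_past in enumerate(self.u_hist):
            x_pred += (np.linalg.matrix_power(self.A, self.d - 1 - k)
                       @ (self.B @ u_past))
        u = self.K @ x_pred          # feedback on the *predicted* state
        self.u_hist.append(u)        # deque drops the oldest input
        return u
```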


Author(s):  
Shriya A. Hande ◽  
Nitin R. Chopde

In today's world, in almost all sectors, much of the work is done by robots or robotic arms with different numbers of degrees of freedom (DOFs), as the application requires. This project deals with the design and implementation of a wireless gesture-controlled robotic arm with vision. The system design is divided into three parts: the accelerometer part, the robotic arm, and the platform. It is fundamentally an accelerometer-based system that remotely controls a robotic arm via RF signals using small, low-cost, 3-axis accelerometers. The robotic arm is mounted on a mobile platform that is likewise controlled remotely by another accelerometer. One accelerometer is mounted on the operator's hand, capturing its gestures and postures so that the robotic arm moves accordingly; the other is mounted on one of the operator's legs, capturing its gestures and postures so that the platform moves accordingly. In short, the robotic arm and the platform are synchronized with the gestures and postures of the operator's hand and leg, respectively. The motions performed by the robotic arm are PICK and PLACE/DROP and RAISING and LOWERING of objects; the motions performed by the platform are FORWARD, BACKWARD, RIGHT, and LEFT.
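
A minimal sketch of how the two accelerometers' readings might be mapped to the discrete commands listed above, using simple tilt thresholds; the axis conventions, threshold value, and function names are illustrative assumptions, not the project's actual firmware:

```python
def classify_hand_gesture(ax, ay, tilt=0.35):
    """Map smoothed hand-accelerometer tilt (in g) to an arm command.
    Pitch (ax) raises/lowers; roll (ay) picks/places. All assumed."""
    if ax > tilt:
        return "RAISE"
    if ax < -tilt:
        return "LOWER"
    if ay > tilt:
        return "PICK"
    if ay < -tilt:
        return "PLACE"
    return "HOLD"

def classify_leg_gesture(ax, ay, tilt=0.35):
    """Map the leg-mounted accelerometer's tilt to a platform command."""
    if ax > tilt:
        return "FORWARD"
    if ax < -tilt:
        return "BACKWARD"
    if ay > tilt:
        return "RIGHT"
    if ay < -tilt:
        return "LEFT"
    return "STOP"
```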

