manipulation planning
Recently Published Documents

TOTAL DOCUMENTS: 161 (FIVE YEARS: 40)
H-INDEX: 22 (FIVE YEARS: 3)

Author(s): Nan Lin, Yuxuan Li, Keke Tang, Yujun Zhu, Ruolin Wang, ...


2021, Vol 11 (19), pp. 9103
Author(s): Ang Zhang, Keisuke Koyama, Weiwei Wan, Kensuke Harada

Robotic manipulation of a bulky object is challenging due to the limited kinematics and payload of the manipulator. In this study, a robot manipulates general-shaped bulky objects by exploiting contact with the environment. We propose a hierarchical manipulation planner that effectively combines three manipulation styles, namely pivoting, tumbling, and regrasping. In our proposed method, we first generate a set of superimposed planar segments on the object surface to obtain an object pose in stable contact with the table, as well as a set of points on the object surface where the end-effectors (EEFs) of a dual-arm manipulator can stably grasp the object. Object manipulation is then realized by searching a graph that encodes the kinematic constraints of pivoting and tumbling. For pivoting, we consider two supporting styles: stable support (SP) and unstable support (USP). Our method manipulates large and heavy objects by selectively using these two support styles of pivoting together with tumbling, according to the conditions of the table area. In addition, it can avoid the limitations arising from the arm kinematics by regrasping the object. We experimentally demonstrate that a dual-arm manipulator can move an object from the initial to the goal position within a limited area on the table while avoiding obstacles placed on the table.
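To make the graph-search idea above concrete, here is a minimal sketch (not the authors' implementation) in which nodes pair a stable object pose with a grasp and edges carry one of the three primitives; the node labels, edge list, and primitive costs are illustrative assumptions.

```python
# Minimal sketch: graph search over manipulation primitives.
# Nodes are (stable_pose, grasp) pairs; edges are labeled pivot/tumble/regrasp.
import heapq
from collections import defaultdict

PRIMITIVE_COST = {"pivot": 1.0, "tumble": 2.0, "regrasp": 3.0}  # assumed costs

def plan(edges, start, goal):
    """Dijkstra over (stable_pose, grasp) nodes; edges = [(u, v, primitive)]."""
    graph = defaultdict(list)
    for u, v, prim in edges:
        cost = PRIMITIVE_COST[prim]
        graph[u].append((v, prim, cost))
        graph[v].append((u, prim, cost))  # primitives assumed reversible here

    frontier = [(0.0, start, [])]          # (cost so far, node, primitive sequence)
    best = {start: 0.0}
    while frontier:
        cost, node, seq = heapq.heappop(frontier)
        if node == goal:
            return seq
        for nxt, prim, c in graph[node]:
            new_cost = cost + c
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(frontier, (new_cost, nxt, seq + [prim]))
    return None

# Example: move the object from pose A (grasp g1) to pose C (grasp g2).
edges = [
    (("A", "g1"), ("B", "g1"), "pivot"),
    (("B", "g1"), ("B", "g2"), "regrasp"),
    (("B", "g2"), ("C", "g2"), "tumble"),
]
print(plan(edges, ("A", "g1"), ("C", "g2")))   # -> ['pivot', 'regrasp', 'tumble']
```

In the paper's planner, edge feasibility would additionally be filtered by the kinematic constraints of pivoting and tumbling and by the table-area conditions; the sketch only encodes the search itself.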


2021
Author(s): Arindam B. Chowdhury, Juncheng Li, David J. Cappelleri

In this paper, we present two distinct neural network-based pose estimation approaches for mobile manipulation in factory environments. Synthetic datasets, unique to the factory setting, are created for neural network training in each approach. Approach I uses a CNN in conjunction with RGB and depth images. Approach II uses the DOPE network along with RGB images, the CAD dimensions of the objects of interest, and the PnP algorithm. Each approach is evaluated and compared across pipeline complexity, dataset preparation resources, robustness, platform and run-time resources, and pose accuracy for manipulation planning. Finally, recommendations for when to use each method are provided.
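As a hedged illustration of the DOPE-plus-PnP step in Approach II, the sketch below recovers a 6-DoF pose from eight cuboid-corner keypoints predicted in an RGB image, using the object's CAD dimensions and OpenCV's solvePnP; the camera intrinsics and keypoint values are placeholders, not values from the paper.

```python
# Sketch: 6-DoF pose from cuboid-corner keypoints via PnP (illustrative values only).
import numpy as np
import cv2

def cuboid_corners(w, h, d):
    """Eight corners of an axis-aligned cuboid centered at the origin (meters)."""
    x, y, z = w / 2, h / 2, d / 2
    return np.array([[sx * x, sy * y, sz * z]
                     for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)],
                    dtype=np.float32)

def estimate_pose(keypoints_px, cad_dims, K):
    """Solve PnP: 2D keypoints (8x2) + 3D model points (8x3) -> rotation, translation."""
    object_pts = cuboid_corners(*cad_dims)
    ok, rvec, tvec = cv2.solvePnP(object_pts, keypoints_px.astype(np.float32),
                                  K, distCoeffs=None)
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)           # 3x3 rotation matrix
    return R, tvec                       # object pose in the camera frame

# Illustrative usage with a made-up camera matrix and stand-in keypoint detections.
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
keypoints = np.random.rand(8, 2) * [640, 480]   # stand-in for network output
R, t = estimate_pose(keypoints, cad_dims=(0.1, 0.2, 0.05), K=K)
```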


2021
Author(s): Fahad Islam, Chris Paxton, Clemens Eppner, Bryan Peele, Maxim Likhachev, ...


Sensors, 2021, Vol 21 (7), pp. 2280
Author(s): Ching-Chang Wong, Li-Yu Yeh, Chih-Cheng Liu, Chi-Yi Tsai, Hisasuki Aoyama

In this paper, a manipulation planning method for object re-orientation based on semantic segmentation keypoint detection is proposed for a robot manipulator, enabling it to detect randomly placed objects and re-orient them to a specified position and pose. The method has two main parts: (1) a 3D keypoint detection system; and (2) a manipulation planning system for object re-orientation. In the 3D keypoint detection system, an RGB-D camera is used to capture information about the environment and to generate 3D keypoints of the target object that represent its position and pose. This simplifies the 3D model representation, so that manipulation planning for object re-orientation can be executed in a category-level manner by adding varied training data for the object during the training phase. In addition, 3D suction points for both the object’s current and expected poses are generated as inputs to the next stage. In that stage, the Mask Region-based Convolutional Neural Network (Mask R-CNN) algorithm is used for preliminary object detection, and the detected image with the highest confidence index is selected as the input of the semantic segmentation system, which classifies each pixel of the picture into the corresponding pack unit of the object. After the convolutional neural network performs semantic segmentation, the Conditional Random Fields (CRFs) method is applied for several iterations to obtain a more accurate object recognition result. Once the target object is segmented into pack units, the center position of each pack unit can be obtained. A normal vector at each pack unit’s center is then computed from the depth image, and the pose of the object is obtained by connecting the center points of the pack units. In the manipulation planning system for object re-orientation, the pose of the object and the normal vector of each pack unit are first converted into the working coordinate system of the robot manipulator. Then, according to the current and expected poses of the object, the spherical linear interpolation (Slerp) algorithm is used to generate a series of movements in the workspace that re-orient the object with the robot manipulator. In addition, the pose of the object is adjusted about the z-axis of the object’s geodetic coordinate system based on image features on the object’s surface, so that the pose of the placed object approaches the desired pose. Finally, a robot manipulator with a laboratory-made vacuum suction cup is used to verify that the proposed system can complete the planned object re-orientation task.
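As a rough sketch of the Slerp step described above (not the authors' code), the snippet below interpolates between the object's current and expected orientation with quaternion Slerp while linearly interpolating position, producing intermediate waypoints for the manipulator; the poses and step count are illustrative assumptions.

```python
# Sketch: generate re-orientation waypoints with quaternion Slerp (SciPy).
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def reorientation_waypoints(p_start, q_start, p_goal, q_goal, n_steps=10):
    """Return n_steps (position, quaternion) waypoints from the start to the goal pose."""
    key_rots = Rotation.from_quat([q_start, q_goal])       # quaternions as [x, y, z, w]
    slerp = Slerp([0.0, 1.0], key_rots)
    s = np.linspace(0.0, 1.0, n_steps)
    positions = (1.0 - s)[:, None] * np.asarray(p_start) + s[:, None] * np.asarray(p_goal)
    orientations = slerp(s).as_quat()
    return list(zip(positions, orientations))

# Illustrative usage: rotate the object 90 degrees about z while moving it 20 cm in x.
waypoints = reorientation_waypoints(
    p_start=[0.4, 0.0, 0.1], q_start=[0, 0, 0, 1],
    p_goal=[0.6, 0.0, 0.1],
    q_goal=Rotation.from_euler("z", 90, degrees=True).as_quat(),
    n_steps=5)
for p, q in waypoints:
    print(np.round(p, 3), np.round(q, 3))
```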


2021, pp. 027836492199279
Author(s): Roya Sabbagh Novin, Amir Yazdani, Andrew Merryweather, Tucker Hermans

Assistive robots designed for physical interaction with objects will play an important role in assisting with mobility and fall prevention in healthcare facilities. Autonomous mobile manipulation remains a hurdle that must be cleared before robots can be used safely in real-life applications. In this article, we introduce a mobile manipulation framework based on model predictive control with learned dynamics models of objects. We focus on the specific problem of manipulating legged objects such as those commonly found in healthcare environments and personal dwellings (e.g., walkers, tables, chairs). We describe a probabilistic method for autonomously learning an approximate dynamics model for these objects. In this method, we learn the dynamics parameters from a small dataset of force and motion data collected during interactions between the robot and the object. Moreover, we account for multiple manipulation strategies by formulating manipulation planning as a mixed-integer convex optimization. The proposed framework treats the hybrid control problem composed of (i) choosing which leg to grasp and (ii) controlling the continuous forces applied for manipulation. We formalize our algorithm based on model predictive control to compensate for modeling errors and to find an optimal path that manipulates the object from one configuration to another. We present results for several objects with various wheel configurations. Simulation and physical experiments show that the learned dynamics models are sufficiently accurate for safe, collision-free manipulation. When combined with the proposed manipulation planning algorithm, the robot successfully moves the object to the desired pose while avoiding any collision.
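As a loose illustration of the receding-horizon idea (not the paper's formulation), the sketch below replaces the mixed-integer convex program with enumeration over the discrete leg choice and stands in for the learned dynamics with a simple per-leg linear model; the matrices, force limit, and the cvxpy dependency are all assumptions made for the example.

```python
# Sketch: one MPC step for a legged object; discrete leg choice by enumeration.
import numpy as np
import cvxpy as cp

def mpc_step(x0, x_goal, A, B_per_leg, horizon=10, u_max=20.0):
    """Pick the leg and force sequence minimizing tracking cost over the horizon."""
    best = (np.inf, None, None)
    for leg, B in enumerate(B_per_leg):
        x = cp.Variable((horizon + 1, A.shape[0]))   # object state trajectory
        u = cp.Variable((horizon, B.shape[1]))       # applied forces at the grasped leg
        cost = 0
        constraints = [x[0] == x0]
        for k in range(horizon):
            constraints += [x[k + 1] == A @ x[k] + B @ u[k],
                            cp.norm(u[k], "inf") <= u_max]
            cost += cp.sum_squares(x[k + 1] - x_goal) + 0.01 * cp.sum_squares(u[k])
        prob = cp.Problem(cp.Minimize(cost), constraints)
        prob.solve()
        if prob.value is not None and prob.value < best[0]:
            best = (prob.value, leg, u.value)
    return best  # (cost, chosen leg index, planned force sequence)

# Illustrative 2D example: planar position + velocity state, forces applied at 2 legs.
dt = 0.1
A = np.block([[np.eye(2), dt * np.eye(2)],
              [np.zeros((2, 2)), 0.9 * np.eye(2)]])
B = np.vstack([np.zeros((2, 2)), dt * np.eye(2)])
cost, leg, forces = mpc_step(x0=np.zeros(4), x_goal=np.array([1.0, 0.5, 0, 0]),
                             A=A, B_per_leg=[B, 0.8 * B])
```

In the paper, the discrete and continuous decisions are solved jointly as a mixed-integer convex program and the dynamics come from the learned probabilistic model; the enumeration above is only a simplified stand-in for that structure.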

