manipulation task
Recently Published Documents

TOTAL DOCUMENTS: 163 (FIVE YEARS: 31)
H-INDEX: 23 (FIVE YEARS: 2)

PLoS ONE ◽  
2021 ◽  
Vol 16 (12) ◽  
pp. e0261790
Author(s):  
Giulia Cimarelli ◽  
Julia Schindlbauer ◽  
Teresa Pegger ◽  
Verena Wesian ◽  
Zsófia Virányi

Domestic dogs display behavioural patterns towards their owners that fulfil the four criteria of attachment. As such, they use their owners as a secure base, exploring the environment and manipulating objects more when accompanied by their owners than when alone. Although there are some indications that owners serve as a better secure base than other human beings, the evidence for a strong owner-stranger differentiation in an object-manipulation context is not straightforward. In the present study, we conducted two experiments in which pet dogs were tested in an object-manipulation task in the presence of the owner or of a stranger, varying how the human partner behaved (i.e. remaining silent or encouraging the dog, Experiment 1), and when alone (Experiment 2). Further, to gain better insight into the mechanisms behind a potential owner-stranger differentiation, we investigated the effect of dogs’ previous life history (i.e. having lived in a shelter or having lived in the same household since puppyhood). Overall, we found that strangers do not provide a secure base effect and that former shelter dogs show a stronger owner-stranger differentiation than other family dogs. As former shelter dogs showed more behavioural signs of anxiety towards the novel environment and the stranger, we conclude that having been re-homed does not necessarily affect the likelihood of forming a secure bond with the new owner, but might affect how dogs interact with novel stimuli, including unfamiliar humans. These results confirm the owner’s unique role in providing security to their dogs and have practical implications for bond formation in pet dogs with a shelter past.


2021 ◽  
pp. 1-18
Author(s):  
Takeshi D. Itoh ◽  
Koji Ishihara ◽  
Jun Morimoto

Model-based control has great potential for use in real robots due to its high sampling efficiency. Nevertheless, handling physical contact and generating accurate motions are unavoidable in practical robot control tasks such as precise manipulation. For a real-time, model-based approach, the difficulty of contact-rich tasks that require precise movement lies in the fact that a model needs to accurately predict forthcoming contact events within a limited length of time, rather than detect them afterward with sensors. Therefore, in this study, we investigate whether and how neural network models can learn a task-related model useful enough for model-based control, that is, a model predicting future states, including contact events. To this end, we propose a structured neural network model predictive control (SNN-MPC) method, whose neural network architecture is designed with an explicit inertia matrix representation. To train the proposed network, we develop a two-stage modeling procedure for contact-rich dynamics from a limited number of samples. As a contact-rich task, we consider a trackball manipulation task using a physical 3-DoF finger robot. The results showed that the SNN-MPC outperformed MPC with a conventional fully connected network model on the manipulation task.
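As a rough illustration of the architectural idea, the sketch below shows one way a dynamics network with an explicit, positive-definite inertia matrix could be structured in PyTorch. The layer sizes, the Cholesky parameterization, and the lumping of Coriolis, gravity, and contact terms into a single network are assumptions for the sketch, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): a dynamics network whose
# architecture encodes the manipulator equation
#     q_ddot = M(q)^{-1} (tau - h(q, q_dot)),
# with the inertia matrix M(q) represented explicitly and kept positive
# definite through a Cholesky parameterization.
import torch
import torch.nn as nn

class StructuredDynamics(nn.Module):
    def __init__(self, n_dof: int, hidden: int = 64):
        super().__init__()
        self.n = n_dof
        # Predicts the lower-triangular Cholesky factor L of M(q).
        self.chol_net = nn.Sequential(
            nn.Linear(n_dof, hidden), nn.Tanh(),
            nn.Linear(hidden, n_dof * (n_dof + 1) // 2),
        )
        # Predicts the remaining terms h(q, q_dot): Coriolis, gravity,
        # and contact forces lumped together (an assumption of this sketch).
        self.h_net = nn.Sequential(
            nn.Linear(2 * n_dof, hidden), nn.Tanh(),
            nn.Linear(hidden, n_dof),
        )
        self.register_buffer("tril_idx", torch.tril_indices(n_dof, n_dof))

    def forward(self, q, q_dot, tau):
        L = torch.zeros(q.shape[0], self.n, self.n, device=q.device)
        L[:, self.tril_idx[0], self.tril_idx[1]] = self.chol_net(q)
        diag = torch.arange(self.n, device=q.device)
        # Softplus on the diagonal keeps M = L L^T positive definite.
        L[:, diag, diag] = nn.functional.softplus(L[:, diag, diag]) + 1e-4
        M = L @ L.transpose(1, 2)
        h = self.h_net(torch.cat([q, q_dot], dim=-1))
        # Solve M q_ddot = tau - h rather than inverting M explicitly.
        return torch.linalg.solve(M, (tau - h).unsqueeze(-1)).squeeze(-1)
```

Predicted accelerations from such a model can then be integrated forward inside an MPC loop, which is what lets contact events be anticipated before they occur rather than sensed afterward.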


2021 ◽  
Author(s):  
Ozsel Kilinc ◽  
Giovanni Montana

Abstract Mastering robotic manipulation skills through reinforcement learning (RL) typically requires the design of shaped reward functions. Recent developments in this area have demonstrated that using sparse rewards, i.e. rewarding the agent only when the task has been successfully completed, can lead to better policies. However, state-action space exploration is more difficult in this case. Recent RL approaches to learning with sparse rewards have leveraged high-quality human demonstrations for the task, but these can be costly, time-consuming, or even impossible to obtain. In this paper, we propose a novel and effective approach that does not require human demonstrations. We observe that every robotic manipulation task can be seen as involving a locomotion task from the perspective of the object being manipulated, i.e. the object could learn how to reach a target state on its own. In order to exploit this idea, we introduce a framework whereby an object locomotion policy is initially obtained using a realistic physics simulator. This policy is then used to generate auxiliary rewards, called simulated locomotion demonstration rewards (SLDRs), which enable us to learn the robot manipulation policy. The proposed approach has been evaluated on 13 tasks of increasing complexity, and achieves higher success rates and faster learning compared to alternative algorithms. SLDRs are especially beneficial for tasks like multi-object stacking and non-rigid object manipulation.
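A schematic reading of the SLDR idea: the pretrained object-locomotion policy is rolled forward in the simulator to produce a reference object state, and the manipulation agent earns a dense auxiliary reward for keeping the real object close to that reference. The sketch below is an assumption-laden paraphrase of the abstract; `locomotion_policy`, `object_sim`, and the distance-based shaping term are hypothetical interfaces, not the paper's code.

```python
# Schematic sketch based only on the abstract: `locomotion_policy` and
# `object_sim` are hypothetical interfaces, and the distance-based shaping
# term is an assumption, not the paper's exact reward.
import numpy as np

def locomotion_reference(object_state, locomotion_policy, object_sim, horizon=1):
    """Roll the pretrained object-locomotion policy forward in simulation
    to get the state the object would reach "on its own"."""
    object_sim.set_state(object_state)
    ref_state = object_state
    for _ in range(horizon):
        action = locomotion_policy(ref_state)   # force the object applies to itself
        ref_state = object_sim.step(action)
    return ref_state

def sldr_shaped_reward(next_object_state, ref_state, sparse_reward, alpha=1.0):
    # Dense auxiliary term: reward the manipulation agent for keeping the
    # real object close to the simulated locomotion demonstration.
    return sparse_reward - alpha * np.linalg.norm(next_object_state - ref_state)
```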


2021 ◽  
Vol 1199 (1) ◽  
pp. 012091
Author(s):  
V Bulej ◽  
M Bartoš ◽  
V Tlach ◽  
M Bohušík ◽  
D Wiecek

Abstract The article deals with the simulation of a vision-guided robot (VGR) in the offline programming software Fanuc RoboGuide. It begins with a brief description of the Fanuc RoboGuide system. The practical part presents an example task demonstrating the configuration and offline programming of a vision-guided robot system using an industrial camera. The main aim of the work is to verify in practice the functionality of a system usable for intelligent handling and assembly workplaces.


2021 ◽  
Vol 11 (13) ◽  
pp. 5959
Author(s):  
Jacopo Aleotti ◽  
Alberto Baldassarri ◽  
Marcello Bonfè ◽  
Marco Carricato ◽  
Davide Chiaravalli ◽  
...  

This paper presents a mobile manipulation platform designed for autonomous depalletizing tasks. The proposed solution integrates machine vision, control, and mechanical components to increase flexibility and ease of deployment in industrial environments such as warehouses. A collaborative robot mounted on a mobile base is proposed, equipped with a simple manipulation tool and a 3D in-hand vision system that detects parcel boxes on a pallet and pulls them one by one onto the mobile base for transportation. The setup avoids the cumbersome implementation of pick-and-place operations, since it does not require lifting the boxes. The 3D vision system is used to provide an initial estimate of the pose of the boxes on the top layer of the pallet, and to accurately detect the separation between the boxes for manipulation. Force measurements provided by the robot, together with admittance control, are exploited to verify the correct execution of the manipulation task. The proposed system was implemented and tested in a simplified laboratory scenario, and the results of experimental trials are reported.
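To make the force-based execution check concrete, here is a minimal 1-D admittance-control sketch: the measured contact force drives a virtual mass-damper whose output velocity would be sent to the robot, and a large force error flags a failed pull. All parameter values, the 1-D formulation, and the threshold test are illustrative assumptions, not the paper's implementation.

```python
# Minimal 1-D admittance-control sketch (illustrative; all parameters and
# the threshold test are assumptions, not the paper's implementation).
import numpy as np

def admittance_step(f_meas, f_des, v, dt, m_v=5.0, d_v=20.0):
    """One Euler step of the admittance law m_v * v_dot + d_v * v = f_meas - f_des."""
    v_dot = (f_meas - f_des - d_v * v) / m_v
    return v + v_dot * dt

rng = np.random.default_rng(0)
force_stream = 10.0 + 2.0 * rng.standard_normal(200)  # synthetic force readings [N]
v, dt, f_des = 0.0, 0.01, 10.0
for f_meas in force_stream:
    v = admittance_step(f_meas, f_des, v, dt)  # velocity that would be commanded
    if abs(f_meas - f_des) > 30.0:             # arbitrary threshold for the sketch
        raise RuntimeError("unexpected contact force: box pull likely failed")
```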


2021 ◽  
pp. 027836492110176
Author(s):  
Walid Amanhoud ◽  
Jacob Hernandez Sanchez ◽  
Mohamed Bouri ◽  
Aude Billard

In industrial or surgical settings, many tasks require at least two people to be achieved successfully. Robotic assistance could enable a single person to perform such tasks alone, with the help of robots through direct, shared, or autonomous control. We are interested in four-arm manipulation scenarios, where both feet are used to control two robotic arms via bi-pedal haptic interfaces. The robotic arms complement the tasks of the biological arms, for instance by supporting and moving an object while the hands work on it. To reduce fatigue and cognitive workload, and to ease the execution of the foot manipulation, we propose two types of assistance that can be enabled upon contact with the object (i.e., based on the interaction forces): autonomous contact-force generation and auto-coordination of the robotic arms. The latter consists of controlling both arms with a single foot once the object is grasped. We designed four (shared) control strategies derived from the presence or absence of each assistance modality, and we compared them through a user study (with 12 participants) on a four-arm manipulation task. The results show that force assistance improves human–robot fluency, ease of use, and usefulness in the four-arm task, and also reduces fatigue. Finally, delegating the grasping force to the robotic arms proves to be a crucial factor in making the dual-assistance approach the preferred and most successful of the proposed control strategies when both arms are controlled with a single foot.
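One way to picture the two assistance modalities is as a command-blending rule gated by contact detection; the four strategies in the study then correspond to the four on/off combinations of the two flags. The sketch below is speculative: the interfaces, gains, mirroring rule, and contact detector are all assumptions, not the paper's controller.

```python
# Speculative sketch of the two assistance modalities as a command-blending
# rule; interfaces, gains, the mirroring rule, and the contact detector are
# all assumptions, not the paper's controller.
import numpy as np

def arm_commands(foot_a, foot_b, f_meas, f_grasp_des,
                 force_assist=True, auto_coord=False, kf=0.05):
    """foot_a, foot_b: velocity commands from the two pedal interfaces;
    f_meas: measured contact force vector on the grasped object;
    f_grasp_des: grasp force the arms should maintain autonomously."""
    in_contact = np.linalg.norm(f_meas) > 1.0        # crude contact detector
    if auto_coord and in_contact:
        cmd_a, cmd_b = foot_a, -foot_a               # one foot drives both arms
    else:
        cmd_a, cmd_b = foot_a, foot_b                # direct bi-pedal control
    if force_assist and in_contact:
        # Autonomous contact-force generation: a proportional term squeezes
        # the arms toward the desired grasp force along a fixed axis.
        err = f_grasp_des - np.linalg.norm(f_meas)
        squeeze = kf * err * np.array([0.0, 1.0, 0.0])
        cmd_a, cmd_b = cmd_a + squeeze, cmd_b - squeeze
    return cmd_a, cmd_b
```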


Author(s):  
Kunpeng Yao ◽  
Dagmar Sternad ◽  
Aude Billard

Many daily tasks involve the collaboration of both hands. Humans dexterously adjust hand poses and modulate the forces exerted by fingers in response to task demands. Hand pose selection has been intensively studied in unimanual tasks, but little work has investigated bimanual tasks. This work examines hand pose selection in a bimanual high-precision screwing task taken from watchmaking. Twenty right-handed subjects removed a screw from the watchface with a screwdriver in two conditions. Results showed that although subjects employed similar hand poses across steps within the same experimental condition, the hand poses differed significantly between the two conditions. In the free-base condition, subjects needed to stabilize the watchface on the table. The role distribution across hands was strongly influenced by hand dominance: the dominant hand manipulated the tool, whereas the non-dominant hand controlled the additional degrees of freedom that might impair performance. In contrast, in the fixed-base condition, the watchface was stationary. Subjects employed both hands even though a single hand would have been sufficient. Importantly, hand poses decoupled the control of task-demanded force and torque across hands through virtual fingers that grouped multiple fingers into functional units. This preference for a bimanual over a unimanual control strategy could be an effort to reduce variability caused by mechanical couplings and to alleviate intrinsic sensorimotor processing burdens. To afford analysis of this variety of observations, a novel graphical matrix-based representation of the distribution of hand pose combinations was developed. Atypical hand poses that are not documented in extant hand taxonomies are also included.
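A matrix over pose combinations can be built very simply: rows index dominant-hand poses, columns index non-dominant-hand poses, and each cell counts how often that pairing was observed. The pose labels and observations below are placeholders for illustration, not the taxonomy or data from the paper.

```python
# Toy version of a matrix-based representation of bimanual pose combinations;
# the pose labels and observations are placeholders, not the paper's taxonomy.
import numpy as np

poses = ["precision pinch", "tripod", "lateral pinch", "power grasp"]
idx = {p: i for i, p in enumerate(poses)}
counts = np.zeros((len(poses), len(poses)), dtype=int)

# Each observation: (dominant-hand pose, non-dominant-hand pose).
observations = [
    ("precision pinch", "power grasp"),
    ("precision pinch", "power grasp"),
    ("tripod", "lateral pinch"),
]
for dom, nondom in observations:
    counts[idx[dom], idx[nondom]] += 1

# Row/column marginals recover each hand's individual pose preferences;
# off-diagonal structure shows how the two hands' roles are coupled.
print(counts)
```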


Author(s):  
Zhenyang Zhu ◽  
Masahiro Toyoura ◽  
Issei Fujishiro ◽  
Kentaro Go ◽  
Kenji Kashiwagi ◽  
...  

2020 ◽  
Author(s):  
Marcos S. Pereira ◽  
Bruno V. Adorno

This paper addresses the integration of task planning and motion control in robotic manipulation, where automatically generated feasible manipulation sequences are executed by a controller that explicitly accounts for the task's geometric constraints. To cope with the high dimensionality of the manipulation problem and the complexity of specifying the tasks, we use a multi-layered framework for task and motion planning adapted from the literature. The adapted framework consists of a high-level planner, which generates task plans for linear temporal logic specifications, and a low-level motion controller, based on constrained optimization, that allows regions of interest to be defined instead of exact locations while remaining reactive to changes in the workspace. Thus, no low-level motion-planning time is added to the total planning time. Moreover, since the search for a plan occurs on a static graph and there is no replanning phase due to motion-planner failures, the robot actions are generated only once for each task. We evaluated this approach on two pick-and-place tasks of similar complexity to those of the original framework and showed that the number of generated plan nodes is smaller than in the original framework, which implies less total planning time.
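To illustrate the high-level layer, the toy sketch below searches the product of a small workspace-region graph and a hand-written automaton for a sequencing specification of the kind an LTL formula such as ◇(pick ∧ ◇place) would induce. The region names and transition rules are invented for the example and are unrelated to the paper's benchmark.

```python
# Toy sketch: plan over the product of a region graph and a hand-written
# automaton encoding "eventually pick, then eventually place". Region names
# and transitions are invented, unrelated to the paper's benchmark.
from collections import deque

regions = {"home": ["shelf"], "shelf": ["home", "table"], "table": ["shelf"]}

def automaton_step(q, region):
    """Automaton states: 0 = nothing done, 1 = picked, 2 = placed (accepting)."""
    if q == 0 and region == "shelf":
        return 1   # pick becomes possible at the shelf
    if q == 1 and region == "table":
        return 2   # place becomes possible at the table
    return q

def plan(start="home"):
    queue, seen = deque([((start, 0), [start])]), {(start, 0)}
    while queue:
        (region, q), path = queue.popleft()
        if q == 2:
            return path                      # accepting product state reached
        for nxt in regions[region]:
            state = (nxt, automaton_step(q, nxt))
            if state not in seen:
                seen.add(state)
                queue.append((state, path + [nxt]))
    return None

print(plan())  # ['home', 'shelf', 'table']
```

In the framework described by the abstract, each step of such a plan would then be handed to the constrained-optimization controller as a region of interest rather than an exact pose, which is what removes low-level motion-planning time from the loop.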

