Robot Learning
Recently Published Documents

Total documents: 449 (last five years: 113)
H-index: 29 (last five years: 5)

2022 · Vol. 73 · pp. 102231
Author(s): Debasmita Mukherjee, Kashish Gupta, Li Hsin Chang, Homayoun Najjaran

2021
Author(s): Maria Santos, Udari Madhushani, Alessia Benevento, Naomi Ehrich Leonard

2021 · Vol. 33 (5) · pp. 1063-1074
Author(s): Kei Kase, Noboru Matsumoto, Tetsuya Ogata, …

Deep robot learning from demonstration allows robots to mimic a given demonstration and generalize to unknown task setups. This generalization ability, however, depends heavily on the number of demonstrations, which are costly to generate manually. Without sufficient demonstrations, robots tend to overfit to the available ones and lose the robustness offered by deep learning. Motor babbling – random body movements analogous to those through which human infants acquire proprioception – also helps robots improve their generalization ability, and babbling data is simpler to generate than task-oriented demonstrations. Previous studies used motor babbling for pre-training followed by fine-tuning, but the task data then overwrites what was learned from the babbling data. In this work, we propose an RNN-based robot-control framework that leverages targetless babbling data to help the robot acquire proprioception and increase the generalization ability of the learned task, by learning babbling and task data simultaneously. Through simultaneous learning, our framework can exploit the dynamics captured in the babbling data to learn the target task efficiently. In our experiments, we prepare demonstrations of a block-picking task together with aimless babbling data. With our framework, the robot learns the task faster and generalizes better when blocks are at unknown positions or move during execution.
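The simultaneous-learning idea can be pictured with a minimal sketch: a single recurrent network receives a gradient update from a babbling batch and a task batch in the same step, rather than pre-training on one and fine-tuning on the other. Everything below – the network shape, the loss, the tensor sizes, and the name JointRNN – is an illustrative assumption, not the paper's implementation.

```python
import torch
import torch.nn as nn

class JointRNN(nn.Module):
    """One recurrent body shared by babbling and task trajectories."""
    def __init__(self, obs_dim, act_dim, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, act_dim)

    def forward(self, obs_seq):
        h, _ = self.rnn(obs_seq)   # h: (batch, time, hidden)
        return self.head(h)        # predicted motor command per step

# Toy stand-ins for the two datasets; all shapes are assumptions.
babbling_obs, babbling_act = torch.randn(32, 50, 10), torch.randn(32, 50, 7)
task_obs, task_act = torch.randn(32, 50, 10), torch.randn(32, 50, 7)

model = JointRNN(obs_dim=10, act_dim=7)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()

for step in range(100):
    opt.zero_grad()
    # Every update sees both data sources, so the dynamics learned from
    # babbling are never overwritten by task-only fine-tuning.
    loss = mse(model(babbling_obs), babbling_act) + mse(model(task_obs), task_act)
    loss.backward()
    opt.step()
```

Summing the two losses in every update is the simplest reading of "learning both simultaneously"; the paper's actual architecture and loss weighting may differ.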


Author(s): Mingfei Sun, Zhenhui Peng, Meng Xia, Xiaojuan Ma

Robot learning from demonstration (RLfD) is a technique by which robots derive policies from instructors' examples. Although the reciprocal effects of student engagement on teacher behavior are widely recognized in the educational community, it is unclear whether the same phenomenon holds for RLfD. To fill this gap, we first design three types of robot engagement behavior (gaze, imitation, and a hybrid of the two) based on the learning literature. We then conduct a within-subjects user study in a simulation environment to investigate the impact of the different robot engagement cues on humans, compared with a without-engagement condition. Results suggest that engagement communication significantly negatively influences humans' estimation of the simulated robots' capability and significantly raises their expectations of the learning outcomes, even though we do not run actual imitation-learning algorithms in the experiments. Moreover, imitation affects humans more than gaze does on all metrics, while the combination of the two has the most profound influence. We also find that communicating engagement via imitation or the combined behavior significantly improves humans' perception of the quality of the simulated demonstrations, even when all demonstrations are of the same quality.
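As a rough illustration of how an imitation-style engagement cue might be scripted in simulation, the sketch below has the learner track the demonstrator's motion with a short lag and damped gain. The function imitation_cue and its delay and gain parameters are hypothetical choices for this sketch, not the behavior design used in the study.

```python
import numpy as np

def imitation_cue(demo_traj, delay_steps=10, gain=0.8):
    """Track the demonstrator's trajectory with a short lag and damped
    amplitude -- one scripted way to signal 'I am following you'."""
    lagged = np.roll(demo_traj, delay_steps, axis=0)
    lagged[:delay_steps] = demo_traj[0]   # hold the first pose during the lag
    return demo_traj[0] + gain * (lagged - demo_traj[0])

demo = np.cumsum(0.01 * np.random.randn(200, 3), axis=0)  # synthetic 3-D demo
robot_motion = imitation_cue(demo)        # motion the simulated learner plays back
```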


2021 · pp. 027836492110431
Author(s): Brian Reily, Peng Gao, Fei Han, Hua Wang, Hao Zhang

Awareness of team behaviors (e.g., individual activities and team intents) plays a critical role in human–robot teaming. Autonomous robots need to be aware of the overall intent of the team they are collaborating with in order to effectively aid their human peers or augment the team's capabilities. Team intents encode the goal of the team, which cannot be identified simply from a collection of individual activities; teammate relationships must also be encoded. In this article, we introduce a novel representation learning approach that recognizes team intent in real time from both individual human activities and the relationships among human peers in the team. Our approach formulates robot learning for team intent recognition as a joint regularized optimization problem, which encodes individual activities as latent variables and represents teammate relationships through graph embedding. In addition, we design a new algorithm that efficiently solves the formulated regularized optimization problem, with a theoretical guarantee of convergence to the optimal solution. To evaluate performance on team intent recognition, we test our approach on a public benchmark group-activity dataset and a multisensory team-behavior dataset newly collected from robots in an underground search-and-rescue environment, and we perform a proof-of-concept case study on a physical robot. The experimental results demonstrate both the superior accuracy of our approach and its suitability for real-time applications on mobile robots.
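A generic instance of such a joint regularized optimization can be sketched as a reconstruction term plus a graph-Laplacian smoothness term over the teammate graph, solved by gradient descent. The objective, the function team_codes, and every parameter below are assumptions made for illustration; the paper's actual formulation, graph embedding, and provably convergent solver are more involved.

```python
import numpy as np

def team_codes(X, A, lam=0.1, lr=0.01, iters=500):
    """Minimize ||Z - X||_F^2 + lam * tr(Z^T L Z) over Z, where L is the
    Laplacian of the teammate graph A. The smoothness term pulls the
    latent codes of connected teammates together, so the result reflects
    team structure rather than isolated individual activities."""
    L = np.diag(A.sum(axis=1)) - A        # graph Laplacian
    Z = X.copy()
    for _ in range(iters):
        grad = 2.0 * (Z - X) + 2.0 * lam * (L @ Z)
        Z -= lr * grad                    # the objective is convex, so a
                                          # small step size converges
    return Z

rng = np.random.default_rng(0)
X = rng.random((5, 16))                   # per-teammate activity features
A = np.triu((rng.random((5, 5)) > 0.5).astype(float), 1)
A = A + A.T                               # symmetric adjacency, zero diagonal
Z = team_codes(X, A)                      # latent codes fed to intent recognition
```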

