Research on Robot Teaching for Complex Task

Author(s): Lingtao Huang, JinSong Yang, Shui Ni, Bin Wang, Hongyan Zhang
1959
Author(s): J. S. Kidd, Robert G. Kinkade

2012
Author(s): Xiaochen Yuan, Joseph Shum, Kimberly Langer, Mark Hancock, Jonathan Histon

2017, Vol 12 (1), pp. 83-88
Author(s): O.V. Darintsev, A.B. Migranov

This paper considers several variants of task decomposition in a group of robots using cloud computing technologies, taking into account the specifics of the application domain (robot teams) and the problems to be solved. During decomposition, the solution of one large problem is divided into the solution of a series of smaller, simpler problems. Three decomposition methods are proposed, based on linear distribution, swarm interaction, and synthesis of solutions. Results of experimental verification of the developed decomposition algorithms are presented, demonstrating that the trajectory-planning methods remain operational in the cloud. The resulting solution is one component of the larger task of building effective robot teams.
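The linear-distribution variant can be pictured as splitting one large planning task into contiguous sub-tasks, one per robot. A minimal sketch (the function name and the even-chunk scheme are illustrative assumptions, not the authors' algorithm):

```python
from typing import Dict, List, Tuple

Waypoint = Tuple[float, float]

def linear_decompose(waypoints: List[Waypoint],
                     n_robots: int) -> Dict[int, List[Waypoint]]:
    """Split one long trajectory-planning task into contiguous
    sub-trajectories, one per robot (linear distribution)."""
    if n_robots < 1:
        raise ValueError("need at least one robot")
    chunk = -(-len(waypoints) // n_robots)  # ceiling division
    return {r: waypoints[r * chunk:(r + 1) * chunk]
            for r in range(n_robots)}

# Example: 7 waypoints split across 3 robots -> sub-tasks of size 3, 3, 1
tasks = linear_decompose([(float(i), float(i)) for i in range(7)], 3)
```

Each robot (or its cloud-side planner) then solves only its own sub-trajectory; swarm interaction and synthesis of solutions would replace the fixed chunking with negotiated or merged assignments.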


Author(s):  
John Oberdiek

Chapter 2 takes up the complex task of formulating a conception of risk that can meet the twin desiderata of practicality and normativity. Though neither an unreconstructed subjective account nor an unreconstructed objective account of risk can, on its own, play the role we need it to play in a moral context, the two accounts can be combined to take advantage of their respective strengths. Much of the chapter is therefore devoted to explaining how this recalibrated perspective-indifference can be achieved. The chapter defends a particular interpretation of the perspective of the reasonable person, well-known from tort law, as a way of bringing determinacy to the characterization of risk. Defending this evidence-relative perspective while criticizing competing belief- and fact-relative perspectives, the chapter argues that it has the resources to meet the twin desiderata of practicality and normativity.


Drones ◽  
2021 ◽  
Vol 5 (3) ◽  
pp. 66
Author(s):  
Rahee Walambe ◽  
Aboli Marathe ◽  
Ketan Kotecha

Object detection in uncrewed aerial vehicle (UAV) images has been a longstanding challenge in the field of computer vision. Object detection in drone images is particularly complex because objects appear at widely varying scales: humans, buildings, water bodies, and hills. In this paper, we present an implementation of ensemble transfer learning to enhance the performance of base models for multiscale object detection in drone imagery. Combined with a test-time augmentation pipeline, the algorithm combines different models and applies voting strategies to detect objects of various scales in UAV images. The data augmentation also offers a remedy for the scarcity of drone image datasets. We experimented with two datasets in the open domain: the VisDrone dataset and the AU-AIR dataset. Our approach is more practical and efficient because it uses transfer learning and a two-level voting-strategy ensemble instead of training custom models on entire datasets. The experiments show significant improvement in mAP on both VisDrone and AU-AIR by employing the ensemble transfer learning method. Furthermore, the voting strategies increase the reliability of the ensemble, as the end-user can select and trace the effect of each mechanism on the bounding-box predictions.
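The voting idea can be illustrated with a toy majority vote over bounding boxes: a detection survives only if enough models in the ensemble predict an overlapping box. A minimal sketch, assuming axis-aligned `(x1, y1, x2, y2)` boxes and a hand-rolled IoU (the names `iou` and `majority_vote`, the thresholds, and the single-class setting are illustrative assumptions, not the paper's implementation):

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # x1, y1, x2, y2

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def majority_vote(model_preds: List[List[Box]],
                  iou_thr: float = 0.5,
                  min_votes: int = 2) -> List[Box]:
    """Keep a detection only if at least `min_votes` models predicted
    an overlapping box (IoU >= iou_thr); suppress duplicates of boxes
    already kept."""
    all_boxes = [(b, m) for m, boxes in enumerate(model_preds) for b in boxes]
    kept: List[Box] = []
    for box, _model in all_boxes:
        voters = {m for b, m in all_boxes if iou(box, b) >= iou_thr}
        if len(voters) >= min_votes and all(iou(box, k) < iou_thr for k in kept):
            kept.append(box)
    return kept

# Three detectors: two agree on one object, the third adds a stray box.
preds = [[(0.0, 0.0, 10.0, 10.0)],
         [(1.0, 1.0, 11.0, 11.0)],
         [(50.0, 50.0, 60.0, 60.0)]]
consensus = majority_vote(preds)  # only the box confirmed by two models
```

A second voting level, as in the paper's two-level scheme, would apply an analogous rule across ensembles rather than across individual models.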


Author(s):  
Michal Kafri ◽  
Patrice L. Weiss ◽  
Gabriel Zeilig ◽  
Moshe Bondi ◽  
Ilanit Baum-Cohen ◽  
...  

Abstract

Background: Virtual reality (VR) enables objective and accurate measurement of behavior in ecologically valid and safe environments, while controlling the delivery of stimuli and maintaining standardized measurement protocols. Despite this potential, studies that compare virtual and real-world performance of complex daily activities are scarce. This study aimed to compare the cognitive strategies and gait characteristics of young and older healthy adults as they engaged in a complex task while navigating a real shopping mall and a high-fidelity virtual replica of the mall.

Methods: Seventeen older adults (mean (SD) age = 71.2 (5.6) years, 64% males) and 17 young adults (26.7 (3.7) years, 35% males) participated. In two separate sessions they performed the Multiple Errands Test (MET) in a real-world mall or the Virtual MET (VMET) in the virtual environment. The real-world environment was a small shopping area, and the virtual environment was created within the CAREN™ (Computer Assisted Rehabilitation Environment) Integrated Reality System. Task performance was assessed using motor and physiological measures (gait parameters and heart rate), MET or VMET time and score, and navigation efficiency (cognitive performance and strategy). Between-group (age) and within-subject (environment) differences were analyzed with repeated-measures ANOVA.

Results: There were no significant age effects for any of the gait parameters, but there were significant environment effects: both age groups walked faster (F(1,32) = 154.96, p < 0.0001) with longer step lengths (F(1,32) = 86.36, p < 0.0001), and had lower spatial and temporal gait variability (F(1,32) = 95.71–36.06, p < 0.0001) and lower heart rate (F(1,32) = 13.40, p < 0.01) in the real world. There were significant age effects for MET/VMET scores (F(1,32) = 19.77, p < 0.0001) and total time (F(1,32) = 11.74, p < 0.05), indicating better performance by the younger group, and a significant environment effect for navigation efficiency (F(1,32) = 7.6, p < 0.01), which was higher in the virtual environment.

Conclusions: This comprehensive, ecological approach to measuring performance during tasks reminiscent of complex life situations showed the strengths of virtual environments for assessing cognitive aspects of performance, and their limitations for assessing motor aspects. Older adults' difficulties were apparent mainly in the cognitive aspects, indicating a need to evaluate them during complex task performance.
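The within-subject (environment) contrasts reported above can be illustrated numerically: with a single within factor at two levels, the repeated-measures ANOVA F statistic equals the squared paired t statistic on (1, n−1) degrees of freedom. A minimal sketch with synthetic gait-speed numbers (illustrative values, not the study's data):

```python
import math
from statistics import mean, stdev
from typing import List

def paired_t(x: List[float], y: List[float]) -> float:
    """Paired t statistic for a within-subject contrast
    (each participant measured in both conditions)."""
    d = [a - b for a, b in zip(x, y)]
    return mean(d) / (stdev(d) / math.sqrt(len(d)))

# Synthetic gait speed (m/s): each subject walks faster in the real mall
real    = [1.30, 1.25, 1.40, 1.35, 1.28, 1.33]
virtual = [1.10, 1.05, 1.22, 1.15, 1.08, 1.12]

t = paired_t(real, virtual)
# For one within factor with two levels, F(1, n-1) = t**2
F = t ** 2
```

In the study itself the design is mixed (age between groups, environment within subjects), so the reported F values come from the full repeated-measures model rather than this two-level special case.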

