Toward Learning Context-Dependent Tasks from Demonstration for Tendon-Driven Surgical Robots

Author(s):  
Yixuan Huang ◽  
Michael Bentley ◽  
Tucker Hermans ◽  
Alan Kuntz

Author(s):
Martin Wagner ◽  
Andreas Bihlmaier ◽  
Hannes Götz Kenngott ◽  
Patrick Mietkowski ◽  
Paul Maria Scheikl ◽  
...  

Abstract
Background: We demonstrate the first self-learning, context-sensitive, autonomous camera-guiding robot applicable to minimally invasive surgery. Most surgical robots today are telemanipulators without autonomous capabilities. Autonomous systems have been developed for laparoscopic camera guidance; however, they follow simple rules and do not adapt their behavior to specific tasks, procedures, or surgeons.
Methods: The methodology presented here allows different robot kinematics to perceive their environment, interpret it according to a knowledge base, and perform context-aware actions. For training, twenty operations were conducted with human camera guidance by a single surgeon. Subsequently, we experimentally evaluated the cognitive robotic camera control. First, a VIKY EP system and a KUKA LWR 4 robot were trained on data from manual camera guidance recorded after completion of the surgeon's learning curve. Second, only data from the VIKY EP were used to train the LWR, and finally data from training with the LWR were used to re-train the LWR.
Results: The duration of each operation decreased with the robot's increasing experience, from 1704 s ± 244 s to 1406 s ± 112 s, and finally 1197 s. Camera guidance quality (good/neutral/poor) improved from 38.6/53.4/7.9% to 49.4/46.3/4.1% and 56.2/41.0/2.8%.
Conclusions: The cognitive camera robot improved its performance with experience, laying the foundation for a new generation of cognitive surgical robots that adapt to a surgeon's needs.
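The staged training scheme described above (train on human demonstrations, transfer the learned behavior to a second robot, then re-train on the robot's own executions) can be illustrated with a minimal sketch. This is not the authors' implementation: the demonstration logs, feature vectors, and the nearest-neighbor policy below are all hypothetical stand-ins for the paper's knowledge-based controller.

```python
import numpy as np

class DemoGuidedCameraPolicy:
    """Toy learning-from-demonstration camera policy (illustrative only).

    Maps a surgical-context feature vector (e.g., instrument tip positions)
    to a camera pose by averaging the poses of the k nearest demonstrations.
    """

    def __init__(self, k=5):
        self.k = k
        self.contexts = None   # (N, d) context feature vectors
        self.poses = None      # (N, 6) camera poses (position + orientation)

    def train(self, contexts, poses):
        self.contexts = np.asarray(contexts, dtype=float)
        self.poses = np.asarray(poses, dtype=float)

    def act(self, context):
        # Distance from the query context to every stored demonstration.
        d = np.linalg.norm(self.contexts - np.asarray(context, float), axis=1)
        nearest = np.argsort(d)[: self.k]
        return self.poses[nearest].mean(axis=0)

# Stage 1: train on human demonstrations (hypothetical logged data).
rng = np.random.default_rng(0)
human_ctx, human_pose = rng.normal(size=(200, 4)), rng.normal(size=(200, 6))
policy = DemoGuidedCameraPolicy()
policy.train(human_ctx, human_pose)

# Stages 2-3: re-train on the robot's own execution logs, mirroring the
# VIKY-EP-to-LWR transfer and the LWR self-re-training described above.
robot_ctx = rng.normal(size=(50, 4))
robot_pose = np.array([policy.act(c) for c in robot_ctx])
policy.train(np.vstack([human_ctx, robot_ctx]),
             np.vstack([human_pose, robot_pose]))
```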


Author(s):  
Hongbo Ni ◽  
Xingshe Zhou ◽  
Zhiwen Yu ◽  
Daqing Zhang

The vision of pervasive computing is moving into the domain of the household, where it aims to help inhabitants (users) live more conveniently and harmoniously. Due to the dynamic and heterogeneous nature of pervasive computing environments, it is difficult for an average user to obtain the right service and information in the right place at the right time. This chapter proposes a context-dependent task approach to address this challenge. Its most important component is its task model, which provides an adequate high-level description of user-oriented tasks and their related contexts. Leveraging the model, multiple entities can easily exchange, share, and reuse their knowledge. The conversion of OWL task ontology specifications to First-Order Logic (FOL) representations is presented, and the performance of FOL rule-based deduction is evaluated in terms of task number, context size, and time. Finally, we present a task supporting system (TSS) that aids an inhabitant's tasks in light of his or her lifestyle and environmental conditions in a smart home.
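To make the FOL rule-based deduction concrete, the sketch below forward-chains simple rules over context facts. It is not the chapter's implementation; the predicates (located_in, time_of_day, active_task) are hypothetical examples of the kind of rules an OWL task ontology could compile into.

```python
# Minimal forward-chaining sketch of FOL rule-based task deduction.
facts = {("located_in", "alice", "kitchen"),
         ("time_of_day", "morning")}

# Each rule: (list of premise patterns, conclusion pattern).
# Variables are strings starting with "?".
rules = [
    ([("located_in", "?p", "kitchen"), ("time_of_day", "morning")],
     ("active_task", "?p", "prepare_breakfast")),
]

def unify(pattern, fact, binding):
    """Try to match one premise pattern against one fact."""
    if len(pattern) != len(fact):
        return None
    binding = dict(binding)
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            if binding.setdefault(p, f) != f:
                return None
        elif p != f:
            return None
    return binding

def forward_chain(facts, rules):
    """Apply rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            bindings = [{}]
            for prem in premises:
                bindings = [b2 for b in bindings for f in derived
                            if (b2 := unify(prem, f, b)) is not None]
            for b in bindings:
                new = tuple(b.get(t, t) for t in conclusion)
                if new not in derived:
                    derived.add(new)
                    changed = True
    return derived

print(forward_chain(facts, rules))
# -> includes ('active_task', 'alice', 'prepare_breakfast')
```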


Author(s):  
Ke Yan ◽  
Jie Chen ◽  
Wenhao Zhu ◽  
Xin Jin ◽  
Guannan Hu

2018 ◽  
Vol 107 ◽  
pp. 48-60 ◽  
Author(s):  
Henghui Zhu ◽  
Ioannis Ch. Paschalidis ◽  
Michael E. Hasselmo

1998 ◽  
Vol 18 (1) ◽  
pp. 5-25 ◽  
Author(s):  
Robert T. Elliott ◽  
Qingzong Zhang

Author(s):  
Yeon Soon Shin ◽  
Rolando Masís-Obando ◽  
Neggin Keshavarzian ◽  
Riya Dáve ◽  
Kenneth A. Norman

Abstract
The context-dependent memory effect, in which memory for an item is better when the retrieval context matches the original learning context, has proved difficult to reproduce in a laboratory setting. In an effort to identify a set of features that generate a robust context-dependent memory effect, we developed a paradigm in virtual reality using two semantically distinct virtual contexts: underwater and Mars environments, each with a separate body of knowledge (schema) associated with it. We show that items are better recalled when retrieved in the same context as the study context; we also show that the size of the effect is larger for items deemed context-relevant at encoding, suggesting that context-dependent memory effects may depend on items being integrated into an active schema.


Author(s):  
Milad S. Malekzadeh ◽  
Danilo Bruno ◽  
Sylvain Calinon ◽  
Thrishantha Nanayakkara ◽  
Darwin G. Caldwell

2020 ◽  
Vol 10 (3) ◽  
pp. 1-26
Author(s):  
Keita Higuchi ◽  
Hiroki Tsuchida ◽  
Eshed Ohn-Bar ◽  
Yoichi Sato ◽  
Kris Kitani

2017 ◽  
Vol 2017 ◽  
pp. 1-16 ◽  
Author(s):  
Justin Lines ◽  
Kelsey Nation ◽  
Jean-Marc Fellous

The context in which learning occurs is sufficient to reconsolidate stored memories, and neuronal reactivation may be crucial to memory consolidation during sleep. The mechanisms of context-dependent and sleep-dependent memory (re)consolidation are unknown but involve the hippocampus. We simulated memory (re)consolidation using a connectionist model of the hippocampus that explicitly accounted for its dorsoventral organization and for CA1 proximodistal processing. Replicating human and rodent (re)consolidation studies yielded the following results. (1) Semantic overlap between memory items and extraneous learning was necessary to explain the experimental data and depended crucially on the recurrent networks of dorsal, but not ventral, CA3. (2) Stimulus-free, sleep-induced internal reactivations of memory patterns produced heterogeneous recruitment of memory items and protected memories from subsequent interference. These simulations further suggested that the decrease in memory resilience when subjects were not allowed to sleep after learning was primarily due to extraneous learning. (3) Partial exposure to the learning context during simulated sleep (i.e., targeted memory reactivation) uniformly increased memory item reactivation and enhanced subsequent recall. Altogether, these results show that the dorsoventral and proximodistal organization of the hippocampus may be an important component of the neural mechanisms for context-based and sleep-based memory (re)consolidation.
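The abstract describes the connectionist model only at a high level. As a hedged illustration of the core mechanism it relies on (recurrent attractor storage, stimulus-free reactivation, and cued reactivation of stored patterns), the sketch below uses a toy Hopfield-style network; it is not the authors' CA3/CA1 model, and all sizes and parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Store a few random binary (+/-1) memory patterns in a Hopfield-style
# recurrent network, a toy stand-in for CA3 attractor dynamics.
n_units, n_patterns = 100, 3
patterns = rng.choice([-1.0, 1.0], size=(n_patterns, n_units))

# Hebbian outer-product learning rule; no self-connections.
W = (patterns.T @ patterns) / n_units
np.fill_diagonal(W, 0.0)

def settle(state, steps=20):
    """Recall: iterate sign(W @ state) toward a stored attractor."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1.0
    return state

# "Sleep" reactivation: start from random noise (stimulus-free) and let the
# network fall into one of its stored attractors, mimicking spontaneous
# memory reactivation during sleep.
for trial in range(5):
    noise = rng.choice([-1.0, 1.0], size=n_units)
    recalled = settle(noise)
    overlaps = patterns @ recalled / n_units  # +/-1 means perfect match
    print(f"trial {trial}: best overlap = {np.abs(overlaps).max():.2f}")

# Targeted memory reactivation: cue with a partial (noisy) copy of pattern 0,
# analogous to partial exposure to the learning context during sleep.
cue = patterns[0].copy()
flip = rng.choice(n_units, size=30, replace=False)
cue[flip] *= -1
print("cued recall overlap:", patterns[0] @ settle(cue) / n_units)
```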

