Understanding the challenges and needs of knee arthroscopy surgeons to inform the design of surgical robots

Author(s):  
Jeremy Opie ◽  
Anjali Jaiprakash ◽  
Bernd Ploderer ◽  
Ross Crawford ◽  
Margot Brereton ◽  
...  

2001 ◽  
Vol 17 (8) ◽  
pp. 878-883
Author(s):  
William M. Wind ◽  
Brian E. McGrath ◽  
Eugene R. Mindell
2020 ◽  
Vol 6 (1) ◽  
Author(s):  
Lukas Theisgen ◽  
Florian Strauch ◽  
Matías de la Fuente ◽  
Klaus Radermacher

Abstract
Risk classes defined by the MDR and FDA for state-of-the-art surgical robots, based on their intended use, are not suitable indicators of their hazard potential. While safety regulation has not kept pace with the increasing degree of automation or the degree of invasiveness into the patient's body, adverse events have increased over the last decade. A thorough identification of hazards as part of the risk analysis, covering the complete development process and life cycle of a surgical robot, is therefore crucial, especially when new technologies are introduced. For this reason, we present a comprehensive approach to hazard identification in the early phases of development. With this multi-perspective approach, the number of hazards identified can be increased. Furthermore, a generic catalogue of hazards for surgical robots has been established by categorising the results. The catalogue serves as a data pool for risk analyses and holds the potential to reduce hazards through safety measures already in the design process, before they become risks to the patient.


Author(s):  
Martin Wagner ◽  
Andreas Bihlmaier ◽  
Hannes Götz Kenngott ◽  
Patrick Mietkowski ◽  
Paul Maria Scheikl ◽  
...  

Abstract
Background: We demonstrate the first self-learning, context-sensitive, autonomous camera-guiding robot applicable to minimally invasive surgery. Most surgical robots today are telemanipulators without autonomous capabilities. Autonomous systems have been developed for laparoscopic camera guidance; however, they follow simple rules and do not adapt their behaviour to specific tasks, procedures, or surgeons.
Methods: The methodology presented here allows different robot kinematics to perceive their environment, interpret it according to a knowledge base, and perform context-aware actions. For training, twenty operations were conducted with human camera guidance by a single surgeon. Subsequently, we experimentally evaluated the cognitive robotic camera control. First, a VIKY EP system and a KUKA LWR 4 robot were trained on data from manual camera guidance recorded after completion of the surgeon's learning curve. Second, only data from the VIKY EP were used to train the LWR, and finally data from training with the LWR were used to re-train the LWR.
Results: The duration of each operation decreased with the robot's increasing experience, from 1704 s ± 244 s to 1406 s ± 112 s and 1197 s. Camera guidance quality (good/neutral/poor) improved from 38.6/53.4/7.9% to 49.4/46.3/4.1% and 56.2/41.0/2.8%.
Conclusions: The cognitive camera robot improved its performance with experience, laying the foundation for a new generation of cognitive surgical robots that adapt to a surgeon's needs.

