An Evaluation of Inanimate and Virtual Reality Training for Psychomotor Skill Development in Robot-Assisted Minimally Invasive Surgery

2020 · Vol 2 (2) · pp. 118-129
Author(s): Guido Caccianiga, Andrea Mariani, Elena De Momi, Gabriela Cantarero, Jeremy D. Brown

Author(s): Wen Qi, Hang Su, Ke Fan, Ziyang Chen, Jiehao Li, ...

The widespread adoption of robot-assisted minimally invasive surgery (RAMIS) promotes human-machine interaction (HMI). Recognizing surgeons' behaviors, including hand gestures and whole-body activities, can enhance RAMIS procedures performed with a redundant robot, bridging intelligent robot control and activity-recognition strategies in the operating room. In this paper, to improve recognition in dynamic situations, we propose a multimodal data fusion framework that combines multiple information sources for greater accuracy. First, a multi-sensor hardware architecture is designed to capture heterogeneous data from several devices, including a depth camera and a smartphone. Furthermore, the robot control mechanism can switch automatically between different surgical tasks. The experimental results demonstrate the effectiveness of the multimodal framework for RAMIS by comparison with a single-sensor system. An implementation on the KUKA LWR4+ in a surgical-robot environment indicates that surgical robot systems can work alongside medical staff in the future.
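The core idea of combining modalities for recognition accuracy can be illustrated with a minimal late-fusion sketch. The gesture labels, modality weights, and confidence scores below are illustrative assumptions, not values from the paper; the paper's actual fusion architecture is not specified here.

```python
import numpy as np

# Hypothetical gesture classes for a surgical HMI scenario.
GESTURES = ["grasp", "cut", "suture", "idle"]

def fuse_scores(depth_scores, imu_scores, w_depth=0.6, w_imu=0.4):
    """Weighted late fusion of two per-modality confidence vectors
    (e.g., a depth-camera classifier and a smartphone-IMU classifier)."""
    depth = np.asarray(depth_scores, dtype=float)
    imu = np.asarray(imu_scores, dtype=float)
    # Normalize each modality so the fusion weights are comparable.
    depth = depth / depth.sum()
    imu = imu / imu.sum()
    fused = w_depth * depth + w_imu * imu
    return GESTURES[int(np.argmax(fused))], fused

# The depth camera is confident about "cut"; the IMU slightly favors
# "suture". Weighted fusion resolves the disagreement.
gesture, fused = fuse_scores([0.1, 0.7, 0.15, 0.05], [0.1, 0.3, 0.5, 0.1])
print(gesture)  # -> cut
```

A single noisy sensor can misclassify on its own; weighting and summing normalized confidences is one simple way such a framework can outperform a single-sensor system.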

