Journal of Medical Robotics Research
Latest Publications


TOTAL DOCUMENTS: 103 (five years: 40)

H-INDEX: 9 (five years: 3)

Published by World Scientific

ISSN: 2424-9068, 2424-905X

Author(s):  
Sai Wang ◽  
Qi He ◽  
Ping Zhang ◽  
Xin Chen ◽  
Siyang Zuo

In this paper, we compared the performance of several neural networks in the classification of early gastric cancer (EGC) images and proposed a method of converting the output values of the network into a heat map used to localize the lesion. The algorithm was improved using transfer learning and fine-tuning. On the test set, accuracy reached 0.72, sensitivity 0.67, specificity 0.77, and precision 0.78. The experimental results show the potential to meet clinical demands for automatic detection of gastric lesions.
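
As a rough illustration of the transfer-learning and heat-map idea described above, the following Python sketch fine-tunes an ImageNet-pretrained backbone for two-class EGC screening and projects the final convolutional features through the classifier weights to obtain a coarse lesion heat map. The backbone (ResNet-18), the frozen layers, and the CAM-style map are assumptions made for illustration; the paper compares several networks and does not specify this exact pipeline.

```python
# Hypothetical sketch: transfer learning for binary EGC classification plus a
# simple class-activation heat map for lesion localization. Model choice
# (ResNet-18), frozen layers, and hyperparameters are assumptions, not the
# authors' exact setup.
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained backbone and replace the classification head.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # EGC vs. normal

# Fine-tuning: freeze early layers, train only the last block and the new head.
for name, param in backbone.named_parameters():
    param.requires_grad = name.startswith(("layer4", "fc"))

optimizer = torch.optim.Adam(
    (p for p in backbone.parameters() if p.requires_grad), lr=1e-4
)
criterion = nn.CrossEntropyLoss()

def heat_map(model, image):
    """Project the final conv features through the classifier weights (CAM)
    to obtain a coarse lesion-localization map for the predicted class."""
    model.eval()
    features = {}
    hook = model.layer4.register_forward_hook(
        lambda m, i, o: features.setdefault("maps", o)
    )
    with torch.no_grad():
        logits = model(image.unsqueeze(0))           # (1, 2)
    hook.remove()
    cls = logits.argmax(dim=1).item()
    fmap = features["maps"][0]                       # (C, H, W)
    weights = model.fc.weight[cls]                   # (C,)
    cam = torch.relu(torch.einsum("c,chw->hw", weights, fmap))
    return cam / (cam.max() + 1e-8)                  # normalized heat map
```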


Author(s):  
Mohammad Fattahi Sani ◽  
Raimondo Ascione ◽  
Sanja Dogramadzi

Purpose: Recent developments in robotics and artificial intelligence (AI) have led to significant advances in healthcare technologies, enhancing robot-assisted minimally invasive surgery (RAMIS) in some surgical specialties. However, current human–robot interfaces lack intuitive teleoperation and cannot mimic the surgeon’s hand/finger sensing required for fine-motion micro-surgery. These limitations make teleoperated robotic surgery less suitable for, e.g., cardiac surgery, and difficult for established surgeons to learn. We report a pilot study showing an intuitive way of recording and mapping the surgeon’s gross hand motion and fine synergic motion during cardiac micro-surgery as a way to enhance future intuitive teleoperation. Methods: We set out to develop a prototype system able to train a Deep Neural Network (DNN) by mapping wrist, hand, and surgical tool real-time data acquisition (RTDA) inputs during mock-up heart micro-surgery procedures. The trained network was used to estimate the tool’s pose from refined hand joint angles. The network outputs were the surgical tool orientation and jaw angle, with ground truth acquired by an optical motion capture system. Results: Based on the surgeon’s feedback during mock micro-surgery, the developed wearable system with lightweight motion-tracking sensors did not interfere with the surgery or instrument handling. The wearable motion-tracking system used 12 finger/thumb/wrist joint-angle sensors to generate meaningful datasets serving as DNN inputs, with new hand joint angles added as necessary by comparing the estimated tool poses against the measured tool pose. The DNN architecture was optimized for the highest estimation accuracy, i.e., the ability to determine the tool pose with the least mean squared error. This novel approach showed that the surgical instrument’s pose, an essential requirement for teleoperation, can be accurately estimated from the recorded surgeon’s hand/finger movements with a mean squared error (MSE) of less than 0.3%. Conclusion: We have developed a system to capture fine movements of the surgeon’s hand during micro-surgery that could enhance future remote teleoperation of similar surgical tools. More work is needed to refine this approach and confirm its potential role in teleoperation.
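
A minimal sketch of the hand-to-tool mapping described above: a small fully connected network regressing tool orientation (as a unit quaternion) and jaw angle from the 12 wrist/finger joint-angle channels, trained against motion-capture ground truth with an MSE loss. The layer sizes and the quaternion output parameterization are assumptions; the paper only states that the DNN architecture was tuned for the lowest estimation error.

```python
# Illustrative sketch, not the authors' architecture: an MLP mapping 12 glove
# joint-angle channels to tool orientation (unit quaternion) plus jaw angle.
import torch
import torch.nn as nn

class HandToToolPose(nn.Module):
    def __init__(self, n_joints: int = 12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_joints, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 5),        # 4 quaternion components + 1 jaw angle
        )

    def forward(self, joint_angles: torch.Tensor) -> torch.Tensor:
        out = self.net(joint_angles)
        quat = nn.functional.normalize(out[..., :4], dim=-1)  # unit quaternion
        jaw = out[..., 4:]
        return torch.cat([quat, jaw], dim=-1)

model = HandToToolPose()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()   # pose error against motion-capture ground truth

def train_step(joint_angles, tool_pose_gt):
    """One optimization step on a batch of 12-channel glove readings."""
    optimizer.zero_grad()
    loss = loss_fn(model(joint_angles), tool_pose_gt)
    loss.backward()
    optimizer.step()
    return loss.item()
```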


Author(s):  
Yuta Itabashi ◽  
Fumihiko Nakamura ◽  
Hiroki Kajita ◽  
Hideo Saito ◽  
Maki Sugimoto

This work presents a method for identifying surgical field states using time-of-flight (ToF) sensors mounted on a surgical light. Understanding the state of the surgical field is important in a smart surgical room. In this study, we aimed to identify surgical field states using 28 ToF sensors installed on a surgical light. In the experiments, we obtained a sensor dataset by varying the number of people under the surgical light as well as their posture and movement state. The identification accuracy of the proposed system was evaluated by applying machine learning techniques. The system can be realized simply by attaching ToF sensors to the surface of an existing surgical light.
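
For illustration only, the sketch below treats each frame as a 28-dimensional vector of ToF distance readings labeled with a surgical-field state and evaluates a classifier by cross-validation. The random-forest model and the synthetic placeholder data are assumptions; the abstract does not name the specific machine learning techniques applied.

```python
# Hypothetical sketch: classify surgical-field states from the 28 ToF readings
# of the surgical light. The classifier choice and the synthetic data below
# are assumptions for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_sensors = 500, 28

# Placeholder dataset: 28 ToF distances (mm) per frame and a state label
# (e.g., number of people, posture, movement state).
X = rng.uniform(300, 1500, size=(n_samples, n_sensors))
y = rng.integers(0, 4, size=n_samples)   # e.g., 4 surgical-field states

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f} ± {scores.std():.2f}")
```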


Author(s):  
Keitaro Yoshida ◽  
Ryo Hachiuma ◽  
Hisako Tomita ◽  
Jingjing Pan ◽  
Kris Kitani ◽  
...  

Author(s):  
Nicholas E. Pacheco ◽  
Joshua B. Gafford ◽  
Mostafa A. Atalla ◽  
Robert J. Webster III ◽  
Loris Fichera

2021 ◽  
pp. 2140003
Author(s):  
Yun-Hsuan Su ◽  
Kevin Huang ◽  
Blake Hannaford

While robot-assisted minimally invasive surgery (RMIS) procedures afford a variety of benefits over open surgery and manual laparoscopic operations (including increased tool dexterity; reduced patient pain, incision size, trauma, and recovery time; and lower infection rates [1]), lack of spatial awareness remains an issue. Typical laparoscopic imaging can lack sufficient depth cues, and haptic feedback, if provided, rarely reflects realistic tissue–tool interactions. This work is part of a larger ongoing research effort to reconstruct 3D surfaces from multiple viewpoints in RMIS to increase visual perception. Manual placement and adjustment of multicamera systems in RMIS are nonideal and prone to error [2], while other autonomous approaches focus on tool tracking and do not consider reconstruction of the surgical scene [3-5]. The group’s previous work investigated a novel, context-aware autonomous camera positioning method [6], which incorporated both tool location and scene coverage for multiple camera viewpoint adjustments. In this paper, the authors expand upon that prior work by implementing a streamlined deep reinforcement learning approach that operates between the optimal viewpoints calculated by the prior method [6] and encourages discovery of otherwise unobserved, additional camera viewpoints. Combining the framework and robustness of the previous work with the efficiency and additional viewpoints of the augmentations presented here results in improved performance and scene coverage, showing promise for a real-time implementation.
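
The following is a schematic sketch, not the authors' implementation, of the kind of deep reinforcement learning loop described above: a DQN-style agent chooses among discrete camera-viewpoint adjustments to maximize a reward reflecting tool visibility and scene coverage. The state, action, and reward definitions and the network size are illustrative assumptions.

```python
# Schematic sketch of a deep RL viewpoint-selection loop. The state features,
# action set, reward, and DQN-style update are assumptions for illustration,
# not the authors' exact design.
import random
import torch
import torch.nn as nn

N_STATE = 10      # e.g., tool pose features + current coverage statistics
N_ACTIONS = 6     # e.g., +/- adjustments along three viewpoint parameters

q_net = nn.Sequential(
    nn.Linear(N_STATE, 64), nn.ReLU(),
    nn.Linear(64, N_ACTIONS),
)
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, epsilon = 0.99, 0.1

def select_action(state):
    """Epsilon-greedy choice over camera-viewpoint adjustments."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(state).argmax())

def td_update(state, action, reward, next_state):
    """One temporal-difference step toward the bootstrapped Q target."""
    q_value = q_net(state)[action]
    with torch.no_grad():
        target = reward + gamma * q_net(next_state).max()
    loss = nn.functional.mse_loss(q_value, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Example interaction step with placeholder tensors.
s, s_next = torch.rand(N_STATE), torch.rand(N_STATE)
a = select_action(s)
td_update(s, a, reward=1.0, next_state=s_next)
```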


Author(s):  
Daniel Enrique Martinez ◽  
Waiman Meinhold ◽  
John Oshinski ◽  
Ai-Ping Hu ◽  
Jun Ueda

Author(s):  
Emmanouil Dimitrakakis ◽  
George Dwyer ◽  
Lukas Lindenroth ◽  
Petros Giataganas ◽  
Neil L. Dorward ◽  
...  
