Accurate instance segmentation of surgical instruments in robotic surgery: model refinement and cross-dataset evaluation

Author(s):  
Xiaowen Kong ◽  
Yueming Jin ◽  
Qi Dou ◽  
Ziyi Wang ◽  
Zerui Wang ◽  
...  
2020 ◽  
Vol 6 (3) ◽  
pp. 571-574
Author(s):  
Anna Schaufler ◽  
Alfredo Illanes ◽  
Ivan Maldonado ◽  
Axel Boese ◽  
Roland Croner ◽  
...  

Abstract
In robot-assisted procedures, the surgeon controls the surgical instruments from a remote console while visually monitoring the procedure through the endoscope. No haptic feedback is available to the surgeon, which impedes the assessment of diseased tissue and the detection of hidden structures beneath the tissue, such as vessels. Only visual cues are available to the surgeon to control the force applied to the tissue by the instruments, which poses a risk of iatrogenic injuries. Additional information on the haptic interactions between the employed instruments and the treated tissue, provided to the surgeon during robotic surgery, could compensate for this deficit. Acoustic emissions (AE) from the instrument/tissue interactions, transmitted by the instrument, are a potential source of this information. AE can be recorded by audio sensors that do not have to be integrated into the instruments, but can be modularly attached to the outside of the instrument's shaft or enclosure. The location of the sensor on a robotic system is essential for the applicability of the concept in real situations. While the signal strength of the acoustic emissions decreases with distance from the point of interaction, an installation close to the patient would require sterilization measures. The aim of this work is to investigate whether it is feasible to install the audio sensor in non-sterile areas far away from the patient and still receive useful AE signals. To determine whether signals can be recorded at different potential mounting locations, instrument/tissue interactions with different textures were simulated in an experimental setup. The results showed that meaningful and valuable AE can be recorded in the non-sterile area of a robotic surgical system despite the expected signal losses.
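The trade-off the abstract describes, signal strength decreasing with sensor distance, is commonly quantified as attenuation in decibels between two recording positions. A minimal sketch, using synthetic signals and hypothetical sensor placements (not data from the study), of comparing root-mean-square (RMS) energy at a near and a far mounting location:

```python
# Hypothetical sketch: comparing AE signal strength at two mounting
# locations via root-mean-square (RMS) energy. The synthetic signals
# and the attenuation factor are illustrative, not measured values.
import math

def rms(samples):
    """Root-mean-square energy of an audio sample window."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# Synthetic AE burst: the distant (non-sterile) sensor sees an
# attenuated copy of the signal recorded near the interaction point.
near_sensor = [math.sin(0.3 * n) for n in range(256)]
far_sensor = [0.2 * s for s in near_sensor]  # assumed 5x amplitude loss

attenuation_db = 20 * math.log10(rms(near_sensor) / rms(far_sensor))
print(f"attenuation: {attenuation_db:.1f} dB")  # ~14 dB
```

Whether the far signal remains "useful" then depends on this attenuation relative to the noise floor at the mounting point, which is what the experimental setup probes.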


2016 ◽  
Vol 38 (2) ◽  
pp. 143-146 ◽  
Author(s):  
Yuhei Saito ◽  
Hiroshi Yasuhara ◽  
Satoshi Murakoshi ◽  
Takami Komatsu ◽  
Kazuhiko Fukatsu ◽  
...  

BACKGROUND
Recently, robotic surgery has been introduced in many hospitals. The structure of robotic instruments is so complex that updating their cleaning methods is a challenge for healthcare professionals. However, there is limited information on the effectiveness of cleaning for instruments for robotic surgery.
OBJECTIVE
To determine the level of residual contamination of instruments for robotic surgery and to develop a method to evaluate the cleaning efficacy for complex surgical devices.
METHODS
Surgical instruments were collected immediately after operations and/or after in-house cleaning, and the level of residual protein was measured. Three serial measurements were performed on instruments after cleaning to determine the changes in the level of contamination and the total amount of residual protein. The study took place from September 1, 2013, through June 30, 2015, in Japan.
RESULTS
The amount of protein released from robotic instruments declined exponentially. The amount after in-house cleaning was 650, 550, and 530 µg/instrument in the 3 serial measurements. The overall level of residual protein in each measurement was much higher for robotic instruments than for ordinary instruments (P<.0001).
CONCLUSIONS
Our data demonstrated that complete removal of residual protein from surgical instruments is virtually impossible. The pattern of decline differed depending on the instrument type, which reflected the complex structure of the instruments. It might be necessary to establish a new standard for cleaning using a novel classification according to the structural complexity of instruments, especially for those for robotic surgery.
Infect Control Hosp Epidemiol 2017;38:143–146
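The abstract reports that residual protein declined exponentially across the three serial measurements (650, 550, 530 µg/instrument). A minimal sketch of how such a decline could be fit with a single-exponential model via log-linear least squares; the model choice is illustrative, not the authors' published analysis:

```python
# Sketch: fitting y = a * exp(-b * k) to the serial residual-protein
# measurements quoted in the abstract (650, 550, 530 µg/instrument).
# Log-linear least squares; model and fit are illustrative only.
import math

measurements = [650.0, 550.0, 530.0]  # µg/instrument, cycles 0..2
xs = list(range(len(measurements)))
ys = [math.log(m) for m in measurements]

n = len(xs)
x_mean = sum(xs) / n
y_mean = sum(ys) / n
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
         / sum((x - x_mean) ** 2 for x in xs))
a = math.exp(y_mean - slope * x_mean)  # amplitude (µg/instrument)
b = -slope                             # per-cycle decay rate

print(f"fit: y ≈ {a:.0f} * exp(-{b:.3f} * k)")
```

A small per-cycle decay rate like the one recovered here is consistent with the abstract's conclusion that complete removal of residual protein is virtually impossible: each additional cleaning cycle removes only a modest fraction of what remains.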


2020 ◽  
Vol 9 (6) ◽  
pp. 1964
Author(s):  
Dongheon Lee ◽  
Hyeong Won Yu ◽  
Hyungju Kwon ◽  
Hyoun-Joong Kong ◽  
Kyu Eun Lee ◽  
...  

As the number of robotic surgery procedures has increased, so has the importance of evaluating surgical skills in these techniques. It is difficult, however, to evaluate surgical skills during robotic surgery automatically and quantitatively, as these skills are primarily associated with the movement of surgical instruments. This study proposes a deep learning-based surgical instrument tracking algorithm to evaluate surgeons’ skills in performing robotic surgery procedures. This method overcame two main drawbacks: occlusion and maintenance of the identity of the surgical instruments. In addition, surgical skill prediction models were developed using motion metrics calculated from the motion of the instruments. The tracking method was applied to 54 video segments and evaluated by root mean squared error (RMSE), area under the curve (AUC), and Pearson correlation analysis. The RMSE was 3.52 mm; the AUCs at thresholds of 1 mm, 2 mm, and 5 mm were 0.7, 0.78, and 0.86, respectively; and Pearson’s correlation coefficients were 0.9 on the x-axis and 0.87 on the y-axis. The surgical skill prediction models showed an accuracy of 83% with the Objective Structured Assessment of Technical Skill (OSATS) and the Global Evaluative Assessment of Robotic Surgery (GEARS). The proposed method was able to track instruments during robotic surgery, suggesting that the current method of surgical skill assessment by surgeons can be replaced by the proposed automatic and quantitative evaluation method.
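The two tracking-accuracy metrics quoted above, RMSE and Pearson correlation between tracked and ground-truth trajectories, can be sketched in a few lines; the trajectories here are synthetic placeholders, not data from the study:

```python
# Sketch of the evaluation metrics named in the abstract: RMSE and
# Pearson correlation between a tracked instrument-tip trajectory and
# ground truth. Trajectories are synthetic placeholders.
import math

def rmse(pred, truth):
    """Root mean squared error over paired coordinates."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(pred))

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

truth_x = [float(i) for i in range(10)]   # ground-truth x positions, mm
pred_x = [t + 0.5 for t in truth_x]       # tracker with constant 0.5 mm bias

print(rmse(pred_x, truth_x))    # 0.5
print(pearson(pred_x, truth_x)) # ≈ 1.0 (a constant offset preserves correlation)
```

Note how the two metrics are complementary: a constant tracking bias inflates RMSE but leaves the correlation near 1, which is why the study reports both.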


Author(s):  
Thomas Kurmann ◽  
Pablo Márquez-Neila ◽  
Max Allan ◽  
Sebastian Wolf ◽  
Raphael Sznitman

Abstract
Purpose
The detection and segmentation of surgical instruments has been a vital step for many applications in minimally invasive surgical robotics. Previously, the problem was tackled from a semantic segmentation perspective, yet these methods fail to provide good segmentation maps of instrument types and do not contain any information on the instance affiliation of each pixel. We propose to overcome this limitation by using a novel instance segmentation method which first masks instruments and then classifies them into their respective type.
Methods
We introduce a novel method for instance segmentation where a pixel-wise mask of each instance is found prior to classification. An encoder–decoder network is used to extract instrument instances, which are then separately classified using the features of the previous stages. Furthermore, we present a method to incorporate instrument priors from surgical robots.
Results
Experiments are performed on the robotic instrument segmentation dataset of the 2017 endoscopic vision challenge. We perform a fourfold cross-validation and show an improvement of over 18% to the previous state-of-the-art. Furthermore, we perform an ablation study which highlights the importance of certain design choices and observe an increase of 10% over semantic segmentation methods.
Conclusions
We have presented a novel instance segmentation method for surgical instruments which outperforms previous semantic segmentation-based methods. Our method further provides a more informative output of instance level information, while retaining a precise segmentation mask. Finally, we have shown that robotic instrument priors can be used to further increase the performance.
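The "mask first, classify second" structure can be illustrated with a toy pipeline: separate a binary foreground mask into per-instance masks (here via connected components, standing in for the paper's encoder–decoder stage), then classify each instance independently. The classifier below is a placeholder rule on instance area, not the paper's network:

```python
# Minimal sketch of instance-then-classify: extract instrument
# instances from a binary mask via 4-connected components (flood
# fill), then classify each instance separately. The size-based
# "classifier" is a placeholder, not the paper's method.
def connected_components(mask):
    """Label 4-connected foreground regions of a 2D 0/1 grid."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not labels[i][j]:
                current += 1
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and mask[y][x] and not labels[y][x]:
                        labels[y][x] = current
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels, current

def classify(area):
    """Placeholder second stage: instrument type from instance size."""
    return "large-instrument" if area >= 4 else "small-instrument"

mask = [
    [1, 1, 0, 0, 1],
    [1, 1, 0, 0, 1],
    [0, 0, 0, 0, 0],
]
labels, n = connected_components(mask)
areas = {k: sum(row.count(k) for row in labels) for k in range(1, n + 1)}
types = {k: classify(a) for k, a in areas.items()}
print(n, areas, types)
```

The key property, and the one the abstract argues semantic segmentation lacks, is that each pixel carries an instance label, so two touching instruments of the same type remain distinguishable.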


2008 ◽  
Author(s):  
Hermann Mayer ◽  
Darius Burschka ◽  
Alois Knoll ◽  
Eva Ulla Braun ◽  
Rüdiger Lange ◽  
...  

At the German Heart Center Munich we have installed and evaluated a novel system for robotic surgery. Its main features are the incorporation of haptics (by means of strain gauge sensors on the instruments) and the partial automation of surgical tasks. In this paper, however, we focus on the software engineering aspects of the system. We present a hierarchical approach inspired by the modular architecture of the hardware. Each component of the system, and therefore each component of the control software, can easily be exchanged for another instance (e.g., different types of robots may be employed to carry the surgical instruments). All operations are abstracted by an intuitive user interface that provides a high level of transparency. In addition, we have included techniques known from character animation (so-called key-framing) to enable operation of the system by users with non-engineering backgrounds. The introduced concepts proved effective during an extensive evaluation with 30 surgeons, in which the system was used to conduct simplified operations in the field of heart surgery, including the replacement of a papillary tendon and the occlusion of an atrial septal defect.
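Key-framing, as borrowed from character animation, stores a sparse set of poses and interpolates between them. A minimal sketch with a hypothetical pose representation (instrument-tip positions in mm; none of these values come from the system described):

```python
# Sketch of the key-framing idea from character animation: linearly
# interpolating an instrument pose between two stored key frames.
# Pose representation and key values are illustrative assumptions.
def lerp_pose(pose_a, pose_b, t):
    """Linear interpolation between two poses for 0 <= t <= 1."""
    return [a + t * (b - a) for a, b in zip(pose_a, pose_b)]

# Key frames: (x, y, z) positions of an instrument tip, in mm.
key_start = [0.0, 10.0, 5.0]
key_end = [4.0, 10.0, 9.0]

midpoint = lerp_pose(key_start, key_end, 0.5)
print(midpoint)  # [2.0, 10.0, 7.0]
```

For a non-engineer operator, specifying a handful of such key poses and letting the system interpolate is far more approachable than programming full trajectories, which is presumably why the authors adopted the technique.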


2011 ◽  
Vol 5 (5) ◽  
pp. 738-745
Author(s):  
Tsubasa Yonemura ◽  
Yasuhide Kozuka ◽  
Young Min Baek ◽  
Naohiko Sugita ◽  
...  

Performing microsurgery in the field of neurosurgery is very challenging because neurosurgeons have to suture fine vessels by maneuvering long, thin surgical instruments inserted through a small hole in the skull. To assist neurosurgeons, a novel master–slave surgical robotic system has been developed. Its objective is to assist neurosurgeons in performing microsurgery in deep surgical fields by providing high dexterity. However, a method of correspondence between the master and slave manipulators has not yet been studied, even though it is strongly related to the operability and usability of robotic surgery. In this paper, we propose two pose correspondence methods for the master and slave manipulators, axis-based relative pose correspondence and vector-based absolute pose correspondence, and verify their usability and operability through pointing and suturing tasks. The experimental results show that there is a trade-off between the two correspondence methods in terms of time, length of trajectory, and the singular point problem.
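The distinction between relative and absolute correspondence can be sketched in its simplest position-only form: a relative scheme applies scaled master increments to the slave's current position, while an absolute scheme sets the slave position directly from the scaled master pose. This is a generic illustration of the two families, not the paper's axis-based or vector-based formulations, and the scaling factor and poses are hypothetical:

```python
# Illustrative sketch (position-only) of relative vs. absolute
# master-slave correspondence. SCALE and all poses are hypothetical;
# the paper's axis-based/vector-based methods also handle orientation.
SCALE = 0.2  # assumed motion scaling from master to slave workspace

def relative_step(slave_pos, master_delta):
    """Relative correspondence: apply scaled master increments."""
    return [s + SCALE * d for s, d in zip(slave_pos, master_delta)]

def absolute_map(master_pos, slave_origin):
    """Absolute correspondence: slave mirrors the scaled master pose."""
    return [o + SCALE * m for m, o in zip(master_pos, slave_origin)]

slave = [0.0, 0.0, 0.0]
slave = relative_step(slave, [1.0, 0.0, 0.0])   # master moved 1 unit in x
print(slave)                                     # [0.2, 0.0, 0.0]
print(absolute_map([5.0, 0.0, 0.0], [0.0, 0.0, 0.0]))  # [1.0, 0.0, 0.0]
```

The trade-off the abstract reports is visible even here: a relative scheme lets the operator re-index (clutch) to stay away from workspace limits and singularities but accumulates motion over time, while an absolute scheme keeps an intuitive one-to-one mapping at the cost of being tied to the master's absolute pose.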

