Tactile Perception Technologies and Their Applications in Minimally Invasive Surgery: A Review

2020, Vol 11
Author(s): Chao Huang, Qizhuo Wang, Mingfu Zhao, Chunyan Chen, Sinuo Pan, ...

Minimally invasive surgery (MIS) has become the preferred surgical approach owing to its advantages over conventional open surgery. A major limitation, however, is the lack of tactile perception, which impairs surgeons' ability to distinguish tissues and perform delicate maneuvers. Many studies have equipped industrial robots to perceive various kinds of tactile information, yet in MIS only force data are widely used to restore part of the surgeon's sense of touch. In recent years, inspired by image classification technologies in computer vision, tactile data have been represented as images, with each tactile element treated as an image pixel. Processing the raw data, or features extracted from such tactile images, with artificial intelligence (AI) methods, including clustering, support vector machines (SVMs), and deep learning, has proven effective in industrial robotic tactile perception tasks. This holds great promise for exploiting richer tactile information in MIS. This review aims to identify potential tactile perception methods for MIS by surveying the literature on tactile sensing in MIS and on industrial robotic tactile perception technologies, especially AI methods applied to tactile images.
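To make the tactile-image idea concrete, the following is a minimal sketch, not code from any reviewed paper: readings from a grid of tactile elements (taxels) are treated as a small grayscale image, flattened into a feature vector, and classified with an SVM, one of the AI methods cited above. The 8x8 grid size, the binary tissue labels, and the randomly generated data are all illustrative assumptions.

```python
# Minimal sketch: tactile readings as images, classified with an SVM.
# Grid size, labels, and data are illustrative assumptions, not from the review.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Each sample: an 8x8 grid of taxel pressures, flattened to 64 features,
# with a placeholder tissue label (e.g., 0 = soft tissue, 1 = stiff inclusion).
n_samples = 200
X = rng.random((n_samples, 8 * 8))       # placeholder tactile "images"
y = rng.integers(0, 2, size=n_samples)   # placeholder tissue labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf")                  # SVM, one of the methods cited above
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

With real sensor data, the flattened taxel grid would simply replace the random array; deep-learning variants instead keep the 2D layout and feed it to a convolutional network.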

2020, Vol 7 (7), pp. 2103
Author(s): Yoshihisa Matsunaga, Ryoichi Nakamura

Background: Abdominal cavity irrigation is a less invasive approach than gas insufflation. Minimally invasive surgery improves patients' quality of life but demands greater skill from surgeons. This study therefore aimed to reduce that burden by assisting and automating the hemostatic procedure, a highly frequent task, taking advantage of the clarity of endoscopic images and of continuous bleeding-point observation in liquid. We aimed to construct a method for detecting organs, bleeding sites, and hemostasis regions.

Methods: We developed a method for real-time detection based on machine learning using laparoscopic videos. Our training dataset was prepared from three experiments in pigs. A linear support vector machine was applied using new color feature descriptors. To verify the accuracy of the classifier, we performed five-fold cross-validation, and we measured the classification processing time to verify the real-time property. Furthermore, we visualized the time-series class changes of the surgical field during the hemostatic procedure.

Results: The accuracy of our classifier was 98.3%, and the processing time was low enough for real-time operation. Furthermore, the changes in the bleeding region under ablation and in the hemostasis regions under tissue coagulation made it possible to quantitatively indicate the completion of the hemostatic procedure.

Conclusions: Classifying organs, bleeding sites, and hemostasis regions was useful for assisting and automating the hemostatic procedure in liquid. Our method can be adapted to further hemostatic procedures.
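The pipeline described above maps naturally onto a standard toolchain. The sketch below is an illustrative reconstruction under stated assumptions, not the authors' code: generic per-channel color histograms stand in for the paper's color feature descriptors (which are not reproduced here), a linear SVM is trained on them, and accuracy is scored with five-fold cross-validation on synthetic image patches.

```python
# Illustrative reconstruction, not the authors' code: color features +
# linear SVM + five-fold cross-validation, on synthetic stand-in data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(42)

def color_histogram(patch, bins=8):
    """Concatenated per-channel histogram of an RGB patch (a generic
    color descriptor standing in for the paper's descriptors)."""
    return np.concatenate(
        [np.histogram(patch[..., c], bins=bins, range=(0, 256))[0]
         for c in range(3)]
    )

# Synthetic stand-ins for labelled patches: 0 = organ, 1 = bleeding site,
# 2 = hemostasis region (labels are the paper's three classes).
patches = rng.integers(0, 256, size=(300, 16, 16, 3))
labels = rng.integers(0, 3, size=300)

X = np.array([color_histogram(p) for p in patches], dtype=float)

scores = cross_val_score(LinearSVC(dual=False), X, labels, cv=5)
print("5-fold accuracies:", np.round(scores, 3))
```

A linear kernel keeps per-frame inference cheap, which is consistent with the paper's real-time requirement.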


Author(s): Kenoki Ohuchida, Makoto Hashizume

Recently, robotic systems have been developed in the biomedical field to support minimally invasive surgery, whose popularity has surged rapidly because of endoscopic procedures. In endoscopic surgery, surgical procedures are performed within a body cavity and visualized with laparoscopy or thoracoscopy. Since the first laparoscopic cholecystectomy was performed in 1987, the indications for endoscopic procedures have continuously expanded, and endoscopic surgery is now the standard for an increasing number of operations. Advances in laparoscopic surgery have led to less postoperative pain, shorter hospital stays, and an earlier return to work for many patients. However, performing laparoscopic procedures requires several skills never needed in conventional open surgery: the surgeon must coordinate eyes and hands, become adept with long-shaft instruments, and mentally reconstruct a three-dimensional scene from a two-dimensional view. Because learning such skills is stressful for most surgeons, performing a laparoscopic procedure is more physically and mentally demanding than performing an open one.


1998, Vol 114, pp. A1408
Author(s): M. MacFarlane, J. Rosen, B. Hannaford, C. Pellegrini, M. Sinanan

Spine, 2017, Vol 42 (10), pp. 789-797
Author(s): Nils Hansen-Algenstaedt, Mun Keong Kwan, Petra Algenstaedt, Chee Kidd Chiu, Lennart Viezens, ...

Sensors, 2021, Vol 21 (16), pp. 5412
Author(s): Gábor Lajkó, Renáta Nagyné Elek, Tamás Haidegger

Objective skill-assessment-based personal performance feedback is a vital part of surgical training. Either kinematic data, acquired through surgical robotic systems, sensors mounted on tooltips, or wearable sensors, or visual input data can be employed for objective, algorithm-driven skill assessment. Kinematic data have been successfully linked with the expertise of surgeons performing Robot-Assisted Minimally Invasive Surgery (RAMIS) procedures, but for traditional, manual Minimally Invasive Surgery (MIS) they are not readily available. Evaluation methods based on 3D visual features tend to outperform 2D methods, but their utility is limited and ill suited to MIS training; our proposed solution therefore relies on 2D features. The application of additional sensors can potentially enhance the performance of either approach. This paper introduces a general 2D image-based solution that enables the creation and application of surgical skill assessment in any training environment. The 2D features were processed with the feature extraction techniques of a previously published benchmark to assess the attainable accuracy. We relied on the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS), co-developed by Johns Hopkins University and Intuitive Surgical Inc.; using this well-established dataset allows different feature extraction techniques to be evaluated comparatively. The algorithm reached up to 95.74% accuracy in individual trials. The highest mean accuracy, averaged over five cross-validation trials, was 83.54% for the surgical subtask of Knot-Tying, 84.23% for Needle-Passing, and 81.58% for Suturing. The proposed method measured well against the state of the art in 2D visual skill assessment, with more than 80% accuracy for all three surgical subtasks available in JIGSAWS (Knot-Tying, Suturing, and Needle-Passing). Classification accuracy could be improved further by introducing new visual features, such as image-based orientation and image-based collision detection, or, on the evaluation side, by employing other Support Vector Machine kernels, tuning the hyperparameters, or using other classification methods (e.g., boosted trees). We showed the potential of optical flow as an input for RAMIS skill assessment, highlighting the maximum accuracy achievable with these data by independently evaluating each method of an established skill assessment benchmark. The highest-performing method, the Residual Neural Network, reached mean accuracies of 81.89%, 84.23%, and 83.54% for the skills of Suturing, Needle-Passing, and Knot-Tying, respectively.
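As an illustration of the optical-flow input discussed above, here is a minimal sketch, not the benchmark's code: dense Farnebäck optical flow is computed between consecutive frames of a training video, and its magnitude and angle statistics are pooled into a clip-level feature vector that a downstream classifier (an SVM or the Residual Neural Network mentioned above, for instance) could map to a skill level. The video path in the usage comment is hypothetical.

```python
# Minimal sketch of optical-flow features for skill assessment;
# not the JIGSAWS benchmark code. The video path is hypothetical.
import cv2
import numpy as np

def flow_features(video_path, max_frames=100):
    """Mean/std of dense optical-flow magnitude and angle over a clip."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        raise IOError(f"could not read {video_path}")
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    mags, angs = [], []
    for _ in range(max_frames):
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Farnebäck dense optical flow between consecutive frames
        flow = cv2.calcOpticalFlowFarneback(
            prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        mags.append(mag.mean())
        angs.append(ang.mean())
        prev_gray = gray
    cap.release()
    return np.array([np.mean(mags), np.std(mags),
                     np.mean(angs), np.std(angs)])

# Usage (hypothetical path):
# features = flow_features("jigsaws_suturing_trial.avi")
```

The benchmark's neural methods consume the flow fields directly; the pooled statistics here are a deliberately compact stand-in to show the shape of the pipeline.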


2004, Vol 171 (4S), pp. 448-448
Author(s): Farjaad M. Siddiq, Patrick Villicana, Raymond J. Leveillee
