Ensemble deep learning for the prediction of proficiency at a virtual simulator for robot-assisted surgery

Author(s):  
Andrea Moglia
Luca Morelli
Roberto D’Ischia
Lorenzo Maria Fatucchi
Valentina Pucci
...  

Abstract
Background: Artificial intelligence (AI) has the potential to enhance patient safety in surgery, and all aspects of surgery, including education and training, stand to benefit considerably from AI. In the present study, deep-learning models were used to predict the rate at which trainees acquire proficiency in robot-assisted surgery (RAS), thereby giving surgical program directors information on trainees' innate ability and facilitating the implementation of flexible, personalized training.
Methods: 176 medical students, without prior experience with surgical simulators, were trained to reach proficiency in five tasks on a virtual simulator for RAS. Ensemble deep neural network (DNN) models were developed and compared with other ensemble AI algorithms, i.e., random forests and gradient-boosted regression trees (GBRT).
Results: DNN models achieved higher accuracy than random forests and GBRT in predicting time to proficiency: 0.84 vs. 0.70 and 0.77, respectively (Peg board 2); 0.83 vs. 0.79 and 0.78 (Ring walk 2); 0.81 vs. 0.81 and 0.80 (Match board 1); 0.79 vs. 0.75 and 0.71 (Ring and rail 2); and 0.87 vs. 0.86 and 0.84 (Thread the rings 2). Ensemble DNN models also outperformed random forests and GBRT in predicting the number of attempts to proficiency, with an accuracy of 0.87 vs. 0.86 and 0.83, respectively (Peg board 2); 0.89 vs. 0.88 and 0.89 (Ring walk 2); 0.91 vs. 0.89 and 0.89 (Match board 1); 0.89 vs. 0.87 and 0.83 (Ring and rail 2); and 0.96 vs. 0.94 and 0.94 (Thread the rings 2).
Conclusions: Ensemble DNN models can identify at an early stage the rate at which trainees acquire surgical technical proficiency and can flag those struggling to reach the required proficiency level.
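For readers who want a concrete feel for this kind of model comparison, the sketch below (illustrative only, not the study's code; the data and hyperparameters are synthetic stand-ins for the simulator metrics) cross-validates a soft-voting ensemble of small neural networks against a random forest and GBRT:

```python
# Minimal sketch (not the authors' implementation): an averaged ensemble of
# small neural networks vs. random forests and gradient-boosted trees, in the
# spirit of the study's comparison. Data and settings are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for early simulator metrics of 176 trainees.
X, y = make_classification(n_samples=176, n_features=20, random_state=0)

# "Ensemble DNN": soft-voting average over differently seeded networks.
dnn_ensemble = VotingClassifier(
    estimators=[(f"mlp{i}",
                 make_pipeline(StandardScaler(),
                               MLPClassifier(hidden_layer_sizes=(64, 32),
                                             max_iter=2000, random_state=i)))
                for i in range(5)],
    voting="soft",
)

models = {
    "Ensemble DNN": dnn_ensemble,
    "Random forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "GBRT": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.2f}")
```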

2018
Author(s):
Alexey A. Shvets
Alexander Rakhlin
Alexandr A. Kalinin
Vladimir I. Iglovikov

Abstract: Semantic segmentation of robotic instruments is an important problem in robot-assisted surgery. One of the main challenges is to correctly detect an instrument's position for tracking and pose estimation in the vicinity of surgical scenes. Accurate pixel-wise instrument segmentation is needed to address this challenge. In this paper we describe our deep learning-based approach to robotic instrument segmentation. Our approach improves on state-of-the-art results using several novel deep neural network architectures. It addresses the binary segmentation problem, where every pixel in an image from the surgery video feed is labeled as instrument or background. In addition, we solve a multi-class segmentation problem, in which we distinguish between different instruments, or different parts of an instrument, and the background. In this setting, our approach outperforms other methods for automatic instrument segmentation, thereby providing state-of-the-art results for these problems. The source code for our solution is made publicly available.
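The binary and multi-class settings differ mainly in how per-pixel targets and losses are defined. The following PyTorch sketch illustrates just that distinction with a toy one-layer network (the paper's actual architectures live in its published source code):

```python
# Illustrative sketch of binary vs. multi-class per-pixel segmentation losses.
# The "network" here is a toy stand-in, not one of the paper's architectures.
import torch
import torch.nn as nn

batch, h, w, n_classes = 2, 64, 64, 4  # e.g., background + 3 instrument parts

toy_net_binary = nn.Conv2d(3, 1, kernel_size=3, padding=1)         # 1 logit/pixel
toy_net_multi = nn.Conv2d(3, n_classes, kernel_size=3, padding=1)  # C logits/pixel

images = torch.randn(batch, 3, h, w)

# Binary: each pixel is instrument (1) or background (0).
binary_target = torch.randint(0, 2, (batch, 1, h, w)).float()
binary_loss = nn.BCEWithLogitsLoss()(toy_net_binary(images), binary_target)

# Multi-class: each pixel carries one of C class indices.
multi_target = torch.randint(0, n_classes, (batch, h, w))
multi_loss = nn.CrossEntropyLoss()(toy_net_multi(images), multi_target)

print(binary_loss.item(), multi_loss.item())
```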


Surgery
2020
Author(s):
Francisco Luongo
Ryan Hakim
Jessica H. Nguyen
Animashree Anandkumar
Andrew J. Hung

2020
Author(s):
Joan Torrent-Sellens
Ana Jiménez-Zarco
Francesc Saigí-Rubió

BACKGROUND: Increasingly intelligent and autonomous robots are destined to have a huge impact on our society. Their adoption, however, represents a major change to the healthcare sector's traditional practices, which, in turn, poses certain challenges. To what extent is it possible to foresee a near-future scenario in which minor routine surgery is directed by robots? And what are patients' or the general public's perceptions of having surgical procedures performed on them by robots, be it totally or partially? A patient's trust in robots and AI may facilitate the spread and use of such technologies.

OBJECTIVE: The goal of our study was to establish the factors that influence how people feel about having a medical operation performed on them by a robot.

METHODS: We used data from the European Commission's 2017 Flash Eurobarometer (number 460), covering 27,901 citizens aged 15 years and over in the 28 countries of the European Union. The research designs and tests a technology acceptance model (TAM). Logistic regression models (reporting odds ratios, OR) were fitted to predict trust in robot-assisted surgery from motivational factors, experience of robot use, and sociodemographic independent variables.

RESULTS: A negative relationship was confirmed between most of the predictors of ease of use, expected benefits, and attitude towards robots, on the one hand, and confidence in robot-assisted surgery, on the other. The only non-sociodemographic predictor with a positive relationship to trust in robots participating in a surgical intervention is previous experience of robot use. In this context, we analyzed the predictors of confidence at three different levels of robot-use experience (zero use, average use, and high use). The results indicate that, as experience of using robots increases, the predictive coefficients related to information, attitude, and perception of robots become more negative. The results also showed that sociodemographic variables played an important predictive role: the effect of experience on trust in robots for surgical interventions was greater among men, people between 40 and 54 years old, and those with higher educational levels.

CONCLUSIONS: Despite the considerable benefits that the use of robots in a surgical intervention can bring to the patient, the results show that trust in robots goes beyond rational decision-making. By contrasting the reasons that generate trust and mistrust in robots, and in particular by highlighting experience of use as a key element, the research makes a new contribution to the state of the art and draws practical implications of the use of robots for health policy and practice.
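As a schematic of this kind of analysis (not the authors' code; the column names are hypothetical placeholders for recoded Eurobarometer items), a logistic regression reporting odds ratios could be fitted like this:

```python
# Minimal sketch of modeling trust via logistic regression with odds ratios.
# Column names are hypothetical; the Eurobarometer 460 microdata would need to
# be obtained and recoded separately.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "trusts_robot_surgery": rng.integers(0, 2, n),   # 1 = would trust
    "robot_use_experience": rng.integers(0, 3, n),   # 0 none, 1 average, 2 high
    "attitude_towards_robots": rng.normal(size=n),
    "male": rng.integers(0, 2, n),
    "age_40_54": rng.integers(0, 2, n),
})

X = sm.add_constant(df.drop(columns="trusts_robot_surgery"))
fit = sm.Logit(df["trusts_robot_surgery"], X).fit(disp=False)

# Exponentiated coefficients are odds ratios (OR).
print(np.exp(fit.params).round(2))
```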


Sensors
2021
Vol 21 (3)
pp. 863
Author(s):
Vidas Raudonis
Agne Paulauskaite-Taraseviciene
Kristina Sutiene

Background: Cell detection and counting is of essential importance in evaluating the quality of early-stage embryos. Full automation of this process remains a challenging task due to variation in cell size and shape, the presence of incomplete cell boundaries, and partially or fully overlapping cells. Moreover, the algorithm to be developed should process a large amount of image data of differing quality in a reasonable amount of time.
Methods: A multi-focus image fusion approach based on the deep learning U-Net architecture is proposed in the paper, which allows the amount of data to be reduced by up to 7 times without losing the spectral information required for embryo enhancement in the microscopic image.
Results: The experiment includes visual and quantitative analysis, estimating image similarity metrics and processing times, with results compared to those achieved by two well-known techniques: the Inverse Laplacian Pyramid Transform and Enhanced Correlation Coefficient Maximization.
Conclusion: Comparatively, the image fusion time is substantially improved across image resolutions, whilst ensuring the high quality of the fused image.
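To make the fusion idea concrete, here is a toy encoder-decoder in the spirit of U-Net (without the skip connections of the full architecture; the stack size and layer widths are assumptions, not the paper's trained model) that maps a focal stack, fed in as channels, to a single fused image:

```python
# Toy fuser: a focal stack of K grayscale images enters as K channels; the
# network outputs one fused image. Depth and widths are purely illustrative.
import torch
import torch.nn as nn

class TinyFusionNet(nn.Module):
    def __init__(self, stack_size: int = 7):
        super().__init__()
        self.enc = nn.Sequential(                       # downsampling path
            nn.Conv2d(stack_size, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(                       # upsampling path
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),             # single fused image
        )

    def forward(self, stack: torch.Tensor) -> torch.Tensor:
        return self.dec(self.enc(stack))

stack = torch.randn(1, 7, 128, 128)   # one stack of 7 focal planes
fused = TinyFusionNet(7)(stack)
print(fused.shape)                    # torch.Size([1, 1, 128, 128])
```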


2020
Vol 6 (3)
pp. 127-130
Author(s):
Max B. Schäfer
Kent W. Stewart
Nico Lösch
Peter P. Pott

Abstract: Access to systems for robot-assisted surgery is limited due to high costs. To enable widespread use, numerous issues have to be addressed to improve and/or simplify their components. Current systems commonly use universal linkage-based input devices, and only a few application-oriented and specialized designs are in use. A versatile virtual reality controller is proposed as an alternative input device for the control of a seven-degree-of-freedom articulated robotic arm. The real-time capabilities of the setup, which replicates a system for robot-assisted teleoperated surgery, are investigated to assess its suitability. Image-based assessment showed a considerable system latency of 81.7 ± 27.7 ms. However, due to its versatility, the virtual reality controller is a promising alternative to current input devices for research on medical telemanipulation systems.
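One common way to estimate such latency from video (a sketch under an assumed camera frame rate, not necessarily the paper's measurement pipeline) is to detect the frame at which the controller and the robot arm each begin to move and convert the frame offset to milliseconds:

```python
# Schematic image-based latency estimation: in a video showing both the VR
# controller and the robot arm, find the frame where each starts moving and
# convert the frame offset to milliseconds. Motion signals here are synthetic.
import numpy as np

FPS = 240.0  # assumed high-speed camera frame rate

def onset_frame(motion_energy: np.ndarray, threshold: float) -> int:
    """Index of the first frame whose motion energy exceeds the threshold."""
    return int(np.argmax(motion_energy > threshold))

# Synthetic per-frame motion energy (e.g., from frame differencing per ROI).
controller = np.concatenate([np.zeros(100), np.ones(50)])
robot_arm = np.concatenate([np.zeros(120), np.ones(30)])  # starts 20 frames later

latency_ms = (onset_frame(robot_arm, 0.5) - onset_frame(controller, 0.5)) / FPS * 1e3
print(f"estimated latency: {latency_ms:.1f} ms")  # 83.3 ms at 240 fps
```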


Author(s):  
Ahmet Haşim Yurttakal
Hasan Erbay
Türkan İkizceli
Seyhan Karaçavuş
Cenker Biçer

Breast cancer is the most common cancer among women, progressing from cells in the breast tissue. Early-stage detection could reduce death rates significantly, and the stage at detection determines the treatment process. Mammography is used to discover breast cancer at an early stage, prior to any physical sign. However, mammography might return a false negative, in which case, if a lesion is suspected to have a greater than two percent chance of being cancerous, a biopsy is recommended. Only about 30 percent of biopsies result in malignancy, which means the rate of unnecessary biopsies is high. To reduce unnecessary biopsies, Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) has recently been used to detect breast cancer, owing to its excellent capability in soft-tissue imaging. Nowadays, DCE-MRI is a highly recommended method not only to identify breast cancer but also to monitor its development and to interpret tumorous regions. However, in addition to being a time-consuming process, its accuracy depends on the radiologist's experience. Radiomic data, on the other hand, are used in medical imaging and have the potential to extract disease characteristics that cannot be seen by the naked eye. Radiomics are hard-coded features and provide crucial information about the disease at the imaged site. Conversely, deep learning methods like convolutional neural networks (CNNs) learn features automatically from the dataset, and in medical imaging especially, CNNs often perform better than methods based on hard-coded features. Combining the power of these two types of features, however, can increase accuracy significantly, which is especially critical in medicine. Herein, a stacked ensemble of gradient boosting and deep learning models was developed to classify breast tumors using DCE-MRI images. The model makes use of radiomics acquired from pixel information in breast DCE-MRI images. Prior to training the model, factor analysis was applied to the radiomics to refine the feature set and eliminate uninformative features. The performance metrics, as well as comparisons with some well-known machine learning methods, show that the ensemble model outperforms its counterparts. The ensemble model's accuracy is 94.87% and its AUC value is 0.9728. The recall and precision are 1.0 and 0.9130, respectively, and the F1-score is 0.9545.
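As a rough illustration of such a stacked ensemble (random features stand in for the factor-reduced radiomics; this is not the authors' pipeline), gradient boosting and a small neural network can be stacked with a logistic-regression meta-learner:

```python
# Sketch of a stacked ensemble: gradient boosting + a small neural network as
# base learners, logistic regression as the meta-learner. Features are random
# stand-ins for radiomic features already reduced by factor analysis.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=400, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("gbm", GradientBoostingClassifier(random_state=0)),
        ("dnn", make_pipeline(StandardScaler(),
                              MLPClassifier(hidden_layer_sizes=(64, 32),
                                            max_iter=2000, random_state=0))),
    ],
    final_estimator=LogisticRegression(),
)
stack.fit(X_tr, y_tr)
print(f"AUC: {roc_auc_score(y_te, stack.predict_proba(X_te)[:, 1]):.3f}")
```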

