SP2.1.1 Continuous Monitoring and Assessment of Surgical Technical Skills Using Deep Learning

2021 ◽  
Vol 108 (Supplement_7) ◽  
Author(s):  
Recai Yilmaz ◽  
Alexander Winkler-Schwartz ◽  
Aiden Reich ◽  
Rolando Del Maestro

Abstract Aims Excellent surgical technical skills are of paramount importance for performing surgical procedures safely and efficiently. Virtual reality surgical simulators can simulate real operations while providing standardized, risk-free, hands-on surgical experience. The integration of artificial intelligence (AI) and virtual reality simulators provides opportunities to carry out comprehensive continuous assessments of surgical performance. We developed and tested a deep learning algorithm which can continuously monitor and assess bimanual surgical performance on virtual reality surgical simulators. Methods Fifty participants from four expertise levels (14 experts/neurosurgeons, 14 senior residents, 10 junior residents, 12 medical students) performed a simulated subpial tumor resection 5 times and a complex simulated brain tumor operation once on the NeuroVR platform. Participants were asked to remove the tumors completely while minimizing bleeding and damage to surrounding tissues, employing a simulated ultrasonic aspirator and bipolar forceps. A deep neural network continually tracked the surgical performance utilizing 16 performance metrics generated every 0.2 seconds. Results The deep neural network was successfully trained using neurosurgeons' and medical students' data, learning the composites of expertise by comparing high and lower skill levels. The trained algorithm was able to score the technical skills of individuals continuously at 0.2-second intervals. Statistically significant differences in average scores were identified between the 4 groups. Conclusions AI-powered surgical simulators provide continuous assessment of bimanual technical skills during surgery, which may further define the composites necessary to train surgical expertise. To our knowledge, this is the first attempt in surgery to continuously assess surgical technical skills using deep learning.
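For readers who want to experiment with this idea, a minimal sketch of a recurrent network that emits a skill score at every 0.2-second step from 16 performance metrics is given below. The study's actual architecture, hyperparameters, and data are not public, so every name and value here is an assumption.

import numpy as np
import tensorflow as tf

TIMESTEPS = 300      # e.g., 60 s of simulation at 0.2-s intervals (assumed)
N_METRICS = 16       # performance metrics per time step (from the abstract)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, return_sequences=True,
                         input_shape=(TIMESTEPS, N_METRICS)),   # one output per time step
    tf.keras.layers.TimeDistributed(
        tf.keras.layers.Dense(1, activation="sigmoid")),        # per-step skill score in [0, 1]
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Hypothetical training data: metric sequences labelled 1 (expert) or 0 (novice),
# with the label repeated at every time step so the network learns to score continuously.
X = np.random.rand(40, TIMESTEPS, N_METRICS).astype("float32")
y = np.repeat(np.random.randint(0, 2, size=(40, 1)), TIMESTEPS, axis=1)[..., None].astype("float32")
model.fit(X, y, epochs=2, batch_size=8, verbose=0)

scores = model.predict(X[:1])   # shape (1, TIMESTEPS, 1): one score every 0.2 s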

2021 ◽  
Vol 108 (Supplement_1) ◽  
Author(s):  
R Yilmaz ◽  
A Winkler-Schwartz ◽  
N Mirchi ◽  
A Reich ◽  
R Del Maestro

Abstract Introduction Many surgical adverse events occur secondary to technical errors related to poor bimanual skills, fatigue, and lack of the required expertise. We developed AI algorithms to continuously assess bimanual surgical technical performance during virtual reality simulated surgical tasks. To our knowledge, this is the first attempt in surgery to train AI algorithms to continuously and comprehensively monitor and evaluate bimanual skills. Method Fifty individuals from four expertise levels (14 experts/neurosurgeons, 14 senior residents, 10 junior residents, 12 medical students) performed two virtual reality simulated surgical tasks with haptic feedback: a subpial tumor resection 5 times and a complex, realistically simulated brain tumor operation once. Each task required complete tumor removal while minimizing bleeding and damage to surrounding tissues using a simulated ultrasonic aspirator and bipolar forceps. A recurrent neural network continually tracked individual bimanual performance utilizing 16 performance metrics generated every 0.2 seconds. Result The recurrent neural network algorithm was successfully trained using neurosurgeons' and medical students' data, learning the composites of expertise by comparing high and lower skill levels. The trained algorithm outlined and monitored technical skills every 0.2 seconds, continuously organizing performance on each surgical task into three levels: ‘excellent’, ‘average’ and ‘poor’. The percentage of time spent at each level was calculated, and significant differences were found between all four groups for the ‘excellent’ and ‘poor’ levels. Conclusion AI-powered surgical simulators provide an advanced assessment and training tool. AI's ability to continuously assess bimanual technical skills during surgery may further define the composites necessary to train surgical expertise. Abbrev AI: artificial intelligence Take-home message Using advanced artificial intelligence algorithms, a surgeon's bimanual technical skills can be assessed continuously, and time periods of poor performance that increase the possibility of errors can be identified.
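A small post-processing sketch of the kind of per-level summary described above: per-time-step scores are binned into ‘poor’, ‘average’ and ‘excellent’ and the percentage of task time at each level is computed. The thresholds and the score stream are assumptions, not the study's values.

import numpy as np

def time_in_levels(scores, poor_max=0.33, excellent_min=0.66):
    """scores: 1-D array of per-time-step skill scores in [0, 1] (assumed thresholds)."""
    scores = np.asarray(scores)
    levels = {
        "poor": np.mean(scores < poor_max),
        "average": np.mean((scores >= poor_max) & (scores < excellent_min)),
        "excellent": np.mean(scores >= excellent_min),
    }
    return {k: 100.0 * v for k, v in levels.items()}   # percentage of task time per level

# Hypothetical example: scores produced every 0.2 s during one simulated resection.
print(time_in_levels(np.random.rand(1500)))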


Author(s):  
A Winkler-Schwartz ◽  
J Fares ◽  
B Khalid ◽  
M Baggiani ◽  
S Christie ◽  
...  

Background: The availability of virtual reality (VR) surgical simulators affords the opportunity to assess the influence of stress on neurosurgical operative performance in a controlled laboratory environment. This study sought to examine the effect of a stressful VR neurosurgical task on the subjective anxiety ratings of participants with varying levels of surgical expertise. Methods: Twenty-four participants, comprising six staff neurosurgeons, six senior neurosurgical residents (PGY4-6), six junior neurosurgical residents (PGY1-3), and six senior medical students, took part in a bimanual VR tumor removal task with a component of sudden uncontrollable intra-operative bleeding. State-Trait Anxiety Inventory (STAI) questionnaires were completed immediately before and after the stress stimulus. The STAI questionnaire consisted of six items (calm, tense, upset, relaxed, content, and worried) measured on a Likert scale. Results: Significant increases in subjective anxiety ratings were noted in junior residents (p=0.005) and medical students (p=0.025), while no significant changes were observed for staff and senior neurosurgical residents. Conclusions: Staff and senior residents mitigate stress more effectively than their junior colleagues in a VR operative environment. Further physiological correlates are needed to determine whether this increased anxiety is paralleled by physiological arousal and altered surgical performance.
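The abstract does not state which statistical test produced the reported p-values; the sketch below illustrates one common choice for paired Likert-based scores, a Wilcoxon signed-rank test on hypothetical pre/post STAI values for one group.

from scipy.stats import wilcoxon

pre_stai  = [10, 12,  9, 11, 13, 10]   # hypothetical scores for six junior residents, before the bleeding event
post_stai = [16, 15, 14, 13, 18, 12]   # the same residents, immediately after the stressor

stat, p_value = wilcoxon(pre_stai, post_stai)   # paired nonparametric comparison
print(f"Wilcoxon statistic={stat}, p={p_value:.3f}")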


2020 ◽  
pp. 1-14
Author(s):  
Esraa Hassan ◽  
Noha A. Hikal ◽  
Samir Elmuogy

Nowadays, coronavirus disease (COVID-19) is considered one of the most critical pandemics on Earth, owing to its ability to spread rapidly between humans as well as animals. COVID-19 is expected to spread around the world; around 70% of the Earth's population might become infected in the coming years. Therefore, an accurate and efficient diagnostic tool is highly required, which is the main objective of our study. Manual classification was mainly used to detect different diseases, but it takes too much time and carries the probability of human error. Automatic image classification reduces doctors' diagnostic time, which could save lives. We propose an automatic classification architecture based on a deep neural network, called the Worried Deep Neural Network (WDNN) model, with transfer learning. Comparative analysis reveals that the proposed WDNN model outperforms three pre-trained models (InceptionV3, ResNet50, and VGG19) in terms of various performance metrics. Due to the shortage of COVID-19 data, data augmentation was used to increase the number of images in the positive class, and normalization was then used to bring all images to the same size. Experiments were conducted on a COVID-19 dataset collected from different cases, with 2,623 images in total (1,573 training, 524 validation, 524 test). Our proposed model achieved 99.046, 98.684, 99.119, and 98.90 in terms of accuracy, precision, recall, and F-score, respectively. The results are compared with both traditional machine learning methods and those using convolutional neural networks (CNNs). The results demonstrate the ability of our classification model to serve as an alternative to current diagnostic tools.
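The WDNN architecture itself is not described in the abstract, so the sketch below only illustrates the kind of transfer-learning baseline (here ResNet50) it is compared against, with simple augmentation and normalization; the directory layout, input size, and hyperparameters are assumptions.

import tensorflow as tf

IMG_SIZE = (224, 224)    # assumed input resolution

augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),   # simple augmentation of scarce positive images
    tf.keras.layers.RandomRotation(0.1),
])

base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                      pooling="avg", input_shape=IMG_SIZE + (3,))
base.trainable = False                      # reuse ImageNet features as a fixed extractor

model = tf.keras.Sequential([
    augment,
    tf.keras.layers.Rescaling(1.0 / 255),   # normalize pixel values
    base,
    tf.keras.layers.Dense(1, activation="sigmoid"),   # COVID-positive vs. negative (assumed binary task)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Hypothetical directory layout: covid_xray/{positive,negative}/*.png
# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "covid_xray", image_size=IMG_SIZE, batch_size=32, label_mode="binary")
# model.fit(train_ds, epochs=10)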


2021 ◽  
Vol 11 (15) ◽  
pp. 7148
Author(s):  
Bedada Endale ◽  
Abera Tullu ◽  
Hayoung Shi ◽  
Beom-Soo Kang

Unmanned aerial vehicles (UAVs) are being widely utilized for various missions in both civilian and military sectors. Many of these missions require UAVs to perceive the environments they are navigating. This perception can be realized by training a computing machine to classify objects in the environment. One of the well-known machine training approaches is supervised deep learning, which enables a machine to classify objects. However, supervised deep learning comes with huge sacrifices in terms of time and computational resources. Collecting large input datasets, pre-training processes such as labeling training data, and the need for a high-performance computer for training are some of the challenges that supervised deep learning poses. To address these setbacks, this study proposes mission-specific input data augmentation techniques and the design of a light-weight deep neural network architecture capable of real-time object classification. Semi-direct visual odometry (SVO) data of augmented images are used to train the network for object classification. Ten classes with 10,000 different images per class were used as input data, where 80% were used for training the network and the remaining 20% for network validation. For the optimization of the designed deep neural network, a sequential gradient descent algorithm was implemented. This algorithm has the advantage of handling redundancy in the data more efficiently than other algorithms.
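A minimal sketch of a light-weight CNN trained with plain (stochastic/sequential) gradient descent on ten classes with an 80/20 train/validation split; the study's layer sizes, image resolution, and learning rate are not given in the abstract, so the values below are assumptions.

import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),  # small, fast layers
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),    # ten object classes
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),   # sequential/stochastic gradient descent
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Hypothetical stand-in for the augmented SVO image set (10 classes x 10,000 images in the study).
X = np.random.rand(1000, 64, 64, 3).astype("float32")
y = np.random.randint(0, 10, size=(1000,))
model.fit(X, y, epochs=2, batch_size=32, validation_split=0.2, verbose=0)   # 80/20 split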


2021 ◽  
Vol 11 (15) ◽  
pp. 7050
Author(s):  
Zeeshan Ahmad ◽  
Adnan Shahid Khan ◽  
Kashif Nisar ◽  
Iram Haider ◽  
Rosilah Hassan ◽  
...  

The revolutionary idea of the internet of things (IoT) architecture has gained enormous popularity over the last decade, resulting in exponential growth in IoT networks, connected devices, and the data processed therein. Since IoT devices generate and exchange sensitive data over the traditional internet, security has become a prime concern due to the emergence of zero-day cyberattacks. A network-based intrusion detection system (NIDS) can provide the much-needed efficient security solution to the IoT network by protecting the network entry points through constant network traffic monitoring. Recent NIDSs have a high false alarm rate (FAR) in detecting anomalies, including novel and zero-day anomalies. This paper proposes an efficient anomaly detection mechanism using mutual information (MI), considering a deep neural network (DNN) for an IoT network. A comparative analysis of different deep learning models, such as the DNN, Convolutional Neural Network, Recurrent Neural Network, and its variants such as the Gated Recurrent Unit and Long Short-Term Memory, is performed on the IoT-Botnet 2020 dataset. Experimental results show an improvement of 0.57–2.6% in the model's accuracy, while at the same time reducing the FAR by 0.23–7.98%, demonstrating the effectiveness of the DNN-based NIDS model compared to well-known deep learning models. It was also observed that using only the 16–35 best numerical features selected using MI, instead of the 80 features of the dataset, results in almost negligible degradation in the model's performance while decreasing the overall model complexity. In addition, the overall detection accuracy of the DL-based models is further improved by almost 0.99–3.45% when considering only the top five categorical and numerical features.
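A sketch of the mutual-information feature-selection step described above, using scikit-learn to keep the top 16 numerical features before training a small DNN; the dataset loading, feature names, and network hyperparameters are assumptions rather than the paper's setup.

import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
import tensorflow as tf

# Hypothetical stand-in for the preprocessed numerical features and labels of an IoT traffic dataset.
X = np.random.rand(2000, 80).astype("float32")     # 80 original features, as in the abstract
y = np.random.randint(0, 2, size=(2000,))          # 0 = benign, 1 = anomaly

selector = SelectKBest(score_func=mutual_info_classif, k=16)
X_sel = selector.fit_transform(X, y)               # keep the 16 most informative features

dnn = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(16,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # anomaly probability
])
dnn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
dnn.fit(X_sel, y, epochs=2, batch_size=64, verbose=0)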


Electronics ◽  
2021 ◽  
Vol 10 (13) ◽  
pp. 1514
Author(s):  
Seung-Ho Lim ◽  
WoonSik William Suh ◽  
Jin-Young Kim ◽  
Sang-Young Cho

The optimization of hardware processors and systems for performing deep learning operations, such as Convolutional Neural Networks (CNNs), on resource-limited embedded devices is an active area of recent research. In order to run an optimized deep neural network model using the limited computational units and memory of an embedded device, it is necessary to quickly apply various configurations of hardware modules to various deep neural network models and find the optimal combination. The Electronic System Level (ESL) simulator based on SystemC is very useful for rapid hardware modeling and verification. In this paper, we designed and implemented a Deep Learning Accelerator (DLA) that performs Deep Neural Network (DNN) operations, based on a RISC-V Virtual Platform implemented in SystemC, in order to enable rapid and diverse analysis of deep learning operations on embedded devices built around the RISC-V processor, a recently emerging embedded processor. The developed RISC-V based DLA prototype can analyze hardware requirements according to the CNN data set through the configuration of the CNN DLA architecture; it can also run RISC-V compiled software on the platform and execute a real neural network model such as Darknet. We ran the Darknet CNN model on the developed DLA prototype and confirmed that computational overhead and inference errors can be analyzed with the prototype by analyzing the DLA architecture for various data sets.
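As a rough illustration (not the paper's SystemC/ESL model), the sketch below estimates per-layer hardware requirements, namely multiply-accumulate counts and weight-memory size, from a CNN layer configuration; the layer shapes and 8-bit weight assumption are hypothetical.

def conv_layer_cost(in_ch, out_ch, kernel, out_h, out_w, bytes_per_weight=1):
    """Estimate MAC operations and weight storage for one convolutional layer."""
    macs = in_ch * out_ch * kernel * kernel * out_h * out_w
    weight_bytes = in_ch * out_ch * kernel * kernel * bytes_per_weight
    return macs, weight_bytes

# Hypothetical three-layer CNN configuration: (in_ch, out_ch, kernel, out_h, out_w)
layers = [(3, 16, 3, 112, 112), (16, 32, 3, 56, 56), (32, 64, 3, 28, 28)]

total_macs = total_bytes = 0
for cfg in layers:
    macs, weight_bytes = conv_layer_cost(*cfg)
    total_macs += macs
    total_bytes += weight_bytes
    print(f"layer {cfg}: {macs:,} MACs, {weight_bytes:,} weight bytes")
print(f"total: {total_macs:,} MACs, {total_bytes:,} weight bytes (8-bit weights assumed)")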


Recently, DDoS attacks have become one of the most significant threats in network security. Both industry and academia are currently debating how to detect and protect against DDoS attacks. Many studies have been conducted to detect these types of attacks. Deep learning techniques are among the most suitable and efficient algorithms for categorizing normal and attack data. Hence, a deep neural network approach is proposed in this study to mitigate DDoS attacks effectively. We used a deep learning neural network to identify and classify traffic as benign or as one of four different DDoS attack types: Slowloris, Slowhttptest, DDoS Hulk, and GoldenEye. The rest of the paper is organized as follows: Section 1 introduces the work, Section 2 reviews related work, Section 3 presents the problem statement, Section 4 describes the proposed methodology, Section 5 illustrates the results of the proposed methodology and shows how it outperforms state-of-the-art work, and finally Section 6 concludes the paper.
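A sketch of the kind of deep neural network classifier described above, mapping flow features to one of five classes (benign plus the four attack types); the feature count, layer sizes, and training data are assumptions, not the paper's configuration.

import numpy as np
import tensorflow as tf

N_FEATURES = 40      # assumed number of flow features
CLASSES = ["benign", "slowloris", "slowhttptest", "hulk", "goldeneye"]

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(N_FEATURES,)),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(len(CLASSES), activation="softmax"),   # five-way traffic classification
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical flow records with integer class labels 0..4.
X = np.random.rand(5000, N_FEATURES).astype("float32")
y = np.random.randint(0, len(CLASSES), size=(5000,))
model.fit(X, y, epochs=2, batch_size=128, verbose=0)
pred = CLASSES[int(np.argmax(model.predict(X[:1]), axis=1)[0])]   # predicted class for one flow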


2021 ◽  
Vol 10 (9) ◽  
pp. 25394-25398
Author(s):  
Chitra Desai

Deep learning models have demonstrated improved efficacy in image classification since the ImageNet Large Scale Visual Recognition Challenge began in 2010. Image classification has further advanced in the field of computer vision with the advent of transfer learning. Training a model on a huge dataset demands substantial computational resources and adds considerable cost to learning. Transfer learning reduces the cost of learning and also helps avoid reinventing the wheel. There are several widely used pretrained models, such as VGG16, VGG19, ResNet50, InceptionV3, and EfficientNet. This paper demonstrates image classification using the pretrained deep neural network model VGG16, which is trained on images from the ImageNet dataset. After obtaining the convolutional base model, a new deep neural network model is built on top of it for image classification based on a fully connected network. This classifier uses features extracted from the convolutional base model.
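A minimal sketch of the approach described above: the VGG16 convolutional base, pre-trained on ImageNet, is frozen and a new fully connected classifier is trained on top of the extracted features. The target dataset, number of classes, and dense-layer sizes are assumptions.

import tensorflow as tf

NUM_CLASSES = 5      # assumed number of target classes

conv_base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                        input_shape=(224, 224, 3))
conv_base.trainable = False                     # keep ImageNet features fixed

model = tf.keras.Sequential([
    conv_base,
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation="relu"),        # new fully connected classifier head
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)   # dataset loading omitted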


2021 ◽  
Author(s):  
Noor Ahmad ◽  
Muhammad Aminu ◽  
Mohd Halim Mohd Noor

Deep learning approaches have attracted a lot of attention in the automatic detection of Covid-19, and transfer learning is the most common approach. However, the majority of pre-trained models are trained on color images, which can cause inefficiencies when fine-tuning the models on Covid-19 images, which are often grayscale. To address this issue, we propose a deep learning architecture called CovidNet, which requires a relatively small number of parameters. CovidNet accepts grayscale images as inputs and is suitable for training with a limited training dataset. Experimental results show that CovidNet outperforms other state-of-the-art deep learning models for Covid-19 detection.
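A sketch of a compact CNN that accepts single-channel (grayscale) images, in the spirit of the description above; this is not the authors' CovidNet architecture, and the layer sizes, input resolution, and class count are assumptions.

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(128, 128, 1)),  # 1-channel grayscale input
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.GlobalAveragePooling2D(),          # keeps the parameter count small
    tf.keras.layers.Dense(2, activation="softmax"),    # Covid-19 vs. normal (assumed classes)
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
print(model.count_params())    # a few thousand parameters, far fewer than color pre-trained models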

