laparoscopic videos
Recently Published Documents


TOTAL DOCUMENTS

45
(FIVE YEARS 26)

H-INDEX

10
(FIVE YEARS 5)

F1000Research ◽  
2021 ◽  
Vol 10 ◽  
pp. 1010
Author(s):  
Nouar AlDahoul ◽  
Hezerul Abdul Karim ◽  
Abdulaziz Saleh Ba Wazir ◽  
Myles Joshua Toledo Tan ◽  
Mohammad Faizal Ahmad Fauzi

Background: Laparoscopy is surgery performed in the abdomen without large skin incisions, with the aid of a video camera that produces laparoscopic videos. These videos are prone to various distortions such as noise, smoke, uneven illumination, defocus blur, and motion blur. One of the main components in the feedback loop of video enhancement systems is distortion identification, which automatically classifies the distortions affecting a video and selects the enhancement algorithm accordingly. This paper addresses the laparoscopic video distortion identification problem by developing fast and accurate multi-label distortion classification with a deep learning model. Current deep learning solutions based on convolutional neural networks (CNNs) can address laparoscopic video distortion classification, but they learn only spatial information. Methods: This paper proposes using both spatial and temporal features in a CNN-long short-term memory (CNN-LSTM) model as a novel solution to enhance the classification. First, a pre-trained ResNet50 CNN was used to extract spatial features from each video frame by transferring representations from large-scale natural images to laparoscopic images. Next, an LSTM was utilized to model the temporal relations between the features extracted from the laparoscopic video frames and produce multi-label categories. A novel laparoscopic video dataset proposed in the ICIP2020 challenge was used for training and evaluation of the proposed method. Results: The experiments show that the proposed CNN-LSTM outperforms existing solutions in terms of accuracy (85%) and F1-score (94.2%). Additionally, the proposed distortion identification model runs in real time with a low inference time (0.15 s). Conclusions: The proposed CNN-LSTM model is a feasible solution for distortion identification in laparoscopic videos.
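
The abstract above describes a ResNet50 feature extractor feeding an LSTM that emits multi-label distortion categories. The following is a minimal PyTorch sketch of that pipeline, not the authors' released code; the hidden size, clip length, and the use of one output unit per distortion named in the background (noise, smoke, uneven illumination, defocus blur, motion blur) are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

class CNNLSTMDistortionClassifier(nn.Module):
    """ResNet50 frame features -> LSTM over time -> multi-label distortion probabilities."""

    def __init__(self, num_distortions=5, hidden_size=256):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        self.feature_dim = backbone.fc.in_features      # 2048-d pooled features
        backbone.fc = nn.Identity()                      # drop the ImageNet classification head
        self.cnn = backbone
        self.lstm = nn.LSTM(self.feature_dim, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_distortions)

    def forward(self, clip):
        # clip: (batch, time, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.cnn(clip.flatten(0, 1)).view(b, t, self.feature_dim)  # per-frame spatial features
        _, (h_n, _) = self.lstm(feats)                   # temporal relation across frames
        return torch.sigmoid(self.head(h_n[-1]))        # independent probability per distortion

# Example: two clips of 16 frames at 224x224 -> (2, 5) multi-label scores
model = CNNLSTMDistortionClassifier()
scores = model(torch.randn(2, 16, 3, 224, 224))
```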


2021 ◽  
Vol 7 (2) ◽  
pp. 476-479
Author(s):  
Tamer Abdulbaki Alshirbaji ◽  
Nour Aldeen Jalal ◽  
Paul D. Docherty ◽  
Thomas Neumuth ◽  
Knut Moeller

Abstract: Accurate recognition of surgical tools is a crucial component in the development of robust, context-aware systems. Recently, deep learning methods have been increasingly adopted to analyse laparoscopic videos. Existing work mainly leverages the ability of convolutional neural networks (CNNs) to model visual information of laparoscopic images. However, the performance was evaluated only on data belonging to the same dataset used for training. A more comprehensive evaluation of CNN performance on data from other datasets can provide a more rigorous assessment of the approaches. In this work, we investigate the generalisation capability of different CNN architectures to classify surgical tools in laparoscopic images recorded at different institutions. This research highlights the need to determine the effect of using data from different surgical sites on CNN generalisability. Experimental results imply that training a CNN model using data from multiple sites improves generalisability to new surgical locations.
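
The abstract above compares single-site and multi-site training without stating the exact protocol; a generic leave-one-site-out loop such as the hypothetical sketch below is one common way to measure generalisation to an unseen institution. The `train_fn` and `eval_fn` callables and the site keys are placeholders, not the authors' code.

```python
def leave_one_site_out(site_datasets, train_fn, eval_fn):
    """Hypothetical cross-site evaluation loop.

    site_datasets: dict mapping a site/institution name to its labelled data.
    train_fn: callable that trains a CNN on a list of datasets (pooled multi-site training).
    eval_fn: callable returning a metric (e.g. mean AP) for a model on one dataset.
    """
    results = {}
    for held_out in sorted(site_datasets):
        train_data = [d for site, d in site_datasets.items() if site != held_out]
        model = train_fn(train_data)                                  # train on all other sites
        results[held_out] = eval_fn(model, site_datasets[held_out])   # test on the unseen site
    return results
```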


2021 ◽  
Vol 68 ◽  
pp. 102801
Author(s):  
Tamer Abdulbaki Alshirbaji ◽  
Nour Aldeen Jalal ◽  
Paul D. Docherty ◽  
Thomas Neumuth ◽  
Knut Möller

Author(s):  
Andreas Leibetseder ◽  
Klaus Schoeffmann ◽  
Jorg Keckstein ◽  
Simon Keckstein

IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Nouar AlDahoul ◽  
Hezerul Abdul Karim ◽  
Myles Joshua Toledo Tan ◽  
Jamie Ledesma Fermin

2020 ◽  
Vol 6 (1) ◽  
Author(s):  
Tamer Abdulbaki Alshirbaji ◽  
Nour Aldeen Jalal ◽  
Knut Möller

Abstract: Surgical tool presence detection in laparoscopic videos is a challenging problem that plays a critical role in developing context-aware systems in operating rooms (ORs). In this work, we propose a deep learning-based approach for detecting surgical tools in laparoscopic images using a convolutional neural network (CNN) in combination with two long short-term memory (LSTM) models. A pre-trained CNN model was trained to learn visual features from images. Then, the first LSTM was employed to include temporal information from a video clip of neighbouring frames. Finally, the second LSTM was utilized to model temporal dependencies across the whole surgical video. Experimental evaluation was conducted with the Cholec80 dataset to validate our approach. Results show that the most notable improvement is achieved after employing the two-stage LSTM model, and that the proposed approach achieves better or similar performance compared with state-of-the-art methods.
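
A rough PyTorch sketch of the CNN plus two-stage LSTM idea described above: frame features from a pre-trained CNN, a clip-level LSTM over neighbouring frames, and a second LSTM over clip summaries spanning the whole video. The backbone choice, hidden sizes, and per-clip (rather than per-frame) outputs are assumptions; the seven output units follow the seven tools annotated in Cholec80.

```python
import torch
import torch.nn as nn
from torchvision import models

class TwoStageLSTMToolDetector(nn.Module):
    """CNN frame features -> clip-level LSTM -> video-level LSTM -> per-clip tool presence logits."""

    def __init__(self, num_tools=7, hidden_size=256):   # Cholec80 annotates 7 surgical tools
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        self.feature_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()
        self.cnn = backbone
        self.clip_lstm = nn.LSTM(self.feature_dim, hidden_size, batch_first=True)  # neighbouring frames
        self.video_lstm = nn.LSTM(hidden_size, hidden_size, batch_first=True)      # whole-video context
        self.head = nn.Linear(hidden_size, num_tools)

    def forward(self, video):
        # video: (num_clips, frames_per_clip, 3, H, W) for one surgical video
        c, f = video.shape[:2]
        feats = self.cnn(video.flatten(0, 1)).view(c, f, self.feature_dim)
        _, (clip_h, _) = self.clip_lstm(feats)           # one summary vector per clip
        clip_seq = clip_h[-1].unsqueeze(0)               # (1, num_clips, hidden): clips as a sequence
        video_out, _ = self.video_lstm(clip_seq)         # temporal dependencies across the video
        return self.head(video_out.squeeze(0))           # (num_clips, num_tools) presence logits

# Example: 8 clips of 4 frames from one video -> (8, 7) tool-presence logits
model = TwoStageLSTMToolDetector()
logits = model(torch.randn(8, 4, 3, 224, 224))
```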


2020 ◽  
Vol 6 (3) ◽  
pp. 319-321
Author(s):  
N. Ding ◽  
N. A. Jalal ◽  
T. A. Alshirbaji ◽  
K. Möller

Abstract: Surgical tool recognition is a key task in analysing surgical workflow in order to improve the efficiency and safety of laparoscopic surgeries. Laparoscopic videos are an important data source for this task; however, analysing these videos poses several challenges. Focusing on the imbalanced-dataset problem, this research investigates a data augmentation method that generates different synthetic datasets and evaluates their performance when training a convolutional neural network model. The results show how different background patterns affect the model. Better performance was achieved when the model was trained on a structured-background dataset. Further research is needed to understand why the original background patterns support correct classification. It is assumed that this is an overlearning effect that would not hold if other procedures were included in the test set.
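
The abstract above evaluates synthetic datasets built by varying background patterns, but the exact synthesis procedure is not specified there. The hypothetical NumPy sketch below illustrates one way such samples could be composed: segmented tool pixels are pasted onto a new (here, a simple striped "structured") background to create additional samples for under-represented classes; the pattern and helper names are assumptions for illustration only.

```python
import numpy as np

def composite_tool(tool_rgb, tool_mask, background_rgb):
    """Paste segmented tool pixels onto a new background to synthesize a training sample.

    tool_rgb, background_rgb: (H, W, 3) uint8 arrays of the same size.
    tool_mask: (H, W) boolean array, True where the tool is visible.
    """
    out = background_rgb.copy()
    out[tool_mask] = tool_rgb[tool_mask]
    return out

def make_structured_background(h, w, stripe=16):
    """One illustrative 'structured' background: horizontal gray stripes."""
    rows = (np.arange(h) // stripe) % 2                  # alternating stripe index per row
    gray = np.where(rows == 0, 80, 170).astype(np.uint8)
    bg = np.tile(gray[:, None], (1, w))                  # (h, w)
    return np.stack([bg, bg, bg], axis=-1)               # (h, w, 3)

# Rare-class tool crops could then be composited onto many such backgrounds to rebalance the dataset:
# augmented = [composite_tool(img, mask, make_structured_background(*mask.shape)) for img, mask in rare_samples]
```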

