Nordic Machine Intelligence
Latest Publications


Total documents: 17 (five years: 17)
H-index: 0 (five years: 0)
Published by: University of Oslo Library
ISSN: 2703-9196

2021 · Vol. 1 (1) · pp. 32–34
Author(s): Nefeli Panagiota Tzavara, Bjørn-Jostein Singstad

Colorectal cancer is one of the deadliest and most widespread types of cancer in the world. Colonoscopy is the procedure used to detect and diagnose polyps in the colon, but today's detection rates still show a significant error rate that affects diagnosis and treatment. An automatic image segmentation algorithm may help doctors improve the detection rate of pathological polyps in the colon. Furthermore, segmenting endoscopic tools in images taken during colonoscopy may contribute towards robot-assisted surgery. In this study, we trained and validated both pre-trained and non-pre-trained segmentation models on two different data sets containing images of polyps and endoscopic tools. Finally, we applied the models to two separate test sets; the best polyp model achieved a Dice score of 0.857, and the best instrument model achieved a Dice score of 0.948. Moreover, we found that pre-training increased the models' performance in segmenting polyps and endoscopic tools.
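
A minimal sketch of the Dice score reported above, as it is typically computed for binary masks (illustrative only, not the authors' evaluation code):

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient 2|A∩B| / (|A| + |B|) between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Example: each mask has two foreground pixels, one of which overlaps.
pred = np.array([[1, 0], [1, 0]])
target = np.array([[1, 1], [0, 0]])
print(round(dice_score(pred, target), 3))  # 2*1 / (2+2) = 0.5
```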


2021 · Vol. 1 (1) · pp. 5–7
Author(s): Adrian Galdran

This paper describes a solution for the MedAI competition, in which participants were required to segment both polyps and surgical instruments from endoscopic images. Our approach relies on a double encoder-decoder neural network that we have previously applied to polyp segmentation, with a series of enhancements: a more powerful encoder architecture, an improved optimization procedure, and post-processing of the segmentations based on tempered model ensembling. Experimental results show that our method produces segmentations in good agreement with the manual delineations provided by medical experts.
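
The abstract does not define "tempered model ensembling"; one plausible reading, sketched here purely as an assumption, is averaging temperature-scaled member probabilities before thresholding:

```python
import torch

def tempered_ensemble(logits_list: list, T: float = 2.0) -> torch.Tensor:
    """Average temperature-scaled probabilities sigmoid(z / T) across models.
    T > 1 softens each member before the mean; T is a guessed hyperparameter,
    not a value from the paper."""
    return torch.stack([torch.sigmoid(z / T) for z in logits_list]).mean(dim=0)

# Three hypothetical ensemble members voting on a 1x1x4x4 mask.
members = [torch.randn(1, 1, 4, 4) for _ in range(3)]
mask = tempered_ensemble(members) > 0.5
```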


2021 · Vol. 1 (1) · pp. 8–10
Author(s): Debayan Bhattacharya, Christian Betz, Dennis Eggert, Alexander Schlaefer

In this paper, we propose the Dual Parallel Reverse Attention Edge Network (DPRA-EdgeNet), an architecture that jointly learns to segment an object and its edge. Specifically, the model uses two cascaded partial decoders to form initial estimates of the object segmentation map and its corresponding edge map. These are followed by a series of object decoders and edge decoders that work in conjunction with dual parallel reverse attention (DPRA) modules. The DPRA modules repeatedly prune the features at multiple scales to emphasize object segmentation and edge segmentation, respectively. Furthermore, we propose a novel decoder block that uses spatial and channel attention to combine features from the preceding decoder block and the reverse attention (RA) modules for object and edge segmentation. We compare our model against popular segmentation models such as U-Net, SegNet, and PraNet, and demonstrate through a five-fold cross-validation experiment that our model significantly improves segmentation accuracy on the Kvasir-SEG and Kvasir-Instrument datasets.
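
For readers unfamiliar with reverse attention, here is a generic PraNet-style module, a sketch of the general technique rather than the DPRA-EdgeNet implementation: it suppresses already-confident foreground so that later stages refine the uncertain (typically boundary) regions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReverseAttention(nn.Module):
    """Generic reverse attention: weight features by 1 - sigmoid(coarse
    prediction) so refinement focuses on regions not yet segmented."""

    def __init__(self, channels: int):
        super().__init__()
        self.refine = nn.Conv2d(channels, 1, kernel_size=3, padding=1)

    def forward(self, feat: torch.Tensor, coarse_pred: torch.Tensor) -> torch.Tensor:
        # coarse_pred: (B, 1, h, w) logits from a previous decoder stage.
        pred = F.interpolate(coarse_pred, size=feat.shape[2:],
                             mode="bilinear", align_corners=False)
        rev = 1.0 - torch.sigmoid(pred)   # attend to uncertain regions
        out = self.refine(rev * feat)     # residual refinement prediction
        return out + pred                 # add back the coarse logits

ra = ReverseAttention(64)
feat = torch.randn(2, 64, 44, 44)
coarse = torch.randn(2, 1, 11, 11)
print(ra(feat, coarse).shape)  # torch.Size([2, 1, 44, 44])
```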


2021 · Vol. 1 (1) · pp. 41–43
Author(s): Sahadev Poudel, Sang-Woong Lee

In a nutshell, we propose a simple, efficient, and explainable deep learning-based U-Net algorithm for the MedAI challenge, focusing on precise segmentation of polyps and instruments and on algorithmic transparency. We develop a straightforward encoder-decoder-based algorithm for the task and keep the network as simple as possible. In particular, we focus on the input resolution and the width of the model to find the optimal settings for the network, and we perform ablation studies to cover this aspect.
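
As an illustration of the width ablation described above, a hypothetical width multiplier can scale every stage's channel count; the base channel plan and the factors below are placeholders, not the paper's settings:

```python
def encoder_channels(width_mult: float = 1.0) -> list:
    """Scale a base U-Net channel plan by a width multiplier."""
    base = (64, 128, 256, 512)
    return [max(8, int(c * width_mult)) for c in base]

print(encoder_channels(1.0))  # [64, 128, 256, 512]
print(encoder_channels(0.5))  # [32, 64, 128, 256]
```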


2021 · Vol. 1 (1) · pp. 29–31
Author(s): Mahmood Haithami, Amr Ahmed, Iman Yi Liao, Hamid Jalab

In this paper, we aim to enhance the segmentation capabilities of DeepLabv3 by employing a Gated Recurrent Unit (GRU). A 1-by-1 convolution in DeepLabv3 was replaced by a GRU after the Atrous Spatial Pyramid Pooling (ASPP) layer to combine the input feature maps. The convolution and the GRU have shareable parameters, but the latter has gates that enable or disable the contribution of each input feature map. Experiments on unseen test sets demonstrate that employing a GRU instead of a convolution produces better segmentation results. The datasets used are public datasets provided by the MedAI competition.
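
A sketch of how a GRU can stand in for the 1-by-1 fusion convolution after ASPP: the N branch outputs at each spatial location are treated as a length-N sequence, and the GRU's final hidden state becomes the fused feature. This is an illustration of the idea, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GRUFusion(nn.Module):
    """Fuse N ASPP branch outputs with a GRU instead of a 1x1 convolution."""

    def __init__(self, channels: int):
        super().__init__()
        self.gru = nn.GRU(input_size=channels, hidden_size=channels,
                          batch_first=True)

    def forward(self, branches):  # list of N tensors, each (B, C, H, W)
        x = torch.stack(branches, dim=1)                  # (B, N, C, H, W)
        b, n, c, h, w = x.shape
        x = x.permute(0, 3, 4, 1, 2).reshape(b * h * w, n, c)
        _, hidden = self.gru(x)                           # (1, B*H*W, C)
        return hidden[0].view(b, h, w, c).permute(0, 3, 1, 2)  # (B, C, H, W)

fuse = GRUFusion(32)
out = fuse([torch.randn(2, 32, 8, 8) for _ in range(4)])
print(out.shape)  # torch.Size([2, 32, 8, 8])
```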


2021 · Vol. 1 (1) · pp. 20–22
Author(s): Awadelrahman M. A. Ahmed, Leen A. M. Ali

This paper contributes to automating medical image segmentation by proposing generative adversarial network (GAN)-based models to segment both polyps and instruments in endoscopy images. A main contribution of this paper is providing explanations for the predictions using the layer-wise relevance propagation (LRP) approach, showing which pixels in the input image are most relevant to the predictions. The models achieved Jaccard indices of 0.46 and 0.70 and accuracies of 0.84 and 0.96 on polyp segmentation and instrument segmentation, respectively.
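
Layer-wise relevance propagation redistributes an output relevance score back through the network layer by layer; below is a minimal sketch of the epsilon rule for a single linear layer (the paper applies LRP to full GAN-based segmentation models):

```python
import torch

def lrp_linear(a: torch.Tensor, w: torch.Tensor, b: torch.Tensor,
               R: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Epsilon-rule LRP through y = a @ w + b: input relevance is
    proportional to each input's contribution a_i * w_ij."""
    z = a @ w + b                  # forward pre-activations, shape (out,)
    s = R / (z + eps * z.sign())   # stabilized relevance ratio
    return a * (w @ s)             # relevance per input unit, shape (in,)

a = torch.tensor([1.0, 2.0])
w = torch.tensor([[0.5, -0.3], [0.2, 0.8]])
R_in = lrp_linear(a, w, torch.zeros(2), torch.tensor([1.0, 0.0]))
print(R_in, R_in.sum())  # relevance is (approximately) conserved: sums to 1
```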


2021 · Vol. 1 (1) · pp. 50–52
Author(s): Bo Dong, Wenhai Wang, Jinpeng Li

We present our solutions to the MedAI challenge for all three tasks: polyp segmentation, instrument segmentation, and transparency. We use the same framework for the two segmentation tasks. The key improvement over last year is the use of new state-of-the-art vision architectures, especially transformers, which significantly outperform ConvNets on medical image segmentation tasks. Our solution consists of multiple segmentation models, each using a transformer as the backbone network. After submission, we obtained the best IoU scores of 0.915 on the instrument segmentation task and 0.836 on the polyp segmentation task. Our complete solutions are available at https://github.com/dongbo811/MedAI-2021.
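
Multi-model ensembling of the kind described above can be as simple as averaging member probability maps and thresholding the mean; a generic sketch in which the model list, input, and threshold are placeholders:

```python
import torch

@torch.no_grad()
def ensemble_predict(models: list, image: torch.Tensor,
                     thresh: float = 0.5) -> torch.Tensor:
    """Mean of per-model sigmoid probability maps, thresholded to a mask."""
    probs = torch.stack([torch.sigmoid(m(image)) for m in models]).mean(dim=0)
    return (probs > thresh).float()
```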


2021 · Vol. 1 (1) · pp. 14–16
Author(s): Debapriya Banik, Kaushiki Roy, Debotosh Bhattacharjee

This paper addresses the Instrument Segmentation Task, a subtask of the “MedAI: Transparency in Medical Image Segmentation” challenge. To accomplish the subtask, our team “Med_Seg_JU” proposed a deep learning-based framework, “EM-Net: An Efficient M-Net for segmentation of surgical instruments in colonoscopy frames”. The framework is inspired by the M-Net architecture and incorporates the EfficientNet-B3 module with U-Net as the backbone. Our method obtained a Jaccard coefficient (JC) of 0.8205, Dice similarity coefficient (DSC) of 0.8632, precision of 0.8464, recall of 0.9005, F1 of 0.8632, and accuracy of 0.9799, as evaluated by the challenge organizers on a separate test dataset. These results demonstrate the efficacy of our method for segmenting surgical instruments.
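
An EfficientNet-B3 encoder inside a U-Net decoder can be assembled with the segmentation_models_pytorch library; this approximates the backbone combination described above, not the authors' exact EM-Net architecture:

```python
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="efficientnet-b3",  # EfficientNet-B3 encoder
    encoder_weights="imagenet",      # ImageNet pre-training
    in_channels=3,
    classes=1,                       # binary instrument mask
)
out = model(torch.randn(1, 3, 256, 256))
print(out.shape)  # torch.Size([1, 1, 256, 256])
```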


2021 · Vol. 1 (1) · pp. 23–25
Author(s): Yung-Han Chen, Pei-Hsuan Kuo, Yi-Zeng Fang, Wei-Lin Wang

In this paper, we introduce a multi-model ensemble framework for medical image segmentation. We first collect a set of state-of-the-art models in this field and further improve them through a series of architecture refinements and specific training techniques. We then integrate these fine-tuned models into a more powerful ensemble framework. Preliminary experimental results show that the proposed framework performs well on the given polyp and instrument datasets.


2021 · Vol. 1 (1) · pp. 11–13
Author(s): Ayush Somani, Divij Singh, Dilip Prasad, Alexander Horsch

We often find ourselves trading off between what a predictive model predicts and understanding why it made that prediction. This high-risk medical segmentation task is no different: we try to interpret how well the model has learned from the image features, irrespective of its accuracy. We propose image-specific fine-tuning to make a deep learning model adaptive to specific medical imaging tasks. Experimental results reveal that: (a) the proposed model is more robust in segmenting previously unseen objects (negative test dataset) than state-of-the-art CNNs; (b) image-specific fine-tuning with the proposed heuristics significantly enhances segmentation accuracy; and (c) our model leads to accurate results with fewer user interactions and less user time than conventional interactive segmentation methods. The model successfully classified “no polyp” or “no instrument” images despite the absence of negative samples in the Kvasir-SEG and Kvasir-Instrument training data.
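
Image-specific fine-tuning briefly adapts an already-trained model to a single test image, e.g. using a user-corrected or pseudo-labelled mask. Below is a hedged sketch of the general idea; the paper's specific heuristics are not reproduced here, and the step count and learning rate are placeholders:

```python
import torch
import torch.nn as nn

def image_specific_finetune(model: nn.Module, image: torch.Tensor,
                            pseudo_mask: torch.Tensor, steps: int = 20,
                            lr: float = 1e-4) -> nn.Module:
    """Briefly fine-tune a trained segmentation model on one test image
    against its own pseudo-label (or a user-corrected mask)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    model.train()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(image), pseudo_mask)
        loss.backward()
        opt.step()
    return model
```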

