medical segmentation
Recently Published Documents

TOTAL DOCUMENTS: 29 (FIVE YEARS: 16)
H-INDEX: 4 (FIVE YEARS: 2)

Author(s): Chuchen Li, Huafeng Liu

Abstract Recent medical image segmentation methods rely heavily on large-scale training data and high-quality annotations. However, these resources are hard to obtain because of the limited availability of medical images and professional annotators. How to exploit limited annotations while maintaining performance is an essential yet challenging problem. In this paper, we tackle this problem in a self-learning manner by proposing a Generative Adversarial Semi-supervised Network (GASNet). We use the limited annotated images as the main supervision signal, and unlabeled images are leveraged as auxiliary information to improve performance. More specifically, we modulate a segmentation network as a generator to produce pseudo labels for unlabeled images. To make the generator robust, we train an uncertainty discriminator with generative adversarial learning to determine the reliability of the pseudo labels. To further ensure dependability, we apply a feature mapping loss to enforce consistency between the statistical distributions of the generated labels and the real labels. The verified pseudo labels are then used to optimize the generator in a self-learning manner. We validate the effectiveness of the proposed method on the right ventricle, Sunnybrook, STACOM, ISIC, and Kaggle lung datasets, obtaining Dice coefficients of 0.8402 to 0.9121, 0.8103 to 0.9094, 0.9435 to 0.9724, 0.8635 to 0.886, and 0.9697 to 0.9885, respectively, with 1/8 to 1/2 of the densely annotated labels. The improvements are up to 28.6 points higher than the corresponding fully supervised baseline.
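The Dice coefficients reported above measure the overlap between a predicted mask and a reference mask. A minimal sketch of the metric on binary masks (toy data, not the paper's pipeline):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Two toy 4x4 masks that overlap on two pixels.
pred = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
target = np.array([[1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 0]])
print(round(dice_coefficient(pred, target), 4))  # 2*2 / (3+3) ≈ 0.6667
```

The small epsilon keeps the metric defined when both masks are empty, a common convention in segmentation evaluation code.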


2021, Vol 1 (1), pp. 11-13
Author(s): Ayush Somani, Divij Singh, Dilip Prasad, Alexander Horsch

We often find ourselves in a trade-off between what a model predicts and understanding why it made that prediction. This high-risk medical segmentation task is no different: we try to interpret how well the model has learned from image features, irrespective of its accuracy. We propose image-specific fine-tuning to make a deep learning model adaptive to specific medical imaging tasks. Experimental results reveal that: (a) the proposed model is more robust in segmenting previously unseen objects (negative test dataset) than state-of-the-art CNNs; (b) image-specific fine-tuning with the proposed heuristics significantly enhances segmentation accuracy; and (c) our model produces accurate results with fewer user interactions and less user time than conventional interactive segmentation methods. The model successfully classified 'no polyp' or 'no instruments' in an image despite the absence of negative examples in the training samples from the Kvasir-SEG and Kvasir-Instrument datasets.
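Image-specific fine-tuning typically adapts a small part of a pretrained model to a single test image. A hypothetical, minimal sketch that adapts only a final linear layer by gradient descent on per-pixel features (the features, shapes, and learning rate here are illustrative assumptions, not the paper's method):

```python
import numpy as np

def finetune_last_layer(features, labels, w, b, lr=0.1, steps=200):
    """Adapt only the final linear layer to one image's pixel features.

    features: (n_pixels, d) array; labels: (n_pixels,) binary array.
    Runs plain gradient descent on the binary cross-entropy loss.
    """
    for _ in range(steps):
        logits = features @ w + b
        probs = 1.0 / (1.0 + np.exp(-logits))  # sigmoid activation
        grad = probs - labels                  # dL/dlogits for BCE
        w = w - lr * features.T @ grad / len(labels)
        b = b - lr * grad.mean()
    return w, b

# Toy per-pixel features for one image: 1-D and linearly separable.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
w, b = finetune_last_layer(X, y, np.zeros(1), 0.0)
preds = (X @ w + b) > 0
```

Freezing the feature extractor and updating only the head is one cheap way to specialize a model per image without overfitting the whole network to a handful of pixels.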


Author(s): Riaan Zoetmulder, Efstratios Gavves, Matthan Caan, Henk Marquering

Electronics, 2021, Vol 10 (4), pp. 431
Author(s): Pravda Jith Ray Prasad, Shanmugapriya Survarachakan, Zohaib Amjad Khan, Frank Lindseth, Ole Jakob Elle, ...

Medical image segmentation has gained greater attention over the past decade, especially in the field of image-guided surgery, where robust, accurate, and fast segmentation tools are important for planning and navigation. In this work, we explore Convolutional Neural Network (CNN) based approaches for multi-dataset segmentation of CT examinations. We hypothesize that the choice of certain parameters in the network architecture design critically influences the segmentation results. We employed two different CNN architectures, 3D-UNet and VGG-16, both widely accepted in the medical domain for segmentation tasks on the basis of their promising results compared to other state-of-the-art networks. To understand the effect of different parameter choices, we adopted two approaches: the first combines different weight initialization schemes with different activation functions, whereas the second combines different weight initialization methods with a set of loss functions and optimizers. For evaluation, the 3D-UNet was trained on the Medical Segmentation Decathlon dataset and VGG-16 on LiTS data. Quality assessment using eight quantitative metrics supports adopting the proposed strategies to improve segmentation results. Following a systematic evaluation of the results, we propose several strategies that can be adopted for obtaining good segmentation results. The highest Dice scores obtained with 3D-UNet for the liver, pancreas, and cardiac data were 0.897, 0.691, and 0.892, respectively. VGG-16, which was developed solely to work with liver data, delivered a Dice score of 0.921.
From all the experiments conducted, we observed that two combinations worked best for most of the metrics in the 3D-UNet setting: Xavier (also known as Glorot) weight initialization with cross entropy loss and the Adam optimizer (GloCEAdam), and LeCun weight initialization with cross entropy loss and the Adam optimizer (LecCEAdam); Xavier initialization together with cross entropy loss and the tanh activation function (GloCEtanh) worked best for the VGG-16 network. These parameter combinations are proposed on the basis of their contribution to optimal outcomes in the segmentation evaluations, and the preliminary results indicate that they could later be used to gain more insight into model convergence and optimal solutions. The results from the quality assessment metrics and the statistical analysis support our conclusions, and we propose that the presented work can serve as a guide for choosing parameters to obtain the best possible segmentation results in future work.
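The initialization schemes compared above set weight scales from a layer's fan-in and fan-out. A brief sketch of the two rules in plain NumPy (the layer shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def glorot_uniform(fan_in, fan_out):
    """Xavier/Glorot uniform: U(-a, a) with a = sqrt(6 / (fan_in + fan_out))."""
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

def lecun_normal(fan_in, fan_out):
    """LeCun normal: N(0, 1/fan_in), variance scaled by the input fan only."""
    return rng.normal(0.0, np.sqrt(1.0 / fan_in), size=(fan_in, fan_out))

W_glorot = glorot_uniform(256, 128)  # keeps forward/backward variance balanced
W_lecun = lecun_normal(256, 128)     # keeps forward activation variance near 1
print(W_glorot.std(), W_lecun.std())
```

Both rules aim to keep activation variance stable across layers; which one works better in practice depends on the activation function and loss, which is precisely what the parameter study above probes.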


2021, Vol 21 (1)
Author(s): Adnan Saood, Iyad Hatem

Abstract Background Currently, there is an urgent need for efficient tools to assess the diagnosis of COVID-19 patients. In this paper, we present feasible solutions for detecting and labeling infected tissues in CT lung images of such patients. Two structurally different deep learning techniques, SegNet and U-NET, are investigated for semantically segmenting infected tissue regions in CT lung images. Methods We propose to use two known deep learning networks, SegNet and U-NET, for image tissue classification. SegNet is characterized as a scene segmentation network and U-NET as a medical segmentation tool. Both networks were exploited as binary segmentors to discriminate between infected and healthy lung tissue, and as multi-class segmentors to learn the infection type on the lung. Each network was trained using seventy-two images, validated on ten images, and tested on the remaining eighteen images. Several statistical scores were calculated for the results and tabulated accordingly. Results The results show the superior ability of SegNet to classify infected/non-infected tissues compared to the other methods (with 0.95 mean accuracy), while U-NET shows better results as a multi-class segmentor (with 0.91 mean accuracy). Conclusion Semantically segmenting CT scan images of COVID-19 patients is a crucial goal because it would not only assist in disease diagnosis but also help quantify the severity of the illness, and hence prioritize treatment of the population accordingly. We propose computer-based techniques that prove to be reliable detectors of infected tissue in lung CT scans. The availability of such methods in today's pandemic would help automate, prioritize, accelerate, and broaden the treatment of COVID-19 patients globally.
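The mean-accuracy scores quoted above average per-class pixel accuracy over the classes present in a label map. A small illustrative sketch on toy label maps (not the paper's data or exact scoring code):

```python
import numpy as np

def mean_pixel_accuracy(pred, target, num_classes):
    """Per-class pixel accuracy (recall), averaged over classes present in target."""
    accs = []
    for c in range(num_classes):
        mask = target == c
        if mask.any():
            accs.append((pred[mask] == c).mean())
    return float(np.mean(accs))

# Toy 3x3 label maps for a 3-class task (0 = healthy, 1/2 = two infection types).
target = np.array([[0, 0, 1], [0, 2, 2], [1, 1, 0]])
pred = np.array([[0, 0, 1], [0, 2, 1], [1, 0, 0]])
print(mean_pixel_accuracy(pred, target, 3))  # (1 + 2/3 + 1/2) / 3 ≈ 0.7222
```

Averaging per class rather than over all pixels prevents the large healthy-tissue background from dominating the score, which matters when infected regions are small.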


Author(s): Ngan Le, Trung Le, Kashu Yamazaki, Toan Bui, Khoa Luu, ...

2021, pp. 573-583
Author(s): Lin Wang, Lie Ju, Donghao Zhang, Xin Wang, Wanji He, ...

2021, Vol 29 (1), pp. 15-29
Author(s): Osama S. Faragallah, Ghada Abdel-Aziz, Walid El-Shafai, Hala S. El-sayed, S.F. El-Zoghdy, ...

2021, pp. 255-265
Author(s): Ufuk Demir, Atahan Ozer, Yusuf H. Sahin, Gozde Unal
