Multi-Depth Fusion Network for Whole-Heart CT Image Segmentation

IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 23421-23429 ◽  
Author(s):  
Chengqin Ye ◽  
Wei Wang ◽  
Shanzhuo Zhang ◽  
Kuanquan Wang

IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 14579-14587
Author(s):  
Wei Wang ◽  
Chengqin Ye ◽  
Shanzhuo Zhang ◽  
Yong Xu ◽  
Kuanquan Wang

2021 ◽  
Vol 91 ◽  
pp. 107024 ◽  
Author(s):  
Xiwang Xie ◽  
Weidong Zhang ◽  
Huadeng Wang ◽  
Lingqiao Li ◽  
Zhengyun Feng ◽  
...  

2021 ◽  
Vol 11 (4) ◽  
pp. 1965
Author(s):  
Raul-Ronald Galea ◽  
Laura Diosan ◽  
Anca Andreica ◽  
Loredana Popa ◽  
Simona Manole ◽  
...  

Despite the promising results obtained by deep learning methods in medical image segmentation, a lack of sufficient data often limits performance. In this work, we explore the feasibility of applying deep learning methods to a pilot dataset. We present a simple and practical approach that performs segmentation in a 2D, slice-by-slice manner based on region of interest (ROI) localization, applying an optimized training regime to improve segmentation performance within the localized ROIs. We start from two popular segmentation networks: U-Net, the preferred model for medical segmentation, and DeepLabV3+, a general-purpose model. Furthermore, we show that ensembling these two fundamentally different architectures brings consistent benefits, which we demonstrate on two datasets: the publicly available ACDC challenge dataset and the imATFIB dataset from our in-house clinical study. Results on the imATFIB dataset show that the proposed approach performs well with the provided training volumes, achieving an average whole-heart Dice Similarity Coefficient of 89.89% on the validation set. Moreover, our algorithm achieved a mean Dice value of 91.87% on the ACDC validation set, comparable to the second best-performing approach in the challenge. Our approach could serve as a building block of a computer-aided diagnostic system in a clinical setting.
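The abstract does not come with code, but the slice-wise ensembling it describes can be illustrated with a minimal sketch: average the per-slice foreground probabilities of the two networks, threshold, and score the result with the Dice Similarity Coefficient. The function names (dice_coefficient, ensemble_volume, predict_slices) and the simple probability-averaging rule are illustrative assumptions, not the authors' exact ensembling scheme.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice Similarity Coefficient between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def ensemble_volume(unet_probs, deeplab_probs, threshold=0.5):
    """Average the slice-wise foreground probabilities of the two models
    and threshold to obtain the ensembled segmentation.

    Both inputs are arrays of shape (num_slices, H, W) holding the
    foreground probability predicted for each 2D slice of the ROI."""
    mean_probs = (unet_probs + deeplab_probs) / 2.0
    return mean_probs > threshold

# Hypothetical usage on one ROI-cropped volume (predict_slices is not defined here):
# unet_probs, deeplab_probs = predict_slices(volume)
# mask = ensemble_volume(unet_probs, deeplab_probs)
# print(dice_coefficient(mask, ground_truth))
```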


Sensors ◽  
2021 ◽  
Vol 21 (1) ◽  
pp. 268
Author(s):  
Yeganeh Jalali ◽  
Mansoor Fateh ◽  
Mohsen Rezvani ◽  
Vahid Abolghasemi ◽  
Mohammad Hossein Anisi

Lung CT image segmentation is a key step in many applications such as lung cancer detection. It is considered a challenging problem because of the similar image densities of pulmonary structures and the variety of scanners and scanning protocols. Most current semi-automatic segmentation methods rely on human input and can therefore suffer from a lack of accuracy; another shortcoming of these methods is their high false-positive rate. In recent years, several deep-learning-based approaches have been applied effectively to medical image segmentation. Among existing deep neural networks, U-Net has been particularly successful in this field. In this paper, we propose a deep neural network architecture for automatic lung CT image segmentation. In the proposed method, several extensive preprocessing techniques are applied to the raw CT images. Ground truths corresponding to these images are then extracted via morphological operations and manual refinement. Finally, all the prepared images with their corresponding ground truths are fed into a modified U-Net in which the encoder is replaced with a pre-trained ResNet-34 network (referred to as Res BCDU-Net). In this architecture, we employ BConvLSTM (Bidirectional Convolutional Long Short-Term Memory) as an integration module instead of simple concatenation: it merges the feature maps extracted along the contracting path with the output of the preceding up-convolutional layer in the expanding path. Finally, a densely connected convolutional layer is used in the contracting path. The results of our extensive experiments on lung CT images (LIDC-IDRI database) confirm the effectiveness of the proposed method, which achieves a Dice coefficient of 97.31%.
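As a rough sketch of the BConvLSTM skip integration the abstract describes (not the authors' exact Res BCDU-Net code), the encoder skip features and the up-sampled decoder features can be stacked along a length-2 "time" axis and passed through a bidirectional ConvLSTM in Keras. The function name bconvlstm_merge, the filter counts, and the assumption that both feature maps already share the same spatial size and channel depth are illustrative choices.

```python
import tensorflow as tf
from tensorflow.keras import layers

def bconvlstm_merge(encoder_feat, decoder_feat, filters):
    """Merge an encoder skip connection with the up-sampled decoder
    features using a bidirectional ConvLSTM instead of plain concatenation.

    Both inputs have shape (batch, H, W, C); they are stacked along a new
    length-2 'time' axis so the ConvLSTM can scan them in both directions."""
    h, w, c = encoder_feat.shape[1:]
    enc_seq = layers.Reshape((1, h, w, c))(encoder_feat)
    dec_seq = layers.Reshape((1, h, w, c))(decoder_feat)
    seq = layers.Concatenate(axis=1)([enc_seq, dec_seq])      # (batch, 2, H, W, C)
    merged = layers.Bidirectional(
        layers.ConvLSTM2D(filters, kernel_size=3, padding='same',
                          return_sequences=False),
        merge_mode='concat')(seq)                             # (batch, H, W, 2*filters)
    return merged

# Hypothetical usage inside a U-Net-style decoder block:
# x = layers.Conv2DTranspose(128, 2, strides=2, padding='same')(x)
# x = bconvlstm_merge(skip, x, filters=128)
```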


Author(s):  
H.-F. Lee ◽  
P.-C. Huang ◽  
C. Wietholt ◽  
C.-H. Hsu ◽  
K. M. Lin ◽  
...  

2021 ◽  
pp. 420-430
Author(s):  
Youyi Song ◽  
Lequan Yu ◽  
Baiying Lei ◽  
Kup-Sze Choi ◽  
Jing Qin

Author(s):  
Hiromasa Suzuki ◽  
Hiroyuki Hishida ◽  
Takashi Michikawa ◽  
Yutaka Ohtake ◽  
Satoshi Oota ◽  
...  
