An Attention-guided Deep Neural Network with Multi-scale Feature Fusion for Liver Vessel Segmentation

Author(s):  
Qingsen Yan ◽  
Bo Wang ◽  
Wei Zhang ◽  
Chuan Luo ◽  
Wei Xu ◽  
...  
2021 ◽  
pp. 1-15
Author(s):  
Wenjun Tan ◽  
Luyu Zhou ◽  
Xiaoshuo Li ◽  
Xiaoyu Yang ◽  
Yufei Chen ◽  
...  

BACKGROUND: The distribution of pulmonary vessels in computed tomography (CT) and computed tomography angiography (CTA) images of the lung is important for diagnosing disease, formulating surgical plans, and pulmonary research. PURPOSE: Based on the pulmonary vascular segmentation task of the International Symposium on Image Computing and Digital Medicine 2020 challenge, this paper reviews 12 different pulmonary vascular segmentation algorithms for lung CT and CTA images and then objectively evaluates and compares their performance. METHODS: First, we present the annotated reference dataset of lung CT and CTA images. A subset of the dataset consisting of 7,307 slices for training and 3,888 slices for testing was made available to participants. Second, by analyzing the performance of different convolutional neural networks from 12 different institutions for pulmonary vascular segmentation, the reasons for certain defects and improvements are summarized. The models are mainly based on U-Net, attention mechanisms, GANs, and multi-scale fusion networks. Performance is measured in terms of the Dice coefficient, over-segmentation ratio, and under-segmentation rate. Finally, we discuss several proposed methods to improve pulmonary vessel segmentation results using deep neural networks. RESULTS: Compared with the annotated ground truth from both lung CT and CTA images, most of the 12 deep neural network algorithms perform well in pulmonary vascular extraction and segmentation, with Dice coefficients ranging from 0.70 to 0.85. The Dice coefficients of the top three algorithms are about 0.80. CONCLUSIONS: The results show that integrating methods that consider spatial information, fuse multi-scale feature maps, or apply effective post-processing into the deep neural network training and optimization process is important for further improving the accuracy of pulmonary vascular segmentation.
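The abstract names three evaluation measures but does not give the challenge's exact formulas, so the sketch below implements one common convention for the Dice coefficient, over-segmentation ratio, and under-segmentation rate on binary NumPy masks; normalizing both ratios by the reference vessel count is an assumption, not the challenge's official scoring code.

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8):
    """Dice coefficient, over-segmentation ratio, under-segmentation rate for binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    dice = 2.0 * intersection / (pred.sum() + gt.sum() + eps)
    # Over-segmentation: voxels predicted as vessel but absent from the reference.
    over = np.logical_and(pred, ~gt).sum() / (gt.sum() + eps)
    # Under-segmentation: reference vessel voxels missed by the prediction.
    under = np.logical_and(~pred, gt).sum() / (gt.sum() + eps)
    return dice, over, under

# Example on random masks:
pred = np.random.rand(64, 64) > 0.5
gt = np.random.rand(64, 64) > 0.5
print(segmentation_metrics(pred, gt))
```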


Entropy ◽  
2019 ◽  
Vol 21 (6) ◽  
pp. 622 ◽  
Author(s):  
Xiaoyang Liu ◽  
Wei Jing ◽  
Mingxuan Zhou ◽  
Yuxing Li

Automatic coal-rock recognition is one of the critical technologies for intelligent coal mining and processing. Most existing coal-rock recognition methods have defects such as unsatisfactory performance and low robustness. To solve these problems, and taking the distinctive visual features of coal and rock into consideration, a multi-scale feature fusion coal-rock recognition (MFFCRR) model based on a multi-scale Completed Local Binary Pattern (CLBP) and a Convolutional Neural Network (CNN) is proposed in this paper. Firstly, multi-scale CLBP features are extracted from coal-rock image samples in the Texture Feature Extraction (TFE) sub-model, representing the texture information of the coal-rock image. Secondly, high-level deep features are extracted from coal-rock image samples in the Deep Feature Extraction (DFE) sub-model, representing the macroscopic information of the coal-rock image. The texture and macroscopic information are acquired based on information theory. Thirdly, the multi-scale feature vector is generated by fusing the multi-scale CLBP feature vector and the deep feature vector. Finally, the multi-scale feature vectors are input to a nearest neighbor classifier with the chi-square distance to perform coal-rock recognition. Experimental results show that the coal-rock image recognition accuracy of the proposed MFFCRR model reaches 97.9167%, an improvement of 2%–3% over state-of-the-art coal-rock recognition methods.
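The fusion and classification steps described above can be illustrated with a short sketch: concatenating a CLBP texture descriptor with a deep feature vector and classifying with a chi-square nearest-neighbour rule. The CLBP and CNN feature extractors themselves are assumed to exist and are not reproduced here; this is an illustration of the pipeline, not the paper's implementation.

```python
import numpy as np

def chi_square_distance(x: np.ndarray, y: np.ndarray, eps: float = 1e-10) -> float:
    """Chi-square distance between two non-negative feature vectors."""
    return 0.5 * np.sum((x - y) ** 2 / (x + y + eps))

def fuse_features(clbp_vec: np.ndarray, deep_vec: np.ndarray) -> np.ndarray:
    """Concatenate texture (CLBP) and deep (CNN) descriptors into one multi-scale vector."""
    return np.concatenate([clbp_vec, deep_vec])

def nearest_neighbor_predict(query: np.ndarray,
                             train_feats: np.ndarray,
                             train_labels: np.ndarray) -> int:
    """Return the label of the training sample with the smallest chi-square distance."""
    dists = np.array([chi_square_distance(query, f) for f in train_feats])
    return int(train_labels[np.argmin(dists)])
```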


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 208969-208977
Author(s):  
Meiyan Liang ◽  
Zhuyun Ren ◽  
Jiamiao Yang ◽  
Wenxiang Feng ◽  
Bo Li

Entropy ◽  
2020 ◽  
Vol 22 (8) ◽  
pp. 811
Author(s):  
Dan Yang ◽  
Guoru Liu ◽  
Mengcheng Ren ◽  
Bin Xu ◽  
Jiao Wang

Computer-aided automatic segmentation of retinal blood vessels plays an important role in the diagnosis of diseases such as diabetes, glaucoma, and macular degeneration. In this paper, we propose a multi-scale feature fusion retinal vessel segmentation model based on U-Net, named MSFFU-Net. The model introduces the inception structure into the multi-scale feature extraction encoder, and the max-pooling indices are applied during upsampling in the feature fusion decoder of the improved network. Skip connections transfer each set of feature maps generated on the encoder path to the corresponding feature maps on the decoder path. Moreover, a cost-sensitive loss function based on the Dice coefficient and cross-entropy is designed. Four transformations (rotation, mirroring, shifting, and cropping) are used as data augmentation strategies, and the CLAHE algorithm is applied for image preprocessing. The proposed framework is trained and tested on DRIVE and STARE, with sensitivity (Sen), specificity (Spe), accuracy (Acc), and area under the curve (AUC) adopted as the evaluation metrics. Detailed comparisons with the U-Net model verify the effectiveness and robustness of the proposed model. Sen of 0.7762 and 0.7721, Spe of 0.9835 and 0.9885, Acc of 0.9694 and 0.9537, and AUC of 0.9790 and 0.9680 were achieved on the DRIVE and STARE databases, respectively. Results are also compared with other state-of-the-art methods, demonstrating that the proposed method achieves superior and competitive performance.
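A loss of the kind described above can be sketched in PyTorch as a weighted combination of a soft Dice term and cross-entropy. The paper's cost-sensitive weighting is not specified in the abstract, so the mixing factor `alpha` and the positive-class weight `pos_weight` below are illustrative assumptions rather than the authors' settings.

```python
import torch
import torch.nn as nn

class DiceCrossEntropyLoss(nn.Module):
    """Combined soft Dice + weighted binary cross-entropy loss (illustrative sketch)."""
    def __init__(self, alpha: float = 0.5, pos_weight: float = 2.0, eps: float = 1e-6):
        super().__init__()
        self.alpha = alpha          # assumed mixing factor between Dice and BCE terms
        self.eps = eps
        self.bce = nn.BCEWithLogitsLoss(pos_weight=torch.tensor(pos_weight))

    def forward(self, logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        probs = torch.sigmoid(logits)
        intersection = (probs * target).sum()
        dice = (2.0 * intersection + self.eps) / (probs.sum() + target.sum() + self.eps)
        # Penalize (1 - Dice) plus class-weighted cross-entropy on the vessel class.
        return self.alpha * (1.0 - dice) + (1.0 - self.alpha) * self.bce(logits, target)
```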


2020 ◽  
Vol 10 (5) ◽  
pp. 1023-1032
Author(s):  
Lin Qi ◽  
Haoran Zhang ◽  
Xuehao Cao ◽  
Xuyang Lyu ◽  
Lisheng Xu ◽  
...  

Accurate segmentation of the blood pool of the left ventricle (LV) and the myocardium (left ventricular epicardium, MYO) from cardiac magnetic resonance (MR) images can help doctors quantify LV ejection fraction and myocardial deformation. To reduce the burden of manual segmentation, in this study we propose an automated, concurrent segmentation method for the LV and MYO. First, we employ a convolutional neural network (CNN) architecture to extract the region of interest (ROI) from short-axis cardiac cine MR images as a preprocessing step. Next, we present a multi-scale feature fusion (MSFF) CNN with a new weighted Dice index (WDI) loss function for concurrent segmentation of the LV and MYO. We use MSFF modules at three scales to extract different features, and then concatenate feature maps through short and long skip connections in the encoder and decoder paths to capture more complete context information and geometric structure for better segmentation. Finally, we compare the proposed method with Fully Convolutional Networks (FCN) and U-Net on the combined cardiac datasets from MICCAI 2009 and ACDC 2017. Experimental results demonstrate that the proposed method performs effectively on LV and MYO segmentation in the combined datasets, indicating its potential for clinical application.
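A three-scale fusion block in the spirit of the MSFF module described above can be sketched as parallel convolutional branches with different receptive fields whose outputs are concatenated and fused. The abstract does not give the paper's kernel sizes, dilations, or channel counts, so the choices below (3x3 convolutions with dilations 1, 2, and 4) are assumptions.

```python
import torch
import torch.nn as nn

class MSFFBlock(nn.Module):
    """Speculative three-scale feature fusion block (not the paper's exact module)."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        branch_ch = out_ch // 3
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=d, dilation=d),
                nn.BatchNorm2d(branch_ch),
                nn.ReLU(inplace=True),
            )
            for d in (1, 2, 4)  # three receptive-field scales (assumed)
        ])
        # 1x1 convolution fuses the concatenated multi-scale features.
        self.fuse = nn.Conv2d(3 * branch_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [branch(x) for branch in self.branches]
        return self.fuse(torch.cat(feats, dim=1))
```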


Entropy ◽  
2021 ◽  
Vol 23 (4) ◽  
pp. 403
Author(s):  
Xun Zhang ◽  
Lanyan Yang ◽  
Bin Zhang ◽  
Ying Liu ◽  
Dong Jiang ◽  
...  

The problem of extracting meaningful information through graph analysis spans a range of fields, such as social networks, knowledge graphs, citation networks, and the World Wide Web. As more structured data become available, the ability to effectively mine and learn from such data continues to grow in importance. In this paper, we propose the multi-scale aggregation graph neural network based on feature similarity (MAGN), a novel graph neural network defined in the vertex domain. Our model provides a simple and general semi-supervised learning method for graph-structured data in which only a very small part of the data is labeled as the training set. We first construct a similarity matrix by calculating the similarity of original features between all adjacent node pairs, and then generate a set of feature extractors that use the similarity matrix to perform multi-scale feature propagation on graphs. The outputs of multi-scale feature propagation are finally aggregated by a mean-pooling operation. Our method aims to improve model representation ability via multi-scale neighborhood aggregation based on feature similarity. Extensive experimental evaluation on various open benchmarks shows the competitive performance of our method compared to a variety of popular architectures.
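The propagation scheme described above can be illustrated with a small NumPy sketch: a similarity matrix restricted to adjacent node pairs drives feature propagation at several neighborhood scales, and the per-scale outputs are mean-pooled. Cosine similarity and row normalization are assumptions for the illustration; the paper's exact similarity measure and feature extractors are not given in the abstract.

```python
import numpy as np

def similarity_matrix(X: np.ndarray, A: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Cosine similarity between features of adjacent node pairs (zero for non-edges)."""
    norms = np.linalg.norm(X, axis=1, keepdims=True) + eps
    S = (X @ X.T) / (norms * norms.T)
    return S * A  # keep only entries corresponding to graph edges

def magn_propagate(X: np.ndarray, A: np.ndarray, num_scales: int = 3) -> np.ndarray:
    """Multi-scale, similarity-guided feature propagation with mean-pooling over scales."""
    S = similarity_matrix(X, A)
    # Row-normalize so each propagation step averages over a node's neighbors.
    P = S / (S.sum(axis=1, keepdims=True) + 1e-8)
    outputs, H = [], X
    for _ in range(num_scales):
        H = P @ H                    # one more hop of similarity-weighted aggregation
        outputs.append(H)
    return np.mean(outputs, axis=0)  # mean-pooling across the propagation scales
```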

