HFRU-Net: High-Level Feature Fusion and Recalibration UNet for Automatic Liver and Tumor Segmentation in CT Images

Author(s):  
Devidas T. Kushnure ◽  
Sanjay N. Talbar
2019 ◽  
Vol 56 (11) ◽  
pp. 111001
Author(s):  
Qi He ◽  
Yao Li ◽  
Wei Song ◽  
Dongmei Huang ◽  
Shengqi He ◽  
...  

2021 ◽  
Vol 11 ◽  
Author(s):  
Haimei Li ◽  
Bing Liu ◽  
Yongtao Zhang ◽  
Chao Fu ◽  
Xiaowei Han ◽  
...  

Automatic segmentation of gastric tumors not only supports image-guided clinical diagnosis but also assists radiologists in reading images and improving diagnostic accuracy. However, due to the inhomogeneous intensity distribution of gastric tumors in CT scans, their ambiguous or missing boundaries, and their highly variable shapes, developing an automatic solution is quite challenging. This study designs a novel 3D improved feature pyramidal network (3D IFPN) to automatically segment gastric tumors in computed tomography (CT) images. To meet the challenges of this extremely difficult task, the proposed 3D IFPN makes full use of the complementary information in the low and high layers of deep convolutional neural networks and is equipped with three types of feature enhancement modules: a 3D adaptive spatial feature fusion (ASFF) module, a single-level feature refinement (SLFR) module, and a multi-level feature refinement (MLFR) module. The 3D ASFF module adaptively suppresses feature inconsistency across levels and hence obtains multi-level features with high feature invariance. The SLFR module then combines the adaptive features with the previous multi-level features at each level, generating refined multi-level features through skip connections and an attention mechanism. The MLFR module adaptively recalibrates channel-wise and spatial-wise responses through an attention operation, improving the prediction capability of the network. Furthermore, a stage-wise deep supervision (SDS) mechanism and a hybrid loss function are embedded to enhance the feature learning ability of the network. A CT volume dataset collected from three Chinese medical centers was used to evaluate the segmentation performance of the proposed 3D IFPN model. Experimental results indicate that our method outperforms state-of-the-art segmentation networks in gastric tumor segmentation. Moreover, to explore its generalization to other segmentation tasks, we also extend the proposed network to liver tumor segmentation on CT images from the MICCAI 2017 Liver Tumor Segmentation Challenge.
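
The channel- and spatial-wise recalibration that the MLFR module performs can be pictured as a squeeze-and-excitation-style block. The PyTorch snippet below is a minimal sketch under that assumption; the module name, layer sizes, and the max-combination of the two gates are our illustrative choices, not the authors' design.

```python
import torch
import torch.nn as nn

class ChannelSpatialRecalibration(nn.Module):
    """Hypothetical sketch of channel- and spatial-wise attention
    recalibration in the spirit of the MLFR module described above."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel gate: global average pool -> bottleneck MLP -> sigmoid
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial gate: 1x1x1 conv collapses channels into a saliency map
        self.spatial_gate = nn.Sequential(
            nn.Conv3d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Recalibrate along both axes and keep the stronger activation
        return torch.max(x * self.channel_gate(x), x * self.spatial_gate(x))

# Usage: recalibrate a batch of 3D feature maps, shape (N, C, D, H, W)
feats = torch.randn(2, 32, 16, 64, 64)
out = ChannelSpatialRecalibration(32)(feats)
print(out.shape)  # torch.Size([2, 32, 16, 64, 64])
```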


2021 ◽  
Vol 11 (8) ◽  
pp. 2231-2242
Author(s):  
Fei Gao ◽  
Kai Qiao ◽  
Jinjin Hai ◽  
Bin Yan ◽  
Minghui Wu ◽  
...  

The goal of this research is to achieve accurate segmentation of liver tumors in noncontrast T2-weighted magnetic resonance imaging. As liver tumors and adjacent organs are represented by pixels of very similar gray intensity, segmentation is challenging, and the wide range of tumor sizes makes it more difficult still. Unlike previous work that captures contextual information through multiscale feature fusion with concatenation, our segmentation model adds an attention mechanism to extract precise global contextual information for pixel labeling without requiring complex dilated convolutions. This study describes a liver lesion segmentation model derived from FC-DenseNet with an attention mechanism. Specifically, a global attention module (GAM) is added to the up-sampling path, where high-level features are processed by the GAM to generate weighting information that guides the recovery of high-resolution detail features. High-level features are very effective for accurate category classification but relatively weak for pixel-level classification and restoring the original resolution, so fusing high-level semantic features with low-level detail features can improve segmentation accuracy. A weighted focal loss function is used to address the small proportion of the image occupied by the lesion area and the imbalance between foreground and background in the training liver lesion images. Experimental results show that our model can automatically segment liver tumors from whole MRI images and that adding the GAM effectively improves liver tumor segmentation. Our algorithm has clear advantages over other CNN algorithms and traditional hand-crafted feature extraction methods.
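
A weighted focal loss for foreground/background imbalance can be illustrated with the standard focal-loss formulation, where a foreground weight alpha rebalances the classes and the (1 - p_t)^gamma term down-weights easy pixels. The sketch below follows that common formulation; the paper's exact weighting scheme may differ.

```python
import torch
import torch.nn.functional as F

def weighted_focal_loss(logits: torch.Tensor,
                        targets: torch.Tensor,
                        alpha: float = 0.75,
                        gamma: float = 2.0) -> torch.Tensor:
    """Binary focal loss with a foreground weight `alpha`; a common
    formulation, not necessarily the paper's exact one."""
    probs = torch.sigmoid(logits)
    # p_t is the model's probability of the true class at each pixel
    p_t = torch.where(targets == 1, probs, 1 - probs)
    alpha_t = torch.where(targets == 1,
                          torch.full_like(probs, alpha),
                          torch.full_like(probs, 1 - alpha))
    bce = F.binary_cross_entropy_with_logits(logits, targets.float(),
                                             reduction="none")
    # (1 - p_t)^gamma suppresses the loss from easy, well-classified pixels
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()

# Usage: per-pixel lesion logits against a sparse binary lesion mask
logits = torch.randn(2, 1, 128, 128)
mask = (torch.rand(2, 1, 128, 128) > 0.9).long()
print(weighted_focal_loss(logits, mask).item())
```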


Author(s):  
Nermeen Elmenabawy ◽  
Mervat El-Seddek ◽  
Hossam El-Din Moustafa ◽  
Ahmed Elnakib

A pipelined framework is proposed for accurate, automated, simultaneous segmentation of the liver and hepatic tumors from computed tomography (CT) images. The framework is composed of three pipelined stages. First, two different transfer-learned deep convolutional neural networks (CNNs) are applied to extract high-level compact features from CT images. Second, a pixel-wise classifier is applied to each CNN model's features to obtain two classified output maps. Finally, a fusion neural network (FNN) is used to integrate the two maps. Experiments on the MICCAI 2017 Liver Tumor Segmentation (LiTS) challenge database, using a 5-fold cross-validation scheme, yield a Dice similarity coefficient (DSC) of 93.5% for liver segmentation and 74.40% for lesion segmentation. Comparisons with state-of-the-art techniques on the same data show the competitive performance of the proposed framework for simultaneous liver and tumor segmentation.
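
As a rough illustration of the final fusion stage, the sketch below combines two pixel-wise classification maps with a small convolutional network. The `MapFusionNet` name, layer sizes, and 1x1-convolution design are our assumptions, not the paper's FNN architecture.

```python
import torch
import torch.nn as nn

class MapFusionNet(nn.Module):
    """Hypothetical fusion network: stacks the two per-pixel class maps
    and learns a combined prediction with 1x1 convolutions."""
    def __init__(self, n_classes: int = 3, hidden: int = 16):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * n_classes, hidden, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, n_classes, kernel_size=1),
        )

    def forward(self, map_a: torch.Tensor, map_b: torch.Tensor) -> torch.Tensor:
        # Concatenate the two classifier outputs along the channel axis
        return self.fuse(torch.cat([map_a, map_b], dim=1))

# Usage: fuse per-pixel scores (background / liver / tumor) from two CNNs
a = torch.randn(1, 3, 256, 256)
b = torch.randn(1, 3, 256, 256)
print(MapFusionNet()(a, b).shape)  # torch.Size([1, 3, 256, 256])
```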


2021 ◽  
Vol 13 (3) ◽  
pp. 72
Author(s):  
Shengbo Chen ◽  
Hongchang Zhang ◽  
Zhou Lei

Person re-identification (ReID) plays a significant role in video surveillance analysis. In real-world settings with varying illumination, occlusion, and deformation, pedestrian feature extraction is the key to person ReID. Considering the shortcomings of existing methods in pedestrian feature extraction, a method based on an attention mechanism and context information fusion is proposed. A lightweight attention module with few parameters is introduced into the ResNet50 backbone network, enhancing salient pedestrian characteristics and suppressing irrelevant information. To address the loss of pedestrian context information caused by excessive network depth, a context information fusion module is designed that samples the shallow pedestrian feature map and cascades it with the high-level feature map. To improve robustness, the model is trained with a combination of the margin sample mining loss and the cross-entropy loss. Experiments on the Market1501 and DukeMTMC-reID datasets show that our method achieves rank-1 accuracy of 95.9% on Market1501 and 90.1% on DukeMTMC-reID, outperforming current mainstream methods when only global features are used.
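
The context information fusion module, as described, resamples a shallow feature map and cascades it with the high-level map. The following is a minimal sketch under that reading; channel sizes follow standard ResNet50 stages, while the block design and the pooling-based resampling are our own assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextFusion(nn.Module):
    """Hypothetical context-fusion block: downsamples a shallow feature
    map to the deep map's resolution, cascades (concatenates) the two,
    and mixes channels with a 1x1 convolution."""
    def __init__(self, shallow_ch: int = 512, deep_ch: int = 2048):
        super().__init__()
        self.mix = nn.Sequential(
            nn.Conv2d(shallow_ch + deep_ch, deep_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(deep_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, shallow: torch.Tensor, deep: torch.Tensor) -> torch.Tensor:
        # Resample the shallow map so spatial sizes match before cascading
        shallow = F.adaptive_avg_pool2d(shallow, deep.shape[-2:])
        return self.mix(torch.cat([shallow, deep], dim=1))

# Usage: fuse ResNet50 stage-2 and stage-4 outputs for a 256x128 person crop
s = torch.randn(1, 512, 32, 16)
d = torch.randn(1, 2048, 8, 4)
print(ContextFusion()(s, d).shape)  # torch.Size([1, 2048, 8, 4])
```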


2021 ◽  
Vol 54 (2) ◽  
pp. 1-35
Author(s):  
Chenning Li ◽  
Zhichao Cao ◽  
Yunhao Liu

With the development of the Internet of Things (IoT), many kinds of wireless signals (e.g., Wi-Fi, LoRa, RFID) fill our living and working spaces nowadays. Beyond communication, wireless signals can sense the status of surrounding objects, known as wireless sensing, via their reflection, scattering, and refraction while propagating in space. In the last decade, many sophisticated wireless sensing techniques and systems have been widely studied for various applications (e.g., gesture recognition, localization, and object imaging). Recently, deep Artificial Intelligence (AI), also known as Deep Learning (DL), has shown great success in computer vision, and early works have shown that deep AI can benefit wireless sensing as well, marking a new step toward ubiquitous sensing. In this survey, we focus on the evolution of wireless sensing enhanced by deep AI techniques. We first present a general workflow of Wireless Sensing Systems (WSSs), which consists of signal pre-processing, high-level feature extraction, and sensing model formulation. For each module, existing deep AI-based techniques are summarized and compared with traditional approaches. We then discuss the issues and challenges that arise when combining deep AI with wireless sensing. Finally, we discuss future trends in using deep AI to enable ubiquitous wireless sensing.
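
To make the three-stage workflow concrete, the toy sketch below wires pre-processing, feature extraction, and sensing model formulation into one classifier. Every shape, layer, and name here is an illustrative assumption, not drawn from any surveyed system.

```python
import torch
import torch.nn as nn

class TinyWSS(nn.Module):
    """Toy illustration of the three-stage WSS workflow described above:
    (1) signal pre-processing, (2) high-level feature extraction, and
    (3) sensing model formulation."""
    def __init__(self, n_subcarriers: int = 30, n_gestures: int = 6):
        super().__init__()
        # Stage 2: a small 1D CNN learns high-level features over time
        self.features = nn.Sequential(
            nn.Conv1d(n_subcarriers, 64, kernel_size=7, padding=3),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool1d(1),
        )
        # Stage 3: a linear head formulates the sensing model (classification)
        self.head = nn.Linear(64, n_gestures)

    def forward(self, csi: torch.Tensor) -> torch.Tensor:
        # Stage 1: per-window normalization as a stand-in for de-noising
        csi = (csi - csi.mean(dim=-1, keepdim=True)) / \
              (csi.std(dim=-1, keepdim=True) + 1e-6)
        return self.head(self.features(csi).squeeze(-1))

# Usage: classify a batch of CSI windows, shape (batch, subcarriers, time)
csi = torch.randn(4, 30, 256)
print(TinyWSS()(csi).shape)  # torch.Size([4, 6])
```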

