3D hierarchical dual-attention fully convolutional networks with hybrid losses for diverse glioma segmentation

2021 ◽  
pp. 107692
Author(s):  
Deting Kong ◽  
Xiyu Liu ◽  
Yan Wang ◽  
Dengwang Li ◽  
Jie Xue
IEEE Access ◽  
2020 ◽  
pp. 1-1
Author(s):  
Jeremy M. Webb ◽  
Duane D. Meixner ◽  
Shaheeda A. Adusei ◽  
Eric C. Polley ◽  
Mostafa Fatemi ◽  
...  

Algorithms ◽  
2021 ◽  
Vol 14 (5) ◽  
pp. 144
Author(s):  
Yuexing Han ◽  
Xiaolong Li ◽  
Bing Wang ◽  
Lu Wang

Image segmentation plays an important role in the field of image processing, helping to understand images and recognize objects. However, most existing methods are often unable to effectively exploit spatial information in 3D image segmentation, and they neglect the information carried by the contours and boundaries of the observed objects. In addition, shape boundaries can help to locate the observed objects, yet most existing loss functions ignore boundary information. To overcome these shortcomings, this paper presents a new learning framework based on cascaded 2.5D fully convolutional networks (FCNs) for segmenting 3D medical images. A new boundary loss that incorporates distance, area, and boundary information is also proposed, allowing the cascaded FCNs to learn more boundary and contour features from the 3D medical images. Moreover, an effective post-processing method is developed to further improve the segmentation accuracy. We verified the proposed method on the LITS and 3DIRCADb datasets, which include the liver and tumors. The experimental results show that the proposed method outperforms existing methods, with a Dice per case score of 74.5% for tumor segmentation, indicating its effectiveness.
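The hybrid loss described above combines region (area) and boundary terms. A minimal sketch, assuming a Dice-style formulation for both terms and 4-connectivity for boundary extraction; the exact distance/area weighting used by the authors is not specified here, and `hybrid_loss` and `alpha` are illustrative names:

```python
import numpy as np

def region_dice(pred, gt, eps=1e-7):
    # standard soft Dice overlap between two binary masks
    inter = np.sum(pred * gt)
    return (2 * inter + eps) / (pred.sum() + gt.sum() + eps)

def boundary_mask(m):
    # a pixel is on the boundary if it is foreground and any
    # 4-neighbour (or the image border) is background
    padded = np.pad(m, 1, mode="constant")
    neigh_min = np.minimum.reduce([
        padded[:-2, 1:-1], padded[2:, 1:-1],
        padded[1:-1, :-2], padded[1:-1, 2:]])
    return (m == 1) & (neigh_min == 0)

def hybrid_loss(pred, gt, alpha=0.5):
    # weighted sum of an area (region Dice) term and a boundary Dice term
    area_term = 1.0 - region_dice(pred, gt)
    bd_term = 1.0 - region_dice(boundary_mask(pred).astype(float),
                                boundary_mask(gt).astype(float))
    return alpha * area_term + (1 - alpha) * bd_term
```

A perfect prediction drives both terms to zero, while a spatially shifted mask is penalised through both its reduced overlap and its displaced contour.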


2021 ◽  
Vol 10 (8) ◽  
pp. 523
Author(s):  
Nicholus Mboga ◽  
Stefano D’Aronco ◽  
Tais Grippa ◽  
Charlotte Pelletier ◽  
Stefanos Georganos ◽  
...  

Multitemporal environmental and urban studies are essential to guide policy making and ultimately improve human wellbeing in the Global South. Land-cover products derived from historical aerial orthomosaics acquired decades ago can provide important evidence to inform long-term studies. To reduce the manual labelling effort by human experts and to scale to large, meaningful regions, we investigate in this study how domain adaptation techniques and deep learning can help to efficiently map land cover in Central Africa. We propose and evaluate a methodology based on unsupervised adaptation to reduce the cost of generating reference data for several cities and across different dates. We present the first application of domain adaptation based on fully convolutional networks for semantic segmentation of a dataset of historical panchromatic orthomosaics for land-cover generation, for two focus cities, Goma-Gisenyi and Bukavu. Our experimental evaluation shows that the domain adaptation methods can reach an overall accuracy between 60% and 70% for different regions. Adding a small amount of labelled data from the target domain yields further performance gains.
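Overall accuracy, the metric reported above, is simply the fraction of pixels whose predicted land-cover class matches the reference. A minimal sketch via a confusion matrix (function names are illustrative, not from the paper):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    # counts[i, j]: number of pixels of reference class i predicted as class j
    counts = np.zeros((n_classes, n_classes), dtype=np.int64)
    for t, p in zip(np.ravel(y_true), np.ravel(y_pred)):
        counts[t, p] += 1
    return counts

def overall_accuracy(y_true, y_pred, n_classes):
    # fraction of correctly classified pixels = trace / total
    cm = confusion_matrix(y_true, y_pred, n_classes)
    return float(np.trace(cm) / cm.sum())
```

Keeping the full confusion matrix also allows per-class producer's and user's accuracies, which matter when land-cover classes are strongly imbalanced.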


2021 ◽  
Vol 13 (14) ◽  
pp. 2794
Author(s):  
Shuhao Ran ◽  
Xianjun Gao ◽  
Yuanwei Yang ◽  
Shaohua Li ◽  
Guangbin Zhang ◽  
...  

Deep learning approaches have been widely used in automatic building extraction tasks and have made great progress in recent years. However, missed and false detections caused by spectral confusion remain a great challenge. Existing fully convolutional networks (FCNs) cannot effectively distinguish whether feature differences arise within a single building or between a building and its adjacent non-building objects. To overcome these limitations, a building multi-feature fusion refined network (BMFR-Net) is presented in this paper to extract buildings accurately and completely. BMFR-Net is based on an encoder-decoder structure and mainly consists of two parts: the continuous atrous convolution pyramid (CACP) module and the multiscale output fusion constraint (MOFC) structure. The CACP module is positioned at the end of the contracting path and effectively minimizes the loss of effective information in multiscale feature extraction and fusion by using parallel continuous small-scale atrous convolutions. To improve the ability to aggregate semantic information from the context, the MOFC structure performs predictive output at each stage of the expanding path and integrates the results into the network. Furthermore, the multilevel joint weighted loss function effectively updates parameters far from the output layer, enhancing the network's capacity to learn low-level abstract features. The experimental results demonstrate that the proposed BMFR-Net outperforms five other state-of-the-art approaches in both visual interpretation and quantitative evaluation.
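The multilevel joint weighted loss sums the prediction losses from each stage of the expanding path. A minimal sketch, assuming binary cross-entropy per stage and hand-picked weights (both are assumptions, not the paper's exact choices):

```python
import numpy as np

def bce(pred, gt, eps=1e-7):
    # elementwise binary cross-entropy, clipped for numerical stability
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(gt * np.log(pred) + (1 - gt) * np.log(1 - pred)))

def multilevel_joint_loss(stage_preds, gt, weights):
    # weighted sum of per-stage losses; because every decoder stage
    # receives its own supervision signal, gradients reach layers far
    # from the final output directly rather than only via backpropagation
    # through the whole decoder
    assert len(stage_preds) == len(weights)
    return sum(w * bce(p, gt) for p, w in zip(stage_preds, weights))
```

In practice each stage's prediction would be upsampled to the ground-truth resolution before the loss is computed.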


2021 ◽  
Vol 104 (3) ◽  
pp. 003685042110381
Author(s):  
Xue Bai ◽  
Ze Liu ◽  
Jie Zhang ◽  
Shengye Wang ◽  
Qing Hou ◽  
...  

Fully convolutional networks were developed to predict optimal dose distributions for patients with left-sided breast cancer, and the prediction accuracy of two-dimensional and three-dimensional networks was compared. Sixty cases treated with volumetric modulated arc radiotherapy were analyzed. Among them, 50 cases were randomly chosen to form the training set, and the remaining 10 constituted the test set. Two U-Net fully convolutional networks, with two-dimensional and three-dimensional convolution kernels respectively, predicted the dose distributions. Computed tomography images, delineated regions of interest, or their combination were used as input data. The accuracy of the predicted results was evaluated against the clinical dose. Most types of input data yielded doses similar to the ground truth for organs at risk (p > 0.05). Overall, the two-dimensional model performed better than the three-dimensional model (p < 0.05). Moreover, the two-dimensional region of interest input provided the best prediction results regarding the planning target volume mean percentage difference (2.40 ± 0.18%), heart mean percentage difference (4.28 ± 2.02%), and the gamma index at 80% of the prescription dose with tolerances of 3 mm and 3% (0.85 ± 0.03), whereas the two-dimensional combined input provided the best prediction regarding ipsilateral lung mean percentage difference (4.16 ± 1.48%), lung mean percentage difference (2.41 ± 0.95%), spinal cord mean percentage difference (0.67 ± 0.40%), and 80% Dice similarity coefficient (0.94 ± 0.01). Statistically, the two-dimensional combined input achieved higher prediction accuracy regarding the 80% Dice similarity coefficient than the two-dimensional region of interest input (0.94 ± 0.01 vs 0.92 ± 0.01, p < 0.05). The two-dimensional data model achieves higher performance than its three-dimensional counterpart for dose prediction, especially when using region of interest and combined inputs.
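The 80% Dice similarity coefficient reported above compares the predicted and clinical dose distributions binarised at 80% of the prescription dose. A minimal sketch of that metric (the clinical evaluation pipeline is more involved; `dice_at_isodose` is an illustrative name):

```python
import numpy as np

def dice_at_isodose(dose_pred, dose_ref, prescription, level=0.8, eps=1e-7):
    # binarise both dose distributions at a fraction of the prescription
    # dose, then compute the Dice similarity coefficient of the volumes
    p = dose_pred >= level * prescription
    r = dose_ref >= level * prescription
    inter = np.logical_and(p, r).sum()
    return float((2 * inter + eps) / (p.sum() + r.sum() + eps))
```

A coefficient of 1.0 means the predicted 80% isodose volume coincides exactly with the clinical one; values near 0.94, as reported for the combined input, indicate close geometric agreement.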

