Single-Input Multi-Output U-Net for Automated 2D Foetal Brain Segmentation of MR Images

2021 ◽  
Vol 7 (10) ◽  
pp. 200
Author(s):  
Andrik Rampun ◽  
Deborah Jarvis ◽  
Paul D. Griffiths ◽  
Reyer Zwiggelaar ◽  
Bryan W. Scotney ◽  
...  

In this work, we develop the Single-Input Multi-Output U-Net (SIMOU-Net), a hybrid network for foetal brain segmentation inspired by the original U-Net fused with the holistically nested edge detection (HED) network. The SIMOU-Net is similar to the original U-Net, but it has a deeper architecture and takes into account the features extracted from each side output. It acts similarly to an ensemble of neural networks; however, instead of averaging the outputs of several independently trained models, which is computationally expensive, our approach combines the outputs from a single network to reduce the variance of predictions and the generalization error. Experimental results using 200 normal foetal brains consisting of over 11,500 2D images produced Dice and Jaccard coefficients of 94.2 ± 5.9% and 88.7 ± 6.9%, respectively. We further tested the proposed network on 54 abnormal cases (over 3500 images) and achieved Dice and Jaccard coefficients of 91.2 ± 6.8% and 85.7 ± 6.6%, respectively.
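A minimal sketch of the side-output fusion idea described above, assuming a PyTorch encoder-decoder whose decoder levels each produce a side output; the layer widths and fusion-by-averaging strategy are illustrative assumptions, not the exact SIMOU-Net configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SideOutputFusion(nn.Module):
    """Fuse side outputs from several decoder depths of a single network,
    playing the role of an in-network ensemble average."""

    def __init__(self, channels_per_level=(256, 128, 64, 32)):
        super().__init__()
        # 1x1 convolutions turn each decoder feature map into a 1-channel side output
        self.heads = nn.ModuleList(
            nn.Conv2d(c, 1, kernel_size=1) for c in channels_per_level
        )

    def forward(self, decoder_feats, out_size):
        side_logits = []
        for head, feat in zip(self.heads, decoder_feats):
            logits = head(feat)
            # bring every side output to the full input resolution
            logits = F.interpolate(logits, size=out_size,
                                   mode="bilinear", align_corners=False)
            side_logits.append(logits)
        # averaging side outputs from one forward pass reduces prediction
        # variance without training several independent models
        fused = torch.stack(side_logits, dim=0).mean(dim=0)
        return torch.sigmoid(fused), side_logits  # side logits kept for deep supervision
```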

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Seung Kwan Kang ◽  
Seong A. Shin ◽  
Seongho Seo ◽  
Min Soo Byun ◽  
Dong Young Lee ◽  
...  

The detailed anatomical information of the brain provided by 3D magnetic resonance imaging (MRI) enables a wide range of neuroscience research. However, because of the long scan time required for 3D MR images, 2D images are mainly acquired in clinical environments. The purpose of this study is to generate 3D images from sparsely sampled 2D images using an inpainting deep neural network that has a U-Net-like structure and DenseNet sub-blocks. To train the network, not only a fidelity loss but also a perceptual loss based on the VGG network was considered. Various methods were used to assess the overall similarity between the inpainted and original 3D data. In addition, morphological analyses were performed to investigate whether the inpainted data reproduced local features similar to those of the original 3D data. The diagnostic ability of the inpainted data was also evaluated by investigating the pattern of morphological changes in disease groups. Brain anatomy details were efficiently recovered by the proposed neural network. In voxel-based analyses of gray matter volume and cortical thickness, differences between the inpainted data and the original 3D data were observed only in small clusters. The proposed method will be useful for applying advanced neuroimaging techniques to 2D MRI data.
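A minimal sketch of a combined fidelity and VGG-based perceptual loss of the kind described above; the choice of L1 as the fidelity term, the VGG-16 layer cut-off, and the loss weighting are assumptions for illustration, not the study's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

class FidelityPerceptualLoss(nn.Module):
    """Fidelity (L1) term plus a perceptual term computed on frozen VGG-16 features."""

    def __init__(self, perceptual_weight=0.1):
        super().__init__()
        # frozen VGG-16 feature extractor, truncated at an intermediate conv block
        self.vgg = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features[:16].eval()
        for p in self.vgg.parameters():
            p.requires_grad_(False)
        self.l1 = nn.L1Loss()
        self.w = perceptual_weight

    def forward(self, inpainted, target):
        # fidelity term: pixel-wise reconstruction error on the MR slices
        fidelity = self.l1(inpainted, target)
        # perceptual term: compare VGG feature maps; single-channel MR slices
        # are repeated to 3 channels to match VGG's expected input
        pred_feat = self.vgg(inpainted.repeat(1, 3, 1, 1))
        targ_feat = self.vgg(target.repeat(1, 3, 1, 1))
        perceptual = self.l1(pred_feat, targ_feat)
        return fidelity + self.w * perceptual
```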


2021 ◽  
Vol 23 ◽  
pp. 100521
Author(s):  
Beaudelaire Saha Tchinda ◽  
Daniel Tchiotsop ◽  
Michel Noubom ◽  
Valerie Louis-Dorr ◽  
Didier Wolf

2020 ◽  
Vol 152 ◽  
pp. S221-S222
Author(s):  
M. Buschmann ◽  
L. Fetty ◽  
G. Heilemann ◽  
M. Heilmann ◽  
P. Kuess ◽  
...  

1999 ◽  
Vol 17 (2) ◽  
pp. 257-266 ◽  
Author(s):  
Chun Yuan ◽  
Eugene Lin ◽  
Jacob Millard ◽  
Jenq–Neng Hwang

2021 ◽  
pp. 20210038
Author(s):  
Wutian Gan ◽  
Hao Wang ◽  
Hengle Gu ◽  
Yanhua Duan ◽  
Yan Shao ◽  
...  

Objective: A stable and accurate automatic tumor delineation method has been developed to facilitate the intelligent design of the lung cancer radiotherapy process. The purpose of this paper is to introduce an automatic tumor segmentation network for lung cancer on CT images based on deep learning. Methods: In this paper, a hybrid convolutional neural network (CNN) combining a 2D CNN and a 3D CNN was implemented for automatic lung tumor delineation using CT images. The 3D CNN used a V-Net model to extract tumor context information from CT sequence images. The 2D CNN used an encoder–decoder structure based on a dense connection scheme, which could expand information flow and promote feature propagation. Next, the 2D and 3D features were fused through a hybrid module. The hybrid CNN was compared with the individual 3D CNN and 2D CNN, and three evaluation metrics, Dice, Jaccard and Hausdorff distance (HD), were used for quantitative evaluation. The relationship between the segmentation performance of the hybrid network and the GTV volume size was also explored. Results: The newly introduced hybrid CNN was trained and tested on a dataset of 260 cases and achieved a median Dice of 0.73 (mean ± standard deviation 0.72 ± 0.10), with Jaccard and HD values of 0.58 ± 0.13 and 21.73 ± 13.30 mm, respectively. The hybrid network significantly outperformed the individual 3D CNN and 2D CNN on all three evaluation metrics (p < 0.001). Larger GTVs produced higher Dice values, but their delineation at the tumor boundary was less stable. Conclusions: The implemented hybrid CNN was able to achieve good lung tumor segmentation performance on CT images. Advances in knowledge: The hybrid CNN is a promising approach for lung tumor segmentation.
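A minimal sketch of the Dice and Jaccard overlap metrics used for the quantitative evaluation, computed on binary masks with NumPy; the smoothing constant and the usage example names are illustrative choices, not taken from the paper.

```python
import numpy as np

def dice_jaccard(pred, target, eps=1e-7):
    """Dice and Jaccard coefficients for two binary masks of the same shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
    jaccard = (intersection + eps) / (union + eps)
    return dice, jaccard

# example: dice_jaccard(predicted_mask, gtv_mask) on a 3D CT segmentation
```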

