Semantic-Based Brain MRI Image Segmentation Using Convolutional Neural Network

Author(s):  
Yao Chou ◽  
Dah Jye Lee ◽  
Dong Zhang
2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Jinling Zhang ◽  
Jun Yang ◽  
Min Zhao

To study the influence of different magnetic resonance imaging (MRI) sequences on the segmentation of hepatocellular carcinoma (HCC) lesions, the U-Net was improved, and a deep fusion network (DFN), a data augmentation strategy, and a random data (RD) strategy were introduced, yielding a DFN-based multisequence MRI image segmentation algorithm. Segmentation experiments were designed for single-sequence and multisequence MRI images, and the single-sequence results were compared with those of a fully convolutional network (FCN) algorithm; an RD experiment and a single-input experiment were also designed. The sensitivity (0.595 ± 0.145) and Dice similarity coefficient (DSC, 0.587 ± 0.113) obtained by the improved U-Net were significantly higher than the sensitivity (0.405 ± 0.098) and DSC (0.468 ± 0.115) obtained by the original U-Net (P < 0.05). The sensitivity of the DFN-based multisequence segmentation algorithm (0.779 ± 0.015) was significantly higher than that of the FCN algorithm (0.604 ± 0.056, P < 0.05), and the DFN-based algorithm achieved higher indicators for liver cancer lesions than the improved U-Net. Adding RD not only increased the DSC of the single-sequence network enhanced by the hepatocyte-specific magnetic resonance contrast agent (Gd-EOB-DTPA) by 1% but also increased the DSC of the DFN-based multisequence algorithm by 7.6%. In short, the improved U-Net can significantly improve the recognition rate of small lesions in liver cancer patients, and the RD strategy improves the segmentation indicators of the DFN for liver cancer lesions and allows image features from multiple sequences to be fused, thereby improving the accuracy of lesion segmentation.
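
The abstract reports sensitivity and DSC for each model. As a point of reference, the following is a minimal sketch, not the paper's code, of how these two metrics are typically computed for binary lesion masks; the function name, the NumPy-based implementation, and the smoothing term `eps` are illustrative assumptions.

```python
# Minimal sketch: DSC and sensitivity for binary segmentation masks.
import numpy as np

def dice_and_sensitivity(pred, target, eps=1e-7):
    """pred, target: binary arrays of the same shape (1 = lesion pixel)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.logical_and(pred, target).sum()    # true positives
    fp = np.logical_and(pred, ~target).sum()   # false positives
    fn = np.logical_and(~pred, target).sum()   # false negatives
    dsc = (2.0 * tp + eps) / (2.0 * tp + fp + fn + eps)
    sensitivity = (tp + eps) / (tp + fn + eps)
    return dsc, sensitivity

# Example: compare a predicted mask against a ground-truth annotation.
pred = np.array([[0, 1], [1, 1]])
gt   = np.array([[0, 1], [0, 1]])
print(dice_and_sensitivity(pred, gt))  # -> approximately (0.8, 1.0)
```

In terms of counts, DSC = 2TP / (2TP + FP + FN) and sensitivity = TP / (TP + FN), which is how the reported values are usually obtained per case and then averaged.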


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Peter M. Maloca ◽  
Philipp L. Müller ◽  
Aaron Y. Lee ◽  
Adnan Tufail ◽  
Konstantinos Balaskas ◽  
...  

Machine learning has greatly facilitated the analysis of medical data, while its internal operations usually remain opaque. To better comprehend these opaque procedures, a convolutional neural network for optical coherence tomography image segmentation was enhanced with a Traceable Relevance Explainability (T-REX) technique. The proposed application was based on three components: ground truth generation by multiple graders, calculation of Hamming distances among the graders and the machine learning algorithm, and a smart data visualization ('neural recording'). An overall average variability of 1.75% between the human graders and the algorithm was found, slightly lower than the 2.02% variability among the human graders. The ambiguity in the ground truth had a noteworthy impact on the machine learning results, which could be visualized. The convolutional neural network balanced its output between the graders and allowed for predictions that could be modified depending on the compartment. Using the proposed T-REX setup, machine learning processes could be rendered more transparent and understandable, possibly leading to optimized applications.
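
The variability figures above (1.75% between graders and the algorithm, 2.02% among graders) are derived from Hamming distances between segmentation label maps. The following is a minimal sketch, not the T-REX implementation, of such a pairwise comparison; the function names, the toy label maps, and the normalization to a fraction of disagreeing pixels are illustrative assumptions.

```python
# Minimal sketch: pairwise Hamming distances between label maps from
# multiple graders and a CNN, as a fraction of disagreeing pixels.
import numpy as np
from itertools import combinations

def hamming_fraction(a, b):
    """Fraction of pixels where two label maps disagree."""
    return np.mean(a != b)

def pairwise_variability(label_maps):
    """label_maps: dict mapping a grader/model name to its 2-D label map."""
    results = {}
    for (name_a, map_a), (name_b, map_b) in combinations(label_maps.items(), 2):
        results[(name_a, name_b)] = hamming_fraction(map_a, map_b)
    return results

# Example with toy 2x2 label maps (0 = background, 1 = segmented compartment).
maps = {
    "grader_1": np.array([[0, 1], [1, 1]]),
    "grader_2": np.array([[0, 1], [0, 1]]),
    "cnn":      np.array([[0, 1], [1, 0]]),
}
print(pairwise_variability(maps))  # e.g. {('grader_1', 'grader_2'): 0.25, ...}
```

Averaging such pairwise fractions over all images and grader pairs gives an overall variability percentage of the kind reported in the abstract.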


2021 ◽  
Vol 7 (2) ◽  
pp. 37
Author(s):  
Isah Charles Saidu ◽  
Lehel Csató

We present a sample-efficient image segmentation method based on active learning, which we call the Active Bayesian UNet, or AB-UNet. This is a convolutional neural network using batch normalization and max-pool dropout. The Bayesian setup is achieved by exploiting the probabilistic extension of the dropout mechanism, which makes it possible to use the uncertainty inherently present in the system. We set up our experiments on various medical image datasets and show that, with a smaller annotation effort, AB-UNet leads to stable training and better generalization. In addition, the uncertainty estimates allow us to efficiently choose which samples to annotate from an unlabelled dataset.
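
The uncertainty used for active sample selection in this kind of Bayesian setup is commonly obtained via Monte Carlo dropout: dropout is kept stochastic at inference time, several forward passes are averaged, and the images with the highest predictive entropy are sent for annotation. The sketch below illustrates this idea in PyTorch; it is an assumption-laden illustration, not the authors' AB-UNet code, and `model`, the entropy score, and the selection budget are placeholders.

```python
# Minimal sketch: Monte Carlo dropout uncertainty for active learning.
import torch

def enable_mc_dropout(model):
    """Put the model in eval mode but keep dropout layers stochastic."""
    model.eval()
    for module in model.modules():
        if isinstance(module, (torch.nn.Dropout, torch.nn.Dropout2d)):
            module.train()

@torch.no_grad()
def predictive_entropy(model, image, n_samples=10):
    """image: tensor of shape (1, C, H, W); returns the mean per-pixel entropy."""
    enable_mc_dropout(model)
    probs = torch.stack(
        [torch.softmax(model(image), dim=1) for _ in range(n_samples)]
    ).mean(dim=0)                                            # average over MC samples
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1)  # per-pixel entropy
    return entropy.mean().item()

def select_for_annotation(model, unlabelled_images, budget=5):
    """Rank unlabelled images by uncertainty and return the top `budget` indices."""
    scores = [(predictive_entropy(model, img), i)
              for i, img in enumerate(unlabelled_images)]
    return [i for _, i in sorted(scores, reverse=True)[:budget]]
```

The selected indices would then be passed to human annotators, and the network retrained on the enlarged labelled set, which is the standard active learning loop this abstract describes.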

