Multi-Scale Squeeze U-SegNet with Multi Global Attention for Brain MRI Segmentation

Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3363
Author(s):  
Chaitra Dayananda ◽  
Jae-Young Choi ◽  
Bumshik Lee

In this paper, we propose multi-scale feature extraction with novel attention-based convolutional learning using the U-SegNet architecture to achieve segmentation of brain tissue from magnetic resonance images (MRI). Although convolutional neural networks (CNNs) have shown enormous growth in medical image segmentation, the conventional CNN models have some drawbacks. In particular, the conventional use of encoder-decoder approaches leads to the extraction of similar low-level features multiple times, causing redundant use of information. Moreover, due to inefficient modeling of long-range dependencies, each semantic class is likely to be associated with inaccurate discriminative feature representations, resulting in low segmentation accuracy. The proposed global attention module refines the feature extraction and improves the representational power of the convolutional neural network. Moreover, the attention-based multi-scale fusion strategy can integrate local features with their corresponding global dependencies. The integration of fire modules in both the encoder and decoder paths significantly reduces the computational complexity owing to fewer model parameters. The proposed method was evaluated on publicly accessible datasets for brain tissue segmentation. The experimental results show that our proposed model achieves segmentation accuracies of 94.81% for cerebrospinal fluid (CSF), 95.54% for gray matter (GM), and 96.33% for white matter (WM) with a noticeably reduced number of learnable parameters. Our study shows better segmentation performance, improving the prediction accuracy by 2.5% in terms of the Dice similarity index while achieving a 4.5-fold reduction in the number of learnable parameters compared to previously developed U-SegNet-based segmentation approaches. This demonstrates that the proposed approach can achieve reliable and precise automatic segmentation of brain MR images.
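The accuracy gain above is reported in terms of the Dice similarity index, which for two binary masks is twice their overlap divided by their total size. A minimal generic sketch in Python (an illustration of the metric itself, not the authors' code):

```python
def dice_index(pred, truth):
    """Dice similarity index for two binary masks given as 0/1 sequences."""
    pred, truth = list(pred), list(truth)
    overlap = sum(p * t for p, t in zip(pred, truth))  # |A ∩ B|
    total = sum(pred) + sum(truth)                     # |A| + |B|
    return 2.0 * overlap / total if total else 1.0     # empty masks agree trivially
```

Perfect agreement yields 1.0 and disjoint masks yield 0.0, which is why Dice is a natural score for tissue-mask comparison.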

Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3232
Author(s):  
Jiao-Song Long ◽  
Guang-Zhi Ma ◽  
En-Min Song ◽  
Ren-Chao Jin

Accurate brain tissue segmentation of MRI is vital to diagnosis aiding, treatment planning, and neurologic condition monitoring. As an excellent convolutional neural network (CNN), U-Net is widely used in MR image segmentation as it usually generates high-precision features. However, the performance of U-Net is considerably restricted by the variable shapes of the segmented targets in MRI and the information loss of down-sampling and up-sampling operations. Therefore, we propose a novel network by introducing spatial and channel dimensions-based multi-scale feature information extractors into its encoding-decoding framework, which helps to extract rich multi-scale features while highlighting the details of higher-level features in the encoding part and recovering the corresponding localization to a higher-resolution layer in the decoding part. Concretely, we propose two information extractors to extract multi-scale features: multi-branch pooling (MP) in the encoding part and multi-branch dense prediction (MDP) in the decoding part. Additionally, we designed a new multi-branch output structure with MDP in the decoding part to form more accurate edge-preserving prediction maps by integrating the dense adjacent prediction features at different scales. Finally, the proposed method is tested on the MRbrainS13, IBSR18, and ISeg2017 datasets. We find that the proposed network achieves higher accuracy in segmenting MRI brain tissues and outperforms the leading method of 2018 in the segmentation of GM and CSF. Therefore, it can be a useful tool for diagnostic applications such as brain MRI segmentation.
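The exact MP block design is given in the paper; as a shape-level illustration only, multi-branch pooling can be pictured as pooling the same feature map at several window sizes and concatenating the results. A minimal numpy sketch (hypothetical window sizes, 1-D for brevity):

```python
import numpy as np

def max_pool1d(x, k):
    """Non-overlapping max pooling with window size k (remainder dropped)."""
    n = len(x) // k
    return x[: n * k].reshape(n, k).max(axis=1)

def multi_branch_pool(x, windows=(2, 4, 8)):
    """Pool the same feature map at several scales and concatenate the
    branch outputs, in the spirit of a multi-branch pooling (MP) block."""
    return np.concatenate([max_pool1d(x, k) for k in windows])

x = np.arange(16, dtype=float)
features = multi_branch_pool(x)   # 8 + 4 + 2 = 14 multi-scale values
```

Each branch summarizes the input at a different receptive-field size, so the concatenated vector carries both fine and coarse structure.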


Electronics ◽  
2021 ◽  
Vol 10 (3) ◽  
pp. 319
Author(s):  
Yi Wang ◽  
Xiao Song ◽  
Guanghong Gong ◽  
Ni Li

Due to the rapid development of deep learning and artificial intelligence techniques, denoising via neural networks has drawn great attention for its flexibility and excellent performance. However, in most convolutional network denoising methods, the convolution kernel is only one layer deep and features of distinct scales are neglected. Moreover, in the convolution operation all channels are treated equally, and the relationships between channels are not considered. In this paper, we propose a multi-scale feature extraction-based normalized attention neural network (MFENANN) for image denoising. In MFENANN, we define a multi-scale feature extraction block to extract and combine features at distinct scales of the noisy image. In addition, we propose a normalized attention network (NAN) to learn the relationships between channels, which smooths the optimization landscape and speeds up the convergence process for training an attention model. Moreover, we introduce the NAN to convolutional network denoising, in which each channel receives its own gain so that channels can play different roles in the subsequent convolution. To verify the effectiveness of the proposed MFENANN, we ran experiments on both grayscale and color image sets with noise levels ranging from 0 to 75. The experimental results show that, compared with some state-of-the-art denoising methods, the restored images of MFENANN have larger peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) values and a better overall appearance.
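PSNR, one of the two quality metrics reported above, is defined from the mean squared error between the clean and restored images: PSNR = 10·log10(MAX²/MSE), in decibels. A minimal generic sketch (the standard definition, not tied to MFENANN):

```python
import math

def psnr(clean, restored, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length pixel sequences."""
    mse = sum((c - r) ** 2 for c, r in zip(clean, restored)) / len(clean)
    if mse == 0:
        return float("inf")   # identical images: infinite PSNR
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Higher PSNR means the restored image is closer to the reference, which is why denoisers are ranked by it.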


2021 ◽  
Vol 11 (10) ◽  
pp. 2618-2625
Author(s):  
R. T. Subhalakshmi ◽  
S. Appavu Alias Balamurugan ◽  
S. Sasikala

In recent times, the COVID-19 epidemic has grown at an extreme rate, while only an inadequate number of rapid testing kits is available. Consequently, it is essential to develop automated techniques for COVID-19 detection that recognize the existence of the disease from radiological images. The most common symptoms of COVID-19 are sore throat, fever, and dry cough. Symptoms can progress to a severe type of pneumonia with serious complications. As medical imaging is not currently recommended in Canada for crucial COVID-19 diagnosis, computer-aided diagnosis systems might aid in the early detection of COVID-19 abnormalities, help to observe the disease progression, and potentially reduce mortality rates. In this approach, a deep learning-based design for feature extraction and classification is employed for automatic COVID-19 diagnosis from computed tomography (CT) images. The proposed model operates in three main stages: pre-processing, feature extraction, and classification. The proposed design incorporates the fusion of deep features using GoogLeNet models. Finally, a multi-scale recurrent neural network (RNN) based classifier is applied for identifying and classifying the test CT images into distinct class labels. The experimental validation of the proposed model takes place using the open-source COVID-CT dataset, which comprises a total of 760 CT images. The experimental outcome showed superior performance with maximum sensitivity, specificity, and accuracy.


In recent years, clustering has become well known to numerous investigators due to its many application fields, such as communication, wireless networking, and the biomedical domain. Much research has therefore already been carried out to develop improved clustering systems, and optimization is one familiar line of investigation that has been efficiently applied to clustering. In this paper, we propose a method of Hybrid Bee Colony and Cuckoo Search (HBCCS) based centroid initialization for kernel fuzzy c-means (KFCM) clustering in biomedical image segmentation (HBCC-KFCM-BIM). For MRI brain tissue segmentation, KFCM is the most preferable technique because of its performance. The major limitation of conventional KFCM is its random centroid initialization, which increases the execution time required to reach the best solution. In order to accelerate the segmentation process, HBCCS is used to adjust the centroids of the required clusters. The results were compared quantitatively using two metrics: the number of iterations and the processing time. Both take lower values for the HBCC-KFCM-BIM method than for conventional KFCM. The HBCC-KFCM-BIM method is thus more efficient and faster than conventional KFCM for brain tissue segmentation.
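KFCM builds on standard fuzzy c-means, which alternates a membership update u_ij = 1/Σ_k(d_ij/d_kj)^(2/(m−1)) with a centroid update v_i = Σ_j u_ij^m x_j / Σ_j u_ij^m; HBCCS replaces the random centroid initialization that this loop starts from. A minimal plain-FCM sketch in numpy (illustrative only, without the kernel variant or the bee colony/cuckoo search optimizer):

```python
import numpy as np

def fcm(x, centroids, m=2.0, iters=10):
    """Plain fuzzy c-means on 1-D data x starting from the given centroids."""
    x = np.asarray(x, dtype=float)
    v = np.asarray(centroids, dtype=float)
    for _ in range(iters):
        # Distance of every point to every centroid: shape (clusters, points).
        d = np.abs(x[None, :] - v[:, None]) + 1e-12
        # Membership update: u_ij = 1 / sum_k (d_ij / d_kj)^(2/(m-1)).
        u = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1.0)), axis=1)
        # Centroid update: mean of the data weighted by u^m.
        w = u ** m
        v = (w @ x) / w.sum(axis=1)
    return v, u

v, u = fcm([0.0, 0.0, 10.0, 10.0], centroids=[1.0, 9.0])
# Centroids drift toward the two data groups; each point's memberships sum to 1.
```

A poor initialization makes this loop take more iterations to settle, which is exactly the cost HBCCS aims to cut.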


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Shahab U. Ansari ◽  
Kamran Javed ◽  
Saeed Mian Qaisar ◽  
Rashad Jillani ◽  
Usman Haider

Multiple sclerosis (MS) is a chronic and autoimmune disease that forms lesions in the central nervous system. Quantitative analysis of these lesions has proved to be very useful in clinical trials for therapies and for assessing disease prognosis. However, the efficacy of these quantitative analyses greatly depends on how accurately the MS lesions have been identified and segmented in brain MRI. This is usually carried out by radiologists who label 3D MR images slice by slice using commonly available segmentation tools. However, such manual practices are time consuming and error prone. To circumvent this problem, several automatic segmentation techniques have been investigated in recent years. In this paper, we propose a new framework for automatic brain lesion segmentation that employs a novel convolutional neural network (CNN) architecture. In order to segment lesions of different sizes, a specific filter size, such as 3 × 3 or 5 × 5, must be chosen, and it is often hard to decide which filter will give the best results. GoogLeNet solved this problem by introducing the inception module, which applies 3 × 3, 5 × 5, and 1 × 1 convolutions and max pooling in parallel. Our results show that incorporating inception modules in a CNN improves the performance of the network in the segmentation of MS lesions. We compared the results of the proposed CNN architecture for two loss functions, binary cross entropy (BCE) and the structural similarity index measure (SSIM), using the publicly available ISBI-2015 challenge dataset. With the BCE loss function, a score of 93.81 is achieved, which is higher than that of the human rater.
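The inception layout described above runs several filter sizes side by side on the same input and concatenates the branch outputs along the channel axis. A shape-level numpy sketch of that layout (a generic illustration with hypothetical channel counts, not the authors' network; real implementations use a deep learning framework):

```python
import numpy as np

def conv_same(x, kernels):
    """Naive 'same'-padded 2-D convolution. x: (C, H, W); kernels: (O, C, k, k)."""
    o, c, k, _ = kernels.shape
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    h, w = x.shape[1:]
    out = np.zeros((o, h, w))
    for i in range(h):
        for j in range(w):
            # Contract each kernel against the (C, k, k) patch under it.
            out[:, i, j] = np.tensordot(kernels, xp[:, i:i + k, j:j + k], axes=3)
    return out

def max_pool_same(x, k=3):
    """'Same'-padded k x k max pooling with stride 1."""
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)), constant_values=-np.inf)
    out = np.empty_like(x)
    for i in range(x.shape[1]):
        for j in range(x.shape[2]):
            out[:, i, j] = xp[:, i:i + k, j:j + k].max(axis=(1, 2))
    return out

def inception_block(x, rng):
    """Run 1x1, 3x3, 5x5 convolutions and 3x3 max pooling in parallel,
    then concatenate the branch outputs along the channel axis."""
    c = x.shape[0]
    branches = [
        conv_same(x, rng.standard_normal((4, c, 1, 1))),  # 1x1 branch -> 4 ch
        conv_same(x, rng.standard_normal((4, c, 3, 3))),  # 3x3 branch -> 4 ch
        conv_same(x, rng.standard_normal((2, c, 5, 5))),  # 5x5 branch -> 2 ch
        max_pool_same(x, 3),                              # pooling branch -> c ch
    ]
    return np.concatenate(branches, axis=0)

rng = np.random.default_rng(0)
x = rng.standard_normal((3, 8, 8))
y = inception_block(x, rng)   # 4 + 4 + 2 + 3 = 13 output channels
```

Because every branch preserves the spatial size, their outputs stack cleanly, letting the network keep all scales rather than committing to one filter size.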


2015 ◽  
Vol 72 (2) ◽  
Author(s):  
Sapideh Yazdani ◽  
Rubiyah Yusof ◽  
Alireza Karimian ◽  
Amir Hossein Riazi

Automatic segmentation of brain images is a challenging problem due to the complex structure of brain images, as well as to the absence of anatomy models. Brain segmentation into white matter, gray matter, and cerebrospinal fluid is an important stage for many problems, including studies in 3-D visualization for disease detection and surgical planning. In this paper we present a novel fully automated framework for tissue classification of the brain in MR images that combines two techniques, GLCM and SVM, each customized for the problem of brain tissue segmentation so that, as demonstrated through experiments, the results are more robust than those of its individual components. The proposed framework has been validated on the BrainWeb dataset across different modalities, with desirable performance in the presence of noise and bias field. To evaluate the performance of the proposed method, the Kappa similarity index is computed. Our method achieves a higher Kappa index (91.5) compared with other methods currently in use. As an application, our method has been used for segmentation of MR images with promising results.
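A gray-level co-occurrence matrix (GLCM), the texture descriptor named above, counts how often each pair of gray levels occurs at a fixed pixel offset. A minimal generic sketch (the standard GLCM definition, not the authors' customized pipeline):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Co-occurrence counts of gray-level pairs at offset (dx, dy).
    img: 2-D integer array with values in [0, levels)."""
    g = np.zeros((levels, levels), dtype=int)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[img[y, x], img[y + dy, x + dx]] += 1
    return g

# Two horizontal pairs: (0,0) on the top row and (1,1) on the bottom row.
g = glcm(np.array([[0, 0], [1, 1]]), levels=2)
```

Texture statistics (contrast, homogeneity, energy) computed from this matrix then serve as per-voxel features for the SVM classifier.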


2021 ◽  
Vol 15 ◽  
Author(s):  
Irina Grigorescu ◽  
Lucy Vanes ◽  
Alena Uus ◽  
Dafnis Batalle ◽  
Lucilio Cordero-Grande ◽  
...  

Deep learning based medical image segmentation has shown great potential in becoming a key part of the clinical analysis pipeline. However, many of these models rely on the assumption that the train and test data come from the same distribution. This means that such methods cannot guarantee high quality predictions when the source and target domains are dissimilar due to different acquisition protocols, or biases in patient cohorts. Recently, unsupervised domain adaptation techniques have shown great potential in alleviating this problem by minimizing the shift between the source and target distributions, without requiring the use of labeled data in the target domain. In this work, we aim to predict tissue segmentation maps on T2-weighted magnetic resonance imaging data of an unseen preterm-born neonatal population, which has both different acquisition parameters and population bias when compared to our training data. We achieve this by investigating two unsupervised domain adaptation techniques with the objective of finding the best solution for our problem. We compare the two methods with a baseline fully-supervised segmentation network and report our results in terms of Dice scores obtained on our source test dataset. Moreover, we analyse tissue volumes and cortical thickness measures of the harmonized data on a subset of the population matched for gestational age at birth and postmenstrual age at scan. Finally, we demonstrate the applicability of the harmonized cortical gray matter maps with an analysis comparing term and preterm-born neonates and a proof-of-principle investigation of the association between cortical thickness and a language outcome measure.


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3874
Author(s):  
Nagesh Subbanna ◽  
Matthias Wilms ◽  
Anup Tuladhar ◽  
Nils D. Forkert

Recent research in computer vision has shown that original images used for training of deep learning models can be reconstructed using so-called inversion attacks. However, the feasibility of this attack type has not been investigated for complex 3D medical images. Thus, the aim of this study was to examine the vulnerability of deep learning techniques used in medical imaging to model inversion attacks and to investigate multiple quantitative metrics for evaluating the quality of the reconstructed images. For the development and evaluation of model inversion attacks, the public LPBA40 database, consisting of 40 brain MRI scans with corresponding segmentations of the gyri and deep grey matter brain structures, was used to train two popular deep convolutional neural networks, namely a U-Net and a SegNet, and corresponding inversion decoders. The Matthews correlation coefficient, the structural similarity index measure (SSIM), and the magnitude of the deformation field resulting from non-linear registration of the original and reconstructed images were used to evaluate the reconstruction accuracy. A comparison of the similarity metrics revealed that the SSIM is best suited to evaluate the reconstruction accuracy, followed closely by the magnitude of the deformation field. The quantitative evaluation of the reconstructed images revealed SSIM scores of 0.73±0.12 and 0.61±0.12 for the U-Net and the SegNet, respectively. The qualitative evaluation showed that training images can be reconstructed with some degradation due to blurring but can be correctly matched to the original images in the majority of cases. In conclusion, the results of this study indicate that it is possible to reconstruct patient data used for training of convolutional neural networks and that the SSIM is a good metric to assess the reconstruction accuracy.
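The SSIM scores above compare two images through their luminance, contrast, and correlation statistics; in its simplest single-window form, SSIM = ((2μxμy + c1)(2σxy + c2)) / ((μx² + μy² + c1)(σx² + σy² + c2)). A minimal global-SSIM sketch (practical SSIM averages this over local sliding windows):

```python
import numpy as np

def global_ssim(x, y, max_val=1.0):
    """Single-window SSIM between two equal-shape images."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2  # stabilizing constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

a = np.array([[0.1, 0.5], [0.9, 0.3]])
score = global_ssim(a, a)   # identical images score exactly 1.0
```

SSIM tracks structural agreement rather than raw pixel error, which is why it separates well-reconstructed training images from blurred ones.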


Author(s):  
ZunHyan Rieu ◽  
Donghyeon Kim ◽  
JeeYoung Kim ◽  
Regina EY Kim ◽  
Minho Lee ◽  
...  

White matter hyperintensity (WMH) has been considered a primary biomarker from small-vessel cerebrovascular disease to Alzheimer's disease (AD), and its correlation with brain structural changes has been reported. To perform WMH-related analysis together with brain structure, both T1-weighted (T1w) and Fluid-Attenuated Inversion Recovery (FLAIR) images are required. However, in a clinical setting it is often not possible to obtain 3D T1w and FLAIR images simultaneously, and most brain segmentation techniques support 3D T1w only. Therefore, we introduce a semi-supervised learning method that can perform brain segmentation using FLAIR images only. Our method achieved a Dice overlap score of 0.86 for brain tissue segmentation on FLAIR, with the relative volume difference between T1w and FLAIR segmentations under 4.8%, which is just as reliable as the segmentation produced from the paired T1w image. We believe our semi-supervised learning method has great potential to be applied to other MRI sequences and to encourage those who seek brain tissue segmentation from non-T1w images.

