mr brain images
Recently Published Documents


TOTAL DOCUMENTS

282
(FIVE YEARS 51)

H-INDEX

28
(FIVE YEARS 2)

2021 ◽  
Author(s):  
Evelina Thunell ◽  
Moa G Peter ◽  
Vincent Lenoir ◽  
Patrik Andersson ◽  
Basile N Landis ◽  
...  

Reduced olfactory function is the most prevalent symptom in COVID-19, with nearly 70% of individuals with COVID-19 experiencing partial or total loss of their sense of smell at some point during the disease. The exact cause is not known, but beyond peripheral damage, studies have demonstrated insults to both the olfactory bulb and central olfactory brain areas. However, these studies often lack both baseline pre-COVID-19 assessments and a control group and could therefore simply reflect preexisting risk factors. Right before the COVID-19 outbreak, we completed an olfactory-focused study including structural MR brain images and a full clinical olfactory test. Opportunistically, we invited participants back one year later, including 9 participants who had experienced mild to moderate COVID-19 (C19+) and 12 who had not (C19-), thereby creating a pre-post natural experiment with a control group. Despite C19+ participants reporting subjective olfactory dysfunction, few showed signs of objectively altered function one year later. Critically, all but one individual in the C19+ group had reduced olfactory bulb volume, with an average volume reduction of 14.3%, but this did not amount to a statistically significant between-group difference compared to the control group (2.3% reduction). No morphological differences in cerebral olfactory areas were found, but we found stronger functional connectivity between olfactory brain areas in the C19+ group at the post measure. Taken together, these data suggest that COVID-19 might cause a long-term reduction in olfactory bulb volume, but with no discernible differences in cerebral olfactory regions.


Author(s):  
C. C. Benson ◽  
V. L. Lajish ◽  
Kumar Rajamani

Fully automatic classification of MR brain images is of great importance for research and clinical studies, since precise detection may lead to better treatment. In this work, an efficient method based on Multiple-Instance Learning (MIL) is proposed for the automatic classification of low-grade and high-grade MR brain tumor images. The main advantage of the MIL-based approach over other classification methods is that MIL treats an image as a group of instances rather than a single instance, thus facilitating an effective learning process. An mi-Graph-based MIL approach is proposed for this classification, and two different implementations, viz. Patch-based MIL (PBMIL) and Superpixel-based MIL (SPBMIL), are developed in this study. The combined feature set of LBP, SIFT and FD is used for the classification. The accuracies of the low-grade–high-grade tumor image classification algorithm using the SPBMIL method on [Formula: see text], [Formula: see text] and FLAIR images are 99.2765%, 99.4195% and 99.2265%, respectively. The error rate of the proposed classification system was insignificant, and hence this automated system could be used for the classification of images with different pathological conditions, types and disease statuses.
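The mi-Graph algorithm itself is not reproducible from the abstract; as a rough illustration of the bag-of-instances idea behind MIL, here is a minimal sketch in which each image patch becomes one instance and the bag score is the max over instance scores (the function names, the linear scorer, and the max-pooling aggregation are our own simplification, not the authors' method):

```python
import numpy as np

def image_to_bag(image, patch=4):
    """Split a 2-D image into non-overlapping patches; each patch is one instance."""
    h, w = image.shape
    instances = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            instances.append(image[i:i + patch, j:j + patch].ravel())
    return np.array(instances)

def bag_score(bag, weights):
    """Standard MIL assumption: a bag is positive if ANY instance is positive,
    so the bag-level score is the max over instance-level scores."""
    return float(np.max(bag @ weights))

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 0.1, (16, 16))   # low-intensity "healthy" image
tumor = normal.copy()
tumor[4:8, 4:8] += 2.0                    # one bright patch = one positive instance

w = np.ones(16) / 16.0                    # toy scorer: mean patch intensity
assert bag_score(image_to_bag(tumor, 4), w) > bag_score(image_to_bag(normal, 4), w)
```

The point of the bag representation is that only one abnormal patch is needed to flip the bag-level label, which is why MIL suits images where pathology occupies a small region.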


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Ahana Priyanka ◽  
Kavitha Ganesan

Abstract The diagnostic and clinical overlap of early mild cognitive impairment (EMCI), mild cognitive impairment (MCI), late mild cognitive impairment (LMCI) and Alzheimer's disease (AD) is a vital clinical issue in dementia disorders. This study is designed to examine morphological variation in the whole brain (WB), grey matter (GM) and hippocampus (HC), and to identify prominent biomarkers in MR brain images of demented subjects, in order to understand severity progression. Curve evolution based on shape constraints is carried out to segment complex brain structures such as the HC and GM. Pre-trained models are used to observe the severity variation in these regions. The work is evaluated on the ADNI database. The outcome shows that the curve evolution method can segment the HC and GM regions with better correlation. Pre-trained models are able to show significant severity differences among the WB, GM and HC regions for the considered classes. Further, prominent variation is observed between AD vs. EMCI, AD vs. MCI and AD vs. LMCI in the whole brain, GM and HC. It is concluded that the AlexNet model for the HC region results in better classification for AD vs. EMCI, AD vs. MCI and AD vs. LMCI, with accuracies of 93%, 78.3% and 91%, respectively.


2021 ◽  
Vol 7 (2) ◽  
pp. 763-766
Author(s):  
Sreelakshmi Shaji ◽  
Nagarajan Ganapathy ◽  
Ramakrishnan Swaminathan

Abstract Alzheimer’s Disease (AD) is an irreversible, progressive neurodegenerative disorder. Magnetic Resonance (MR) imaging-based deep learning models with visualization capabilities are essential for the precise diagnosis of AD. In this study, an attempt has been made to categorize AD and Healthy Controls (HC) using structural MR images and an Inception-Residual Network (ResNet) model. For this, T1-weighted MR brain images are acquired from a public database. These images are pre-processed and applied to a two-layer Inception-ResNet-A model. Additionally, Gradient-weighted Class Activation Mapping (Grad-CAM) is employed to visualize the significant regions in MR images identified by the model for AD classification. The network performance is validated using standard evaluation metrics. Results demonstrate that the proposed Inception-ResNet model differentiates AD from HC using MR brain images, achieving an average recall and precision of 69%. The Grad-CAM visualization identified the lateral ventricles in the mid-axial slice as the most discriminative brain regions for AD classification. Thus, this computer-aided diagnosis study could be useful in the visualization and automated analysis of AD diagnosis with minimal medical expertise.
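The Grad-CAM computation the abstract relies on is standard and easy to illustrate. The sketch below applies it to synthetic activations and gradients (the real inputs would come from the trained Inception-ResNet's last convolutional layer; everything here is a generic illustration, not the authors' pipeline):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM: weight each feature map by its spatially averaged gradient,
    sum over channels, and keep only positive evidence (ReLU).

    activations, gradients: arrays of shape (channels, H, W) from the chosen layer.
    """
    alphas = gradients.mean(axis=(1, 2))             # one importance weight per channel
    cam = np.tensordot(alphas, activations, axes=1)  # weighted sum of feature maps
    cam = np.maximum(cam, 0.0)                       # ReLU: positive influence only
    if cam.max() > 0:
        cam = cam / cam.max()                        # normalize to [0, 1] for display
    return cam

# Synthetic example: channel 0 activates in the centre and has positive gradients,
# channel 1 is uniform with zero gradient, so the map should highlight the centre.
acts = np.zeros((2, 8, 8))
acts[0, 3:5, 3:5] = 1.0
acts[1] = 0.1
grads = np.zeros((2, 8, 8))
grads[0] = 1.0
cam = grad_cam(acts, grads)
assert cam[3, 3] == 1.0 and cam[0, 0] == 0.0
```

The resulting map is upsampled to the input resolution and overlaid on the MR slice; in the study above this is how the lateral ventricles were identified as discriminative.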


Diagnostics ◽  
2021 ◽  
Vol 11 (9) ◽  
pp. 1589
Author(s):  
Isselmou Abd El Kader ◽  
Guizhi Xu ◽  
Zhang Shuai ◽  
Sani Saminu ◽  
Imran Javaid ◽  
...  

The process of diagnosing brain tumors is very complicated for many reasons, including the brain’s synaptic structure, size, and shape. Machine learning techniques are employed to help doctors detect brain tumors and support their decisions. In recent years, deep learning techniques have achieved great success in medical image analysis. This paper proposes a deep wavelet autoencoder model, named the “DWAE model”, employed to classify each input data slice as tumor (abnormal) or no tumor (normal). A high-pass filter was used to expose the heterogeneity of the MRI images and to integrate it with the input images, and a high median filter was utilized to merge slices. The quality of the output slices was improved by highlighting edges and smoothing the input MR brain images. Then, a seed-growing method based on 4-connectivity was applied, since thresholding clusters equal pixels in the input MR data. The segmented MR image slices are fed to the proposed two-layer deep wavelet auto-encoder model, with 200 hidden units in the first layer and 400 hidden units in the second layer. A softmax layer is then trained and tested to identify normal and abnormal MR images. The contribution of the deep wavelet auto-encoder model lies in its analysis of the pixel patterns of MR brain images and its ability to detect and classify tumors with high accuracy, short runtime, and low validation loss. To train and test the overall performance of the proposed model, we utilized 2500 MR brain images, comprising normal and abnormal images, from BRATS2012, BRATS2013, BRATS2014, BRATS2015, the 2015 challenge, and ISLES. The experimental results show that the proposed model achieved an accuracy of 99.3%, a validation loss of 0.1, and low FPR and FNR values. This demonstrates that the proposed DWAE model can facilitate the automatic detection of brain tumors.
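The seed-growing segmentation step named in the abstract is a classical algorithm that can be sketched compactly. This is a generic 4-connected region-growing routine on a toy image, not the authors' code; the intensity threshold and the seed location are hypothetical parameters:

```python
from collections import deque

def region_grow(image, seed, threshold):
    """Grow a region from `seed` over 4-connected neighbours whose intensity
    differs from the seed intensity by at most `threshold`."""
    h, w = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # 4-connectivity
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in region:
                if abs(image[nr][nc] - seed_val) <= threshold:
                    region.add((nr, nc))
                    queue.append((nr, nc))
    return region

# A bright 2x2 "tumor" on a dark background: growing from inside it
# should recover exactly those four pixels.
img = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
assert region_grow(img, (1, 1), threshold=1) == {(1, 1), (1, 2), (2, 1), (2, 2)}
```

Because growth only crosses 4-connected neighbours within the threshold, the bright region and the dark background remain separate clusters, which is the behaviour the pipeline relies on before the slices reach the autoencoder.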


2021 ◽  
Author(s):  
Riccardo De Feo ◽  
Elina Hamalainen ◽  
Eppu Manninen ◽  
Riikka Immonen ◽  
Juan Miguel Valverde ◽  
...  

Registration-based methods are commonly used in the anatomical segmentation of magnetic resonance (MR) brain images. However, they are sensitive to the presence of deforming brain pathologies that may interfere with the alignment of the atlas image with the target image. Our goal was to develop an algorithm for automated segmentation of the normal and injured rat hippocampus. We implemented automated segmentation using a U-Net-like Convolutional Neural Network (CNN): using MR images of sham-operated experimental controls and of rats with lateral fluid-percussion-induced traumatic brain injury (TBI), we trained ensembles of CNNs. Their performance was compared to three registration-based methods: single-atlas, multi-atlas based on majority voting, and Similarity and Truth Estimation for Propagated Segmentations (STEPS). The automatic segmentations were then evaluated quantitatively by cross-validation using six metrics: Dice score, Hausdorff distance, precision, recall, volume similarity and compactness. Our CNN- and multi-atlas-based segmentations provided excellent results (Dice scores > 0.90) despite the presence of brain lesions, atrophy and ventricular enlargement. In contrast, the performance of single-atlas registration was poor (Dice scores < 0.85). Unlike the registration-based methods, which performed better in segmenting the contralateral than the ipsilateral hippocampus, our CNN-based method performed equally well bilaterally. Finally, we assessed the progression of hippocampal damage after TBI by applying our automated segmentation tool. Our data show that the presence of TBI, time after TBI, and whether the hippocampus was ipsilateral or contralateral to the injury explained hippocampal volume (p=0.029, p<0.001, and p<0.001, respectively).
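Two of the six evaluation metrics used above have simple closed forms on binary masks. The sketch below shows textbook definitions of the Dice score and volume similarity on toy 2-D masks (the authors' exact implementation is not given in the abstract; the mask shapes here are illustrative):

```python
import numpy as np

def dice(a, b):
    """Dice score between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def volume_similarity(a, b):
    """Volume similarity: 1 - |V_a - V_b| / (V_a + V_b), insensitive to overlap."""
    va, vb = int(a.sum()), int(b.sum())
    return 1.0 - abs(va - vb) / (va + vb) if (va + vb) else 1.0

truth = np.zeros((8, 8), dtype=bool)
truth[2:6, 2:6] = True   # 16-voxel ground-truth "hippocampus"
pred = np.zeros((8, 8), dtype=bool)
pred[2:6, 2:5] = True    # prediction misses one column (12 voxels)

assert abs(dice(truth, pred) - 2 * 12 / (16 + 12)) < 1e-12
assert abs(volume_similarity(truth, pred) - (1 - 4 / 28)) < 1e-12
```

Note that volume similarity can be high even for badly misplaced masks, which is why it is reported alongside overlap metrics like Dice rather than instead of them.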


Author(s):  
Jin Zhu ◽  
Chuan Tan ◽  
Junwei Yang ◽  
Guang Yang ◽  
Pietro Lio’

Single image super-resolution (SISR) aims to obtain a high-resolution output from one low-resolution image. Currently, deep learning-based SISR approaches have been widely discussed in medical image processing, because of their potential to achieve high-quality, high spatial resolution images without the cost of additional scans. However, most existing methods are designed for scale-specific SR tasks and are unable to generalize over magnification scales. In this paper, we propose an approach for medical image arbitrary-scale super-resolution (MIASSR), in which we couple meta-learning with generative adversarial networks (GANs) to super-resolve medical images at any scale of magnification in [Formula: see text]. Compared to state-of-the-art SISR algorithms on single-modal magnetic resonance (MR) brain images (OASIS-brains) and multi-modal MR brain images (BraTS), MIASSR achieves comparable fidelity performance and the best perceptual quality with the smallest model size. We also employ transfer learning to enable MIASSR to tackle SR tasks of new medical modalities, such as cardiac MR images (ACDC) and chest computed tomography images (COVID-CT). The source code of our work is also public. Thus, MIASSR has the potential to become a new foundational pre-/post-processing step in clinical image analysis tasks such as reconstruction, image quality enhancement, and segmentation.
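The GAN and meta-learning internals of MIASSR cannot be reconstructed from the abstract. To make the "arbitrary scale" requirement concrete, here is a sketch of the classical baseline such methods are compared against: bilinear resampling at a non-integer scale factor (all function names are ours, and this is not the MIASSR method itself):

```python
import numpy as np

def resize_bilinear(img, scale):
    """Resample a 2-D image by an arbitrary (possibly non-integer) scale factor
    using bilinear interpolation."""
    h, w = img.shape
    nh, nw = int(round(h * scale)), int(round(w * scale))
    ys = np.linspace(0, h - 1, nh)          # target rows in source coordinates
    xs = np.linspace(0, w - 1, nw)          # target cols in source coordinates
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

img = np.arange(16, dtype=float).reshape(4, 4)
up = resize_bilinear(img, 2.5)              # a non-integer scale, as MIASSR targets
assert up.shape == (10, 10)
assert up[0, 0] == img[0, 0] and up[-1, -1] == img[-1, -1]
```

A scale-specific SR network would need retraining for scale 2.5 versus 2.0; the appeal of interpolation, and of meta-learned upsamplers like MIASSR's, is that one model covers the whole continuous range.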


2021 ◽  
Vol 1964 (6) ◽  
pp. 062029
Author(s):  
B Papachary ◽  
M Amru ◽  
S Rama Kishore Reddy

Author(s):  
Sreelakshmi Shaji ◽  
Nagarajan Ganapathy ◽  
Ramakrishnan Swaminathan

In this study, an attempt has been made to differentiate Alzheimer’s Disease (AD) stages in structural Magnetic Resonance (MR) images using a single inception module network. For this, T1-weighted MR brain images of AD, mild cognitive impairment and Normal Controls (NC) are obtained from a public database. Significant features are extracted from the images and classified using an inception module network. The performance of the model is computed and analyzed for different input image sizes. Results show that the single inception module is able to classify AD stages using MR images. The end-to-end network differentiates AD from NC with 85% precision, and the model is found to be effective for varied input image sizes. Since the proposed approach is able to categorize AD stages, single inception module networks could be used for automated AD diagnosis with minimal medical expertise.
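The structural idea of an inception module is parallel convolutions at several kernel sizes whose outputs are concatenated along the channel axis. The toy sketch below shows only that multi-scale structure with fixed averaging kernels (a real module, including the one in the study above, uses learned weights and 1x1 bottlenecks; names here are ours):

```python
import numpy as np

def conv_same(x, k):
    """Naive 'same'-padded 2-D convolution of a single-channel image x
    with a square kernel k of odd size."""
    p = k.shape[0] // 2
    xp = np.pad(x, p)
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

def inception_block(x):
    """Toy inception module: parallel 1x1, 3x3 and 5x5 branches on the same
    input, stacked along a channel axis."""
    b1 = conv_same(x, np.ones((1, 1)))        # 1x1 branch (identity here)
    b3 = conv_same(x, np.ones((3, 3)) / 9.0)  # 3x3 branch (local average)
    b5 = conv_same(x, np.ones((5, 5)) / 25.0) # 5x5 branch (wider average)
    return np.stack([b1, b3, b5])             # channel-wise concatenation

x = np.random.default_rng(1).random((8, 8))
out = inception_block(x)
assert out.shape == (3, 8, 8)                 # spatial size preserved, 3 channels
```

Because every branch preserves the spatial size, the concatenation is well-defined regardless of the input resolution, which is consistent with the study's finding that the model handles varied input image sizes.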

