Multi-Modal Biomarkers of Cerebral Edema in Low Resolution MRI

2020
Author(s): Danni Tu, Manu S. Goyal, Jordan D. Dworkin, Samuel Kampondeni, Lorenna Vidal, ...

A central challenge of medical imaging studies is to extract quantitative biomarkers that characterize pathology or predict disease outcomes. In high-resolution, high-quality magnetic resonance (MR) images, state-of-the-art approaches have performed well. However, such methods may not translate to low-resolution, lower-quality images acquired on MRI scanners with lower magnetic field strength. Therefore, in low-resource settings, where low-field scanners are more common and radiologists available to interpret MRI scans manually are in short supply, it is essential to develop automated methods that can accommodate lower-quality images and augment or replace manual interpretation. Motivated by a project in which a cohort of children with cerebral malaria was imaged using 0.35 Tesla MRI to evaluate the degree of diffuse brain swelling, we introduce a fully automated framework that translates radiological diagnostic criteria into image-based biomarkers. We integrate multi-atlas label fusion, which leverages high-resolution images from another sample as prior spatial information, with parametric Gaussian hidden Markov models based on image intensities, to create a robust method for determining ventricular cerebrospinal fluid volume. We further propose normalized image intensity and texture measurements to quantify the loss of gray-to-white matter tissue differentiation and sulcal effacement. These integrated biomarkers show excellent classification performance for identifying severe cerebral edema due to cerebral malaria.
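To make the intensity-modeling step concrete, the following is a minimal sketch, not the authors' implementation: it assumes a T2-weighted volume and an atlas-derived ventricular mask, and it replaces the spatial Gaussian hidden Markov model described in the abstract with a simple two-component Gaussian mixture over voxel intensities.

```python
# Sketch only: estimate ventricular CSF volume by fitting a two-component
# Gaussian mixture to intensities inside an atlas-derived ventricular mask and
# counting voxels assigned to the brighter (CSF-like) component.
import numpy as np
from sklearn.mixture import GaussianMixture

def ventricular_csf_volume(t2_image, ventricle_mask, voxel_volume_mm3):
    """t2_image, ventricle_mask: 3D numpy arrays; mask is boolean or 0/1."""
    intensities = t2_image[ventricle_mask > 0].reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(intensities)
    labels = gmm.predict(intensities)
    csf_component = int(np.argmax(gmm.means_.ravel()))  # CSF is bright on T2
    n_csf_voxels = int(np.sum(labels == csf_component))
    return n_csf_voxels * voxel_volume_mm3

# Example with synthetic (hypothetical) data:
rng = np.random.default_rng(0)
img = rng.normal(100.0, 10.0, size=(64, 64, 32))
mask = np.zeros_like(img, dtype=bool)
mask[20:40, 20:40, 10:20] = True
img[25:35, 25:35, 12:18] += 80.0  # bright, CSF-like region inside the mask
print(ventricular_csf_volume(img, mask, voxel_volume_mm3=1.0))
```

In the actual framework, the spatial prior from multi-atlas label fusion constrains which voxels enter this step; the mixture here stands in for the paper's intensity-based hidden Markov model.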

Sensors, 2020, Vol. 20 (16), pp. 4601
Author(s): Juan Wen, Yangjing Shi, Xiaoshi Zhou, Yiming Xue

Currently, various agricultural image classification tasks are carried out on high-resolution images. However, in some cases we cannot obtain enough high-resolution images for classification, which significantly degrades classification performance. In this paper, we design a crop disease classification network based on Enhanced Super-Resolution Generative Adversarial Networks (ESRGAN) for the case where only a limited number of low-resolution target images is available. First, ESRGAN is used to recover super-resolution crop images from low-resolution images. Transfer learning is applied in model training to compensate for the lack of training samples. Then, we test the performance of the generated super-resolution images in the crop disease classification task. Extensive experiments show that using the fine-tuned ESRGAN model can recover realistic crop information and improve the accuracy of crop disease classification compared with four other image super-resolution methods.
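A sketch of the two-stage pipeline is shown below; it is illustrative rather than the paper's code. The `sr_generator` is a placeholder for an ESRGAN-style model loaded elsewhere, and the classifier uses standard torchvision transfer learning (assuming a recent torchvision release).

```python
# Illustrative two-stage pipeline: (1) a pretrained super-resolution generator
# upscales low-resolution crop images, (2) a transfer-learned classifier
# predicts the disease class from the super-resolved images.
import torch
import torch.nn as nn
from torchvision import models

def build_classifier(num_classes: int) -> nn.Module:
    # Transfer learning: start from ImageNet weights, replace the final layer.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for p in model.parameters():
        p.requires_grad = False                     # freeze pretrained backbone
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # trainable head
    return model

@torch.no_grad()
def classify_low_res(lr_batch: torch.Tensor, sr_generator: nn.Module,
                     classifier: nn.Module) -> torch.Tensor:
    sr_batch = sr_generator(lr_batch)   # e.g. 4x upscaling by the generator
    logits = classifier(sr_batch)
    return logits.argmax(dim=1)         # predicted disease class per image
```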


2019, Vol. 9 (20), pp. 4444
Author(s): Byunghyun Kim, Soojin Cho

In most hyperspectral super-resolution (HSR) methods, which are techniques used to improve the resolution of hyperspectral images (HSIs), the HSI and the target RGB image are assumed to have identical fields of view. However, because such identical fields of view are difficult to achieve in practical applications, in this paper we propose an HSR method that is applicable when an HSI and a target RGB image contain different spatial information. The proposed HSR method first creates a low-resolution RGB image from a given HSI. Next, histogram matching is performed between the high-resolution RGB image and the low-resolution RGB image obtained from the HSI. Finally, the proposed method optimizes the endmember abundances of the high-resolution HSI toward the histogram-matched high-resolution RGB image. The entire procedure is evaluated on an open HSI dataset, the Harvard dataset, after adding spatial mismatch to the data. The spatial mismatch is implemented by shear transformation and by cutting off the upper and left sides of the target RGB image. The proposed method achieved a lower error rate across the entire dataset, confirming its capability for super-resolution using images that have different fields of view.
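The histogram-matching step can be illustrated with a small sketch. The direction of matching (high-resolution RGB matched to the low-resolution RGB derived from the HSI) and the per-channel quantile mapping are assumptions for illustration, not the paper's exact procedure.

```python
# Channel-wise histogram matching: map each source pixel value to the
# reference value at the same quantile, so the two images share intensity
# statistics before the abundance optimization.
import numpy as np

def match_histograms_channelwise(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """source, reference: float arrays of shape (H, W, 3); sizes may differ."""
    matched = np.empty_like(source)
    for c in range(source.shape[-1]):
        src = source[..., c].ravel()
        ref = reference[..., c].ravel()
        src_sorted_idx = np.argsort(src)
        ref_sorted = np.sort(ref)
        # Reference values at the quantiles occupied by the source pixels.
        quantiles = np.linspace(0.0, 1.0, src.size)
        ref_at_quantiles = np.interp(quantiles,
                                     np.linspace(0.0, 1.0, ref.size),
                                     ref_sorted)
        out = np.empty_like(src)
        out[src_sorted_idx] = ref_at_quantiles
        matched[..., c] = out.reshape(source.shape[:2])
    return matched
```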


2013, Vol. 2013, pp. 1-19
Author(s): Amir Pasha Mahmoudzadeh, Nasser H. Kashou

Interpolation has become a default operation in image processing and medical imaging and is one of the important factors in the success of an intensity-based registration method. Interpolation is needed when the fractional unit of motion does not coincide with a point on the high-resolution (HR) grid. The purpose of this work is to present a systematic evaluation of eight standard interpolation techniques (trilinear, nearest neighbor, cubic Lagrangian, quintic Lagrangian, heptic Lagrangian, windowed sinc, 3rd-order B-spline, and 4th-order B-spline) and to compare the effect of cost functions (least squares (LS), normalized mutual information (NMI), normalized cross-correlation (NCC), and correlation ratio (CR)) for optimized automatic image registration (OAIR) on 3D spoiled gradient recalled (SPGR) magnetic resonance images (MRI) of the brain acquired using a 3T GE MR scanner. Subsampling was performed in the axial, sagittal, and coronal directions to emulate three low-resolution datasets. Afterwards, the low-resolution datasets were upsampled using the different interpolation methods and compared to the high-resolution data. The mean squared error, peak signal-to-noise ratio, joint entropy, and cost functions were computed for quantitative assessment of the method. Magnetic resonance image scans and joint histograms were used for qualitative assessment of the method.
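The quantitative comparison can be sketched as follows. This is not the paper's pipeline: it uses scipy's spline-based `zoom` as a stand-in for the interpolators under study, a synthetic volume instead of SPGR MRI, and a simple histogram-based joint entropy.

```python
# Downsample a volume, upsample it back with interpolators of different order,
# and score each result against the original with MSE, PSNR, and joint entropy.
import numpy as np
from scipy.ndimage import zoom

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def psnr(a, b):
    peak = float(a.max())
    return 10.0 * np.log10(peak ** 2 / mse(a, b))

def joint_entropy(a, b, bins=64):
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(1)
hr = rng.random((64, 64, 64))
lr = zoom(hr, 0.5, order=1)                    # emulate a low-resolution dataset
for order, name in [(0, "nearest"), (1, "trilinear"), (3, "cubic spline")]:
    up = zoom(lr, 2.0, order=order)            # upsample back to the HR grid
    up = up[:64, :64, :64]                     # guard against rounding in shape
    print(name, mse(hr, up), psnr(hr, up), joint_entropy(hr, up))
```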


Author(s): M.J. Hennessy, E. Kwok

Much progress in nuclear magnetic resonance microscopy has been made in the last few years as a result of improved instrumentation and techniques made available through basic research in magnetic resonance imaging (MRI) technologies for medicine. Nuclear magnetic resonance (NMR) was first observed in the hydrogen nucleus in water by Bloch, Purcell, and Pound over 40 years ago. Today, in medicine, virtually all commercial MRI scans are made of water bound in tissue. This is also true for NMR microscopy, which has focused mainly on biological applications. Water is the favored molecule for NMR because it is the most abundant molecule in biology. It is also the most NMR-sensitive, having the largest nuclear magnetic moment and reasonable room-temperature relaxation times (from 10 ms to 3 s). The contrast seen in magnetic resonance images is due mostly to the distribution of water relaxation times in the sample, which are extremely sensitive to the local environment.


Author(s): Alan P. Koretsky, Afonso Costa e Silva, Yi-Jen Lin

Magnetic resonance imaging (MRI) has become established as an important imaging modality for the clinical management of disease, primarily because of the great tissue contrast inherent in magnetic resonance images of normal and diseased organs. Due to the wide availability of high-field magnets and the ability to generate large and rapidly switched magnetic field gradients, there is growing interest in applying high-resolution MRI to obtain microscopic information. This symposium on MRI microscopy highlights new developments that are leading to increased resolution. The application of high-resolution MRI to significant problems in developmental biology and cancer biology will illustrate the potential of these techniques. Alongside the growing interest in obtaining high-resolution MRI, there is also growing interest in obtaining functional information from MRI. The great success of MRI in clinical applications is due to the inherent contrast obtained from different tissues, which leads to anatomical information.


2020, Vol. 13 (1), pp. 71
Author(s): Zhiyong Xu, Weicun Zhang, Tianxiang Zhang, Jiangyun Li

Semantic segmentation is a significant method in remote sensing image (RSI) processing and has been widely used in various applications. Conventional convolutional neural network (CNN)-based semantic segmentation methods are likely to lose spatial information in the feature extraction stage and usually pay little attention to global context information. Moreover, RSIs exhibit imbalanced category scales and uncertain boundary information, which pose additional challenges for the semantic segmentation task. To overcome these problems, a high-resolution context extraction network (HRCNet) based on a high-resolution network (HRNet) is proposed in this paper. In this approach, the HRNet structure is adopted to preserve spatial information. Moreover, a light-weight dual attention (LDA) module is designed to obtain global context information in the feature extraction stage, and a feature enhancement feature pyramid (FEFP) structure is employed to fuse contextual information of different scales. In addition, to exploit boundary information, we design a boundary aware (BA) module combined with a boundary aware loss (BAloss) function. Experimental results on the Potsdam and Vaihingen datasets show that the proposed approach significantly improves boundary and segmentation performance, reaching overall accuracy scores of 92.0% and 92.3%, respectively. As a consequence, it is envisaged that the proposed HRCNet model will be advantageous for remote sensing image segmentation.
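A hedged sketch of a boundary-aware training loss in the spirit of BAloss is shown below; the paper's exact formulation may differ. It adds to the standard cross-entropy an extra cross-entropy term restricted to pixels lying on label boundaries.

```python
# Boundary-aware loss sketch: weight boundary pixels more heavily by adding a
# cross-entropy term computed only where the label map changes class.
import torch
import torch.nn.functional as F

def boundary_mask(labels: torch.Tensor) -> torch.Tensor:
    """labels: (B, H, W) integer class map -> (B, H, W) float boundary mask."""
    lab = labels.unsqueeze(1).float()
    # A pixel is a boundary pixel if the max and min labels in its 3x3
    # neighborhood differ, i.e. at least two classes meet there.
    local_max = F.max_pool2d(lab, kernel_size=3, stride=1, padding=1)
    local_min = -F.max_pool2d(-lab, kernel_size=3, stride=1, padding=1)
    return (local_max != local_min).squeeze(1).float()

def boundary_aware_loss(logits: torch.Tensor, labels: torch.Tensor,
                        boundary_weight: float = 1.0) -> torch.Tensor:
    """logits: (B, C, H, W); labels: (B, H, W) long tensor of class indices."""
    ce_map = F.cross_entropy(logits, labels, reduction="none")   # (B, H, W)
    mask = boundary_mask(labels)
    boundary_term = (ce_map * mask).sum() / mask.sum().clamp(min=1.0)
    return ce_map.mean() + boundary_weight * boundary_term
```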


Electronics, 2021, Vol. 10 (9), pp. 1013
Author(s): Sayan Maity, Mohamed Abdel-Mottaleb, Shihab S. Asfour

Biometric identification using surveillance video has attracted the attention of many researchers because it is applicable not only to robust identification but also to personalized activity monitoring. In this paper, we present a novel multimodal recognition system that extracts frontal gait and low-resolution face images from frontal walking surveillance video clips to perform efficient biometric recognition. The proposed study addresses two important issues in surveillance video that did not receive appropriate attention in the past. First, it consolidates model-free and model-based gait feature extraction approaches to perform robust gait recognition using only the frontal view. Second, it uses a low-resolution face recognition approach that can be trained and tested using low-resolution face information. This eliminates the need to obtain high-resolution face images to create the gallery, which is required by the majority of low-resolution face recognition techniques; moreover, the classification accuracy on high-resolution face images is considerably higher. Previous studies on frontal gait recognition incorporate assumptions to approximate the average gait cycle, whereas we quantify the gait cycle precisely for each subject using only the frontal gait information. The approaches available in the literature use high-resolution images obtained in a controlled environment to train the recognition system, whereas our proposed system trains the recognition algorithm using low-resolution face images captured in an unconstrained environment. The proposed system has two components: one performs frontal gait recognition and the other performs low-resolution face recognition. Score-level fusion is then performed to combine the results of the frontal gait recognition and the low-resolution face recognition. Experiments conducted on the Face and Ocular Challenge Series (FOCS) dataset resulted in 93.5% Rank-1 accuracy for frontal gait recognition and 82.92% Rank-1 accuracy for low-resolution face recognition. Score-level multimodal fusion resulted in 95.9% Rank-1 recognition accuracy, demonstrating the superiority and robustness of the proposed approach.
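The score-level fusion step can be sketched as follows. The equal weights and per-probe min-max normalization are assumptions for illustration, not the paper's exact fusion scheme.

```python
# Score-level fusion sketch: normalize the match scores from the gait and
# low-resolution-face matchers, combine them with a weighted sum, and report
# the Rank-1 gallery identity.
import numpy as np

def min_max_normalize(scores: np.ndarray) -> np.ndarray:
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo + 1e-12)

def fuse_and_identify(gait_scores: np.ndarray, face_scores: np.ndarray,
                      w_gait: float = 0.5, w_face: float = 0.5) -> int:
    """Each array holds similarity scores against the same ordered gallery."""
    fused = w_gait * min_max_normalize(gait_scores) + \
            w_face * min_max_normalize(face_scores)
    return int(np.argmax(fused))        # index of the Rank-1 gallery identity

# Hypothetical scores against a 4-subject gallery:
print(fuse_and_identify(np.array([0.2, 0.9, 0.4, 0.1]),
                        np.array([0.3, 0.6, 0.8, 0.2])))
```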

