intensity normalization
Recently Published Documents

TOTAL DOCUMENTS: 102 (FIVE YEARS: 26)
H-INDEX: 18 (FIVE YEARS: 3)

2021 ◽  
Author(s):  
Matthew Brier ◽  
Biao Xiang ◽  
Zhuocheng Li ◽  
Robert Naismith ◽  
Dmitriy Yablonskiy ◽  
...  

Assessment of intrinsic tissue integrity is commonly accomplished via quantitative relaxometry or other specialized imaging, which requires sequences and analysis procedures not routinely available in clinical settings. We detail an alternative technique for extraction of quantitative tissue biomarkers based on intensity normalization of T1- and T2-weighted images. We develop the theoretical underpinnings of this approach and demonstrate its utility in imaging of multiple sclerosis.
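As a toy illustration of the idea (not the authors' actual procedure), each weighted image can be linearly rescaled against two assumed reference tissue intensities before taking the voxel-wise T1w/T2w ratio; the `calibrate` helper and the reference values below are hypothetical:

```python
import numpy as np

def calibrate(img, ref_low, ref_high):
    """Linearly rescale an image so that two reference tissue
    intensities map to 0 and 1 (simple two-point calibration)."""
    return (img - ref_low) / (ref_high - ref_low)

def t1t2_ratio_map(t1w, t2w, refs_t1, refs_t2, eps=1e-6):
    """Compute a T1w/T2w ratio map after two-point intensity
    normalization of each weighted image."""
    t1n = calibrate(t1w, *refs_t1)
    t2n = calibrate(t2w, *refs_t2)
    return t1n / (t2n + eps)

# Toy 2x2 "images" with illustrative reference intensities
t1 = np.array([[100., 200.], [300., 400.]])
t2 = np.array([[400., 300.], [200., 100.]])
ratio = t1t2_ratio_map(t1, t2, refs_t1=(100., 400.), refs_t2=(100., 400.))
```

Because both images are put on a common calibrated scale first, the ratio is comparable across scans, which is what makes it usable as a quantitative biomarker.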


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Xiaoting Zhong ◽  
Brian Gallagher ◽  
Keenan Eves ◽  
Emily Robertson ◽  
T. Nathan Mundhenk ◽  
...  

Abstract: Machine-learning (ML) techniques have the potential to enable efficient quantitative micrograph analysis, but the robustness of ML models with respect to real-world micrograph quality variations has not been carefully evaluated. We collected thousands of scanning electron microscopy (SEM) micrographs of molecular solid materials, in which image pixel intensities vary due to both the microstructure content and the microscope instrument conditions. We then built ML models to predict the ultimate compressive strength (UCS) of consolidated molecular solids, both by encoding micrographs with different image feature descriptors and training a random forest regressor, and by training an end-to-end deep-learning (DL) model. Results show that instrument-induced pixel intensity signals can affect ML model predictions in a consistently negative way. As a remedy, we explored intensity normalization techniques. Intensity normalization helps to improve micrograph data quality and ML model robustness, but microscope-induced intensity variations can be difficult to eliminate.
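One common remedy of this kind is per-image standardization, which removes global gain/offset differences introduced by the instrument; a minimal sketch (z-score normalization, one of several possible choices, not necessarily the exact variant used in the paper):

```python
import numpy as np

def zscore_normalize(img):
    """Standardize micrograph pixel intensities to zero mean and unit
    variance, canceling instrument-induced brightness (offset) and
    contrast (gain) differences before feature extraction."""
    img = img.astype(np.float64)
    std = img.std()
    if std == 0:
        return img - img.mean()
    return (img - img.mean()) / std

# Two toy "micrographs" of the same microstructure captured under
# different instrument brightness/contrast settings:
a = np.array([[10., 20.], [30., 40.]])
b = 2.0 * a + 100.0            # gain/offset change from the instrument
na, nb = zscore_normalize(a), zscore_normalize(b)
```

After normalization the two captures are identical, so any remaining difference between images reflects microstructure rather than instrument settings; multiplicative shading or nonlinear detector effects, however, are not removed by this global transform.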


2021 ◽  
Author(s):  
Nilarun Mukherjee ◽  
Souvik Sengupta

Abstract Background: Diabetic retinopathy (DR) is a complication of diabetes mellitus which, if left untreated, may lead to complete vision loss. Early diagnosis and treatment are key to preventing further complications of DR. Computer-aided diagnosis is a very effective way to support ophthalmologists, as manual inspection of pathological changes in retinal images is time-consuming and expensive. In recent times, machine learning and deep learning techniques have superseded conventional rule-based approaches for the detection, segmentation, and classification of DR stages and lesions in fundus images. Method: In this paper, we present a comparative study of the different state-of-the-art preprocessing methods that have recently been used in deep-learning-based DR classification tasks, and we also propose a new unsupervised-learning-based retinal region extraction technique together with new combinations of preprocessing pipelines designed on top of it. The efficacy of different existing and new combinations of preprocessing methods is analyzed using two publicly available retinal datasets (EyePACS and APTOS) for different DR stage classification tasks, such as referable DR, DR screening, and five-class DR grading, using a benchmark deep learning model (ResNet-50). Results: The proposed preprocessing strategy, composed of region-of-interest extraction through K-means clustering followed by contrast and edge enhancement using Graham's method and z-score intensity normalization, achieved the highest accuracies of 98.5%, 96.51%, and 90.59% in the DR-screening, referable-DR, and DR-grading tasks, respectively, and also achieved the best quadratic weighted kappa score of 0.945 in the DR-grading task. It achieved the best AUC-ROC of 0.98 and 0.9981 in the DR-grading and DR-screening tasks, respectively.
Conclusion: It is evident from the results that the proposed preprocessing pipeline, composed of ROI extraction through K-means clustering followed by edge and contrast enhancement using Graham's method and then z-score intensity normalization, outperforms all other existing preprocessing pipelines and is the most effective preprocessing strategy for helping the baseline CNN model extract meaningful deep features.
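A rough sketch of such a three-stage pipeline, with a crude 1-D two-means clustering standing in for the K-means ROI step and a Gaussian-blur-based approximation of Graham's local-average subtraction (all parameter values here are illustrative, not the paper's):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def kmeans2_roi(img, iters=10):
    """Crude 1-D 2-means clustering on pixel intensity to separate the
    (bright) retinal region from the (dark) background.
    Returns a boolean ROI mask for the brighter cluster."""
    c = np.array([img.min(), img.max()], dtype=float)  # initial centroids
    for _ in range(iters):
        lab = np.abs(img[..., None] - c).argmin(-1)    # nearest centroid
        for k in (0, 1):
            if np.any(lab == k):
                c[k] = img[lab == k].mean()
    return lab == c.argmax()

def graham_enhance(img, sigma=10, alpha=4.0, beta=-4.0, gamma=128.0):
    """Graham-style enhancement: subtract a Gaussian-blurred local
    average to emphasize edges and lesions."""
    return alpha * img + beta * gaussian_filter(img, sigma) + gamma

def zscore(img):
    """Final z-score intensity normalization."""
    return (img - img.mean()) / (img.std() + 1e-8)

# Toy fundus-like image: bright square "retina" on dark background
img = np.zeros((32, 32))
img[8:24, 8:24] = 200.0
roi = kmeans2_roi(img)
out = zscore(graham_enhance(img))
```

In a real pipeline the ROI mask would crop the fundus before enhancement; the sketch only shows how the three stages compose.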


Cancers ◽  
2021 ◽  
Vol 13 (12) ◽  
pp. 3000
Author(s):  
Yingping Li ◽  
Samy Ammari ◽  
Corinne Balleyguier ◽  
Nathalie Lassau ◽  
Emilie Chouzenoux

In brain MRI radiomics studies, non-biological variations introduced by different image acquisition settings, namely scanner effects, affect the reliability and reproducibility of the radiomics results. This paper assesses how preprocessing methods (including N4 bias field correction and image resampling) and harmonization methods (either the six intensity normalization methods working on brain MRI images or the ComBat method working on radiomic features) help to remove scanner effects and improve radiomic feature reproducibility in brain MRI radiomics. The analyses were based on in vitro datasets (homogeneous and heterogeneous phantom data) and in vivo datasets (brain MRI images collected from healthy volunteers and clinical patients with brain tumors). The results show that the ComBat method is essential for removing scanner effects in brain MRI radiomics studies. Moreover, the intensity normalization methods, while not able to remove scanner effects at the radiomic feature level, still yield more comparable MRI images and improve the robustness of the harmonized features to the choice among ComBat implementations.
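ComBat proper uses empirical-Bayes shrinkage of the batch estimates; as a simplified illustration of the underlying idea, each scanner batch's features can be re-centered and re-scaled to the pooled statistics (a location-scale stand-in, not the full ComBat algorithm):

```python
import numpy as np

def align_batches(features, batch):
    """Location-scale harmonization of radiomic features: rescale each
    scanner batch to the pooled per-feature mean and std. A simplified
    stand-in for ComBat, which additionally shrinks the per-batch
    estimates with empirical Bayes."""
    features = np.asarray(features, dtype=float)
    out = np.empty_like(features)
    gmean = features.mean(0)
    gstd = features.std(0) + 1e-12
    for b in np.unique(batch):
        m = batch == b
        bm = features[m].mean(0)
        bs = features[m].std(0) + 1e-12
        out[m] = (features[m] - bm) / bs * gstd + gmean
    return out

# Toy radiomic features from two "scanners" with a shift/scale offset
rng = np.random.default_rng(0)
f1 = rng.normal(0.0, 1.0, (50, 3))        # scanner A
f2 = rng.normal(5.0, 2.0, (50, 3))        # scanner B: shifted and scaled
feats = np.vstack([f1, f2])
batch = np.array([0] * 50 + [1] * 50)
harmonized = align_batches(feats, batch)
```

After alignment the two batches share per-feature means and spreads; real ComBat additionally preserves biological covariates and pools information across features, which this sketch omits.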


Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 3953
Author(s):  
Han Pu ◽  
Tianqiang Huang ◽  
Bin Weng ◽  
Feng Ye ◽  
Chenbin Zhao

Digital video forensics plays a vital role in judicial forensics, media reporting, e-commerce, finance, and public security. Although many methods have been developed, there is currently no efficient solution for real-life videos with illumination noise and jitter noise. To solve this issue, we propose a detection method for video inter-frame forgery that adapts to brightness changes and jitter. For videos with severe brightness changes, we relax the brightness-constancy constraint and adopt intensity normalization to propose a new optical flow algorithm. For videos with large jitter noise, we introduce motion entropy to detect the jitter and extract a stable texture-change-fraction feature for double-checking. Experimental results show that, compared with previous algorithms, the proposed method is more accurate and robust on public benchmark datasets for videos with significant brightness variance or heavy jitter.
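The brightness-constancy relaxation can be illustrated by standardizing each frame before flow estimation, so that a global gain/offset change between consecutive frames cancels out (a minimal sketch of the normalization step only, not the paper's full optical flow algorithm):

```python
import numpy as np

def normalize_frame(frame):
    """Z-score a video frame so that global illumination changes
    (gain/offset) between consecutive frames no longer violate the
    brightness-constancy assumption used by optical flow."""
    f = frame.astype(np.float64)
    return (f - f.mean()) / (f.std() + 1e-8)

# A frame and the same frame under a sudden illumination change
rng = np.random.default_rng(1)
frame = rng.uniform(0, 255, (16, 16))
brighter = 1.5 * frame + 40.0     # gain 1.5, offset +40
```

Feeding normalized frames into a standard flow estimator means that residual inter-frame differences are driven by motion and content, not lighting, which is what lets inter-frame forgery stand out.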


Diagnostics ◽  
2021 ◽  
Vol 11 (5) ◽  
pp. 816
Author(s):  
Kuei-Yuan Hou ◽  
Hao-Yuan Lu ◽  
Ching-Ching Yang

This study aimed to facilitate pseudo-CT synthesis from MRI by normalizing the MRI intensity of the same tissue type to a similar intensity level. MRI intensity normalization was conducted by dividing the MRI by a shading map, which is a smoothed ratio image between the MRI and a three-intensity mask. For pseudo-CT synthesis from MRI, a conversion model based on a three-layer convolutional neural network was trained and validated. Before MRI intensity normalization, the mean value ± standard deviation of fat tissue in 0.35 T chest MRI was 297 ± 73 (coefficient of variation (CV) = 24.58%), versus 533 ± 91 (CV = 17.07%) in 1.5 T abdominal MRI. The corresponding results were 149 ± 32 (CV = 21.48%) and 148 ± 28 (CV = 18.92%) after intensity normalization. With regard to pseudo-CT synthesis from MRI, the differences in mean values between pseudo-CT and real CT were 3, 15, and 12 HU for soft tissue, fat, and lung/air in 0.35 T chest imaging, respectively, while the corresponding results were 3, 14, and 15 HU in 1.5 T abdominal imaging. Overall, the proposed workflow is reliable for pseudo-CT synthesis from MRI and is more practicable in routine clinical practice compared with deep learning methods, which demand a high level of resources for building a conversion model.
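A schematic version of the shading-map normalization, assuming a single tissue class and an illustrative smoothing width (the paper's three-intensity mask construction and field-strength specifics are not reproduced here):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def shading_normalize(mri, mask3, sigma=4, eps=1e-6):
    """Normalize MRI intensity by a shading map: the Gaussian-smoothed
    ratio between the raw image and a tissue-intensity mask. Dividing
    by this map removes smooth multiplicative shading."""
    shading = gaussian_filter(mri / (mask3 + eps), sigma)
    return mri / (shading + eps)

# Toy example: uniform "tissue" (nominal intensity 100) imaged with a
# smooth multiplicative bias field increasing left to right
yy, xx = np.mgrid[0:32, 0:32]
bias = 1.0 + 0.3 * xx / 31.0
mask3 = np.full((32, 32), 100.0)       # single tissue class in this toy
mri = 100.0 * bias
norm = shading_normalize(mri, mask3)
```

The normalized image should be far more uniform than the raw one (a much lower coefficient of variation), since the smooth bias is captured by the shading map while tissue-scale detail would pass through.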


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
A. Verger ◽  
M. Doyen ◽  
J. Y. Campion ◽  
Eric Guedj

Abstract Background: The objective of this study is to define the most appropriate region for intensity normalization in brain 18FDG PET semi-quantitative analysis. The best option could be based on previous absolute quantification studies, which showed that metabolic changes related to ageing affect the quasi-totality of brain regions in healthy subjects. Consequently, brain metabolic changes related to ageing were evaluated in two populations of healthy controls who underwent conventional (n = 56) or digital (n = 78) 18FDG PET/CT. The median correlation coefficients between age and the metabolism of each of the 120 atlas brain regions were reported for 120 distinct intensity normalizations (one per region). SPM linear regression analyses with age were performed on the most significant normalizations (FWE, p < 0.05). Results: The cerebellum and pons were the only two regions showing median coefficients of correlation with age less than −0.5. With SPM, intensity normalization by the pons provided at least 1.7- and 2.5-fold larger significant cluster volumes than the other normalizations for conventional and digital PET, respectively. Conclusions: The pons is the most appropriate area for brain 18FDG PET intensity normalization when examining metabolic changes through ageing.
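Reference-region normalization of this kind amounts to dividing the PET volume by the mean uptake inside a mask of the chosen region (here, the pons), yielding SUV-ratio-like values that are comparable across subjects; a minimal sketch with a toy volume:

```python
import numpy as np

def suvr(pet, ref_mask, eps=1e-12):
    """Scale a brain FDG-PET volume by the mean uptake inside a
    reference-region mask (e.g., the pons), producing SUV-ratio
    values with the reference region normalized to ~1."""
    ref_mean = pet[ref_mask].mean()
    return pet / (ref_mean + eps)

# Toy volume: cortex-like uptake 4.0, "pons" region uptake 2.0
pet = np.full((8, 8, 8), 4.0)
pons = np.zeros_like(pet, dtype=bool)
pons[3:5, 3:5, 0:2] = True
pet[pons] = 2.0
norm = suvr(pet, pons)
```

The study's point is the choice of `ref_mask`: a reference region whose uptake is itself age-dependent would mask or distort the ageing effects being measured, which is why the pons, weakly correlated with age, is preferred.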


Author(s):  
Allison M. Thompson ◽  
Kelly G. Stratton ◽  
Lisa M. Bramer ◽  
Nicole S. Zavoshy ◽  
Lee Ann McCue

Author(s):  
Thomas DeSilvio ◽  
Stefania Moroianu ◽  
Indrani Bhattacharya ◽  
Arun Seetharaman ◽  
Geoffrey Sonn ◽  
...  

