image intensity
Recently Published Documents


TOTAL DOCUMENTS

235
(FIVE YEARS 44)

H-INDEX

23
(FIVE YEARS 3)

2022 ◽  
Vol 15 ◽  
Author(s):  
Dong Li ◽  
Guangyu Wang ◽  
René Werner ◽  
Hong Xie ◽  
Ji-Song Guan ◽  
...  

High-resolution functional 2-photon microscopy of neural activity is a cornerstone technique in current neuroscience, enabling, for instance, image-based analysis of the relationship between the organization of local neuron populations and their temporal activity patterns. Interpreting local image intensity as a direct quantitative measure of neural activity, however, presumes a consistent within- and across-image relationship between image intensity and neural activity, which can be disrupted by illumination artifacts. In particular, the so-called vignetting artifact, the fall-off of image intensity toward the edges of an image, is currently widely neglected in functional microscopy analyses of neural activity, yet it can introduce a substantial center-periphery bias in derived functional measures. In the present report, we propose a straightforward protocol for single-image-based vignetting correction. Using immediate-early-gene-based 2-photon microscopic neural image data of the mouse brain, we show that both image brightness and contrast must be corrected to improve within- and across-image intensity consistency, and we demonstrate the plausibility of the resulting functional data.
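The correction idea described above can be sketched as a flat-field-style operation: fit a smooth illumination surface to a single frame and divide it out. A minimal numpy sketch under that assumption; the polynomial illumination model and the `correct_vignetting` helper are illustrative, not the authors' exact protocol (which also corrects contrast):

```python
import numpy as np

def correct_vignetting(img, deg=2):
    """Divide out a smooth illumination surface fitted by least squares.
    A flat-field-style sketch; the polynomial model is an assumption."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    x = (xx / w - 0.5).ravel()
    y = (yy / h - 0.5).ravel()
    # 2-D polynomial design matrix up to total degree `deg`
    terms = [x**i * y**j for i in range(deg + 1) for j in range(deg + 1 - i)]
    A = np.stack(terms, axis=1)
    coef, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    illum = (A @ coef).reshape(h, w)
    # rescale so mean brightness is preserved after division
    return img * illum.mean() / np.clip(illum, 1e-6, None)

# synthetic frame with a radial quadratic fall-off (a vignette)
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
r2 = ((xx - w / 2) ** 2 + (yy - h / 2) ** 2) / (w / 2) ** 2
frame = 100.0 * (1.0 - 0.4 * r2)
flat = correct_vignetting(frame)
```

Because a degree-2 surface captures the radial quadratic fall-off exactly, the corrected frame becomes nearly uniform while its mean brightness is preserved.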


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Soo Kweon Lee ◽  
Ju Hun Lee ◽  
Hyeong Ryeol Kim ◽  
Youngsang Chun ◽  
Ja Hyun Lee ◽  
...  

Abstract The microbial food fermentation industry requires real-time monitoring and accurate quantification of cells. Filamentous fungi, however, are difficult to quantify because they form complex cell types such as pellets, spores, and dispersed hyphae. In this study, a large set of microscopic image intensity (MII) data was used to develop a simple and accurate method for quantifying Cordyceps mycelium. The dry cell weight (DCW) of samples collected during fermentation was measured, and intensity values were obtained with the ImageJ program after converting the microscopic images. A prediction model relating MII to DCW was fitted by simple linear regression and found to be statistically significant (R2 = 0.941, p < 0.001). Validation with randomly selected samples showed good accuracy; this model is therefore expected to serve as a valuable tool for predicting and quantifying fungal growth in various industries.
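The MII-DCW calibration described above amounts to a simple linear regression with an R² check. A sketch with invented example values (not the study's data):

```python
import numpy as np

# Hypothetical MII (mean image intensity, a.u.) and DCW (g/L) pairs --
# illustrative values only, not the study's measurements.
mii = np.array([12.0, 25.0, 41.0, 55.0, 70.0, 88.0])
dcw = np.array([0.9, 2.1, 3.4, 4.4, 5.8, 7.1])

# least-squares fit: DCW = slope * MII + intercept
slope, intercept = np.polyfit(mii, dcw, 1)
pred = slope * mii + intercept

# coefficient of determination R^2
ss_res = np.sum((dcw - pred) ** 2)
ss_tot = np.sum((dcw - dcw.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"slope={slope:.4f}, intercept={intercept:.3f}, R^2={r2:.3f}")
```

Once fitted, the model predicts DCW for a new sample from its mean image intensity alone, which is what makes the method attractive for on-line monitoring.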


Author(s):  
Trang Thi Ngoc Tran ◽  
David Shih-Chun Jin ◽  
Kun-Long Shih ◽  
Ming-Lun Hsu ◽  
Jyh-Cheng Chen

Abstract
Purpose: Cone-beam computed tomography (CBCT) is widely applied in dental and maxillofacial imaging, and several dental CBCT systems have recently been developed to improve performance. This study aimed to evaluate the image quality of our prototype (YMU-DENT-P001) and compare it with a commercial POYE Expert 3DS dental CBCT system (system A).
Methods: The Micro-CT Contrast Scale, Micro-CT Water, and Micro-CT HA phantoms were used to evaluate the two CBCT systems in terms of contrast-to-noise ratio (CNR), signal-to-noise ratio (SNR), uniformity (U), distortion, and linearity of the relationship between image intensity and calcium hydroxyapatite concentration. We also fabricated a proprietary thin-wire phantom to evaluate full width at half maximum (FWHM) spatial resolution. Both CBCT systems used the same exposure protocol, and data analysis was performed in accordance with ISO standards using a proprietary image analysis platform.
Results: The SNR of our prototype was nearly five times that of system A (prototype: 159.85 ± 3.88; A: 35.42 ± 0.61; p < 0.05), and the CNR was three times higher (prototype: 329.39 ± 5.55; A: 100.29 ± 2.31; p < 0.05). The spatial resolution of the prototype (0.2446 mm) greatly exceeded that of system A (0.5179 mm), and image distortion was lower (prototype: 0.03 mm; system A: 0.285 mm). Little difference was observed between the two systems in the linear relationship between bone mineral density (BMD) and image intensity.
Conclusions: Within the scope of this study, our prototype YMU-DENT-P001 outperformed system A in terms of spatial resolution, SNR, CNR, and image distortion.
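The reported SNR and CNR values come from phantom ROI statistics. A minimal sketch using one common convention (exact definitions vary across ISO protocols, so these helper functions are illustrative rather than the study's analysis platform):

```python
import numpy as np

def snr(roi):
    # signal-to-noise ratio: mean signal over noise (sample std) in a uniform ROI
    return roi.mean() / roi.std(ddof=1)

def cnr(roi_obj, roi_bg):
    # contrast-to-noise ratio between object and background ROIs;
    # pooled-variance noise estimate -- one common convention among several
    noise = np.sqrt((roi_obj.var(ddof=1) + roi_bg.var(ddof=1)) / 2.0)
    return abs(roi_obj.mean() - roi_bg.mean()) / noise

# synthetic uniform insert and background regions (illustrative values)
rng = np.random.default_rng(1)
insert_roi = 100.0 + rng.normal(0.0, 2.0, size=(50, 50))
background = 20.0 + rng.normal(0.0, 2.0, size=(50, 50))
snr_val = snr(insert_roi)
cnr_val = cnr(insert_roi, background)
```

With noise std 2.0, the expected SNR here is about 50 and the CNR about 40; in practice the ROIs would be drawn from the phantom reconstructions of each scanner.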



Author(s):  
Daniel T Huff ◽  
Peter Ferjancic ◽  
Mauro Namías ◽  
Hamid Emamekhoo ◽  
Scott Perlman ◽  
...  

Art History ◽  
2021 ◽  
Vol 44 (4) ◽  
pp. 890-893
Author(s):  
Jae Emerling

2021 ◽  
Author(s):  
Laurin Mordhorst ◽  
Maria Morozova ◽  
Sebastian Papazoglou ◽  
Bjoern Fricke ◽  
Jan Malte Oeschger ◽  
...  

Non-invasive assessment of axon radii via MRI bears great potential for clinical and neuroscience research, as the axon radius is a main determinant of neuronal conduction velocity. However, representative histological reference data at the scale of an MRI voxel cross-section are lacking for validating the MRI-visible, effective radius (reff). Because the current gold standard stems from neuroanatomical studies designed to estimate the frequency-weighted arithmetic mean radius (rarith) on small ensembles of axons, it is unsuited to estimating the tail-weighted reff. We propose CNN-based segmentation of high-resolution, large-scale light microscopy (lsLM) data to generate a representative reference for reff. In a human corpus callosum, we assessed the estimation accuracy and bias of rarith and reff. Furthermore, we investigated whether mapping anatomy-related variation of rarith and reff is confounded by low-frequency variation of the image intensity, e.g., due to staining heterogeneity. Finally, we analyzed the potential error contributed to reff by exceptionally large axons. Compared to rarith, reff was estimated with higher accuracy (normalized root-mean-square error of reff: 7.2%; rarith: 21.5%) and lower bias (normalized mean bias error of reff: -1.7%; rarith: 16%). While rarith was confounded by variation of the image intensity, variation of reff appeared anatomy-related. The largest axons contributed between 0.9% and 3% to reff. In conclusion, the proposed method accurately estimates reff at MRI voxel resolution across a human corpus callosum sample. Further investigations are required to assess generalization to brain areas with different axon radius ensembles.
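The contrast between rarith and the tail-weighted reff can be made concrete with the convention used in the diffusion-MRI literature, reff = (⟨r⁶⟩/⟨r²⟩)^(1/4). A small sketch on a hypothetical radius ensemble (the radii are invented for illustration):

```python
import numpy as np

def r_arith(radii):
    # frequency-weighted arithmetic mean radius
    return float(np.mean(radii))

def r_eff(radii):
    # MRI-visible effective radius, dominated by the tail of large axons:
    # r_eff = (<r^6> / <r^2>)^(1/4), the diffusion-MRI convention
    r = np.asarray(radii, dtype=float)
    return float((np.mean(r**6) / np.mean(r**2)) ** 0.25)

# hypothetical ensemble: mostly small axons plus a heavy tail of large ones
radii = np.array([0.5] * 90 + [2.0] * 10)   # micrometres, illustrative
ra = r_arith(radii)   # 0.65
re = r_eff(radii)     # ~1.79, pulled up by the 10% largest axons
```

The gap between the two values illustrates why a reference built for rarith cannot validate reff: a handful of large axons barely moves the arithmetic mean but dominates the sixth-moment-weighted effective radius.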


Author(s):  
CHI ZHANG ◽  
YUXIN LIU ◽  
LIN YUAN ◽  
XIAOXU HOU

The standard instrument for the clinical diagnosis of sleep apnea is large and relies on an invasive method, which is uncomfortable and unsuitable for daily inspection. We therefore propose a video-based measurement method for the respiration rate (RR), which is valuable for the home diagnosis of sleep apnea. The method both visualizes and calculates RR from a video of a sleeping person. The video was decomposed by a spatio-temporal Laplacian pyramid into multiresolution image sequences, which were filtered by an infinite-impulse-response (IIR) bandpass filter to extract the respiration movement. The respiration movement was amplified and fused back into the original video. In parallel, the signal intensity of the filtered results was compared between pyramid layers to identify the layer with the strongest respiration-induced movement. A morphological calculation was conducted on the image reshaped from the filtered results in this layer to find the region of interest (ROI) with the most significant respiration movement. The image intensity in the ROI was spatially averaged into a one-dimensional signal, whose frequency spectrum was analyzed to obtain RR. The ROI and the calculated RR were visualized on the video with enhanced respiration movement. Ten videos lasting 30–60 s were recorded with a general webcam. The respiration movement of the subject was successfully extracted and amplified regardless of whether the posture was supine or side-lying. The thoracic and abdominal regions were generally identified as the ROI in all postures. RR was calculated by frequency-domain analysis of the averaged image intensity in the ROI with an error of no more than one breath per minute and, together with the ROI, was fused into the amplified video. The region of respiration movement and the RR are thus obtained by a noncontact method and clearly visualized in a video.
The method provides a novel screening tool for the population suspected of sleep apnea and is valuable for the home diagnosis of sleep illness.
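The final step, recovering RR from the spatially averaged ROI intensity, can be sketched as a spectral peak search in a plausible respiratory band. This is a simplified stand-in for the paper's IIR-filter pipeline; the sampling rate and band limits below are assumptions:

```python
import numpy as np

def respiration_rate(signal, fs, band=(0.1, 0.7)):
    """Estimate respiration rate (breaths/min) from a 1-D ROI intensity
    trace by locating the dominant frequency inside an assumed
    respiratory band (0.1-0.7 Hz, i.e. 6-42 breaths/min)."""
    sig = np.asarray(signal, dtype=float) - np.mean(signal)
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    spec = np.abs(np.fft.rfft(sig))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    f_peak = freqs[mask][np.argmax(spec[mask])]
    return 60.0 * f_peak

# synthetic 60 s trace: 0.25 Hz breathing at an assumed 10 Hz frame rate
fs = 10.0
t = np.arange(0, 60, 1 / fs)
trace = np.sin(2 * np.pi * 0.25 * t)
rr = respiration_rate(trace, fs)   # expected ~15 breaths/min
```

Restricting the peak search to the respiratory band is what keeps slow illumination drift and high-frequency camera noise from being mistaken for breathing.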


2021 ◽  
Author(s):  
Samuel A. Bobholz ◽  
Allison K. Lowman ◽  
Michael Brehler ◽  
Fitzgerald Kyereme ◽  
Savannah R. Duenweg ◽  
...  

Abstract Current MRI signatures of brain cancer often fail to identify regions of hypercellularity beyond the contrast-enhancing region. This study therefore used autopsy tissue samples aligned to clinical MRIs to quantify the relationship between intensity values and cellularity, and to develop a radio-pathomic model that predicts cellularity from MRI data. The study used 93 samples collected at autopsy from 44 brain cancer patients. Tissue samples were processed, stained with hematoxylin and eosin (H&E), and digitized for nuclei segmentation and cell density calculation. Pre- and post-gadolinium-contrast T1-weighted images (T1, T1C), T2 fluid-attenuated inversion recovery (FLAIR) images, and apparent diffusion coefficient (ADC) maps calculated from diffusion imaging were collected from each patient's final acquisition prior to death. In-house software was used to align tissue samples to the FLAIR image via manually defined control points. Mixed-effects models were used to assess the relationship between single-image intensity and cellularity for each image. An ensemble learner was trained to predict cellularity from 5 × 5 voxel tiles of each image, using a 2/3-1/3 train-test split for validation. Single-image analyses found subtle associations between image intensity and cellularity, with a less pronounced relationship in GBM patients. The radio-pathomic model accurately predicted cellularity in the test set (RMSE = 1015 cells/mm2) and identified regions of hypercellularity beyond the contrast-enhancing region. We conclude that a radio-pathomic model for cellularity can identify regions of hypercellular tumor beyond traditional imaging signatures.
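The tile-based ensemble idea can be sketched with a tiny bagged linear regressor standing in for the paper's ensemble learner. The features here are random stand-ins for flattened 5 × 5 tiles across the four image types; all names and values are illustrative:

```python
import numpy as np

def train_bagged_linear(X, y, n_models=10, seed=0):
    """Bagging: fit each linear model on a bootstrap resample of the
    (tile-features, cellularity) pairs. A stand-in for the paper's
    ensemble learner, not its actual architecture."""
    rng = np.random.default_rng(seed)
    Xb = np.column_stack([X, np.ones(len(X))])   # append bias column
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), len(X))    # bootstrap sample
        coef, *_ = np.linalg.lstsq(Xb[idx], y[idx], rcond=None)
        models.append(coef)
    return np.array(models)

def predict(models, X):
    Xb = np.column_stack([X, np.ones(len(X))])
    return (Xb @ models.T).mean(axis=1)          # average over the ensemble

# synthetic stand-in data: 200 tiles, 3 features, linear ground truth
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([3.0, -2.0, 1.0]) + 500.0 + rng.normal(0, 0.5, 200)
models = train_bagged_linear(X, y)
rmse = np.sqrt(np.mean((predict(models, X) - y) ** 2))
```

Averaging over bootstrap-resampled fits is the core of bagging: each model sees a slightly different sample, and the averaged prediction is more stable than any single fit.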

