image sharpness
Recently Published Documents

TOTAL DOCUMENTS: 259 (FIVE YEARS: 71)
H-INDEX: 26 (FIVE YEARS: 3)

Diagnostics ◽  
2022 ◽  
Vol 12 (1) ◽  
pp. 148
Author(s):  
Kyungsoo Bae ◽  
Kyung Nyeo Jeon ◽  
Moon Jung Hwang ◽  
Yunsub Jung ◽  
Joonsung Lee

(1) Background: A highly flexible adaptive image receive (AIR) coil has become available for clinical use. The present study aimed to evaluate the performance of an AIR anterior array coil in lung MR imaging using a zero echo time (ZTE) sequence, compared with a conventional anterior array (CAA) coil. (2) Methods: Sixty-six patients who underwent lung MR imaging with both the AIR coil (ZTE-AIR) and the CAA coil (ZTE-CAA) were enrolled. The image quality of ZTE-AIR and ZTE-CAA was quantified by calculating the blur metric value, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) of the lung parenchyma. Image quality was also assessed qualitatively by two independent radiologists. Lesion detection capabilities for lung nodules and for emphysema and/or lung cysts were evaluated, and patients’ comfort levels during the examinations were assessed. (3) Results: The SNR and CNR of the lung parenchyma were higher (both p < 0.001) in ZTE-AIR than in ZTE-CAA. Image sharpness was superior in ZTE-AIR (p < 0.001). Subjective image quality assessed by the two independent readers was superior (all p < 0.05) in ZTE-AIR. The AIR coil was preferred by 64 of 66 patients. ZTE-AIR showed higher (all p < 0.05) sensitivity for sub-centimeter nodules than ZTE-CAA for both readers, as well as higher (all p < 0.05) sensitivity and accuracy for detecting emphysema and/or cysts. (4) Conclusions: The use of a highly flexible AIR coil in ZTE lung MR imaging can improve image quality and patient comfort. Application of the AIR coil in parenchymal imaging has potential to improve the delineation of low-density parenchymal lesions and tiny nodules.
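
For context on the quantitative measures mentioned above, the sketch below shows how ROI-based SNR and CNR are commonly computed, with the variance of the Laplacian used only as a generic stand-in sharpness measure (the study's specific blur metric is not defined in the abstract); all array names are hypothetical.

```python
# Minimal sketch of ROI-based SNR/CNR and a generic sharpness proxy.
# The study's exact blur metric is not defined here; the variance of the
# Laplacian is shown only as a stand-in sharpness measure.
import numpy as np
from scipy.ndimage import laplace

def snr(roi_signal: np.ndarray, roi_background: np.ndarray) -> float:
    """SNR as mean signal over the standard deviation of background noise."""
    return float(roi_signal.mean() / roi_background.std())

def cnr(roi_a: np.ndarray, roi_b: np.ndarray, roi_background: np.ndarray) -> float:
    """CNR as the absolute mean difference between two tissues over background noise."""
    return float(abs(roi_a.mean() - roi_b.mean()) / roi_background.std())

def sharpness_proxy(image: np.ndarray) -> float:
    """Higher variance of the Laplacian indicates sharper edges (less blur)."""
    return float(laplace(image.astype(float)).var())
```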


2021 ◽  
Vol 14 (1) ◽  
pp. 24
Author(s):  
Yuan Hu ◽  
Lei Chen ◽  
Zhibin Wang ◽  
Xiang Pan ◽  
Hao Li

Deep-learning-based radar echo extrapolation methods have achieved remarkable progress in the precipitation nowcasting field. However, they suffer from a common and notorious problem: they tend to produce blurry predictions. Although some efforts have been made in recent years, the blurring problem remains under-addressed. In this work, we propose three effective strategies to help deep-learning-based radar echo extrapolation methods achieve more realistic and detailed predictions. Specifically, we propose a spatial generative adversarial network (GAN) and a spectrum GAN to improve image fidelity. The spatial and spectrum GANs penalize the distribution discrepancy between generated and real images in the spatial domain and the spectral domain, respectively. In addition, a masked style loss is devised to further enhance detail by transferring the fine texture of ground-truth radar sequences to the extrapolated ones; a foreground mask prevents background noise from being transferred to the outputs. Moreover, we design a new metric, termed the power spectral density score (PSDS), to quantify perceptual quality from a frequency perspective. The PSDS metric can be applied as a complement to other visual evaluation metrics (e.g., LPIPS) to achieve a comprehensive measurement of image sharpness. We test our approaches with both a ConvLSTM baseline and a U-Net baseline, and comprehensive ablation experiments on the SEVIR dataset show that the proposed approaches produce much more realistic radar images than the baselines. Most notably, our methods can be readily applied to any deep-learning-based spatiotemporal forecasting model to obtain more detailed results.
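
The abstract does not give the exact formulation of the PSDS metric; the following sketch only illustrates the general idea of comparing radially averaged power spectra of predicted and observed radar frames. The function names and the log-spectrum distance used here are assumptions, not the paper's definition.

```python
# Hypothetical power-spectrum comparison in the spirit of a frequency-domain
# sharpness metric; not the paper's exact PSDS formulation.
import numpy as np

def radial_power_spectrum(img: np.ndarray, n_bins: int = 64) -> np.ndarray:
    """Radially averaged power spectrum of a 2D image."""
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(f) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2)
    bins = np.linspace(0, r.max(), n_bins + 1)
    idx = np.clip(np.digitize(r.ravel(), bins) - 1, 0, n_bins - 1)
    sums = np.bincount(idx, weights=power.ravel(), minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    return sums / np.maximum(counts, 1)

def spectrum_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Mean log-spectrum distance; smaller values mean the prediction
    preserves more of the fine-scale (high-frequency) detail of the truth."""
    ps_pred = radial_power_spectrum(pred)
    ps_true = radial_power_spectrum(truth)
    return float(np.mean(np.abs(np.log1p(ps_pred) - np.log1p(ps_true))))
```

A blurry prediction loses high-frequency power, so its radial spectrum falls off faster than the ground truth's, which this kind of score penalizes.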


2021 ◽  
Vol 8 ◽  
Author(s):  
Malene Bisgaard ◽  
Fintan J. McEvoy ◽  
Dorte Hald Nielsen ◽  
Clara Allberg ◽  
Anna V. Müller ◽  
...  

Introduction: The purpose of this study was to evaluate the effect of collimation on image quality and on the radiation dose to the eye lenses of personnel involved in computed radiography of the canine pelvis. Materials and Methods: A retrospective study of canine pelvic radiographs (N = 54) was undertaken to evaluate the relationship between image quality and the degree of field collimation used. This was followed by a prospective cadaver study (N = 18) that assessed the effects of different collimation field areas and exposure parameters on image quality and scattered radiation dose. All radiographs were analyzed for image quality using Visual Grading Analysis (VGA) with three observers. Finally, the potential scattered radiation dose to the eye lens of personnel restraining a dog for pelvic radiographs was measured. Results: The retrospective study showed a slightly better (statistically non-significant) VGA score for the radiographs with optimal collimation. Spatial resolution, contrast resolution, and image sharpness showed the greatest improvement in response to minimizing the collimation field. The prospective study likewise showed slightly better VGA scores (improved image quality) with optimal collimation. Increasing the exposure factors, especially the tube current-exposure time product (mAs), resulted in improved low-contrast resolution and less noise in the radiographs. The potential eye lens radiation dose increased by 14%, 28%, and 40% (with default exposures, increased tube peak potential (kVp), and increased mAs, respectively) as a result of reduced collimation (increased beam size). Conclusion: The degree of collimation had no statistically significant effect on image quality in canine pelvic radiography for the range of collimation used, but it does affect the potential radiation dose to personnel in the x-ray room. With regard to radiation safety, increases in kVp are associated with less potential scatter radiation exposure than comparable increases in mAs.
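
As a rough illustration of how VGA scores from multiple observers might be compared between two collimation settings, here is a generic paired-comparison sketch; the scores, data layout, and statistical test shown are assumptions for illustration, not the study's actual analysis.

```python
# Hypothetical comparison of Visual Grading Analysis (VGA) scores between
# two collimation settings; the study's actual data and test are not shown here.
import numpy as np
from scipy.stats import wilcoxon

# Rows: radiographs, columns: observers; scores on an ordinal scale (e.g., 1-5).
vga_optimal = np.array([[4, 4, 5], [3, 4, 4], [5, 4, 4], [4, 3, 4]])
vga_reduced = np.array([[4, 3, 4], [3, 3, 4], [4, 4, 4], [3, 3, 4]])

# Average across observers to get one score per radiograph, then compare the
# paired per-image scores between the two collimation settings.
mean_optimal = vga_optimal.mean(axis=1)
mean_reduced = vga_reduced.mean(axis=1)
stat, p = wilcoxon(mean_optimal, mean_reduced)
print(f"median difference = {np.median(mean_optimal - mean_reduced):.2f}, p = {p:.3f}")
```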


Cancers ◽  
2021 ◽  
Vol 13 (21) ◽  
pp. 5497
Author(s):  
Raymond J. Acciavatti ◽  
Eric A. Cohen ◽  
Omid Haji Maghsoudi ◽  
Aimilia Gastounioti ◽  
Lauren Pantalone ◽  
...  

Digital mammography has seen an explosion in the number of radiomic features used for risk-assessment modeling. However, having more features is not necessarily beneficial, as some features may be overly sensitive to imaging physics (contrast, noise, and image sharpness). To measure the effects of imaging physics, we analyzed the feature variation across imaging acquisition settings (kV, mAs) using an anthropomorphic phantom. We also analyzed the intra-woman variation (IWV), a measure of how much a feature varies between breasts with similar parenchymal patterns (a woman's left and right breasts). From 341 features, we identified "robust" features that minimized the effects of imaging physics and IWV. We also investigated whether robust features offered better case-control classification in an independent data set of 575 images, all with an overall BI-RADS® assessment of 1 (negative) or 2 (benign); 115 images (cases) were of women who developed cancer at least one year after that screening image, matched to 460 controls. We modeled cancer occurrence via logistic regression, using the cross-validated area under the receiver-operating-characteristic curve (AUC) to measure model performance. Models using features from the most-robust quartile yielded an AUC of 0.59, versus 0.54 for the least-robust quartile, with p < 0.005 for the difference among the quartiles.
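
As an illustration of the modeling step described above, the sketch below runs a logistic regression with cross-validated AUC on placeholder data shaped like the study's case-control set (115 cases, 460 controls); the feature matrix, preprocessing, and pipeline details are assumptions, not the authors' actual code.

```python
# Minimal sketch of case-control classification with cross-validated AUC.
# X and y are synthetic placeholders standing in for the radiomic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(575, 20))          # e.g., one quartile of radiomic features
y = np.r_[np.ones(115), np.zeros(460)]  # 115 cases, 460 matched controls

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC = {auc.mean():.2f} +/- {auc.std():.2f}")
```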


2021 ◽  
Vol 58 (11) ◽  
pp. 684-696
Author(s):  
P. Krawczyk ◽  
A. Jansche ◽  
T. Bernthaler ◽  
G. Schneider

Abstract Image-based qualitative and quantitative structural analyses using high-resolution light microscopy are integral parts of the materialographic work on materials and components. Vibrations or defocusing often result in blurred image areas, especially in large-scale micrographs and at high magnifications. Because the robustness of image-processing analysis methods depends strongly on image quality, blurring directly affects the quantitative structural analysis. We present a deep learning model which, when trained on appropriate data, is capable of increasing the image sharpness of light microscope images. We show that a sharpness correction for blurred images can successfully be performed using deep learning, taking the examples of steels with a bainitic microstructure, non-metallic inclusions in the context of steel purity degree analyses, aluminum-silicon cast alloys, sintered magnets, and lithium-ion batteries. We furthermore examine whether geometric accuracy is preserved in the artificially resharpened images.
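
The article's network architecture is not described in this abstract; the following minimal PyTorch sketch shows one common way to set up a residual convolutional model for sharpness correction, with all layer sizes and the training loss chosen here as assumptions rather than the authors' method.

```python
# Hypothetical sketch of a learned sharpness-correction model; the paper's
# actual architecture and training setup are not reproduced here.
import torch
import torch.nn as nn

class DeblurCNN(nn.Module):
    """Small residual CNN: predicts a sharpening correction added to the input."""
    def __init__(self, channels: int = 1, width: int = 32, depth: int = 5):
        super().__init__()
        layers = [nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, blurred: torch.Tensor) -> torch.Tensor:
        return blurred + self.body(blurred)  # residual learning of the sharp detail

# Training would pair artificially blurred micrographs with their sharp originals
# and minimize, e.g., an L1 loss between the network output and the sharp image.
model = DeblurCNN()
x = torch.randn(1, 1, 128, 128)  # dummy grayscale micrograph patch
loss = nn.L1Loss()(model(x), x)  # placeholder target for illustration only
```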


2021 ◽  
Vol 11 (21) ◽  
pp. 9802
Author(s):  
Jeong-Min Shim ◽  
Young-Bo Kim ◽  
Chang-Ki Kang

This study introduces a new compressed sensing averaging (CSA) technique for reducing blurring and/or ringing artifacts, depending on the k-space sampling ratio. A fully sampled k-space dataset and three randomly undersampled datasets were obtained for CSA images in a brain phantom and a healthy subject. An additional simulation was performed to assess the effect of the undersampling ratio on the images and on the signal-to-noise ratios (SNRs). Image sharpness, spatial resolution, and contrast between tissues were analyzed and compared with other CSA techniques. Compared to CSA with multiple acquisition (CSAM) at 25%, 35%, and 45% undersampling, the reduction rates of the k-space lines of CSA with keyhole (CSAK) were 10%, 15%, and 22%, respectively, and the acquisition time was reduced by 16%, 23%, and 32%, respectively. In the simulation performed with the fully sampled k-space dataset, the SNR decreased to 10.41, 9.80, and 8.86 in the white matter and 9.69, 9.35, and 8.46 in the gray matter, respectively. In addition, the ringing artifacts became substantially more pronounced as the number of sampling lines decreased. The 50% modulation transfer functions were 0.38, 0.43, and 0.54 line pairs per millimeter for CSAM, CSAK with high-frequency sharing (CSAKS), and CSAK with high-frequency copying (CSAKC), respectively. In this study, we demonstrated that the fewer the sampling lines, the more severe the ringing artifacts, and that the proposed CSAKC technique avoided the artifacts that occur with other CSA techniques while increasing spatiotemporal resolution.
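
To illustrate why reducing the number of sampled k-space lines produces ringing and blurring, the sketch below performs a generic zero-filled reconstruction after truncating the phase-encoding direction; it is not the paper's CSA/keyhole method, and the phantom and fractions are arbitrary.

```python
# Generic illustration of how dropping k-space lines degrades an image
# (Gibbs ringing and blurring); not the paper's CSA/keyhole reconstruction.
import numpy as np

def undersample_kspace(image: np.ndarray, keep_fraction: float) -> np.ndarray:
    """Keep the central fraction of phase-encoding lines, zero-fill the rest."""
    k = np.fft.fftshift(np.fft.fft2(image))
    n = image.shape[0]
    keep = int(n * keep_fraction)
    mask = np.zeros_like(k)
    start = (n - keep) // 2
    mask[start:start + keep, :] = 1.0
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k * mask)))

phantom = np.zeros((128, 128))
phantom[32:96, 32:96] = 1.0               # simple square "phantom"
recon_45 = undersample_kspace(phantom, 0.45)
recon_25 = undersample_kspace(phantom, 0.25)
# The harsher k-space truncation in recon_25 produces stronger ringing around
# the edges than recon_45, mirroring the trend reported above.
```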


Author(s):  
Ying Zhang ◽  
Guangyu Su ◽  
Kai Hu ◽  
Yuanwei Li ◽  
Di Tang ◽  
...  
