image quality measures
Recently Published Documents

TOTAL DOCUMENTS: 89 (five years: 17)
H-INDEX: 13 (five years: 2)

Sensors, 2021, Vol. 22 (1), pp. 12
Author(s): Wojciech Więcławek, Marta Danch-Wierzchowska, Marcin Rudzki, Bogumiła Sędziak-Marcinek, Slawomir Jan Teper

Ultra-widefield fluorescein angiography (UWFA) is an emerging imaging modality used to characterise pathologies in the retinal vasculature, such as microaneurysms (MAs) and vascular leakages. Despite its potential value for diagnosis and disease screening, objective quantitative assessment of retinal pathologies by UWFA is currently limited because laborious manual processing is required. In this report, we describe a geometrical method for compensating the uneven brightness inherent to the UWFA imaging technique. The correction function is based on the geometrical shape of the eyeball; it is therefore fully automated and depends only on a pixel's distance from the center of the imaged retina. The method's performance was assessed on a database of 256 UWFA images using several image quality measures, which show that the correction method improves image quality. The method was also compared with the commonly used CLAHE approach and employed in a pilot study on vascular segmentation, where it gave a noticeable improvement in segmentation results. It can therefore be used as an image preprocessing step in retinal UWFA image analysis.
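As a rough illustration of the kind of radial, distance-dependent correction the abstract describes (not the authors' published eyeball-geometry model), a minimal NumPy sketch might look like this; the quadratic gain profile and the `strength` parameter are assumptions made only for the example.

```python
# A minimal sketch (not the published correction function): compensate uneven
# brightness with a purely radial gain that depends only on each pixel's
# distance from the centre of the imaged retina.
import numpy as np

def radial_brightness_correction(img, center=None, strength=0.6):
    """img: 2-D float array in [0, 1]; returns a brightness-compensated copy.
    The quadratic gain profile below is an illustrative assumption, not the
    geometrical eyeball model described in the paper."""
    h, w = img.shape
    cy, cx = center if center is not None else (h / 2.0, w / 2.0)
    y, x = np.mgrid[0:h, 0:w]
    r = np.hypot(y - cy, x - cx)
    r = r / r.max()                      # normalised distance in [0, 1]
    gain = 1.0 + strength * r ** 2       # brighten the darker periphery
    return np.clip(img * gain, 0.0, 1.0)
```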


Author(s): Seifedine Kadry, Venkatesan Rajinikanth, Jamin Koo, Byeong-Gwon Kang

Image thresholding is a well-established pre-processing methodology, and enhancing the image information based on a chosen threshold is always preferred. This research implements mayfly optimization algorithm (MOA)-based multi-level image thresholding on a class of benchmark images of dimension 512×512×1. The MOA is a novel methodology with the following phases: (i) initialization, (ii) exploration with the male mayfly (MM), (iii) exploration with the female mayfly (FM), (iv) offspring generation and (v) termination. The algorithm implements a strict two-step search procedure in which every mayfly is forced to attain the global best solution. The proposed research considers threshold values from 2 to 5, and the superiority of the result is confirmed by computing the essential image quality measures (IQMs). The performance of the MOA is also compared and validated against other procedures, such as particle swarm optimization (PSO), bacterial foraging optimization (BFO), the firefly algorithm (FA), the bat algorithm (BA), cuckoo search (CS) and moth-flame optimization (MFO); the attained p-values of the Wilcoxon rank test confirm the superiority of the MOA over the other algorithms considered in this work.
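The mayfly optimiser itself is too long to reproduce here, but the fitness that it (like PSO, FA, BA, CS or MFO) would maximise for multi-level thresholding is commonly Otsu's between-class variance. The sketch below assumes that choice of objective, which the abstract does not state explicitly.

```python
# A hedged sketch of the optimisation target only: Otsu's between-class
# variance for a candidate set of thresholds. Any population-based optimiser
# can maximise this fitness over threshold vectors.
import numpy as np

def between_class_variance(hist, thresholds):
    """hist: normalised 256-bin grey-level histogram; thresholds: sorted ints in (0, 256)."""
    bins = np.arange(256)
    edges = [0] + list(thresholds) + [256]
    total_mean = np.sum(bins * hist)
    variance = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = hist[lo:hi].sum()            # class probability
        if w > 0:
            mu = np.sum(bins[lo:hi] * hist[lo:hi]) / w
            variance += w * (mu - total_mean) ** 2
    return variance
```

Here `hist` could come from, for example, `np.bincount(img.ravel(), minlength=256) / img.size` for an 8-bit image.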


Author(s): M. Rajalakshmi, K. Annapurani

Image classification is a complicated process of classifying an image based on its visual representation. This paper highlights the need to adapt and apply a suitable image enhancement and denoising technique in order to arrive at a successful classification of remotely captured data. Biometric properties, which are widely explored today, are very important for authentication purposes. Noise may lead to incorrect vein detection in the acquired image, which explains the need for a better enhancement technique. This work provides a subjective and objective analysis of the performance of various image enhancement filters in the spatial domain. After these pre-processing steps, the vein map and the corresponding vein graph can easily be obtained with minimal extraction steps, and an appropriate graph matching method can then be used to compare hand vein graphs and thus perform person authentication. The analysis results show that the selected enhancement filter performs better than all the other filters compared. Image quality measures (IQMs) are also tabulated for the evaluation of image quality.
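A hedged sketch of the kind of spatial-domain filter comparison and IQM tabulation the abstract describes is shown below; the specific filters (median, Gaussian, CLAHE) and metrics (PSNR, SSIM), as well as the use of OpenCV and scikit-image, are illustrative assumptions rather than the paper's exact selection.

```python
# Illustrative only: compare a few common spatial-domain enhancement/denoising
# filters on a grey-level vein image and tabulate simple IQMs (PSNR, SSIM).
import cv2
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def compare_filters(gray):
    """gray: uint8 single-channel vein image; returns {filter: (PSNR, SSIM)}."""
    filters = {
        "median":   cv2.medianBlur(gray, 5),
        "gaussian": cv2.GaussianBlur(gray, (5, 5), 0),
        "clahe":    cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(gray),
    }
    scores = {}
    for name, out in filters.items():
        scores[name] = (peak_signal_noise_ratio(gray, out),
                        structural_similarity(gray, out))
    return scores
```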


Author(s): Venkatesan Rajinikanth, Nadaradjane Sri Madhava Raja, Nilanjan Dey

2020, Vol. 10 (1)
Author(s): T. G. Wolf, F. Fischer, R. K. W. Schulze

Abstract: The aim was to investigate potential correlations between objective CBCT image parameters and accuracy in endodontic working length determination ex vivo. Contrast-to-noise ratio (CNR) and spatial resolution (SR), as fundamental objective image parameters, were examined using specific phantoms in seven different CBCT machines. Seven experienced observers were instructed and calibrated. The order of the CBCT scans was randomized for each observer and observation. To assess intra-operator reproducibility, the procedure was repeated within six weeks with a randomized order of CBCT images. Multivariate analysis (MANOVA) did not reveal any influence of the combined image quality factors CNR and SR on measurement accuracy. Inter-operator reproducibility, assessed separately for the two observations, was poor, with a mean intra-class correlation (ICC) of 0.48 (95% CI: 0.38, 0.59) for observation No. 1 and 0.40 (95% CI: 0.30, 0.51) for observation No. 2. Intra-operator reproducibility, pooled over all observers between both observations, was only moderate, with a mean ICC of 0.58 (95% CI: 0.52, 0.64). Within the limitations of the study, objective image quality measures and exposure parameters do not seem to have a significant influence on the accuracy of determining endodontic root canal lengths in CBCT scans. The main source of variance is the observer.
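One common way to compute the contrast-to-noise ratio from two regions of interest is sketched below; the study's phantom-specific protocol may define the signal and noise terms differently, so treat this only as an illustration of the measure.

```python
# A minimal sketch of contrast-to-noise ratio (CNR) from two regions of
# interest (ROIs): |mean(signal) - mean(background)| / std(background).
import numpy as np

def cnr(image, signal_mask, background_mask):
    """image: 2-D array; masks: boolean arrays selecting the two ROIs."""
    signal = image[signal_mask]
    background = image[background_mask]
    return abs(signal.mean() - background.mean()) / background.std()
```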


2020, Vol. 6 (10), pp. 102
Author(s): Jane Courtney

The move from paper to online is not only necessary for remote working, it is also significantly more sustainable. This trend has seen a rising need for the high-quality digitization of content from pages and whiteboards to sharable online material. However, capturing this information is not always easy, nor are the results always satisfactory. Available scanning apps vary in their usability and do not always produce clean results, retaining surface imperfections from the page or whiteboard in their output images. CleanPage, a novel smartphone-based document and whiteboard scanning system, is presented. CleanPage requires one button-tap to capture, identify, crop, and clean an image of a page or whiteboard. Unlike equivalent systems, no user intervention is required during processing, and the result is a high-contrast, low-noise image with a clean homogeneous background. Results are presented for a selection of scenarios showing the versatility of the design. CleanPage is compared with two market-leading scanning apps using two testing approaches: real paper scans and ground-truth comparisons. These comparisons are achieved by a new testing methodology that allows scans to be compared to unscanned counterparts by using synthesized images. Real paper scans are tested using image quality measures. An evaluation of standard image quality assessments is included in this work, and a novel quality measure for scanned images is proposed and validated. The user experience for each scanning app is assessed, showing CleanPage to be faster and easier to use.
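The CleanPage pipeline itself is not detailed in the abstract, so the sketch below only illustrates a generic "clean, homogeneous background" step (shading estimation followed by adaptive thresholding with OpenCV); it is not the actual CleanPage algorithm, and the kernel and block sizes are assumptions.

```python
# Not the CleanPage pipeline - a hedged sketch of one way to flatten page
# shading and produce a high-contrast, clean-background scan with OpenCV.
import cv2

def clean_page(gray):
    """gray: uint8 single-channel scan of a page; returns a binarised image."""
    background = cv2.medianBlur(gray, 31)          # estimate slowly varying shading
    flat = cv2.divide(gray, background, scale=255) # normalise out the shading
    return cv2.adaptiveThreshold(flat, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, 35, 15)
```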


Author(s): Pontus Andersson, Jim Nilsson, Tomas Akenine-Möller, Magnus Oskarsson, Kalle Åström, ...

Image quality measures are becoming increasingly important in the field of computer graphics. For example, there is currently a major focus on generating photorealistic images in real time by combining path tracing with denoising, for which such quality assessment is integral. We present FLIP, which is a difference evaluator with a particular focus on the differences between rendered images and corresponding ground truths. Our algorithm produces a map that approximates the difference perceived by humans when alternating between two images. FLIP is a combination of modified existing building blocks, and the net result is surprisingly powerful. We have compared our work against a wide range of existing image difference algorithms and we have visually inspected over a thousand image pairs that were either retrieved from image databases or generated in-house. We also present results of a user study which indicate that our method performs substantially better, on average, than the other algorithms. To facilitate the use of FLIP, we provide source code in C++, MATLAB, NumPy/SciPy, and PyTorch.
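FLIP combines perceptual colour and feature pipelines that are not reproduced here; the following naive stand-in merely shows what a per-pixel difference map between a rendered image and its ground truth looks like, using a Rec. 709 luma proxy, which is an assumption made for this illustration and not part of the FLIP algorithm.

```python
# A naive stand-in, not the FLIP metric: render a per-pixel error map between
# a rendered image and its ground-truth reference.
import numpy as np

def naive_error_map(reference, test):
    """reference, test: float RGB arrays in [0, 1] of the same shape."""
    weights = np.array([0.2126, 0.7152, 0.0722])   # Rec. 709 luma (a crude proxy)
    err = np.abs(reference @ weights - test @ weights)
    return err / max(err.max(), 1e-8)              # map in [0, 1], 1 = largest error
```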



