An Image Quality Assessment System for Evaluating MR Reconstruction Pipeline Using Single Image Acquisition Method

2020 ◽  
Author(s):  
Sibi S ◽  
Thara S Pillai ◽  
Pournami S Chandran ◽  
Nisha Kumari N ◽  
Ranjith K O ◽  
...  
IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Wenxin Yu ◽  
Xuewen Zhang ◽  
Yunye Zhang ◽  
Zhiqiang Zhang ◽  
Jinjia Zhou

2019 ◽  
Vol 19 (05) ◽  
pp. 1950030 ◽  
Author(s):  
Xuewei Wang ◽  
Shulin Zhang ◽  
Xiao Liang ◽  
Chun Zheng ◽  
Jinjin Zheng ◽  
...  

Oculopathy is a widespread disease among people of all ages around the world. Teleophthalmology can facilitate ophthalmological diagnosis in less developed countries that lack medical resources, and in teleophthalmology the assessment of retinal image quality is of great importance. In this paper, we propose a no-reference retinal image quality assessment system based on DenseNet, a convolutional neural network architecture. The system classifies fundus images either into good and bad quality or into five categories: adequate, just noticeable blur, inappropriate illumination, incomplete optic disc, and opacity. The proposed system was evaluated on different datasets and compared with applications based on two other networks, VGG-16 and GoogLeNet. For binary classification, the good-versus-bad classifier achieves an AUC of 1.000, and the degradation-specific classifiers, each distinguishing one specified degradation from the rest, achieve AUC values of 0.972, 0.990, 0.982, and 0.982 for the four degradation categories, respectively. The multi-class classifier based on DenseNet achieves an overall accuracy of 0.927, significantly higher than the 0.549 and 0.757 obtained with VGG-16 and GoogLeNet, respectively. The experimental results indicate that the proposed approach delivers outstanding performance in retinal image quality assessment and is worth applying in ophthalmological telemedicine. In addition, the proposed approach is robust to image noise. This study fills the gap of multi-class classification in retinal image quality assessment.
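
As a rough illustration only (not the authors' code), the sketch below shows how a DenseNet backbone can be given a five-way head matching the quality categories named in the abstract. The choice of torchvision's densenet121, the 224x224 input size, and the ImageNet-style preprocessing are assumptions; the abstract does not specify the exact DenseNet variant or training setup.

# Minimal sketch, assuming torchvision's densenet121 as the DenseNet backbone.
import torch
import torch.nn as nn
from torchvision import models, transforms

QUALITY_CLASSES = [
    "adequate",
    "just_noticeable_blur",
    "inappropriate_illumination",
    "incomplete_optic_disc",
    "opacity",
]

def build_quality_classifier(num_classes: int = len(QUALITY_CLASSES)) -> nn.Module:
    """DenseNet backbone with its classifier replaced by an n-way head."""
    model = models.densenet121()                      # assumed variant
    in_features = model.classifier.in_features
    model.classifier = nn.Linear(in_features, num_classes)
    return model

# Standard ImageNet-style preprocessing (an assumption; the paper's exact
# preprocessing is not given in the abstract).
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

if __name__ == "__main__":
    model = build_quality_classifier()
    model.eval()
    dummy_fundus = torch.randn(1, 3, 224, 224)        # stand-in for a fundus image
    with torch.no_grad():
        logits = model(dummy_fundus)
    predicted = QUALITY_CLASSES[int(logits.argmax(dim=1))]
    print("predicted quality category:", predicted)

The same backbone can serve the binary good-versus-bad task by setting num_classes to 2; the degradation-specific classifiers described in the abstract would be separate one-versus-rest instances of the same construction.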


2020 ◽  
Vol 39 (6) ◽  
pp. 8543-8555
Author(s):  
Azamossadat Nourbakhsh ◽  
Mohammad-Shahram Moin ◽  
Arash Sharifi

The face is the most important and most popular biometric used in many identification and verification systems. In these systems, to reduce the recognition error rate, the quality of the input images needs to be as high as possible, and Face Image Compliancy Verification (FICV) is one of the most essential methods for this purpose. In this research, a brain-functionality-inspired model for FICV is presented, based on the Haxby model, a model consistent with human face visual perception that comprises three bilateral areas with three different functionalities. The contribution of this work is a new model, based on human brain functionality, that improves the compliancy verification of face images in the FICV context. Perceptual understanding of an image is the motivation of most quality assessment methods, i.e., human quality perception is considered the gold standard and a perfect reference for recognition and quality assessment. The model presented in this work aims to bring the operation of a face image quality assessment system closer to the performance of a human expert. Three basic modules are introduced. Face structural information, for initial information encoding, is simulated by an extended Viola-Jones model. Face image quality is assessed against the compliancy requirements published by the International Civil Aviation Organization (ICAO) in the ISO/IEC 19794-11 document. As in the Haxby model, perception is performed through two distinct functional and neurological pathways, implemented with the Hierarchical Max-pooling (HMAX) model and Convolutional Deep Belief Networks (CDBN). Information storage and retrieval for training are modeled on their corresponding modules in the brain. To simulate the brain's decision making, the final results of the two separate paths are integrated with a weighted-sum operator. Nine ISO/ICAO requirements were used to test the model. The simulation results on the AR and PUT databases show improvements in six requirements with the proposed method, in comparison with the FICV benchmark.
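
The abstract only states that the two pathway outputs are combined with a weighted sum, so the following is a minimal sketch of that fusion step, not the authors' implementation. The pathway scores, the weights, and the decision threshold are hypothetical placeholders.

# Minimal sketch of weighted-sum fusion of two pathway scores, assuming
# each pathway emits a per-requirement compliancy score in [0, 1].
from dataclasses import dataclass

@dataclass
class PathwayScores:
    """Compliancy scores for one ICAO requirement from the two pathways."""
    hmax: float   # score from the HMAX-style pathway
    cdbn: float   # score from the CDBN-style pathway

def fuse_decision(scores: PathwayScores,
                  w_hmax: float = 0.5,
                  w_cdbn: float = 0.5,
                  threshold: float = 0.5) -> bool:
    """Combine the two pathway scores with a weighted sum and threshold
    the result into a compliant / non-compliant decision."""
    fused = w_hmax * scores.hmax + w_cdbn * scores.cdbn
    return fused >= threshold

if __name__ == "__main__":
    # Example: one hypothetical ICAO requirement for one face image.
    example = PathwayScores(hmax=0.8, cdbn=0.6)
    print("compliant:", fuse_decision(example))

In practice the weights would be chosen per requirement (for instance by validation on the AR and PUT data mentioned in the abstract), which the sketch leaves as default values.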

