Deep Learning for Ultra-Widefield Imaging: A Scoping Review

Author(s):  
Nishaant Bhambra ◽  
Fares Antaki ◽  
Farida El Malt ◽  
AnQi Xu ◽  
Renaud Duval

Abstract
Purpose: This article is a scoping review of published, peer-reviewed articles applying deep learning (DL) to ultra-widefield (UWF) imaging. It provides an overview of the published uses of DL with UWF imaging for the detection of ophthalmic and systemic diseases, generative image synthesis, quality assessment of images, and segmentation and localization of ophthalmic image features.
Methods: A literature search was performed up to August 31st, 2021 using PubMed, Embase, Cochrane Library, and Google Scholar. The inclusion criteria were: (1) deep learning, (2) ultra-widefield imaging. The exclusion criteria were: (1) articles published in any language other than English, (2) articles not peer-reviewed (usually preprints), (3) no full-text availability, (4) articles using machine-learning algorithms other than deep learning. No study design was excluded from consideration.
Results: A total of 36 studies were included: 23 discussed ophthalmic disease detection and classification, 5 discussed segmentation and localization of UWF images, 3 discussed generative image synthesis, 3 discussed ophthalmic image quality assessment, and 2 discussed detecting systemic diseases via UWF imaging.
Conclusion: The application of DL to UWF imaging has demonstrated significant effectiveness in diagnosing and detecting ophthalmic diseases, including diabetic retinopathy, retinal detachment, and glaucoma. DL with UWF imaging has also been used to diagnose systemic diseases such as Alzheimer's disease, and has been applied to the generation of synthetic ophthalmic images. This scoping review highlights and discusses current uses of DL with UWF imaging and the future of DL applications in this field.
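As a quick consistency check, the per-category counts reported in the Results section can be tallied against the stated total of 36 included studies (the category labels below are paraphrased from the abstract):

```python
# Category counts reported in the Results section of the scoping review
studies = {
    "ophthalmic disease detection and classification": 23,
    "segmentation and localization of UWF images": 5,
    "generative image synthesis": 3,
    "ophthalmic image quality assessment": 3,
    "systemic disease detection": 2,
}

total = sum(studies.values())
print(total)  # 36, matching the number of included studies
```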

2020 ◽  
Vol 2020 (1) ◽  
Author(s):  
Guangyi Yang ◽  
Xingyu Ding ◽  
Tian Huang ◽  
Kun Cheng ◽  
Weizheng Jin

Abstract The communications industry has changed remarkably with the development of fifth-generation cellular networks. Images, as an indispensable component of communication, have attracted wide attention, so finding a suitable approach to assessing image quality is important. We therefore propose a deep-learning model for image quality assessment (IQA) based on an explicit-implicit dual-stream network. We use frequency-domain kurtosis features based on the wavelet transform to represent explicit features, and spatial features extracted by a convolutional neural network (CNN) to represent implicit features. From these we construct an explicit-implicit parallel deep-learning model, the EI-IQA model. The EI-IQA model is based on VGGNet, which extracts the spatial-domain features; by adding the parallel wavelet-kurtosis frequency-domain features, the number of VGGNet layers can be reduced, so the training parameters and sample requirements decline. We verified, by cross-validation across different databases, that the wavelet-kurtosis feature-fusion method based on deep learning has a more complete feature-extraction effect and better generalisation ability; the method thus better simulates the human visual perception system, and its predictions lie closer to subjective human judgments. The source code for the proposed EI-IQA model is available on GitHub: https://github.com/jacob6/EI-IQA.
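The explicit stream described above (wavelet-domain kurtosis features) can be sketched as follows. This is a minimal stand-in, not the paper's implementation: the abstract does not specify the wavelet family or decomposition depth, so a one-level Haar transform is assumed, and the function names (`haar_subbands`, `kurtosis`, `explicit_features`) are illustrative:

```python
import numpy as np

def haar_subbands(img):
    """One-level 2-D Haar transform: returns LL, LH, HL, HH subbands.
    A simple stand-in for the wavelet decomposition used by EI-IQA."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # vertical averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # vertical differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def kurtosis(x):
    """Pearson kurtosis E[(x - mu)^4] / sigma^4 (3.0 for a Gaussian)."""
    x = np.asarray(x, dtype=float).ravel()
    mu = x.mean()
    var = x.var()
    return ((x - mu) ** 4).mean() / (var ** 2)

def explicit_features(img):
    """Kurtosis of each detail subband: the 'explicit' frequency-domain
    feature vector that would be fused with the CNN's spatial features."""
    _, lh, hl, hh = haar_subbands(img)
    return np.array([kurtosis(lh), kurtosis(hl), kurtosis(hh)])
```

For a pristine natural image the subband kurtosis is typically far from Gaussian; distortions such as blur or compression shift these statistics, which is what makes them useful as no-reference quality features.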


IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Wenxin Yu ◽  
Xuewen Zhang ◽  
Yunye Zhang ◽  
Zhiqiang Zhang ◽  
Jinjia Zhou

2020 ◽  
Vol 29 (04) ◽  
pp. 1
Author(s):  
Yin Zhang ◽  
Junhua Yan ◽  
Xuan Du ◽  
Xuehan Bai ◽  
Xiyang Zhi ◽  
...  

Sensors ◽  
2020 ◽  
Vol 20 (22) ◽  
pp. 6457
Author(s):  
Hayat Ullah ◽  
Muhammad Irfan ◽  
Kyungjin Han ◽  
Jong Weon Lee

Due to recent advancements in virtual reality (VR) and augmented reality (AR), the demand for high-quality immersive content is a primary concern for production companies and consumers. Similarly, the recent record-breaking performance of deep learning across domains of artificial intelligence has drawn researchers to many fields of computer vision. To ensure the quality of immersive media content using these advanced deep-learning technologies, several learning-based stitched-image quality assessment methods have been proposed with reasonable performance. However, these methods are unable to localize, segment, and extract the stitching errors in panoramic images, and they rely on computationally complex procedures for quality assessment. With these motivations, in this paper we propose a novel three-fold Deep Learning based No-Reference Stitched Image Quality Assessment (DLNR-SIQA) approach to evaluate the quality of immersive content. In the first fold, we fine-tune the state-of-the-art Mask R-CNN (regional convolutional neural network) on cropped images from two publicly available datasets, manually annotated with various stitching errors. In the second fold, we segment and localize the stitching errors present in the immersive content. Finally, based on the distorted regions present in the content, we measure the overall quality of the stitched images. Unlike existing methods that measure image quality using deep features alone, our proposed method can efficiently segment and localize stitching errors and estimate image quality by investigating the segmented regions. We also carried out an extensive qualitative and quantitative comparison with full-reference (FR-IQA) and no-reference (NR-IQA) image quality assessment methods on two publicly available datasets, where the proposed system outperformed existing state-of-the-art techniques.
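The final fold, scoring quality from the segmented distorted regions, might look like the following hypothetical aggregation. The abstract does not give the paper's scoring formula, so `stitched_image_quality` and its weighted area-fraction penalty are illustrative assumptions; the per-error-type binary masks stand in for the output of a segmentation model such as Mask R-CNN:

```python
import numpy as np

def stitched_image_quality(error_masks, weights=None):
    """Combine per-error-type binary masks into a single no-reference
    quality score in [0, 1], where 1.0 means no detected stitching errors.
    error_masks: array of shape (n_error_types, H, W)."""
    masks = np.asarray(error_masks, dtype=bool)
    if weights is None:
        weights = np.ones(masks.shape[0])   # equal severity per error type
    weights = np.asarray(weights, dtype=float)
    # fraction of pixels covered by each error type
    fractions = masks.reshape(masks.shape[0], -1).mean(axis=1)
    # weighted total coverage, clipped so the score stays in [0, 1]
    penalty = float(np.clip((weights * fractions).sum(), 0.0, 1.0))
    return 1.0 - penalty
```

A mask set with no flagged pixels yields a score of 1.0, while larger or more heavily weighted distorted regions pull the score toward 0; the weights would let severe artifacts (e.g. ghosting) penalize quality more than minor seams.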

