Elevating Clinical Brain and Spine MR Image Quality with Deep Learning Reconstruction

Author(s):  
Lawrence Tanenbaum
2019 ◽  
Vol 2019 (1) ◽  
pp. 360-368
Author(s):  
Mekides Assefa Abebe ◽  
Jon Yngve Hardeberg

Various whiteboard image degradations severely reduce the legibility of pen-stroke content as well as the overall quality of the images. Consequently, researchers have addressed the problem with a range of image enhancement techniques. Most state-of-the-art approaches apply common image processing operations such as background-foreground segmentation, text extraction, contrast and color enhancement, and white balancing. However, such conventional enhancement methods are incapable of recovering severely degraded pen-stroke content and produce artifacts in the presence of complex pen-stroke illustrations. To overcome these problems, the authors propose a deep learning based solution. They contribute a new whiteboard image data set and adapt two deep convolutional neural network architectures for whiteboard image quality enhancement. Their evaluations of the trained models demonstrate superior performance over the conventional methods.
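As an illustration of the conventional baseline the authors compare against, here is a minimal numpy sketch of gray-world white balancing, a standard color-cast correction. This is a generic textbook technique, not the authors' deep model or dataset; the image and parameters below are synthetic.

```python
import numpy as np

def gray_world_white_balance(img):
    """Gray-world white balancing: scale each colour channel so its
    mean matches the overall mean intensity of the image.

    img: float array of shape (H, W, 3) with values in [0, 1].
    """
    channel_means = img.reshape(-1, 3).mean(axis=0)   # per-channel means
    target = channel_means.mean()                     # global mean grey level
    gains = target / np.maximum(channel_means, 1e-8)  # per-channel gains
    return np.clip(img * gains, 0.0, 1.0)

# Synthetic whiteboard photo with a yellowish cast: the blue channel
# is attenuated relative to red and green.
rng = np.random.default_rng(0)
img = rng.uniform(0.6, 0.9, size=(32, 32, 3))
img[..., 2] *= 0.7  # simulate the colour cast
balanced = gray_world_white_balance(img)
```

After balancing, the per-channel means are equalized, removing the cast; as the abstract notes, such global corrections cannot recover pen strokes that are already severely degraded.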


Author(s):  
Luuk J. Oostveen ◽  
Frederick J. A. Meijer ◽  
Frank de Lange ◽  
Ewoud J. Smit ◽  
Sjoert A. Pegge ◽  
...  

Abstract
Objectives: To evaluate image quality and reconstruction times of a commercial deep learning reconstruction algorithm (DLR) compared to hybrid-iterative reconstruction (Hybrid-IR) and model-based iterative reconstruction (MBIR) algorithms for cerebral non-contrast CT (NCCT).
Methods: Cerebral NCCT acquisitions of 50 consecutive patients were reconstructed using DLR, Hybrid-IR and MBIR with a clinical CT system. Image quality, in terms of six subjective characteristics (noise, sharpness, grey-white matter differentiation, artefacts, natural appearance and overall image quality), was scored by five observers. As objective metrics of image quality, the noise magnitude and signal-difference-to-noise ratio (SDNR) of the grey and white matter were calculated. Mean values for the image quality characteristics scored by the observers were estimated using a general linear model to account for multiple readers, and the estimated means for the reconstruction methods were pairwise compared. Calculated measures were compared using paired t tests.
Results: For all image quality characteristics, DLR images were scored significantly higher than MBIR images. Compared to Hybrid-IR, perceived noise and grey-white matter differentiation were better with DLR, while no difference was detected for the other image quality characteristics. Noise magnitude was lower for DLR than for Hybrid-IR and MBIR (5.6, 6.4 and 6.2, respectively) and SDNR was higher (2.4, 1.9 and 2.0, respectively). Reconstruction times were 27 s, 44 s and 176 s for Hybrid-IR, DLR and MBIR, respectively.
Conclusions: With a slight increase in reconstruction time, DLR results in lower noise and improved tissue differentiation compared to Hybrid-IR. Image quality of MBIR is significantly lower than that of DLR, with much longer reconstruction times.
Key Points
• Deep learning reconstruction of cerebral non-contrast CT results in lower noise and improved tissue differentiation compared to hybrid-iterative reconstruction.
• Deep learning reconstruction of cerebral non-contrast CT results in better image quality in all aspects evaluated compared to model-based iterative reconstruction.
• Deep learning reconstruction requires only a slight increase in reconstruction time compared to hybrid-iterative reconstruction, while model-based iterative reconstruction requires considerably longer processing time.
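The SDNR metric used in the study can be sketched numerically. Below is a minimal numpy illustration of one common definition, SDNR = |mean(grey) − mean(white)| / noise SD (the study's exact formula may differ); the tissue means are synthetic, and the noise level reuses the reported DLR noise magnitude of 5.6.

```python
import numpy as np

def sdnr(grey_roi, white_roi, noise_sd=None):
    """Signal-difference-to-noise ratio between two tissue ROIs.

    One common definition (the paper's exact formula may differ):
        SDNR = |mean(grey) - mean(white)| / noise_sd
    where noise_sd defaults to the pooled standard deviation of both ROIs.
    """
    grey = np.asarray(grey_roi, dtype=float)
    white = np.asarray(white_roi, dtype=float)
    if noise_sd is None:
        noise_sd = np.sqrt((grey.var(ddof=1) + white.var(ddof=1)) / 2.0)
    return abs(grey.mean() - white.mean()) / noise_sd

# Synthetic ROIs in Hounsfield units: grey matter around 38 HU, white
# matter around 30 HU, with the noise magnitude reported for DLR (5.6).
rng = np.random.default_rng(1)
gm = rng.normal(38, 5.6, 10000)
wm = rng.normal(30, 5.6, 10000)
print(round(sdnr(gm, wm), 1))  # about 8 / 5.6, i.e. roughly 1.4
```

Lower noise at a fixed tissue contrast directly raises SDNR, which is why DLR's reduced noise magnitude translates into its higher reported SDNR.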


Electronics ◽  
2021 ◽  
Vol 10 (10) ◽  
pp. 1136
Author(s):  
David Augusto Ribeiro ◽  
Juan Casavílca Silva ◽  
Renata Lopes Rosa ◽  
Muhammad Saadi ◽  
Shahid Mumtaz ◽  
...  

Light field (LF) imaging has multi-view properties that enable many applications, including auto-refocusing, depth estimation and 3D reconstruction of images, which are required in particular for intelligent transportation systems (ITSs). However, cameras typically offer limited angular resolution, which becomes a bottleneck in vision applications, and incorporating angular data is challenging due to disparities between LF views. In recent years, different machine learning algorithms have been applied in both image processing and ITS research for different purposes. In this work, a Lightweight Deformable Deep Learning Framework is implemented that addresses the problem of disparity in LF images. To this end, an angular alignment module and a soft activation function are incorporated into the Convolutional Neural Network (CNN). For performance assessment, the proposed solution is compared with recent state-of-the-art methods on different LF datasets, each with specific characteristics. Experimental results demonstrate that the proposed solution achieves better performance than the other methods, and the image quality results outperform state-of-the-art LF image reconstruction methods. Furthermore, the model has lower computational complexity, decreasing execution time.
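The abstract mentions a "soft activation function" inside the CNN without specifying which. As an illustrative (assumed) choice, softplus is a common smooth alternative to ReLU; the sketch below is a numerically stable numpy implementation, not the paper's actual activation.

```python
import numpy as np

def softplus(x, beta=1.0):
    """Softplus, a smooth ('soft') alternative to ReLU:
        softplus(x) = (1 / beta) * log(1 + exp(beta * x))
    Illustrative choice only; the paper's exact activation is unspecified.
    """
    z = beta * np.asarray(x, dtype=float)
    # Stable form: log(1 + exp(z)) = max(z, 0) + log1p(exp(-|z|)),
    # which avoids overflow for large positive z.
    return (np.maximum(z, 0.0) + np.log1p(np.exp(-np.abs(z)))) / beta

x = np.array([-50.0, 0.0, 50.0])
y = softplus(x)  # near 0 for large negative x, near x for large positive x
```

Unlike ReLU, softplus is differentiable everywhere, which can help gradient flow in lightweight networks.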


2020 ◽  
Vol 2020 (1) ◽  
Author(s):  
Guangyi Yang ◽  
Xingyu Ding ◽  
Tian Huang ◽  
Kun Cheng ◽  
Weizheng Jin

Abstract The communications industry has changed remarkably with the development of fifth-generation cellular networks. Images, as an indispensable component of communication, have attracted wide attention, so finding a suitable approach to assess image quality is important. We therefore propose a deep learning model for image quality assessment (IQA) based on an explicit-implicit dual-stream network. We use frequency-domain features of kurtosis based on the wavelet transform to represent explicit features and spatial features extracted by a convolutional neural network (CNN) to represent implicit features. On this basis, we construct an explicit-implicit (EI) parallel deep learning model, the EI-IQA model. The EI-IQA model is based on VGGNet, which extracts the spatial-domain features; by adding the parallel wavelet kurtosis frequency-domain features, the number of network layers of VGGNet is reduced, so the training parameters and sample requirements decline. We verified, by cross-validation on different databases, that the wavelet kurtosis feature fusion method based on deep learning extracts features more completely and generalises better. The method can therefore better simulate the human visual perception system, and its predictions align more closely with subjective human judgements. The source code of the proposed EI-IQA model is available on GitHub at https://github.com/jacob6/EI-IQA.
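The explicit stream's wavelet kurtosis features can be sketched in plain numpy: one level of a 2-D Haar transform produces detail sub-bands, and the excess kurtosis of each sub-band's coefficients is taken as a feature. This is a minimal illustration of the general idea (the paper's wavelet family, decomposition depth and feature layout may differ).

```python
import numpy as np

def haar_level(img):
    """One level of a 2-D Haar wavelet transform.
    Returns approximation (LL) and detail (LH, HL, HH) sub-bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row details
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def kurtosis(x):
    """Excess kurtosis of the flattened coefficients (0 for a Gaussian)."""
    x = np.asarray(x, dtype=float).ravel()
    m = x.mean()
    s2 = ((x - m) ** 2).mean()
    return ((x - m) ** 4).mean() / (s2 ** 2) - 3.0

rng = np.random.default_rng(2)
img = rng.normal(size=(64, 64))            # stand-in for a test image
ll, lh, hl, hh = haar_level(img)
features = [kurtosis(b) for b in (lh, hl, hh)]  # explicit frequency features
```

Distortions such as blur or compression change the coefficient distributions of the detail sub-bands, which the kurtosis values capture compactly.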


2021 ◽  
pp. 1-14
Author(s):  
Waqas Yousaf ◽  
Arif Umar ◽  
Syed Hamad Shirazi ◽  
Zakir Khan ◽  
Imran Razzak ◽  
...  

Automatic logo detection and recognition is growing significantly due to the increasing requirements of intelligent document analysis and retrieval. The main challenge in logo detection is intra-class variation, which is generated by variation in image quality and degradation. Misclassification also occurs when a tiny logo appears in a large image alongside other objects. To address this, Patch-CNN is proposed for logo recognition, which uses small patches of logos for training to mitigate misclassification. Classification is accomplished by dividing the logo images into small patches, and a threshold is applied to discard non-logo areas according to the ground truth. The AlexNet and ResNet architectures are also used for logo detection. We propose a segmentation-free architecture for logo detection and recognition. In the literature, region proposal generation has been used for logo detection, but such techniques struggle with tiny logos. The proposed CNN is specifically designed to extract detailed features from logo patches. The technique attains an accuracy of 0.9901 with acceptable training and testing loss on the dataset used in this work.
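The patch-and-threshold preprocessing step can be sketched as follows. This is a minimal numpy illustration under assumed parameters (8×8 patches, a 0.5 coverage threshold), not the paper's exact configuration.

```python
import numpy as np

def extract_logo_patches(image, gt_mask, patch=8, min_coverage=0.5):
    """Divide an image into non-overlapping patches and keep only those
    whose ground-truth logo coverage exceeds a threshold.

    image:        (H, W) array
    gt_mask:      (H, W) binary array, 1 = logo pixel
    min_coverage: fraction of logo pixels a patch needs to be kept
                  (illustrative value; the paper's threshold may differ)
    """
    h, w = image.shape
    kept = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            mask_patch = gt_mask[y:y + patch, x:x + patch]
            if mask_patch.mean() >= min_coverage:  # drop "no logo" areas
                kept.append(image[y:y + patch, x:x + patch])
    return kept

# Toy example: a 32x32 image whose top-left 16x16 region contains a logo.
img = np.arange(32 * 32, dtype=float).reshape(32, 32)
mask = np.zeros((32, 32))
mask[:16, :16] = 1
patches = extract_logo_patches(img, mask)
print(len(patches))  # 4 of the 16 patches cover the logo region
```

Training only on logo-bearing patches keeps a tiny logo from being drowned out by the surrounding background, which is the misclassification problem the abstract describes.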

