Estimating subjective evaluation of low-contrast resolution using convolutional neural networks

Author(s):  
Yujiro Doi ◽  
Atsushi Teramoto ◽  
Ayumi Yamada ◽  
Masanao Kobayashi ◽  
Kuniaki Saito ◽  
...  
Sensors ◽  
2019 ◽  
Vol 19 (6) ◽  
pp. 1265 ◽  
Author(s):  
Haya Alaskar ◽  
Abir Hussain ◽  
Nourah Al-Aseem ◽  
Panos Liatsis ◽  
Dhiya Al-Jumeily

Detection of abnormalities in wireless capsule endoscopy (WCE) images is a challenging task. Typically, these images suffer from low contrast, complex backgrounds, and variations in lesion shape and color, all of which affect the accuracy of their segmentation and subsequent classification. This research proposes an automated system for the detection and classification of ulcers in WCE images, based on state-of-the-art deep learning networks. Deep learning techniques, and in particular convolutional neural networks (CNNs), have recently become popular in the analysis and recognition of medical images. The medical image datasets used in this study were obtained from WCE video frames. In this work, two milestone CNN architectures, namely AlexNet and GoogLeNet, are extensively evaluated on classifying objects as ulcer or non-ulcer. Furthermore, we examine and analyze the images identified as containing ulcer objects to evaluate the efficiency of the utilized CNNs. Extensive experiments show that CNNs deliver superior performance, surpassing traditional machine learning methods by large margins, which supports their effectiveness as automated diagnostic tools.
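The abstract's ulcer/non-ulcer pipeline rests on the standard CNN building blocks: convolution, a nonlinearity, pooling, and a final sigmoid score. A minimal numpy sketch of that chain (the kernel, weight, and bias here are illustrative placeholders, not the trained AlexNet/GoogLeNet parameters the paper evaluates):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, the basic CNN building block."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def classify_patch(patch, kernel, weight, bias):
    """Toy ulcer/non-ulcer score: conv -> ReLU -> global average pool -> sigmoid."""
    fmap = np.maximum(conv2d(patch, kernel), 0.0)  # ReLU activation
    feature = fmap.mean()                          # global average pooling
    logit = weight * feature + bias                # final linear layer
    return 1.0 / (1.0 + np.exp(-logit))            # probability of "ulcer"
```

Real networks such as AlexNet stack many such layers with learned kernels; this sketch only makes the data flow concrete.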


2021 ◽  
Author(s):  
Wei Jiang ◽  
Kai Zhang ◽  
Wu Zhao ◽  
Xin Guo

Abstract Emotional needs for products have increased significantly with recent improvements in living standards. Attribute evaluation forms the core of Kansei engineering for emotion-oriented products and is, in practice, highly subjective. Essentially, attribute evaluation is a fuzzy classification task whose quantitative results vary slightly with the time of measurement and the objects measured, making it difficult to describe accurately with standard mathematical models. In this paper, we propose a novel deep-learning-assisted fuzzy attribute-evaluation (DLFAE) method that generates quantitative evaluation results. In comparison to existing methods, the proposed method combines subjective evaluation with convolutional neural networks, facilitating the generation of quantitative evaluation results. Additionally, this strategy transfers better to different situations, increasing its versatility and applicability. This, in turn, reduces the computational burden of evaluation and improves operational efficiency.


2019 ◽  
Author(s):  
Md. Mohaimenul Islam ◽  
Tahmina Nasrin Poly ◽  
Yu-Chuan (Jack) Li

Abstract A computer-aided detection (CAD) system is a realistic option for physicians screening fundus images. Automated segmentation of retinal vessels in fundus images is an important step in identifying retinal disease regions. However, accurately identifying these regions remains challenging due to the varied distribution of blood vessels in noisy, low-contrast fundus images. Healthcare has been changing significantly with the emergence of machine learning (ML), deep learning (DL), and artificial intelligence (AI) in recent years. Retinal vessel detection is one such area of application of deep learning, improving the accuracy of detection and segmentation and the quality of patient care. Recently, convolutional neural networks (CNNs) have been applied to the detection of retinal vessels in fundus images and have demonstrated promising results. The accuracy of the CNN models ranged from 0.91 to 0.95, and the area under the receiver operating curve ranged from 0.09 to 0.98. Therefore, CNNs may play a crucial role in determining therapeutic methods and detecting retinal vessels accurately on an individual basis. In this survey, we describe the use of CNNs in fundus imaging, focusing on CNN techniques, clinical applications for retinal vessel detection, and future prospects.
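The survey reports CNN performance as area under the receiver operating curve. For pixel-wise vessel detection this metric is the probability that a randomly chosen vessel pixel receives a higher score than a randomly chosen background pixel, which can be computed directly from the Mann-Whitney U statistic (a generic sketch, not the evaluation code of any surveyed paper):

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a randomly
    chosen positive example is scored above a randomly chosen negative one."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()   # positive ranked higher
    ties = (pos[:, None] == neg[None, :]).sum()     # ties count as half
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

An AUC of 1.0 means perfect separation of vessel and background pixels; 0.5 is chance level.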


Sensors ◽  
2019 ◽  
Vol 19 (6) ◽  
pp. 1343 ◽  
Author(s):  
Akmaljon Palvanov ◽  
Young Cho

Visibility is a complex phenomenon influenced by emissions and air pollutants, or by factors including sunlight, humidity, temperature, and time, which decrease the clarity of what is visible through the atmosphere. This paper provides a detailed overview of state-of-the-art contributions to visibility estimation under various foggy weather conditions. We propose VisNet, a new approach based on deep integrated convolutional neural networks for estimating visibility distance from camera imagery. The implemented network uses three streams of deep integrated convolutional neural networks connected in parallel. In addition, we have collected the largest dataset for this study, comprising three million outdoor images with exact visibility values. To evaluate the model's performance fairly and objectively, the model is trained on three image datasets with different visibility ranges, each with a different number of classes. Moreover, our proposed model, VisNet, is evaluated under dissimilar fog-density scenarios using a diverse set of images. Before being fed to the network, each input image is filtered in the frequency domain to remove low-level features, and a spectral filter is applied to each input to extract low-contrast regions. Compared to previous methods, our approach achieves the highest classification performance on three different datasets. Furthermore, VisNet considerably outperforms not only classical methods but also state-of-the-art visibility-estimation models.
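The preprocessing step described above, filtering each input image in the frequency domain, can be illustrated with a simple FFT high-pass filter. The circular cutoff radius here is an assumed illustrative parameter, not the spectral filter VisNet actually uses:

```python
import numpy as np

def highpass_filter(image, cutoff=8):
    """Suppress low spatial frequencies of a grayscale image via the FFT.
    `cutoff` (in frequency bins) is an illustrative choice, not VisNet's."""
    f = np.fft.fftshift(np.fft.fft2(image))        # DC component at center
    h, w = image.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    # keep only frequencies outside the cutoff radius
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 > cutoff ** 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))
```

Removing the low-frequency band discards smooth illumination (the "low-level features" of the abstract) while preserving edges and fine structure that carry the visibility cues.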


2020 ◽  
Vol 2020 (10) ◽  
pp. 28-1-28-7 ◽  
Author(s):  
Kazuki Endo ◽  
Masayuki Tanaka ◽  
Masatoshi Okutomi

Classification of degraded images is very important in practice because images are usually degraded by compression, noise, blurring, etc. Nevertheless, most research on image classification focuses only on clean images without any degradation. Some papers have already proposed deep convolutional neural networks composed of an image-restoration network and a classification network to classify degraded images. This paper proposes an alternative approach in which we use a degraded image and an additional degradation parameter for classification. The proposed classification network has two inputs: the degraded image and the degradation parameter. An estimation network for the degradation parameters is also incorporated when the degradation parameters of the degraded images are unknown. The experimental results show that the proposed method outperforms a straightforward approach in which the classification network is trained with degraded images only.
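The two-input design above can be sketched as fusing pooled image features with the scalar degradation parameter before the final linear layer. The quadrant-mean "feature extractor" and the weights here are illustrative stand-ins, not the paper's trained network:

```python
import numpy as np

def two_input_logit(image, degradation_param, weights, bias):
    """Toy two-input classifier head: pooled image features are concatenated
    with the scalar degradation parameter before a linear layer.
    `weights` has 5 entries: 4 for the image features, 1 for the parameter."""
    h, w = image.shape
    # crude stand-in feature extractor: mean intensity of four quadrants
    feats = np.array([
        image[:h // 2, :w // 2].mean(), image[:h // 2, w // 2:].mean(),
        image[h // 2:, :w // 2].mean(), image[h // 2:, w // 2:].mean(),
    ])
    x = np.concatenate([feats, [degradation_param]])  # fuse both inputs
    return float(weights @ x + bias)
```

The point of the fusion is that the same pixel statistics can then be interpreted differently depending on how strongly the image is known to be degraded.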

