Deep Learning Approach for Automatic Microaneurysms Detection

Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 542
Author(s):  
Muhammad Mateen ◽  
Tauqeer Safdar Malik ◽  
Shaukat Hayat ◽  
Musab Hameed ◽  
Song Sun ◽  
...  

In diabetic retinopathy (DR), microaneurysms (MAs) are considered the early signs that may lead towards complete vision loss. These MAs are almost circular in shape, darkish in color, and tiny in size, which means they may be missed during manual analysis by ophthalmologists. Accurate early detection of MAs therefore helps to treat DR before irreversible blindness occurs. In the proposed method, early detection of MAs is performed using a hybrid feature embedding approach based on pre-trained CNN models, namely VGG-19 and Inception-v3. The performance of the proposed approach was evaluated using the publicly available datasets "E-Ophtha" and "DIARETDB1", achieving 96% and 94% classification accuracy, respectively. Furthermore, the developed approach outperformed state-of-the-art approaches in terms of sensitivity and specificity for microaneurysm detection.
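A minimal sketch of the hybrid feature-embedding idea described above, assuming Keras with ImageNet-pretrained backbones; the input size, fusion layer, and classifier head are illustrative placeholders rather than the authors' exact configuration.

# Hybrid feature embedding from two pre-trained CNNs (sketch).
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG19, InceptionV3

inp = layers.Input(shape=(299, 299, 3))           # assumed patch size
vgg = VGG19(include_top=False, weights="imagenet", pooling="avg")
inc = InceptionV3(include_top=False, weights="imagenet", pooling="avg")
vgg.trainable = False                             # use both networks as fixed feature extractors
inc.trainable = False

f_vgg = vgg(inp)                                  # 512-d embedding
f_inc = inc(inp)                                  # 2048-d embedding
fused = layers.Concatenate()([f_vgg, f_inc])      # hybrid feature embedding
x = layers.Dense(256, activation="relu")(fused)
out = layers.Dense(1, activation="sigmoid")(x)    # MA vs. non-MA patch

model = Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])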

Electronics ◽  
2020 ◽  
Vol 9 (2) ◽  
pp. 274 ◽  
Author(s):  
Thippa Reddy Gadekallu ◽  
Neelu Khare ◽  
Sweta Bhattacharya ◽  
Saurabh Singh ◽  
Praveen Kumar Reddy Maddikunta ◽  
...  

Diabetic retinopathy is a major cause of vision loss and blindness affecting millions of people across the globe. Although there are established screening methods, such as fluorescein angiography and optical coherence tomography, in the majority of cases patients remain unaware of the disease and fail to undergo such tests at an appropriate time. Early detection plays an extremely important role in preventing the vision loss that results when diabetes mellitus remains untreated for a prolonged period. Various machine learning and deep learning approaches have been applied to diabetic retinopathy datasets for classification and prediction of the disease, but the majority of them have neglected data pre-processing and dimensionality reduction, leading to biased results. The dataset used in the present study is a diabetic retinopathy dataset collected from the UCI machine learning repository. First, the raw dataset is normalized using the StandardScaler technique, and then Principal Component Analysis (PCA) is used to extract the most significant features in the dataset. A firefly algorithm is then applied for dimensionality reduction. The reduced dataset is fed into a deep neural network model for classification. The results generated by the model are evaluated against prevalent machine learning models, and they justify the superiority of the proposed model in terms of accuracy, precision, recall, sensitivity, and specificity.
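A condensed sketch of the preprocessing-plus-DNN pipeline, assuming scikit-learn and Keras; the firefly-based dimensionality reduction is abbreviated to a placeholder step, the data are random stand-ins, and the layer sizes are illustrative rather than the reported configuration.

# Normalise, reduce with PCA, then classify with a small deep neural network (sketch).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from tensorflow.keras import layers, models

X, y = np.random.rand(1151, 19), np.random.randint(0, 2, 1151)   # stand-in for the UCI DR dataset

X = StandardScaler().fit_transform(X)          # zero mean, unit variance
X = PCA(n_components=10).fit_transform(X)      # keep the most significant components

# Placeholder for firefly-based reduction: in the paper a firefly search selects the
# final feature subset; here we simply keep every PCA component.
selected = np.arange(X.shape[1])
X = X[:, selected]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

dnn = models.Sequential([
    layers.Input(shape=(X.shape[1],)),
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
dnn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
dnn.fit(X_tr, y_tr, epochs=20, batch_size=32, validation_data=(X_te, y_te), verbose=0)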


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Paramita Ray ◽  
Amlan Chakrabarti

Social networks have changed communication patterns significantly. Information available from different social networking sites can be utilized to analyze users' opinions. Hence, organizations would benefit from a platform that can analyze public sentiment about their products and services in social media to add value to their business processes. Over the last few years, deep learning has become very popular in areas such as image classification and speech recognition. However, research on the use of deep learning methods in sentiment analysis is limited. It has been observed that in some cases the existing machine learning methods for sentiment analysis fail to extract some implicit aspects and might not be very useful. Therefore, we propose a deep learning approach for aspect extraction from text and analysis of users' sentiment corresponding to each aspect. A seven-layer deep convolutional neural network (CNN) is used to tag each aspect in the opinionated sentences. We combine the deep learning approach with a set of rule-based methods to improve the performance of both aspect extraction and sentiment scoring. We also improve the existing rule-based approach to aspect extraction by categorizing aspects into a predefined set of categories using a clustering method, and we compare our proposed method with some of the state-of-the-art methods. The overall accuracy of our proposed method is 0.87, while those of the other state-of-the-art methods, a modified rule-based method and a CNN, are 0.75 and 0.80, respectively. The overall accuracy of our proposed method thus shows an improvement of 7–12% over the state-of-the-art methods.
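A hedged sketch of a convolutional sequence tagger for aspect extraction, assuming Keras; the vocabulary size, sequence length, tag set, and layer counts are illustrative and do not reproduce the authors' seven-layer design or the rule-based components.

# Convolutional sequence tagger for aspect extraction (sketch).
from tensorflow.keras import layers, models

VOCAB, MAXLEN, EMB = 20000, 80, 100               # illustrative sizes
N_TAGS = 3                                        # e.g. B-aspect / I-aspect / O

model = models.Sequential([
    layers.Input(shape=(MAXLEN,)),
    layers.Embedding(VOCAB, EMB),                 # word embeddings
    layers.Conv1D(128, 5, padding="same", activation="relu"),
    layers.Conv1D(128, 5, padding="same", activation="relu"),
    layers.Conv1D(64, 3, padding="same", activation="relu"),
    layers.Dropout(0.3),
    layers.TimeDistributed(layers.Dense(N_TAGS, activation="softmax")),  # per-token tag
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()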


Information ◽  
2020 ◽  
Vol 11 (2) ◽  
pp. 80 ◽  
Author(s):  
Rania M. Ghoniem

Current research on computer-aided diagnosis (CAD) of liver cancer is based on traditional feature engineering methods, which have several drawbacks, including redundant features and high computational cost. Recent deep learning models overcome these problems by implicitly capturing intricate structures from large-scale medical image data. However, they are still affected by network hyperparameters and topology. Hence, the state of the art in this area can be further optimized by integrating bio-inspired concepts into deep learning models. This work proposes a novel bio-inspired deep learning approach for optimizing the predictive results of liver cancer. It contributes to the literature in two ways. Firstly, a novel hybrid segmentation algorithm, SegNet-UNet-ABC, is proposed to extract liver lesions from computed tomography (CT) images using the SegNet network, the UNet network, and artificial bee colony optimization (ABC). This algorithm uses SegNet to separate the liver from the abdominal CT scan, and UNet is then used to extract lesions from the liver. In parallel, the ABC algorithm is hybridized with each network to tune its hyperparameters, as they strongly affect segmentation performance. Secondly, a hybrid of the LeNet-5 model and the ABC algorithm, LeNet-5/ABC, is proposed as a feature extractor and classifier of liver lesions. The LeNet-5/ABC algorithm uses ABC to select the optimal topology for constructing the LeNet-5 network, as the network structure affects learning time and classification accuracy. To assess the performance of the two proposed algorithms, comparisons were made with state-of-the-art algorithms for liver lesion segmentation and classification. The results reveal that SegNet-UNet-ABC is superior to the other compared algorithms in terms of Jaccard index, Dice index, correlation coefficient, and convergence time. Moreover, the LeNet-5/ABC algorithm outperforms the other algorithms in terms of specificity, F1-score, accuracy, and computational time.
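A minimal sketch of a LeNet-5-style lesion classifier whose topology an ABC search could tune, assuming Keras; the configuration dictionary stands in for one candidate solution in the bee-colony search, and all values shown are illustrative.

# LeNet-5-style lesion classifier built from a candidate configuration (sketch).
from tensorflow.keras import layers, models

def build_lenet5(cfg, input_shape=(64, 64, 1), n_classes=2):
    """Each cfg would be one candidate topology evaluated by the ABC search."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(cfg["c1"], 5, activation="tanh", padding="same"),
        layers.AveragePooling2D(),
        layers.Conv2D(cfg["c2"], 5, activation="tanh"),
        layers.AveragePooling2D(),
        layers.Flatten(),
        layers.Dense(cfg["d1"], activation="tanh"),
        layers.Dense(cfg["d2"], activation="tanh"),
        layers.Dense(n_classes, activation="softmax"),
    ])

candidate = {"c1": 6, "c2": 16, "d1": 120, "d2": 84}   # classic LeNet-5 values as one candidate
model = build_lenet5(candidate)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# An ABC loop would repeatedly perturb `candidate`, retrain, and keep configurations
# that improve validation accuracy (the "nectar amount" of that food source).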


Electronics ◽  
2021 ◽  
Vol 11 (1) ◽  
pp. 23
Author(s):  
Radifa Paradisa ◽  
Alhadi Bustamam ◽  
Wibowo Mangunwardoyo ◽  
Andi Victor ◽  
Anggun Yudantha ◽  
...  

A fundus image captures the back of the eye (retina) and plays an important role in the detection of diseases, including diabetic retinopathy (DR). DR is the most common complication in diabetics and remains an important cause of visual impairment, especially in the young and economically active age group. In patients with DR, early diagnosis can effectively help prevent the risk of vision loss. DR screening is performed by ophthalmologists by analysing the lesions in fundus images. However, the increasing prevalence of DR is not matched by the availability of ophthalmologists who can read fundus images, which can delay the prevention and management of DR. Therefore, there is a need for an automated diagnostic system, as it can help ophthalmologists increase the efficiency of the diagnostic process. This paper provides a deep learning approach with a concatenated model for fundus image classification into three classes: no DR, non-proliferative diabetic retinopathy (NPDR), and proliferative diabetic retinopathy (PDR). The architectures used are DenseNet121 and Inception-ResNetV2. The feature extraction results from the two models are combined and classified using a multilayer perceptron (MLP). The proposed method gives an improvement over a single model, achieving 91% accuracy and an average precision, recall, and F1-score of 90%. This experiment demonstrates that our proposed deep learning approach is effective for automatic DR classification using fundus image data.
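A brief sketch of the concatenated DenseNet121/Inception-ResNetV2 feature extractor with an MLP head, assuming Keras; the input size, dense-layer width, and dropout rate are assumptions rather than the paper's settings.

# Concatenated DenseNet121 / Inception-ResNetV2 features with an MLP head (sketch).
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import DenseNet121, InceptionResNetV2

inp = layers.Input(shape=(299, 299, 3))                       # assumed fundus-image size
dense = DenseNet121(include_top=False, weights="imagenet", pooling="avg")
incres = InceptionResNetV2(include_top=False, weights="imagenet", pooling="avg")

features = layers.Concatenate()([dense(inp), incres(inp)])    # concatenated feature embedding
x = layers.Dense(512, activation="relu")(features)            # MLP classifier head
x = layers.Dropout(0.4)(x)
out = layers.Dense(3, activation="softmax")(x)                # no DR / NPDR / PDR

model = Model(inp, out)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])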


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 1962
Author(s):  
Enrico Buratto ◽  
Adriano Simonetto ◽  
Gianluca Agresti ◽  
Henrik Schäfer ◽  
Pietro Zanuttigh

In this work, we propose a novel approach for correcting multi-path interference (MPI) in Time-of-Flight (ToF) cameras by estimating the direct and global components of the incoming light. MPI is an error source linked to the multiple reflections of light inside a scene; each sensor pixel receives information coming from different light paths, which generally leads to an overestimation of the depth. We introduce a novel deep learning approach that estimates the structure of the time-dependent scene impulse response and from it recovers a depth image with a reduced amount of MPI. The model consists of two main blocks: a predictive model that learns a compact encoded representation of the backscattering vector from the noisy input data, and a fixed backscattering model that translates the encoded representation into the high-dimensional light response. Experimental results on real data show the effectiveness of the proposed approach, which reaches state-of-the-art performance.
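A toy sketch of the two-block structure (learned encoder plus fixed backscattering model), assuming TensorFlow; the parameterisation of the backscattering vector as a direct peak plus an exponentially decaying global tail, and all dimensions, are our own illustrative assumptions, not the paper's model.

# Learned encoder + fixed backscattering decoder for ToF MPI correction (toy sketch).
import tensorflow as tf
from tensorflow.keras import layers, Model

T_BINS = 128          # temporal resolution of the backscattering vector (assumed)
N_FREQ = 3            # number of ToF modulation frequencies (assumed), amplitude + phase each

# Predictive block: maps per-pixel multi-frequency measurements to a compact code.
inp = layers.Input(shape=(2 * N_FREQ,))
h = layers.Dense(64, activation="relu")(inp)
h = layers.Dense(64, activation="relu")(h)
code = layers.Dense(3, activation="softplus")(h)   # [direct delay, direct amplitude, global decay]

class Backscatter(layers.Layer):
    """Fixed (non-trainable) block expanding the code into a T_BINS-long light response."""
    def call(self, code):
        t = tf.range(T_BINS, dtype=tf.float32)[None, :]
        delay, amp, decay = code[:, 0:1], code[:, 1:2], code[:, 2:3]
        direct = amp * tf.exp(-0.5 * (t - delay) ** 2)         # narrow direct peak
        global_ = 0.1 * tf.exp(-t / (decay + 1e-3))            # slowly decaying global light
        return direct + global_

response = Backscatter()(code)
model = Model(inp, response)
model.compile(optimizer="adam", loss="mse")        # trained against simulated impulse responses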


Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4486
Author(s):  
Niall O’Mahony ◽  
Sean Campbell ◽  
Lenka Krpalkova ◽  
Anderson Carvalho ◽  
Joseph Walsh ◽  
...  

Fine-grained change detection in sensor data is very challenging for artificial intelligence though it is critically important in practice. It is the process of identifying differences in the state of an object or phenomenon where the differences are class-specific and are difficult to generalise. As a result, many recent technologies that leverage big data and deep learning struggle with this task. This review focuses on the state-of-the-art methods, applications, and challenges of representation learning for fine-grained change detection. Our research focuses on methods of harnessing the latent metric space of representation learning techniques as an interim output for hybrid human-machine intelligence. We review methods for transforming and projecting embedding space such that significant changes can be communicated more effectively and a more comprehensive interpretation of underlying relationships in sensor data is facilitated. We conduct this research in our work towards developing a method for aligning the axes of latent embedding space with meaningful real-world metrics so that the reasoning behind the detection of change in relation to past observations may be revealed and adjusted. This is an important topic in many fields concerned with producing more meaningful and explainable outputs from deep learning and also for providing means for knowledge injection and model calibration in order to maintain user confidence.
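An illustrative sketch of one way to align a latent axis with a real-world metric, assuming scikit-learn; the linear probe and synthetic data are a simplification of the ideas discussed in the review, not a method taken from it.

# Align one latent axis with a known real-world metric via a linear probe (illustrative sketch).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
Z = rng.normal(size=(500, 32))                   # latent embeddings from some representation model
metric = Z @ rng.normal(size=32) + 0.1 * rng.normal(size=500)   # a measurable quantity (e.g. wear)

probe = LinearRegression().fit(Z, metric)        # direction in latent space tied to the metric
axis = probe.coef_ / np.linalg.norm(probe.coef_)

# Projecting embeddings onto this axis gives an interpretable coordinate, so a change
# between two observations can be reported in units of the real-world metric.
delta = (Z[1] - Z[0]) @ axis
print(f"change along metric-aligned axis: {delta:.3f}")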


2021 ◽  
Vol 54 (1) ◽  
pp. 1-39
Author(s):  
Zara Nasar ◽  
Syed Waqar Jaffry ◽  
Muhammad Kamran Malik

With the advent of Web 2.0, many online platforms exist that result in massive textual-data production. With ever-increasing textual data at hand, it is of immense importance to extract information nuggets from these data. One approach towards effectively harnessing this unstructured textual data is its transformation into structured text. Hence, this study aims to present an overview of approaches that can be applied to extract key insights from textual data in a structured way. To this end, Named Entity Recognition and Relation Extraction are the primary focus of this review. The former deals with the identification of named entities, while the latter deals with the problem of extracting relations between sets of entities. This study covers early approaches as well as the developments made to date using machine learning models. The survey findings conclude that deep-learning-based hybrid and joint models currently define the state of the art. It is also observed that annotated benchmark datasets for various textual-data generators, such as Twitter and other social forums, are not available; this scarcity of datasets has resulted in relatively little progress in these domains. Additionally, the majority of state-of-the-art techniques are offline and computationally expensive. Finally, with the increasing focus on deep-learning frameworks, there is a need to understand and explain the underlying processes in deep architectures.
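As a concrete illustration of the named-entity-recognition task the survey covers, a minimal example using an off-the-shelf spaCy pipeline (it assumes the en_core_web_sm model is installed); relation extraction would then operate over pairs of the recognised entities.

# Minimal named entity recognition example with spaCy (illustration of the task surveyed).
import spacy

nlp = spacy.load("en_core_web_sm")               # small off-the-shelf English pipeline
doc = nlp("Twitter was founded by Jack Dorsey in San Francisco in 2006.")

for ent in doc.ents:
    print(ent.text, ent.label_)                  # e.g. Twitter ORG, Jack Dorsey PERSON, ...

# A relation-extraction step would then classify links between entity pairs,
# e.g. (Jack Dorsey, founder_of, Twitter).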


Author(s):  
Mohammad Shorfuzzaman ◽  
M. Shamim Hossain ◽  
Abdulmotaleb El Saddik

Diabetic retinopathy (DR) is one of the most common causes of vision loss in people who have had diabetes for a prolonged period. Convolutional neural networks (CNNs) have become increasingly popular for computer-aided DR diagnosis using retinal fundus images. While these CNNs are highly reliable, their lack of sufficient explainability prevents them from being widely used in medical practice. In this article, we propose a novel explainable deep learning ensemble model in which weights from different models are fused into a single model to extract salient features from the various retinal lesions found in fundus images. The extracted features are then fed to a custom classifier for the final diagnosis of DR severity level. The model is trained on the APTOS dataset, which contains retinal fundus images of various DR grades, using a cyclical learning rate strategy with an automatic learning rate finder for decaying the learning rate to improve model accuracy. We develop an explainability approach by leveraging gradient-weighted class activation mapping (Grad-CAM) and Shapley additive explanations (SHAP) to highlight the areas of fundus images that are most indicative of different DR stages. This allows ophthalmologists to view our model's decisions in a way that they can understand. Evaluation results using three different datasets (APTOS, MESSIDOR, IDRiD) show the effectiveness of our model, achieving superior classification rates with a high degree of precision (0.970), sensitivity (0.980), and AUC (0.978). We believe that the proposed model, which jointly offers state-of-the-art diagnostic performance and explainability, will address the black-box nature of deep CNN models in the robust detection of DR grading.
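A compact sketch of the gradient-weighted class activation mapping (Grad-CAM) side of such an explainability approach, assuming Keras; the backbone and layer name are placeholders for the trained ensemble, and the SHAP component is omitted.

# Gradient-weighted class activation map (Grad-CAM) for a CNN classifier (sketch).
import numpy as np
import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet")                              # placeholder for the trained DR ensemble
grad_model = Model(base.input, [base.get_layer("block5_conv3").output, base.output])

def grad_cam(image, class_idx):
    """Return a heatmap of the regions most indicative of the chosen class."""
    with tf.GradientTape() as tape:
        conv_maps, preds = grad_model(image[None, ...])
        score = preds[:, class_idx]
    grads = tape.gradient(score, conv_maps)                   # d(score) / d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))              # global-average-pool the gradients
    cam = tf.nn.relu(tf.reduce_sum(conv_maps * weights[:, None, None, :], axis=-1))
    return (cam / (tf.reduce_max(cam) + 1e-8))[0].numpy()     # normalised heatmap

heatmap = grad_cam(np.zeros((224, 224, 3), dtype="float32"), class_idx=1)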

