Early Detection of Diabetic Retinopathy by Using Deep Learning Neural Network

2018 ◽  
Vol 7 (4.11) ◽  
pp. 198 ◽  
Author(s):  
Mohamad Hazim Johari ◽  
Hasliza Abu Hassan ◽  
Ahmad Ihsan Mohd Yassin ◽  
Nooritawati Md Tahir ◽  
Azlee Zabidi ◽  
...  

This project presents a method to detect diabetic retinopathy in fundus images using a deep learning neural network. The AlexNet Convolutional Neural Network (CNN) was used to simplify the neural learning process. The data set was retrieved from the MESSIDOR database and contains 1200 fundus images. The images were filtered according to the project's needs: after filtering, 580 .tif images remained, divided into two groups, exudate images and normal images. For training and testing, the 580 mixed exudate and normal fundus images were split into a training set and a testing set, and the results were summarized in a confusion matrix. The results show that the CNN achieved accuracies of 99.3% on the training set and 88.3% on the testing set.
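The reported accuracies can be read directly off a two-class confusion matrix as the fraction of correctly classified images. A minimal sketch (the counts below are illustrative, chosen only so the total matches the 580-image test scenario, not taken from the paper):

```python
def accuracy_from_confusion(matrix):
    """Accuracy = trace / total for a square confusion matrix
    whose rows are actual classes and columns are predictions."""
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    total = sum(sum(row) for row in matrix)
    return correct / total

# Hypothetical 2x2 matrix: rows = actual (exudates, normal), cols = predicted.
confusion = [[256, 34],
             [34, 256]]
print(accuracy_from_confusion(confusion))  # 512/580 ≈ 0.883
```

The same function covers the training-set accuracy; only the counts change.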

2019 ◽  
Vol 1 ◽  
pp. 1-1
Author(s):  
Tee-Ann Teo

Abstract. Deep Learning is a kind of Machine Learning technology that uses a deep neural network to learn a promising model from a large training data set. The Convolutional Neural Network (CNN) has been successfully applied to image segmentation and classification with highly accurate results. A CNN applies multiple kernels (also called filters) to extract image features via convolution, and it can determine multiscale features through multiple layers of convolution and pooling. The variety of the training data plays an important role in obtaining a reliable CNN model. Benchmark training data for road mark extraction, such as the KITTI Vision Benchmark Suite, are mainly focused on close-range imagery, because a close-range image is easier to obtain than an airborne image. This study aims to transfer road mark training data from a mobile lidar system to aerial orthoimages in Fully Convolutional Networks (FCN). Transferring the training data from a ground-based system to an airborne system may reduce the effort of producing a large training data set.

This study uses FCN technology and aerial orthoimages to localize road marks in road regions. The road regions are first extracted from a 2-D large-scale vector map. The input aerial orthoimage has 10 cm spatial resolution, and the non-road regions are masked out before road mark localization. The training data are road mark polygons originally digitized from ground-based mobile lidar and prepared for road mark extraction with a mobile mapping system. This study reuses these training data for road mark extraction from aerial orthoimages. The digitized road marks are transformed to road polygons based on mapping coordinates. Because the detail of ground-based lidar is much better than that of the airborne system, a parking lot partially occluded in the aerial orthoimage can still be obtained from the ground-based system. The labels (also called annotations) for the FCN comprise road region, non-road region, and road mark. The size of a training batch is 500 pixels by 500 pixels (50 m by 50 m on the ground), and 75 batches in total are used for training. After the FCN training stage, an independent aerial orthoimage (Figure 1a) is used to predict the road marks. The FCN results provide initial regions for road marks (Figure 1b). Road marks usually show higher reflectance than road asphalt, so this study uses that characteristic to refine the road marks (Figure 1c) with a binary classification inside each initial road mark region.

Comparing the automatically extracted road marks (Figure 1c) with manually digitized road marks (Figure 1d) shows that most road marks can be extracted using the training set from the ground-based system. This study also selected an area of 600 m × 200 m for quantitative analysis. Of the 371 reference road marks, 332 were extracted by the proposed scheme, a completeness of 89%. The preliminary experiment demonstrated that most road marks can be successfully extracted by the proposed scheme; therefore, training data from a ground-based mapping system can be utilized with airborne orthoimages of similar spatial resolution.
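The reflectance-based refinement step amounts to a binary classification of pixels inside each initial FCN region. A minimal NumPy sketch, assuming a grayscale orthoimage and a boolean mask for one initial road-mark region; the brighter-than-region-mean rule here is a deliberate simplification, not the authors' exact classifier:

```python
import numpy as np

def refine_road_marks(gray, init_mask):
    """Keep only pixels inside the initial region that are brighter than
    the region's mean intensity (road marks reflect more than asphalt)."""
    refined = np.zeros_like(init_mask, dtype=bool)
    if init_mask.any():
        threshold = gray[init_mask].mean()
        refined = init_mask & (gray > threshold)
    return refined

# Toy example: a bright mark (200) on dark asphalt (50) inside the region.
gray = np.array([[50.0, 200.0], [50.0, 50.0]])
mask = np.array([[True, True], [True, False]])
print(refine_road_marks(gray, mask))  # only the bright pixel survives
```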


Author(s):  
Aavani B

Abstract: Diabetic retinopathy (DR) is the leading cause of blindness in diabetic patients, and screening with fundus images is the most effective way to detect it. Left untreated, DR progresses to permanent loss of vision. At present, diabetic retinopathy is still diagnosed manually by an ophthalmologist, which is a time-consuming process, so computer-aided, fully automatic diagnosis of DR plays an important role today. A data set containing fundus images at different severity scales is used to analyze the fundus images of DR patients. A deep neural network model is trained on these fundus images and performs a five-grade classification task, achieving a sensitivity of 90%. Keywords: Confusion matrix, Deep convolutional Neural Network, Diabetic Retinopathy, Fundus image, OCT
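For a multi-grade task, sensitivity (recall) of any one severity grade follows from that grade's row of the confusion matrix. A small sketch with illustrative counts (a 3-class excerpt, not the paper's actual five-grade matrix):

```python
def sensitivity(confusion, cls):
    """Per-class sensitivity = TP / (TP + FN) = diagonal entry / row sum,
    with rows indexed by the actual class."""
    row = confusion[cls]
    return row[cls] / sum(row)

# Hypothetical excerpt of a DR severity confusion matrix.
cm = [[90, 5, 5],
      [4, 92, 4],
      [3, 7, 90]]
print(sensitivity(cm, 0))  # 90/100 = 0.9
```

Averaging the per-class values gives the kind of overall sensitivity figure quoted above.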


Cancers ◽  
2022 ◽  
Vol 14 (2) ◽  
pp. 352
Author(s):  
Anyou Wang ◽  
Rong Hai ◽  
Paul J. Rider ◽  
Qianchuan He

Detecting cancers at early stages can dramatically reduce mortality rates, so practical cancer screening at the population level is needed. To develop a comprehensive detection system that classifies multiple cancer types, we integrated an artificial intelligence deep learning neural network with noncoding RNA biomarkers selected from massive data. Our system accurately separates cancer from healthy subjects with an AUC of the ROC (Area Under the Receiver Operating Characteristic curve) of 96.3%, and it reaches 78.77% AUC when validated on real-world raw data from a completely independent data set. Even when validated on raw exosome data from blood, the system reaches 72% AUC. Moreover, it significantly outperforms conventional machine learning models such as random forest. Intriguingly, with no more than six biomarkers, the approach can discriminate any individual cancer type from normal with 99% to 100% AUC. Furthermore, a comprehensive marker panel can simultaneously multi-classify common cancers with a stable 82.15% accuracy rate across heterogeneous cancerous tissues and conditions. This detection system provides a promising practical framework for automatic cancer screening at the population level. Key points: (1) We developed a practical cancer screening system that is simple, accurate, affordable, and easy to operate. (2) The system binarily classifies cancer vs. normal with >96% AUC. (3) In total, 26 individual cancer types can be detected by the system with 99% to 100% AUC. (4) The system can detect multiple cancer types simultaneously with >82% accuracy.
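The AUC figures quoted above have a direct probabilistic reading: the probability that a randomly chosen cancer sample scores above a randomly chosen healthy one (the Mann–Whitney statistic). A self-contained sketch with toy scores, independent of any particular classifier:

```python
def auc(pos_scores, neg_scores):
    """AUC = P(score_pos > score_neg), counting ties as 1/2."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Toy scores for cancer (positive) vs. healthy (negative) samples.
print(auc([0.9, 0.8, 0.6], [0.7, 0.3, 0.2]))  # 8/9 ≈ 0.889
```

An AUC of 1.0 means every cancer sample outranks every healthy sample; 0.5 is chance level.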


Diabetic Retinopathy (DR) is a leading cause of blindness worldwide. Retinal screening examinations of diabetic patients are needed to prevent the disease, yet many cases remain untreated and undiagnosed, especially in India, so DR calls for a smart detection technique. In this paper, we propose a deep learning based architecture for detecting DR. The experiments are performed on the DR data set available in the UCI Machine Learning Repository, and the results obtained are satisfactory.


2018 ◽  
Vol 2018 ◽  
pp. 1-13 ◽  
Author(s):  
P. Shane Crawford ◽  
Mohammad A. Al-Zarrad ◽  
Andrew J. Graettinger ◽  
Alexander M. Hainen ◽  
Edward Back ◽  
...  

Infrastructure vulnerability has drawn significant attention in recent years, partly because of the occurrence of low-probability, high-consequence disruptive events such as Hurricanes Harvey, Irma, and Maria (2017); the Tuscaloosa and Joplin tornadoes (2011); and the Gorkha, Nepal (2015), and Central Mexico (2017) earthquakes. Civil infrastructure systems support social welfare, so their viability and sustained operation are critical. A variety of frameworks, models, and tools exist for advancing infrastructure vulnerability research. Nevertheless, providing accurate vulnerability measurement remains challenging. This paper presents a state-of-the-art data collection and information extraction methodology to document infrastructure at high granularity to assess preevent vulnerability and postevent damage in the face of disasters. The methods establish a baseline of preevent infrastructure functionality that can be used to measure impacts and temporal recovery following a disaster. The Extreme Events Web Viewer (EEWV) presented as part of the methodology is a GIS-based web repository storing spatial and temporal data describing communities before and after disasters and facilitating data analysis techniques. This web platform can store multiple geolocated data formats, including photographs and 360° videos. A tool for automated extraction of photography from 360° video data at locations of interest specified in the EEWV was created to streamline data utility. The extracted imagery provides a manageable data set to efficiently document characteristics of the built and natural environment. The methodology was tested to locate buildings vulnerable to flood and storm surge on Dauphin Island, Alabama. Approximately 1,950 buildings were passively documented with vehicle-mounted 360° video. Extracted building images were used to train a deep learning neural network to predict whether a building was elevated or nonelevated. The model was validated, and methods for iterative neural network training are described. The methodology, from rapidly collecting large passive datasets and storing them in an open repository to extracting manageable datasets and obtaining information through deep learning, will facilitate vulnerability and postdisaster analyses as well as longitudinal recovery measurement.
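Pairing a location of interest with the closest 360° video frame reduces to a nearest-neighbour search over the vehicle's geotagged track. A minimal sketch, assuming each frame carries an (x, y) position in a projected coordinate system; the data layout is hypothetical, not the EEWV's actual schema:

```python
import math

def nearest_frame(track, poi):
    """Return the index of the frame whose position is closest to poi.
    track: list of (x, y) frame positions; poi: (x, y) point of interest."""
    return min(range(len(track)),
               key=lambda i: math.dist(track[i], poi))

# Toy track of four frame positions and one building location.
track = [(0.0, 0.0), (10.0, 0.0), (20.0, 0.0), (30.0, 0.0)]
print(nearest_frame(track, (12.0, 1.0)))  # frame 1 is closest
```

The selected index can then drive frame extraction from the corresponding video file.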


Author(s):  
Yasir Eltigani Ali Mustaf ◽  
Bashir Hassan Ismail

Diagnosis of diabetic retinopathy (DR) from colour fundus images requires experienced clinicians to determine the presence and significance of a large number of small features. This work proposes a novel deep learning framework for diabetic retinopathy, named the Adapted Stacked Auto Encoder (ASAE-DNN), in which three hidden layers extract features that are then classified by a Softmax layer. The proposed model is evaluated on the Messidor data set, comprising 800 training images and 150 test images. Accuracy, precision, recall, and computation time are assessed for the proposed model. The results of these studies show that the ASAE-DNN model was 97% accurate.
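The final Softmax stage maps the autoencoder's learned features to class probabilities. A minimal NumPy sketch of that last layer; the weights here are random placeholders, not the trained ASAE-DNN parameters:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
features = rng.standard_normal(8)        # features from the last hidden layer
weights = rng.standard_normal((8, 2))    # 2 output classes: DR vs. no DR
probs = softmax(features @ weights)
print(probs, probs.sum())                # probabilities sum to 1
```

The predicted class is simply `probs.argmax()`; training would fit `weights` (and the encoder layers) by backpropagation.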


2021 ◽  
Author(s):  
Abdelali ELMOUFIDI ◽  
Hind Amoun

Abstract Classification of the stages of diabetic retinopathy (DR) is considered a key step in the assessment and management of the disease. Because high blood sugar damages the retinal blood vessels, various microscopic structures can occupy the retinal area, such as micro-aneurysms, hard exudates, and neovascularization. The convolutional neural network (CNN), based on deep learning, has become a promising method for the analysis of biomedical images. In this work, representative images of diabetic retinopathy are divided into five categories according to the professional knowledge of ophthalmologists. This article focuses on using convolutional neural networks to classify DR fundus images according to disease severity, and on applying pooling and Softmax activation to achieve greater accuracy. The aptos2019-blindness-detection database is used to verify the performance of the proposed algorithm.
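The pooling step mentioned above downsamples each feature map by keeping the strongest activation in each window. A minimal NumPy sketch of 2×2 max pooling with stride 2 (a common choice; the paper's exact pooling configuration is not specified):

```python
import numpy as np

def max_pool_2x2(fmap):
    """2x2 max pooling, stride 2, on an (H, W) feature map with even dims."""
    h, w = fmap.shape
    return fmap.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fmap = np.array([[1, 3, 2, 0],
                 [4, 2, 1, 1],
                 [0, 0, 5, 6],
                 [1, 2, 7, 8]])
print(max_pool_2x2(fmap))  # [[4 2] [2 8]]
```

Each output cell is the maximum of one non-overlapping 2×2 block, halving both spatial dimensions.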


Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5327
Author(s):  
Michael Chi Seng Tang ◽  
Soo Siang Teoh ◽  
Haidi Ibrahim ◽  
Zunaina Embong

Proliferative Diabetic Retinopathy (PDR) is a severe retinal disease that threatens diabetic patients. It is characterized by neovascularization in the retina and the optic disk. Clinical features of PDR include highly intense retinal neovascularization and fibrous spreads, which lead to visual distortion if not controlled. Different image processing techniques have been proposed to detect and diagnose neovascularization from fundus images. Recently, deep learning methods have become popular for neovascularization detection owing to advances in artificial intelligence for biomedical image processing. This paper presents a semantic segmentation convolutional neural network architecture for neovascularization detection. First, image pre-processing steps were applied to enhance the fundus images. Then, the images were divided into small patches, forming a training set, a validation set, and a testing set. A semantic segmentation convolutional neural network was designed and trained to detect the neovascularization regions in the images. Finally, the network was evaluated on the testing set. The proposed model is entirely automated in detecting and localizing neovascularization lesions, which is not possible with previously published methods. Evaluation results showed that the model achieved accuracy, sensitivity, specificity, precision, Jaccard similarity, and Dice similarity of 0.9948, 0.8772, 0.9976, 0.8696, 0.7643, and 0.8466, respectively. We demonstrated that this model can outperform other convolutional neural network models in neovascularization detection.
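The Jaccard and Dice similarities reported above compare the predicted segmentation mask with the ground-truth mask pixel by pixel. A minimal NumPy sketch with toy masks:

```python
import numpy as np

def jaccard_dice(pred, truth):
    """Jaccard = |A∩B| / |A∪B|; Dice = 2|A∩B| / (|A| + |B|),
    for boolean masks pred (A) and truth (B)."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union, 2 * inter / (pred.sum() + truth.sum())

pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
truth = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
print(jaccard_dice(pred, truth))  # (0.5, 0.666…)
```

Dice is always at least as large as Jaccard on the same masks, which is consistent with the 0.7643 vs. 0.8466 pair above.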


2021 ◽  
Vol 11 (11) ◽  
pp. 4758
Author(s):  
Ana Malta ◽  
Mateus Mendes ◽  
Torres Farinha

Maintenance professionals and other technical staff regularly need to learn to identify new parts in car engines and other equipment. The present work proposes a model of a task assistant based on a deep learning neural network. A YOLOv5 network is used for recognizing some of the constituent parts of an automobile. A dataset of car engine images was created, and eight car parts were marked in the images. The neural network was then trained to detect each part. The results show that YOLOv5s can successfully detect the parts in real-time video streams with high accuracy, making it useful for training professionals to work with new equipment through augmented reality. The architecture of an object recognition system using augmented reality glasses is also designed.
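Detection accuracy for a network such as YOLOv5 is typically judged by matching predicted boxes to ground truth via intersection-over-union (IoU). A small self-contained sketch for axis-aligned boxes in (x1, y1, x2, y2) form, independent of the YOLOv5 codebase:

```python
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2) corner coordinates."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# A predicted box overlapping half of a ground-truth engine-part box.
print(iou((0, 0, 2, 2), (1, 0, 3, 2)))  # 2/6 ≈ 0.333
```

A prediction usually counts as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5.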

