Deep Learning Solution for Diabetic Retinopathy Diagnosis based on Convolutional Neural Networks and Image Processing Algorithms

Author(s):  
Tomozei Cosmin Ion ◽  
Nechita Elena ◽  
Dorian Lazar
2018 ◽  
Vol 7 (2.7) ◽  
pp. 614 ◽  
Author(s):  
M Manoj Krishna ◽  
M Neelima ◽  
M Harshali ◽  
M Venu Gopala Rao

Image classification is a classical problem in image processing, computer vision, and machine learning. In this paper we study image classification using deep learning, employing the AlexNet architecture with convolutional neural networks. Four test images are selected from the ImageNet database for classification. We cropped the images to various portions and conducted experiments. The results show the effectiveness of deep-learning-based image classification using AlexNet.  
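A minimal sketch of the kind of experiment described above: classifying a cropped image with a pretrained AlexNet in PyTorch. The image path and the crop box are placeholders, not the test images or crop regions used in the paper.

```python
# Sketch: classify a cropped image with a pretrained AlexNet (torchvision).
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),          # AlexNet expects 224x224 inputs
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.eval()

img = Image.open("test_image.jpg").convert("RGB")          # placeholder path
crop = img.crop((0, 0, img.width // 2, img.height // 2))   # example: keep one quarter of the image

with torch.no_grad():
    logits = model(preprocess(crop).unsqueeze(0))
    top5 = torch.topk(logits.softmax(dim=1), k=5)
    print(top5.indices, top5.values)     # top-5 ImageNet class indices and scores
```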


2019 ◽  
Vol 8 (3) ◽  
pp. 6873-6880

Palm leaf manuscripts are one of the oldest writing media, but their content must periodically be re-inscribed onto a new set of leaves. This study provides a way to preserve the contents of palm leaf manuscripts by recognizing the handwritten Tamil characters in them and storing them digitally. Character recognition is one of the most essential fields of pattern recognition and image processing. Optical character recognition is, in general, the electronic translation of typewritten text or handwritten images into machine-editable text. Handwritten Tamil character recognition has been one of the challenging and active areas of research in pattern recognition and image processing. In this study, an attempt was made to identify handwritten Tamil characters without feature extraction, using convolutional neural networks. Convolutional neural networks are used to recognize and classify characters from segmented character images of Tamil palm leaf manuscripts. A convolutional neural network is a deep learning approach that does not require feature extraction and is also fast at character recognition. In the proposed system, every character is scaled to a fixed number of pixels; these pixels serve as the features for network training, and the trained network is employed for recognition and classification. The convolutional network model comprises convolution, ReLU, pooling, and fully connected layers. A dataset of ancient Tamil characters spanning 60 classes was created. The results reveal that the proposed approach yields better recognition rates than feature-extraction-based schemes for handwritten character recognition. The accuracy of the proposed approach is 97%, which shows that it is effective for recognizing ancient characters.
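A minimal sketch of the layer pattern named above (convolution, ReLU, pooling, fully connected) for a 60-class character classifier, written in PyTorch. The 64x64 grayscale input size is an assumption for illustration; the abstract does not state the exact pixel expansion used.

```python
# Sketch: small CNN with convolution, ReLU, pooling, and fully connected layers
# for 60-class handwritten character classification.
import torch.nn as nn

class TamilCharCNN(nn.Module):
    def __init__(self, num_classes: int = 60):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),   # convolution layer
            nn.ReLU(),                                     # ReLU layer
            nn.MaxPool2d(2),                               # pooling layer -> 32x32
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                               # -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128),                  # fully connected layer
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):                                  # x: (batch, 1, 64, 64)
        return self.classifier(self.features(x))
```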


Electronics ◽  
2021 ◽  
Vol 10 (16) ◽  
pp. 1932
Author(s):  
Malik Haris ◽  
Adam Glowacz

Automated driving and vehicle safety systems need object detection that is accurate overall, robust to weather and environmental conditions, and able to run in real time. To that end, they rely on image processing algorithms to inspect the contents of images. This article compares the accuracy of five major image processing algorithms: Region-based Fully Convolutional Network (R-FCN), Mask Region-based Convolutional Neural Network (Mask R-CNN), Single Shot Multi-Box Detector (SSD), RetinaNet, and You Only Look Once v4 (YOLOv4). In this comparative analysis, we used the large-scale Berkeley Deep Drive (BDD100K) dataset. The strengths and limitations of each algorithm are analyzed based on parameters such as accuracy (with and without occlusion and truncation), computation time, and precision-recall curves. The comparison given in this article is helpful for understanding the pros and cons of standard deep-learning-based algorithms operating under real-time deployment constraints. We conclude that YOLOv4 is the most accurate at detecting difficult road targets under complex road scenarios and weather conditions in an identical testing environment.
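A minimal sketch of the accuracy-versus-computation-time style of measurement described above, using one of the compared detectors (RetinaNet, via torchvision's pretrained model) on a single road-scene image. The image path is a placeholder, and the BDD100K-specific evaluation protocol is not reproduced here.

```python
# Sketch: time one inference pass of a pretrained RetinaNet and count confident detections.
import time
import torch
from torchvision import transforms
from torchvision.models.detection import retinanet_resnet50_fpn, RetinaNet_ResNet50_FPN_Weights
from PIL import Image

weights = RetinaNet_ResNet50_FPN_Weights.DEFAULT
model = retinanet_resnet50_fpn(weights=weights).eval()

img = transforms.functional.to_tensor(Image.open("road_scene.jpg").convert("RGB"))  # placeholder path

start = time.perf_counter()
with torch.no_grad():
    detections = model([img])[0]          # dict with "boxes", "labels", "scores"
elapsed = time.perf_counter() - start

keep = detections["scores"] > 0.5         # simple confidence threshold
print(f"{keep.sum().item()} objects detected in {elapsed:.3f}s")
```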


Author(s):  
João Sauer ◽  
Marco Boaretto ◽  
Edson Gnatkovski Gruska ◽  
Arthur Canciglieri ◽  
Gabriel Herman Bernardim Andrade ◽  
...  

2020 ◽  
Author(s):  
Shrey Srivast ◽  
Amit Vishvas Divekar ◽  
Chandu Anilkumar ◽  
Ishika Naik ◽  
Ved Kulkarni ◽  
...  

Abstract As humans, we do not have to strain ourselves when we interpret our surroundings through our visual senses. From the moment we begin to observe, we unconsciously train ourselves on the same set of images, so distinguishing entities is not a difficult task for us. A computer, by contrast, views all kinds of visual media as an array of numerical values. Due to this contrast in approach, computers require image processing algorithms to examine the contents of images. This project presents a comparative analysis of three major image processing algorithms: SSD, Faster R-CNN, and YOLO. For this analysis we chose the COCO dataset, with whose help we evaluated the performance and accuracy of the three algorithms and analysed their strengths and weaknesses. Using the results obtained from our implementations, we determine how the algorithms differ at run time and which applications each is suited to. The parameters for evaluation are accuracy, precision, and F1 score.
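A minimal sketch of the evaluation metrics named above (precision and F1 score), computed from true-positive, false-positive, and false-negative counts obtained by matching predicted boxes to ground-truth boxes at an IoU threshold. The counts below are hypothetical, not results from the paper.

```python
# Sketch: precision, recall, and F1 from detection match counts.
def precision_recall_f1(tp: int, fp: int, fn: int):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical counts for one detector on one image set:
p, r, f1 = precision_recall_f1(tp=87, fp=9, fn=14)
print(f"precision={p:.3f} recall={r:.3f} F1={f1:.3f}")
```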


2021 ◽  
Author(s):  
Bianka Tallita Passos ◽  
Moira Cristina Cubas Fatiga Tillmann ◽  
Anita Maria da Rocha Fernandes

Medical practice in general, and dentistry in particular, generates data sources such as high-resolution medical images and electronic medical records. Digital image processing algorithms take advantage of these datasets, enabling the development of dental applications such as tooth, caries, crown, prosthetic, dental implant, and endodontic treatment detection, as well as image classification. The goal of image classification is to comprehend an image as a whole and assign it to a specific label. This work proposes a tool that helps the dental prosthesis specialist exchange information with the laboratory. The proposed solution uses deep learning to classify images in order to improve the understanding of the structure required for modeling the prosthesis. The image database used has a total of 1215 images, of which 60 were separated for testing. The prototype achieved 98.33% accuracy.
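A minimal sketch of a transfer-learning image classifier of the general kind described above. The backbone (ResNet-18), the frozen-feature strategy, and the number of prosthesis labels are assumptions for illustration; the abstract does not specify the architecture used.

```python
# Sketch: pretrained backbone with a new classification head for dental image labels.
import torch.nn as nn
from torchvision import models

def build_dental_classifier(num_classes: int) -> nn.Module:
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    for p in model.parameters():          # freeze the pretrained backbone
        p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # trainable classification head
    return model

model = build_dental_classifier(num_classes=4)  # hypothetical number of prosthesis labels
```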

