Comparative Analysis of Deep Learning Methods for Detection of Keratoconus

2021 ◽  
Vol 23 (06) ◽  
pp. 1546-1553
Author(s):  
Impana N ◽  
K J Bhoomika ◽  
Suraksha S S ◽  
Karan Sawhney ◽  
...  

Keratoconus is a non-inflammatory corneal disease characterized by progressive thinning of the cornea, scarring, and deformation of the corneal shape. In India, there has been a significant increase in the number of keratoconus cases, and several research centers have turned their attention to this disease in recent years. In this situation, there is an immediate need for tools that simplify both diagnosis and treatment [1]. The algorithm developed can decide whether an eye is normal or keratoconic, along with the disease stage. The K-net model analyzes Pentacam images of the eye using a convolutional neural network (CNN), a deep learning model, and compares its accuracy against the pre-trained ResNet-50 and InceptionV3 models. The results show that the keratoconus detection algorithm performs well, achieving 93.75 percent accuracy on the test dataset. The keratoconus detection model is a program that can help ophthalmologists examine their patients faster, thereby reducing diagnostic errors and facilitating treatment.
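
A minimal sketch of the kind of transfer-learning comparison described above, written with Keras; the dataset objects, image size, and number of stage classes are assumptions made for illustration, not details from the paper.

```python
import tensorflow as tf
from tensorflow.keras.applications import ResNet50, InceptionV3

def build_classifier(backbone_cls, num_classes=4, input_shape=(224, 224, 3)):
    # Freeze the ImageNet-pretrained backbone and add a small classification head.
    backbone = backbone_cls(include_top=False, weights="imagenet",
                            input_shape=input_shape, pooling="avg")
    backbone.trainable = False
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(backbone.output)
    model = tf.keras.Model(backbone.input, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Compare test accuracy of the two pre-trained backbones on the same split.
# train_ds and test_ds are assumed tf.data.Dataset objects of labelled eye images.
# for name, cls in [("ResNet-50", ResNet50), ("InceptionV3", InceptionV3)]:
#     model = build_classifier(cls)
#     model.fit(train_ds, epochs=10)
#     _, acc = model.evaluate(test_ds)
#     print(name, acc)
```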

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Dapeng Lang ◽  
Deyun Chen ◽  
Ran Shi ◽  
Yongjun He

Deep learning has been widely used in the field of image classification and image recognition and has achieved positive practical results. However, in recent years, a number of studies have found that the accuracy of classification-based deep learning models drops greatly when only subtle changes are made to the original examples, thereby realizing an attack on the deep learning model. The main methods are as follows: adjusting the pixels of attack examples in ways invisible to the human eye so as to induce the deep learning model to make a wrong classification, and adding an adversarial patch on the detection target to guide and deceive the classification model into misclassifying it. These methods have strong randomness and are of very limited use in practical applications. Different from previous perturbations of traffic signs, our paper proposes a method that is able to successfully hide vehicles or cause them to be misclassified in complex contexts. This method takes complex real-world scenarios into account and can apply perturbations to pictures taken with a camera or mobile phone so that a detector based on a deep learning model either cannot detect the vehicle or misclassifies it. In order to improve robustness, the position and size of the adversarial patch are adjusted according to different detection models by introducing an attachment mechanism. Tests on different detectors show that a patch generated against a single target detection algorithm can also attack other detectors and transfers well. The experiments in this paper show that the proposed algorithm is able to significantly lower the accuracy of the detector. Under real-world influences such as distance, light, angle, and resolution, false classification of the target is realized by reducing the target's confidence level and background, which greatly perturbs the detection results of the target detector. On the COCO 2017 dataset, the success rate of this algorithm reaches 88.7%.
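
A heavily simplified PyTorch sketch of adversarial-patch optimisation in the spirit of the method described; the `detector` callable, the fixed patch placement, and the hyperparameters are assumptions, and the paper's attachment mechanism for adjusting patch position and size per model is not reproduced here.

```python
import torch

def optimise_patch(detector, images, patch_size=64, steps=200, lr=0.03):
    # The patch is the only trainable tensor; the detector stays fixed.
    patch = torch.rand(3, patch_size, patch_size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        for img in images:                      # img: (3, H, W), values in [0, 1]
            x = img.clone()
            # Paste the patch at a fixed location; the paper instead adjusts
            # position and size per detection model (attachment mechanism).
            x[:, :patch_size, :patch_size] = patch.clamp(0, 1)
            scores = detector(x.unsqueeze(0))   # assumed: per-box confidence scores
            loss = scores.max()                 # suppress the strongest vehicle detection
            opt.zero_grad()
            loss.backward()
            opt.step()
    return patch.detach().clamp(0, 1)
```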


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Shu-Hui Wang ◽  
Xin-Jun Han ◽  
Jing Du ◽  
Zhen-Chang Wang ◽  
Chunwang Yuan ◽  
...  

Abstract. Background: The imaging features of focal liver lesions (FLLs) are diverse and complex. Diagnosing FLLs with imaging alone remains challenging. We developed and validated an interpretable deep learning model for the classification of seven categories of FLLs on multisequence MRI and compared the differential diagnosis between the proposed model and radiologists. Methods: In all, 557 lesions examined by multisequence MRI were utilised in this retrospective study and divided into training–validation (n = 444) and test (n = 113) datasets. The area under the receiver operating characteristic curve (AUC) was calculated to evaluate the performance of the model. The accuracy and confusion matrix of the model and individual radiologists were compared. Saliency maps were generated to highlight the activation regions from the model's perspective. Results: The AUCs of the two- and seven-way classifications of the model were 0.969 (95% CI 0.944–0.994) and from 0.919 (95% CI 0.857–0.980) to 0.999 (95% CI 0.996–1.000), respectively. The model accuracy (79.6%) of the seven-way classification was higher than that of the radiology residents (66.4%, p = 0.035) and general radiologists (73.5%, p = 0.346) but lower than that of the academic radiologists (85.4%, p = 0.291). Confusion matrices showed the sources of diagnostic errors for the model and individual radiologists for each disease. Saliency maps detected the activation regions associated with each predicted class. Conclusion: This interpretable deep learning model showed high diagnostic performance in the differentiation of FLLs on multisequence MRI. The analysis principle contributing to the predictions can be explained via saliency maps.
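
One common way to produce the kind of saliency maps described is a gradient-based map; the PyTorch sketch below illustrates that general idea, with the `model` and the MRI tensor shape as assumptions rather than the authors' exact implementation.

```python
import torch

def saliency_map(model, mri_volume, target_class):
    # mri_volume: (1, n_sequences, H, W); the shape is an assumption.
    model.eval()
    x = mri_volume.clone().requires_grad_(True)
    logits = model(x)
    logits[0, target_class].backward()
    # Pixel importance = magnitude of the gradient of the class score
    # with respect to the input, reduced over the sequence dimension.
    return x.grad.abs().max(dim=1)[0].squeeze(0)
```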


2020 ◽  
Vol 12 (12) ◽  
pp. 5074
Author(s):  
Jiyoung Woo ◽  
Jaeseok Yun

Spam posts in web forum discussions cause user inconvenience and lower the value of the web forum as an open source of user opinion. Because the importance of a web post is evaluated in terms of the number of involved authors, such noise distorts the analysis results by adding unnecessary data to the opinion analysis. In this work, an automatic detection model for spam posts in web forums using both conventional machine learning and deep learning is proposed. To automatically differentiate between normal posts and spam, evaluators were asked to identify spam posts in advance. To construct the machine learning-based model, text features were extracted from posted content using linguistic text mining techniques, and supervised learning was performed to distinguish content noise from normal posts. For the deep learning model, raw text both including and excluding special characters was utilized. A comparative analysis of deep neural networks using two different recurrent neural network (RNN) models, the simple RNN and the long short-term memory (LSTM) network, was also performed. Furthermore, the proposed model was applied to two web forums. The experimental results indicate that the deep learning model affords significant improvements over the accuracy of conventional machine learning based on text features. The accuracy of the proposed model using LSTM reaches 98.56%, and the precision and recall of the noise class reach 99% and 99.53%, respectively.
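
A minimal Keras sketch of an LSTM text classifier of the kind compared in the study; the vocabulary size, embedding dimension, and layer width are assumed values, not those used by the authors.

```python
import tensorflow as tf

def build_lstm_classifier(vocab_size=20000, embed_dim=64):
    # Token ids -> embedding -> LSTM -> spam probability for the post.
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(vocab_size, embed_dim),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # spam vs. normal post
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy",
                           tf.keras.metrics.Precision(),
                           tf.keras.metrics.Recall()])
    return model
```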


2020 ◽  
Vol 10 (21) ◽  
pp. 7751
Author(s):  
Seong-Jae Hong ◽  
Won-Kyung Baek ◽  
Hyung-Sup Jung

Synthetic aperture radar (SAR) images have been used in many studies for ship detection because they can be captured regardless of time of day and weather. In recent years, the development of deep learning techniques has facilitated studies on ship detection in SAR images. However, because noise in SAR images can negatively affect the learning of a deep learning model, it is necessary to reduce the noise through preprocessing. In this study, deep learning-based ship detection was performed using preprocessed SAR images, and the effects of the preprocessing on detection performance were compared and analyzed. Through the preprocessing of SAR images, (1) intensity images, (2) decibel images, and (3) intensity difference and texture images were generated. The M2Det object detection model was trained on the preprocessed SAR images. After the object detection model was trained, ship detection was performed on test images. The test results are presented in terms of precision, recall, and average precision (AP), which were 93.18%, 91.11%, and 89.78% for the intensity images, respectively; 94.16%, 94.16%, and 92.34% for the decibel images, respectively; and 97.40%, 94.94%, and 95.55% for the intensity difference and texture images, respectively. The results show that the preprocessing of SAR images can facilitate the deep learning process and improve ship detection performance. The results of this study are expected to contribute to the development of deep learning-based ship detection techniques for SAR images in the future.
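
A small numpy sketch of the intensity-to-decibel preprocessing step mentioned above; the epsilon and clipping range are illustrative assumptions, not the values used in the study.

```python
import numpy as np

def intensity_to_decibel(intensity, eps=1e-6, low=-30.0, high=10.0):
    # SAR intensity spans a huge dynamic range; log scaling compresses it so
    # that bright ship returns and sea clutter are easier to separate.
    db = 10.0 * np.log10(intensity + eps)
    return np.clip(db, low, high)
```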


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Yiran Feng ◽  
Xueheng Tao ◽  
Eung-Joo Lee

In view of the current absence of any deep learning algorithm for shellfish identification in real contexts, an improved Faster R-CNN-based detection algorithm is proposed in this paper. It achieves multi-object recognition and localization through a second-order detection network and replaces the original feature extraction module with DenseNet, which can fuse multilevel feature information, increase network depth, and avoid vanishing gradients. Meanwhile, the proposal merging strategy is improved with Soft-NMS, in which an attenuation function replaces the conventional NMS algorithm, thereby avoiding missed detections of adjacent or overlapping objects and enhancing detection accuracy in multi-object scenes. By constructing a real-context shellfish dataset and conducting experimental tests on a vision-based seafood sorting robot production line, we were able to detect shellfish in different scenarios, and the detection accuracy was improved by nearly 4% compared with the original detection model. This provides favorable technical support for future quality sorting of seafood using the improved Faster R-CNN-based approach.
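
A short numpy sketch of the Soft-NMS attenuation idea referred to above, using a Gaussian decay function; the sigma and score threshold are assumed values, and this is a generic illustration rather than the authors' exact implementation.

```python
import numpy as np

def iou(a, b):
    # a, b: (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / (union + 1e-9)

def soft_nms(boxes, scores, sigma=0.5, min_score=0.001):
    boxes = [tuple(b) for b in boxes]
    scores = list(scores)
    kept = []
    while boxes:
        i = int(np.argmax(scores))
        box, score = boxes.pop(i), scores.pop(i)
        kept.append((box, score))
        # Gaussian decay instead of hard suppression: overlapping neighbours
        # keep a reduced score, so adjacent objects are not discarded outright.
        scores = [s * np.exp(-(iou(box, b) ** 2) / sigma) for b, s in zip(boxes, scores)]
        boxes = [b for b, s in zip(boxes, scores) if s > min_score]
        scores = [s for s in scores if s > min_score]
    return kept
```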


2019 ◽  
Author(s):  
Jacob M. Graving ◽  
Daniel Chae ◽  
Hemal Naik ◽  
Liang Li ◽  
Benjamin Koger ◽  
...  

Abstract. Quantitative behavioral measurements are important for answering questions across scientific disciplines—from neuroscience to ecology. State-of-the-art deep-learning methods offer major advances in data quality and detail by allowing researchers to automatically estimate locations of an animal's body parts directly from images or videos. However, currently available animal pose estimation methods have limitations in speed and robustness. Here we introduce a new easy-to-use software toolkit, DeepPoseKit, that addresses these problems using an efficient multi-scale deep-learning model, called Stacked DenseNet, and a fast GPU-based peak-detection algorithm for estimating keypoint locations with subpixel precision. These advances improve processing speed >2× with no loss in accuracy compared to currently available methods. We demonstrate the versatility of our methods with multiple challenging animal pose estimation tasks in laboratory and field settings—including groups of interacting individuals. Our work reduces barriers to using advanced tools for measuring behavior and has broad applicability across the behavioral sciences.
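
An illustrative numpy sketch of subpixel peak refinement on a keypoint confidence map, the general idea behind the fast peak-detection step described; it is shown on the CPU for clarity and is not DeepPoseKit's actual GPU implementation.

```python
import numpy as np

def subpixel_peak(confidence_map):
    # Integer-precision peak location.
    y, x = np.unravel_index(np.argmax(confidence_map), confidence_map.shape)
    h, w = confidence_map.shape

    def refine(c_minus, c_0, c_plus):
        # Fit a parabola through three samples and return the offset of its apex.
        denom = c_minus - 2 * c_0 + c_plus
        return 0.0 if denom == 0 else 0.5 * (c_minus - c_plus) / denom

    dy = refine(confidence_map[max(y - 1, 0), x], confidence_map[y, x],
                confidence_map[min(y + 1, h - 1), x])
    dx = refine(confidence_map[y, max(x - 1, 0)], confidence_map[y, x],
                confidence_map[y, min(x + 1, w - 1)])
    return y + dy, x + dx
```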


2021 ◽  
Author(s):  
Amandip Sangha ◽  
Mohammad Rizvi

Abstract. Importance: State-of-the-art performance is achieved with a deep learning object detection model for acne detection. There is little current research on object detection in dermatology and acne in particular; as such, this work is early in the field. Objective: Train an object detection model on a publicly available data set of acne photos. Design, Setting, and Participants: A deep learning model is trained with cross-validation on a data set of facial acne photos. Main Outcomes and Measures: Object detection models for detecting acne in single-class (acne) and multi-class (four severity levels) settings. We train and evaluate the models using standard metrics such as mean average precision (mAP). We then manually evaluate the model predictions on the test set and calculate accuracy in terms of precision, recall, F1, and true and false positive and negative detections. Results: We achieve a state-of-the-art mean average precision mAP@0.5 value of 37.97 for the single-class acne detection task and 26.50 for the 4-class acne detection task. Moreover, our manual evaluation shows that the single-class detection model performs well on the validation set, achieving a true positive rate of 93.59%, precision of 96.45%, and recall of 94.73%. Conclusions and Relevance: We are able to train a high-accuracy acne detection model using only a small publicly available data set of facial acne photos. Transfer learning on the pre-trained deep learning model yields good accuracy and a high degree of transferability to patient-submitted photographs. We also note that training standard-architecture object detection models has given significantly better accuracy than the more intricate and bespoke neural network architectures in the existing research literature. Key Points. Question: Can deep learning-based acne detection models trained on a small data set of publicly available photos of patients with acne achieve high prediction accuracy? Findings: We find that it is possible to train a reasonably good object detection model on a small, annotated data set of acne photos using standard deep learning architectures. Meaning: Deep learning-based object detection models for acne detection can be useful decision support tools for dermatologists treating acne patients in a digital clinical practice. They can prove particularly useful for monitoring the evolution of the acne disease state over prolonged periods during follow-ups, as the model predictions give a quantifiable and comparable output for photographs over time. This is particularly helpful in teledermatological consultations, as a prediction model can be integrated into patient-doctor remote communication.
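
A small Python sketch of the manual evaluation metrics reported above (precision, recall, and F1 computed from counted detections); the example counts are placeholders for illustration, not the paper's data.

```python
def detection_metrics(tp, fp, fn):
    # Precision, recall, and F1 from true-positive, false-positive,
    # and false-negative detection counts.
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

print(detection_metrics(tp=120, fp=4, fn=7))  # illustrative counts only
```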


Drones ◽  
2021 ◽  
Vol 5 (2) ◽  
pp. 28
Author(s):  
Joan Y. Q. Li ◽  
Stephanie Duce ◽  
Karen E. Joyce ◽  
Wei Xiang

Sea cucumbers (Holothuroidea or holothurians) are a valuable fishery and are also crucial nutrient recyclers, bioturbation agents, and hosts for many biotic associates. Their ecological impacts could be substantial given their high abundance in some reef locations, and thus monitoring their populations and spatial distribution is of research interest. Traditional in situ surveys are laborious and only cover small areas, but drones offer an opportunity to scale observations more broadly, especially if the holothurians can be automatically detected in drone imagery using deep learning algorithms. We adapted the object detection algorithm YOLOv3 to detect holothurians in drone imagery at Hideaway Bay, Queensland, Australia. We successfully detected 11,462 of 12,956 individuals over 2.7 ha with an average density of 0.5 individuals/m². We tested a range of hyperparameters to determine the optimal detector performance and achieved 0.855 mAP, 0.82 precision, 0.83 recall, and 0.82 F1 score. We found that as few as ten labelled drone images were sufficient to train an acceptable detection model (0.799 mAP). Our results illustrate the potential of using small, affordable drones with direct implementation of open-source object detection models to survey holothurians and other shallow-water sessile species.
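
A quick arithmetic check of the density figure quoted above, expressed in Python: individuals per square metre across the 2.7 ha survey area.

```python
# 1 hectare = 10,000 square metres.
individuals = 12_956
area_m2 = 2.7 * 10_000
print(f"{individuals / area_m2:.2f} individuals per square metre")  # ~0.48, i.e. roughly 0.5/m^2
```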


Plants ◽  
2021 ◽  
Vol 10 (12) ◽  
pp. 2714
Author(s):  
Syada Nizer Sultana ◽  
Halim Park ◽  
Sung Hoon Choi ◽  
Hyun Jo ◽  
Jong Tae Song ◽  
...  

Stomatal observation and automatic stomatal detection are useful for taxonomic, biological, physiological, and eco-physiological studies of stomata. We present a new clearing method for improved microscopic imaging of stomata in soybean, followed by automated stomatal detection using deep learning. We tested eight clearing agent formulations based upon different ethanol and sodium hypochlorite (NaOCl) concentrations in order to improve the transparency of leaves. An optimal formulation—a 1:1 (v/v) mixture of 95% ethanol and NaOCl (6–14%)—produced better-quality images of soybean stomata. Additionally, we evaluated fixatives and dehydrating agents and selected absolute ethanol for both fixation and dehydration; this is a good substitute for formaldehyde, which is more toxic to handle. Using imaging data from this clearing method, we developed an automatic stomatal detector using deep learning and improved a deep-learning algorithm that automatically analyzes stomata through a YOLO object detection model. The YOLO deep-learning model successfully recognized stomata with high mAP (~0.99). A web-based interface is provided to apply the stomatal detection model to any soybean data prepared with the new clearing protocol.
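
A hedged sketch of the downstream use of such a detector: counting stomata per cleared-leaf image from model output. The `detect` callable stands in for the trained YOLO model and is an assumption; it is taken to return (box, confidence) pairs.

```python
def count_stomata(image, detect, conf_thresh=0.5):
    # `detect` is assumed to return a list of (bounding_box, confidence) pairs
    # for one cleared-leaf image; count those above the confidence threshold.
    detections = detect(image)
    return sum(1 for _box, conf in detections if conf >= conf_thresh)
```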


2021 ◽  
Vol 11 (17) ◽  
pp. 8210
Author(s):  
Chaeyoung Lee ◽  
Hyomin Kim ◽  
Sejong Oh ◽  
Illchul Doo

This research produced a model that detects abnormal phenomena on the road based on deep learning and proposes a service that can prevent accidents caused by other cars and traffic congestion. After extracting accident images from traffic accident video data by using FFmpeg, car collision types are classified, and only head-on collision types are processed by using the deep learning object-detection algorithm YOLO (You Only Look Once). Using the car accident detection model that we built and the provided road obstacle-detection model, we programmed the service so that when the model detects abnormalities on the road, a warning notification and photos capturing the accidents or obstacles are transferred to the application. The proposed service was verified through application notification simulations and virtual experiments using CCTVs in Daegu, Busan, and Gwangju. By providing this service, the goal is to improve traffic safety and contribute to the development of the self-driving vehicle sector. As a future research direction, it is suggested that an efficient CCTV control system be introduced for the transportation environment.
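
An illustrative sketch of the FFmpeg frame-extraction step mentioned above, invoked from Python; the file names and the one-frame-per-second sampling rate are assumptions made for the example.

```python
import subprocess

def extract_frames(video_path, out_pattern="frame_%04d.jpg", fps=1):
    # Sample the accident video at `fps` frames per second into still images.
    subprocess.run(
        ["ffmpeg", "-i", video_path, "-vf", f"fps={fps}", out_pattern],
        check=True,
    )

# extract_frames("accident_clip.mp4")
```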

