Glaucoma Assessment from Fundus Images with Fundus to OCT Feature Space Mapping

2022 ◽  
Vol 3 (1) ◽  
pp. 1-15
Author(s):  
Divya Jyothi Gaddipati ◽  
Jayanthi Sivaswamy

Early detection and treatment of glaucoma are of interest because it is a chronic eye disease that leads to irreversible loss of vision. Existing automated systems rely largely on fundus images for glaucoma assessment due to their fast acquisition and cost-effectiveness. Optical Coherence Tomography (OCT) images provide vital and unambiguous information about nerve fiber loss and optic cup morphology, which are essential for disease assessment. However, the high cost of OCT is a deterrent to its deployment in large-scale screening. In this article, we present a novel CAD solution wherein both OCT and fundus images are leveraged to learn a model that maps fundus features to the OCT feature space. We show how this model can subsequently be used to detect glaucoma given an image from only one modality (fundus). The proposed model has been validated extensively on four public and two private datasets. It attained an AUC/sensitivity of 0.9429/0.9044 on a diverse set of 568 images, which is superior to the figures obtained by a model trained only on fundus features. Cross-validation was also performed on nearly 1,600 images drawn from a private (OD-centric) and a public (macula-centric) dataset, and the proposed model was found to outperform the state-of-the-art method by 8% (public) to 18% (private). Thus, we conclude that fundus-to-OCT feature space mapping is an attractive option for glaucoma detection.
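The core idea can be illustrated compactly. Below is a minimal sketch, not the authors' architecture (which the abstract does not specify): a ridge regression stands in for the fundus-to-OCT mapping, a logistic classifier stands in for the glaucoma detector, and synthetic vectors stand in for CNN-extracted features.

```python
# Illustrative sketch of fundus-to-OCT feature space mapping. Assumes paired
# fundus and OCT feature vectors have already been extracted (e.g., by
# pretrained CNNs); all sizes and data here are synthetic placeholders.
import numpy as np
from sklearn.linear_model import Ridge, LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, d_fundus, d_oct = 500, 256, 128          # hypothetical feature sizes
X_fundus = rng.normal(size=(n, d_fundus))   # paired fundus features
X_oct = rng.normal(size=(n, d_oct))         # paired OCT features
y = rng.integers(0, 2, size=n)              # glaucoma labels

# 1. Learn the fundus -> OCT mapping on paired training data.
mapper = Ridge(alpha=1.0).fit(X_fundus, X_oct)

# 2. Train the glaucoma classifier in the OCT feature space.
clf = LogisticRegression(max_iter=1000).fit(X_oct, y)

# 3. At test time only fundus images are available: map, then classify.
X_test_fundus = rng.normal(size=(100, d_fundus))
y_test = rng.integers(0, 2, size=100)
scores = clf.predict_proba(mapper.predict(X_test_fundus))[:, 1]
print("AUC on synthetic data:", roc_auc_score(y_test, scores))
```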

2021 ◽  
Author(s):  
M. Madhumalini ◽  
T. Meera Devi

Abstract Glaucoma is a retinal disease that damages the eye's optic nerve, frequently causing irreversible loss of vision. Accurate diagnosis of this disease is difficult, but early-stage diagnosis may make it treatable. The objective of this research is to diagnose glaucoma at the head of the eye's optic nerve. The proposed approach detects glaucoma via four major phases, namely a data enhancement phase, a segmentation phase, a feature extraction phase, and a classification phase based on the fractional gravitational search-based hybrid deep neural network (FGSA-HDNN) classifier. The proposed classifier is used for the exact classification of glaucoma-infected images and normal images. Here, the proposed approach utilizes the statistical, textural, and vessel features from the segmented output. Also, the proposed fractional gravitational search algorithm is used for training the deep neural network. From the experimental results, it is observed that the proposed glaucoma detection obtained a sensitivity of 99.64%, a specificity of 97.84%, and an accuracy of 98.75%, outperforming other state-of-the-art methods.
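For readers unfamiliar with the optimizer family behind FGSA-HDNN, the sketch below shows the canonical gravitational search algorithm (GSA) updates on a toy objective. The fractional-order variant and the coupling to the network weights are omitted, and all names and constants are illustrative assumptions.

```python
# Minimal sketch of the gravitational search algorithm (GSA): agents attract
# each other with forces proportional to fitness-derived masses. Toy sphere
# objective stands in for the network's training loss.
import numpy as np

rng = np.random.default_rng(1)
n_agents, dim = 20, 10
X = rng.uniform(-5, 5, size=(n_agents, dim))   # candidate solutions
V = np.zeros_like(X)                            # velocities

def fitness(x):                                 # toy objective (sphere)
    return np.sum(x ** 2)

G, eps = 1.0, 1e-12
for t in range(100):
    f = np.array([fitness(x) for x in X])
    # Masses: better (lower) fitness -> larger mass.
    m = (f.max() - f) / (f.max() - f.min() + eps)
    M = m / (m.sum() + eps)
    # Pairwise gravitational accelerations.
    acc = np.zeros_like(X)
    for i in range(n_agents):
        for j in range(n_agents):
            if i != j:
                diff = X[j] - X[i]
                r = np.linalg.norm(diff) + eps
                acc[i] += rng.random() * G * M[j] * diff / r
    V = rng.random((n_agents, 1)) * V + acc     # velocity update
    X = X + V                                    # position update
    G *= 0.99                                    # decay gravitational constant

print("best fitness:", min(fitness(x) for x in X))
```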


Sensors ◽  
2020 ◽  
Vol 20 (19) ◽  
pp. 5710
Author(s):  
Yukun Wu ◽  
Wei William Lee ◽  
Xuan Gong ◽  
Hui Wang

Owing to the constraints of time and space complexity, network intrusion detection systems (NIDSs) based on support vector machines (SVMs) face the "curse of dimensionality" in a large-scale, high-dimensional feature space. This study proposes a joint training model that combines a stacked autoencoder (SAE) with an SVM and a kernel approximation technique. The model uses the SAE for feature dimension reduction and random Fourier features for kernel approximation: the random Fourier mapping is explicitly applied to sub-samples to generate the random feature space, making it possible for a linear SVM to uniformly approximate the Gaussian-kernel SVM. Finally, the SAE is jointly trained with the efficient linear SVM. We studied the effects of the SAE structure and the random Fourier features on classification performance, and compared that performance with that of other training models, including some without kernel approximation. We also compared the accuracy of the proposed model with that of other models, including basic machine learning models and state-of-the-art models from the literature. The experimental results demonstrate that the proposed model outperforms the previously proposed methods in terms of classification performance and also reduces the training time. Our model is feasible and works efficiently on large-scale datasets.
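The kernel-approximation step is easy to demonstrate in isolation. The sketch below uses scikit-learn's RBFSampler (random Fourier features) so that a linear SVM approximates a Gaussian-kernel SVM; the SAE dimension-reduction and joint-training stages of the paper are omitted, and the dataset is synthetic.

```python
# Random Fourier features: K(x, x') = exp(-gamma ||x - x'||^2) ~ z(x).z(x'),
# so a linear SVM on z(x) approximates the exact RBF-kernel SVM.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.kernel_approximation import RBFSampler
from sklearn.svm import LinearSVC, SVC
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=50, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rff = RBFSampler(gamma=0.1, n_components=500, random_state=0)
Z_tr, Z_te = rff.fit_transform(X_tr), rff.transform(X_te)

linear_on_rff = LinearSVC(max_iter=5000).fit(Z_tr, y_tr)   # fast, linear
exact_rbf = SVC(kernel="rbf", gamma=0.1).fit(X_tr, y_tr)   # exact baseline

print("linear SVM + RFF accuracy:", linear_on_rff.score(Z_te, y_te))
print("exact RBF SVM accuracy:  ", exact_rbf.score(X_te, y_te))
```

The linear model trains in time roughly linear in the sample count, which is the source of the reported training-time reduction on large-scale data.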


2021 ◽  
Vol 21 (S2) ◽  
Author(s):  
Yinlin Cheng ◽  
Mengnan Ma ◽  
Xingyu Li ◽  
Yi Zhou

Abstract Background Diabetic retinopathy (DR) is the most common and serious microvascular complication in the diabetic population. Computer-aided diagnosis from fundus images has become a method of detecting retinal diseases, but the detection of multiple lesions remains a difficult point in current research. Methods This study proposed a multi-label classification method based on the graph convolutional network (GCN) to detect 8 types of fundus lesions in color fundus images. We collected 7459 fundus images (1887 left eyes, 1966 right eyes) from 2282 patients (1283 women, 999 men) and labeled 8 types of lesions: laser scars, drusen, cup-to-disc ratio ($C/D > 0.6$), hemorrhages, retinal arteriosclerosis, microaneurysms, hard exudates, and soft exudates. We constructed a specialized corpus of the related fundus lesions. A multi-label classification algorithm for fundus images was proposed based on the corpus, and the model was trained on the collected data. Results The average overall F1 score (OF1) and the average per-class F1 score (CF1) of the model were 0.808 and 0.792, respectively. The area under the ROC curve (AUC) of our proposed model reached 0.986, 0.954, 0.946, 0.957, 0.952, 0.889, 0.937, and 0.926 for detecting laser scars, drusen, cup-to-disc ratio, hemorrhages, retinal arteriosclerosis, microaneurysms, hard exudates, and soft exudates, respectively. Conclusions Our results demonstrated that the proposed model can detect a variety of lesions in color fundus images, which lays a foundation for assisting doctors in diagnosis and makes rapid, efficient large-scale screening of fundus lesions possible.
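The reported metrics have standard definitions: OF1 is the micro-averaged F1 over all label decisions, CF1 the macro (per-class) average. A minimal sketch, with synthetic predictions standing in for the GCN outputs (8 columns = 8 lesion types):

```python
# Multi-label evaluation: OF1 (micro F1), CF1 (macro F1), per-lesion AUC.
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score

rng = np.random.default_rng(0)
n, n_labels = 1000, 8
y_true = rng.integers(0, 2, size=(n, n_labels))          # ground-truth lesions
scores = np.clip(y_true * 0.6 + rng.random((n, n_labels)) * 0.5, 0, 1)
y_pred = (scores >= 0.5).astype(int)                     # thresholded outputs

of1 = f1_score(y_true, y_pred, average="micro")          # overall F1 (OF1)
cf1 = f1_score(y_true, y_pred, average="macro")          # per-class F1 (CF1)
aucs = [roc_auc_score(y_true[:, k], scores[:, k]) for k in range(n_labels)]
print(f"OF1={of1:.3f}  CF1={cf1:.3f}  per-lesion AUCs={np.round(aucs, 3)}")
```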


Author(s):  
A. V. Ponomarev

Introduction: Large-scale human-computer systems that involve people of various skills and motivation in information processing are currently used in a wide spectrum of applications. An acute problem in such systems is assessing the expected quality of each contributor, for example, in order to penalize incompetent or inaccurate contributors and to promote diligent ones. Purpose: To develop a method of assessing a contributor's expected quality in community tagging systems. This method should use only the generally unreliable and incomplete information provided by contributors (with ground-truth tags unknown). Results: A mathematical model is proposed for community image tagging (including a model of a contributor), along with a method of assessing a contributor's expected quality. The method is based on comparing the tag sets provided by different contributors for the same images; it is a modification of the pairwise comparison method with the preference relation replaced by a special domination characteristic. Contributors' expected quality is evaluated as a positive eigenvector of the pairwise domination characteristic matrix. Community tagging simulation has confirmed that the proposed method adequately estimates the expected quality of community tagging system contributors (provided that the contributors' behavior fits the proposed model). Practical relevance: The obtained results can be used in the development of systems based on the coordinated efforts of a community (primarily, community tagging systems).
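The eigenvector step is the computational core. Below is a minimal sketch: the paper's domination characteristic is not reproduced here, so a Jaccard-style tag-set overlap stands in as an illustrative assumption, and the positive (Perron) eigenvector is obtained by power iteration.

```python
# Quality estimation as the principal eigenvector of a pairwise
# domination-characteristic matrix D, where D[i, j] reflects how contributor
# i relates to j when their tag sets for shared images are compared.
import numpy as np

rng = np.random.default_rng(0)
n_contrib = 5
tag_sets = [set(rng.choice(20, size=rng.integers(3, 8), replace=False))
            for _ in range(n_contrib)]

# Illustrative (assumed) domination characteristic from tag-set agreement.
D = np.zeros((n_contrib, n_contrib))
for i in range(n_contrib):
    for j in range(n_contrib):
        if i != j:
            inter = len(tag_sets[i] & tag_sets[j])
            D[i, j] = inter / (len(tag_sets[i] | tag_sets[j]) or 1)

# Power iteration converges to the positive eigenvector of the
# nonnegative matrix D; its entries rank the contributors' quality.
q = np.ones(n_contrib) / n_contrib
for _ in range(200):
    q = D @ q
    q /= np.linalg.norm(q)
print("estimated contributor qualities:", np.round(q, 3))
```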


2020 ◽  
Author(s):  
Anusha Ampavathi ◽  
Vijaya Saradhi T

Big data and its approaches are generally helpful in the healthcare and biomedical sectors for predicting disease. For trivial symptoms, it is difficult to consult a doctor in the hospital at any time. Thus, big data provides essential data regarding diseases on the basis of a patient's symptoms. For several medical organizations, disease prediction is important for making the best feasible health care decisions. Conversely, the conventional medical care model offers structured input that requires more accurate and consistent prediction. This paper develops multi-disease prediction using an improved deep learning concept. Here, datasets pertaining to "Diabetes, Hepatitis, lung cancer, liver tumor, heart disease, Parkinson's disease, and Alzheimer's disease" are gathered from the benchmark UCI repository for conducting the experiment. The proposed model involves three phases: (a) data normalization, (b) weighted normalized feature extraction, and (c) prediction. Initially, the dataset is normalized in order to bring the attribute values into a common range. Further, weighted feature extraction is performed, in which a weight function is multiplied with each attribute value to emphasize large-scale deviations. Here, the weight function is optimized using a combination of two meta-heuristic algorithms, termed the Jaya algorithm-based multi-verse optimization (JA-MVO) algorithm. The optimally extracted features are fed to hybrid deep learning models, namely the "Deep Belief Network (DBN) and Recurrent Neural Network (RNN)". As a modification to the hybrid deep learning architecture, the weights of both the DBN and the RNN are optimized using the same hybrid optimization algorithm. Further, comparative evaluation of the proposed prediction against existing models certifies its effectiveness through various performance measures.
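The first two phases are straightforward to sketch. In the snippet below, min-max normalization stands in for phase (a) and a fixed weight vector stands in for the JA-MVO-optimized weight function of phase (b); all values are illustrative assumptions, not the paper's.

```python
# Phase (a): min-max normalization; phase (b): weighted feature transform.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 100, size=(200, 5))        # raw attributes (toy data)

# Phase (a): bring each attribute into [0, 1].
X_norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# Phase (b): multiply each attribute by a weight. In the paper these
# weights come from the JA-MVO meta-heuristic; assumed values here.
w = np.array([0.9, 0.4, 1.3, 0.7, 1.1])
X_weighted = X_norm * w                        # features passed to DBN/RNN

print("normalized range:", X_norm.min(), X_norm.max())
print("weighted feature sample:", np.round(X_weighted[0], 3))
```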


Electronics ◽  
2021 ◽  
Vol 10 (14) ◽  
pp. 1670
Author(s):  
Waheeb Abu-Ulbeh ◽  
Maryam Altalhi ◽  
Laith Abualigah ◽  
Abdulwahab Ali Almazroi ◽  
Putra Sumari ◽  
...  

Cyberstalking is a growing anti-social problem being transformed on a large scale and in various forms. Cyberstalking detection has become increasingly popular in recent years and has been investigated technically by many researchers. However, cyberstalking victimization, an essential part of cyberstalking, has empirically received less attention from the research community. This paper attempts to address this gap and develop a model to understand and estimate the prevalence of cyberstalking victimization. The model is built on routine activity and lifestyle exposure theories and includes eight hypotheses. The data were collected from 757 respondents at Jordanian universities. The paper adopts a quantitative approach and uses structural equation modeling for data analysis. The results revealed a modest prevalence range that depends on the type of cyberstalking. The results also indicated that proximity to motivated offenders, suitable targets, and digital guardians significantly influence cyberstalking victimization. The outcome of the moderation hypothesis testing demonstrated that age and residence have a significant effect on cyberstalking victimization. The proposed model is an essential element for assessing cyberstalking victimization among societies, and it provides a valuable understanding of the prevalence of cyberstalking victimization. This can assist researchers and practitioners in future research in the context of cyberstalking victimization.


2021 ◽  
Vol 67 ◽  
pp. 102559
Author(s):  
Deepak Ranjan Nayak ◽  
Dibyasundar Das ◽  
Banshidhar Majhi ◽  
Sulatha V. Bhandary ◽  
U. Rajendra Acharya

Author(s):  
Junshu Wang ◽  
Guoming Zhang ◽  
Wei Wang ◽  
Ka Zhang ◽  
Yehua Sheng

Abstract With the rapid development of hospital informatization and Internet medical services in recent years, most hospitals have launched online appointment registration systems to eliminate patient queues and improve the efficiency of medical services. However, most patients lack professional medical knowledge and have no idea which department to choose when registering. To guide patients in seeking medical care and registering effectively, we proposed CIDRS, an intelligent self-diagnosis and department recommendation framework based on Chinese medical Bidirectional Encoder Representations from Transformers (BERT) in the cloud computing environment. We also established a Chinese BERT model (CHMBERT) trained on a large-scale Chinese medical text corpus. This model was used to optimize the self-diagnosis and department recommendation tasks. To overcome the limited computing power of terminals, we deployed the proposed framework in a cloud computing environment based on container and micro-service technologies. Real-world medical datasets from hospitals were used in the experiments, and the results showed that the proposed model was superior to traditional deep learning models and other pre-trained language models in terms of performance.
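Department recommendation reduces to sequence classification over a patient's free-text complaint. A minimal sketch follows; since CHMBERT is not publicly available, the stock "bert-base-chinese" checkpoint and a three-department label set stand in as assumptions, and a real system would fine-tune on the hospital corpus first.

```python
# BERT-based department recommendation as sequence classification.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

departments = ["cardiology", "gastroenterology", "dermatology"]  # illustrative
tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-chinese", num_labels=len(departments)
)  # classification head is untrained here; fine-tune before real use

complaint = "胸闷气短，夜间心悸两周"   # patient's free-text symptom description
inputs = tokenizer(complaint, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print("recommended department:", departments[int(logits.argmax(dim=-1))])
```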


2010 ◽  
Vol 23 (12) ◽  
pp. 3157-3180 ◽  
Author(s):  
N. Eckert ◽  
H. Baya ◽  
M. Deschatres

Abstract Snow avalanches are natural hazards strongly controlled by the mountain winter climate, but their recent response to climate change has thus far been poorly documented. In this paper, hierarchical modeling is used to obtain robust indexes of the annual fluctuations of runout altitudes. The proposed model includes a possible level shift and distinguishes common large-scale signals in both mean- and high-magnitude events from the interannual variability. Application to the data available in France over the last 61 winters shows that the mean runout altitude is no different now than it was 60 yr ago, but that snow avalanches have been retreating since 1977. This trend is of particular note for high-magnitude events, which have seen their probability rates halved, a crucial result in terms of hazard assessment. Avalanche control measures, observation errors, and model limitations are insufficient explanations for these trends. On the other hand, strong similarities between the behavior of the proposed runout indexes and several climate datasets are shown, as well as a consistent evolution of the preferred flow regime. The proposed runout indexes may therefore be usable as indicators of climate change at high altitudes.
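The annual-index idea can be illustrated with a toy two-level simulation: runout altitudes are a common annual effect plus event-level noise, with a possible level shift. Simple moment estimates stand in for the paper's full hierarchical Bayesian treatment; all numbers below are invented.

```python
# Toy sketch of an annual runout index with a level shift at a given winter.
import numpy as np

rng = np.random.default_rng(0)
winters = np.arange(1946, 2007)
shift_year = 1977                      # candidate level-shift date
mu = np.where(winters < shift_year, 2000.0, 2060.0)   # mean runout altitude (m)
annual = rng.normal(0, 30, size=winters.size)         # interannual signal

# Each winter records a varying number of avalanche events.
records = {
    yr: m + a + rng.normal(0, 80, size=rng.integers(10, 40))
    for yr, m, a in zip(winters, mu, annual)
}

# Annual index: per-winter mean runout, then compare the two regimes.
index = np.array([records[yr].mean() for yr in winters])
before, after = index[winters < shift_year], index[winters >= shift_year]
print(f"mean index before {shift_year}: {before.mean():.0f} m")
print(f"mean index after  {shift_year}: {after.mean():.0f} m")
```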

