CNN with Multiple Input for Automatic Glaucoma Assessment Using Fundus Images

Author(s):  
Abdelali ELMOUFIDI ◽  
Said Jai-andaloussi

Abstract In the field of ophthalmology, glaucoma affects an increasing number of people and is a major cause of blindness. Early detection helps avoid severe ocular complications such as glaucoma, cystoid macular edema, or proliferative diabetic retinopathy. Artificial intelligence has been confirmed to be beneficial for glaucoma assessment. In this paper, we describe an approach to automating glaucoma diagnosis using fundus images. The proposed framework proceeds in the following order: the Bi-dimensional Empirical Mode Decomposition (BEMD) algorithm is applied to decompose the Regions of Interest (ROIs) into components (BIMFs + residue); the CNN architecture VGG19 is used to extract features from the decomposed BEMD components; the features of the same ROI are then fused into a bag of features. Because these feature vectors are very long, Principal Component Analysis (PCA) is used to reduce their dimensionality. The resulting bags of features are the input parameters of the implemented classifier, which is based on the Support Vector Machine (SVM). To train the built models, we used two public datasets, ACRIMA and REFUGE. To test our models, we used held-out parts of ACRIMA and REFUGE plus four other public datasets: RIM-ONE, ORIGA-light, Drishti-GS1, and sjchoi86-HRF. Overall accuracies of 98.31%, 98.61%, 96.43%, 96.67%, 95.24%, and 98.60% are obtained on the ACRIMA, REFUGE, RIM-ONE, ORIGA-light, Drishti-GS1, and sjchoi86-HRF datasets, respectively, using the model trained on REFUGE, against accuracies of 98.92%, 99.06%, 98.27%, 97.10%, 96.97%, and 96.36% on the same datasets using the model trained on ACRIMA. The experimental results obtained on the different datasets demonstrate the efficiency and robustness of the proposed approach. A comparison with recent work in the literature shows that our proposal is a significant advancement.
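A minimal sketch of the described pipeline, assuming the BEMD components (BIMFs plus residue) of each ROI are already available as grayscale arrays resized to 224x224; the VGG19 backbone is the Keras implementation, and the number of components, PCA size, and SVM settings below are illustrative assumptions rather than the authors' exact configuration.

```python
# Sketch: BEMD components -> VGG19 features -> fused bag of features -> PCA -> SVM.
import numpy as np
from tensorflow.keras.applications.vgg19 import VGG19, preprocess_input
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# VGG19 without the classification head, used as a fixed feature extractor.
backbone = VGG19(weights="imagenet", include_top=False, pooling="avg",
                 input_shape=(224, 224, 3))

def roi_bag_of_features(bemd_components):
    """Extract VGG19 features from each BEMD component of one ROI (assumed
    224x224 grayscale) and concatenate them into a single feature vector."""
    feats = []
    for comp in bemd_components:                       # BIMFs + residue
        img = np.repeat(comp[..., None], 3, axis=-1)   # grey -> 3 channels
        img = preprocess_input(img.astype("float32"))
        feats.append(backbone.predict(img[None], verbose=0).ravel())
    return np.concatenate(feats)

# X_decomposed: list of per-ROI component lists, y: glaucoma labels (0/1).
# X = np.stack([roi_bag_of_features(c) for c in X_decomposed])
# clf = make_pipeline(StandardScaler(), PCA(n_components=100), SVC(kernel="rbf"))
# clf.fit(X, y)
```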

Author(s):  
Abdelali Elmoufidi ◽  
Ayoub Skouta ◽  
Said Jai-Andaloussi ◽  
Ouail Ouchetto

In the field of ophthalmology, glaucoma affects an increasing number of people and is a major cause of blindness. Early detection helps avoid severe ocular complications such as glaucoma, cystoid macular edema, or proliferative diabetic retinopathy. Artificial intelligence has been confirmed to be beneficial for glaucoma assessment. In this paper, we describe an approach to automating glaucoma diagnosis using fundus images. The proposed framework proceeds in the following order: the Bi-dimensional Empirical Mode Decomposition (BEMD) algorithm is applied to decompose the Regions of Interest (ROIs) into components (BIMFs + residue); the CNN architecture VGG19 is used to extract features from the decomposed BEMD components; the features of the same ROI are then fused into a bag of features. Because these feature vectors are very long, Principal Component Analysis (PCA) is used to reduce their dimensionality. The bags of features obtained are the input parameters of the implemented classifier based on the Support Vector Machine (SVM). To train the built models, we used two public datasets, ACRIMA and REFUGE. To test our models, we used held-out parts of ACRIMA and REFUGE plus four other public datasets: RIM-ONE, ORIGA-light, Drishti-GS1, and sjchoi86-HRF. Overall accuracies of 98.31%, 98.61%, 96.43%, 96.67%, 95.24%, and 98.60% are obtained on the ACRIMA, REFUGE, RIM-ONE, ORIGA-light, Drishti-GS1, and sjchoi86-HRF datasets, respectively, using the model trained on REFUGE. Likewise, accuracies of 98.92%, 99.06%, 98.27%, 97.10%, 96.97%, and 96.36% are obtained on the ACRIMA, REFUGE, RIM-ONE, ORIGA-light, Drishti-GS1, and sjchoi86-HRF datasets, respectively, using the model trained on ACRIMA. The experimental results obtained on the different datasets demonstrate the efficiency and robustness of the proposed approach. A comparison with recent work in the literature shows that our proposal is a significant advancement.
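As a complement to the pipeline sketch above, a hedged sketch of the cross-dataset protocol described here: a classifier trained on one public dataset is scored on each of the others. The `load_fundus_dataset` helper is a hypothetical placeholder for whatever loading code produces feature vectors and labels per dataset.

```python
# Sketch of the cross-dataset protocol: train on one dataset (e.g. REFUGE),
# then report accuracy on each held-out dataset.
from sklearn.metrics import accuracy_score

def cross_dataset_report(clf, train_name, test_names, load_fundus_dataset):
    X_tr, y_tr = load_fundus_dataset(train_name)   # hypothetical loader
    clf.fit(X_tr, y_tr)
    scores = {}
    for name in test_names:
        X_te, y_te = load_fundus_dataset(name)
        scores[name] = accuracy_score(y_te, clf.predict(X_te))
    return scores

# Example usage (dataset names follow the abstract):
# scores = cross_dataset_report(clf, "REFUGE",
#     ["ACRIMA", "RIM-ONE", "ORIGA-light", "Drishti-GS1", "sjchoi86-HRF"],
#     load_fundus_dataset)
```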


2021 ◽  
pp. 6787-6794
Author(s):  
Anisha Rebinth ◽ 
Dr. S. Mohan Kumar

An automated Computer Aided Diagnosis (CAD) system for glaucoma diagnosis using fundus images is developed, and the various glaucoma image classification schemes using supervised and unsupervised learning approaches are reviewed. The work involves three stages of glaucoma disease diagnosis. First, in the pre-processing stage, the texture features of the fundus image are captured with a two-dimensional Gabor filter at various scales and orientations. The image features are then generated using higher-order statistical characteristics, and Principal Component Analysis (PCA) is used to select features and reduce their dimension. For the performance study, the Gabor-filter-based features are extracted from the RIM-ONE and HRF database images, and a Support Vector Machine (SVM) classifier is used for classification. The final stage utilizes the SVM classifier with the Radial Basis Function (RBF) kernel for efficient classification of glaucoma disease, achieving an accuracy of 90%.
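A minimal sketch of the Gabor feature stage described above: filter responses at several scales and orientations are summarised by higher-order statistics, reduced with PCA, and classified with an RBF-kernel SVM. The frequencies, orientations, statistics, and PCA size used here are illustrative assumptions, not the paper's exact choices.

```python
# Sketch: 2-D Gabor responses -> higher-order statistics -> PCA -> RBF SVM.
import numpy as np
from scipy.stats import kurtosis, skew
from skimage.filters import gabor
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def gabor_features(gray_image,
                   frequencies=(0.1, 0.2, 0.3),
                   thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    feats = []
    for f in frequencies:
        for t in thetas:
            real, imag = gabor(gray_image, frequency=f, theta=t)
            mag = np.hypot(real, imag).ravel()
            # higher-order statistical descriptors of the filter response
            feats += [mag.mean(), mag.std(), skew(mag), kurtosis(mag)]
    return np.asarray(feats)

# X = np.stack([gabor_features(img) for img in fundus_images]); y = labels
# clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
# clf.fit(X, y)
```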


Author(s):  
Jonnadula Harikiran

In this paper, a novel approach to hyperspectral image classification is presented using principal component analysis (PCA), bi-dimensional empirical mode decomposition (BEMD), and support vector machines (SVM). In this process, the PCA feature extraction technique is applied to the hyperspectral dataset and the first principal component is extracted. This component is supplied as input to the BEMD algorithm, which divides it into four parts: the first three parts are bi-dimensional intrinsic mode functions (BIMFs) and the last part is the residue. These BIMFs and the residue image are then taken as input to the SVM for classification. The results of experiments on two popular hyperspectral remote sensing scenes show that the proposed model offers competitive analytical performance in comparison to some established methods.
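A hedged sketch of the PCA stage for hyperspectral data: the cube is unfolded to pixels-by-bands, the first principal component is kept as an image, and per-pixel features are classified with an SVM. The BEMD step that splits the component image into BIMFs plus residue is assumed to come from an external implementation and is only marked in the comments.

```python
# Sketch: hyperspectral cube -> first principal component image -> (BEMD) -> SVM.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def first_principal_component(cube):
    """cube: array of shape (rows, cols, bands); returns the PC1 image."""
    rows, cols, bands = cube.shape
    flat = cube.reshape(-1, bands).astype("float64")
    pc1 = PCA(n_components=1).fit_transform(flat)      # pixels x 1
    return pc1.reshape(rows, cols)

# pc1_image = first_principal_component(hyperspectral_cube)
# components = bemd(pc1_image)        # external BEMD: 3 BIMFs + residue (assumed)
# X = np.stack([c.ravel() for c in components], axis=1)   # per-pixel features
# clf = SVC(kernel="rbf").fit(X[train_idx], labels[train_idx])
```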


2020 ◽  
Vol 2020 ◽  
pp. 1-16
Author(s):  
Zhongbin Wang ◽  
Bin Liang ◽  
Lei Si ◽  
Kuangwei Tong ◽  
Chao Tan

The recognition of the shearer cutting state is the key technology for realizing intelligent control of the shearer, and it has become a challenging subject of worldwide concern. This paper takes the sound signal as the object of analysis and proposes a novel recognition method based on the combination of variational mode decomposition (VMD), principal component analysis (PCA), and the least squares support vector machine (LSSVM). VMD can decompose a signal into various modes by using the calculus of variations and effectively avoids the false-component and mode-mixing problems. On this basis, an improved gravitational search algorithm (IGSA) is designed by using the position-update mechanism of the Levy flight strategy to find the optimal parameter combination of VMD. Then, feature extraction is achieved by calculating the envelope entropy and kurtosis of the decomposed intrinsic mode functions (IMFs). To avoid the curse of dimensionality and reinforce classification performance, PCA is introduced to choose useful features, and an LSSVM-based classifier is constructed. Finally, the experimental results indicate that the proposed method is feasible and superior in the recognition of shearer cutting states.
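A minimal sketch of the feature stage described above, assuming the modes (IMFs) of each sound segment have already been produced by a VMD implementation tuned by the IGSA: envelope entropy via the Hilbert transform and kurtosis are computed per mode, reduced with PCA, and classified. A standard RBF SVM stands in for the LSSVM, which scikit-learn does not provide; the PCA size is illustrative.

```python
# Sketch: per-mode envelope entropy + kurtosis -> PCA -> SVM classifier.
import numpy as np
from scipy.signal import hilbert
from scipy.stats import kurtosis
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def envelope_entropy(mode):
    """Shannon entropy of the normalized Hilbert envelope of one mode."""
    env = np.abs(hilbert(mode))
    p = env / env.sum()
    return float(-np.sum(p * np.log(p + 1e-12)))

def cutting_state_features(modes):
    """modes: array of shape (K, n_samples) from VMD of one sound segment."""
    return np.array([[envelope_entropy(m), kurtosis(m)] for m in modes]).ravel()

# X = np.stack([cutting_state_features(m) for m in segment_modes]); y = states
# clf = make_pipeline(StandardScaler(), PCA(n_components=4), SVC(kernel="rbf"))
# clf.fit(X, y)
```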


2019 ◽  
Vol 19 (5) ◽  
pp. 1453-1470
Author(s):  
Ali Dibaj ◽  
Mir Mohammad Ettefagh ◽  
Reza Hassannejad ◽  
Mir Biuok Ehghaghi

Variational mode decomposition is a powerful signal processing technique that can adaptively decompose a multi-component signal into a number of modes by solving an optimization problem. The optimal performance of this method in signal decomposition, and the avoidance of the mode-mixing problem, strictly rely on the correct selection of the decomposition parameters, that is, the number of extracted modes (K) and the mode frequency bandwidth control parameter (α). In the literature, the optimal values of these parameters are obtained by evaluating fault-related indices such as kurtosis, but such indices are inefficient for judging the decomposition of healthy (without fault-related components), low-defect, and high-noise signals. In this research, a novel method called fine-tuned variational mode decomposition is proposed to determine the optimal values of the decomposition parameters K and α by judging adaptive indices. In the proposed method, the optimal values of these parameters are obtained by minimizing the mean bandwidth of the extracted modes, subject to the constraint that the mean correlation coefficient between adjacent modes and the energy-loss coefficient between the original and reconstructed signals do not exceed defined thresholds. The proposed method is applied to a simulated signal and to experimental signals collected from an automobile gearbox system. Comparing this method with those in the literature shows its higher effectiveness in the correct decomposition of signals of different natures. It is also shown that signal decomposition with the proposed method, combined with principal component analysis and a support vector machine classifier, can correctly classify the healthy and defective states of the gearbox system.
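A hedged sketch of the selection indices involved: for each candidate (K, α) pair, the mean mode bandwidth is minimised subject to the mean correlation between adjacent modes and the energy-loss coefficient staying below chosen thresholds. The VMD call itself and the threshold values are assumed to be supplied by the user; only the index computations are sketched, with the bandwidth measured here as spectral spread.

```python
# Sketch of the fine-tuned VMD selection indices (modes from any VMD routine).
import numpy as np

def mean_bandwidth(modes, fs):
    """Mean spectral spread of the extracted modes (bandwidth proxy)."""
    widths = []
    for m in modes:
        spec = np.abs(np.fft.rfft(m)) ** 2
        freqs = np.fft.rfftfreq(len(m), d=1.0 / fs)
        centroid = np.sum(freqs * spec) / np.sum(spec)
        widths.append(np.sqrt(np.sum(((freqs - centroid) ** 2) * spec) / np.sum(spec)))
    return float(np.mean(widths))

def mean_adjacent_correlation(modes):
    cc = [abs(np.corrcoef(modes[i], modes[i + 1])[0, 1]) for i in range(len(modes) - 1)]
    return float(np.mean(cc))

def energy_loss(signal, modes):
    recon = np.sum(modes, axis=0)
    return float(np.sum((signal - recon) ** 2) / np.sum(signal ** 2))

def acceptable(signal, modes, corr_max=0.1, loss_max=0.01):
    """Thresholds are illustrative placeholders, not the paper's values."""
    return (mean_adjacent_correlation(modes) <= corr_max
            and energy_loss(signal, modes) <= loss_max)

# For each candidate (K, alpha): modes = some_vmd(signal, K, alpha)
# keep the candidate with the smallest mean_bandwidth(modes, fs)
# among those for which acceptable(signal, modes) is True.
```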


Author(s):  
Shaojiang Dong ◽  
Dihua Sun ◽  
Baoping Tang ◽  
Zhengyuan Gao ◽  
Yingrui Wang ◽  
...  

In order to effectively recognize the bearing's running state, a new method based on kernel principal component analysis (KPCA) and the Morlet wavelet kernel support vector machine (MWSVM) was proposed. First, the gathered vibration signals were decomposed by empirical mode decomposition (EMD) to obtain the corresponding intrinsic mode functions (IMFs). The EMD energy entropy, which includes dominant fault information, is defined as the characteristic feature. However, the extracted features remained high-dimensional, and excessive redundant information still existed. Therefore, the nonlinear feature extraction method KPCA was introduced to extract the characteristic features and to reduce their dimension. The extracted characteristic features were input into the MWSVM to train and construct the running-state identification model, and the bearing's running-state identification was thereby realized. Test cases and actual cases were analyzed, and the results validate the effectiveness of the proposed algorithm.
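A minimal sketch of the feature and classification stages, assuming the IMFs come from an EMD implementation: EMD energy entropy is computed from the IMFs, KernelPCA reduces the feature set, and an SVM with a Morlet-wavelet-style custom kernel classifies the running state. The kernel expression follows the Morlet wavelet kernel form commonly used in the wavelet-SVM literature, not necessarily the authors' exact formulation; the parameter `a` and the KernelPCA settings are illustrative.

```python
# Sketch: EMD energy entropy -> KernelPCA -> SVM with a Morlet-style kernel.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC

def emd_energy_entropy(imfs):
    """Energy entropy of a signal's IMFs (imfs: array of shape (n_imfs, n))."""
    energies = np.array([np.sum(imf ** 2) for imf in imfs])
    p = energies / energies.sum()
    return float(-np.sum(p * np.log(p + 1e-12)))

def morlet_kernel(X, Y, a=1.0):
    """Gram matrix of a Morlet-wavelet kernel (product over dimensions)."""
    K = np.ones((X.shape[0], Y.shape[0]))
    for j in range(X.shape[1]):
        d = X[:, j][:, None] - Y[:, j][None, :]
        K *= np.cos(1.75 * d / a) * np.exp(-(d ** 2) / (2 * a ** 2))
    return K

# X_raw: one feature vector per signal (e.g. per-IMF energies + energy entropy)
# X = KernelPCA(n_components=5, kernel="rbf").fit_transform(X_raw)
# clf = SVC(kernel=morlet_kernel).fit(X, y_running_states)
```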


2020 ◽  
Vol 16 (8) ◽  
pp. 1088-1105
Author(s):  
Nafiseh Vahedi ◽  
Majid Mohammadhosseini ◽  
Mehdi Nekoei

Background: The poly(ADP-ribose) polymerases (PARPs) are a nuclear enzyme superfamily present in eukaryotes. Methods: In the present report, several efficient linear and non-linear methods, including multiple linear regression (MLR), support vector machine (SVM), and artificial neural networks (ANN), were successfully used to develop and establish quantitative structure-activity relationship (QSAR) models capable of predicting pEC50 values of tetrahydropyridopyridazinone derivatives as effective PARP inhibitors. Principal component analysis (PCA) was used for a rational division of the whole data set and the selection of the training and test sets. A genetic algorithm (GA) variable selection method was employed to select, from the large pool of calculated descriptors, the optimal subset of descriptors that make the most significant contributions to the overall inhibitory activity. Results: The accuracy and predictability of the proposed models were further confirmed using cross-validation, validation through an external test set, and Y-randomization (chance correlation) approaches. Moreover, an exhaustive statistical comparison was performed on the outputs of the proposed models. The results revealed that the non-linear modeling approaches, SVM and ANN, provide much greater prediction capability. Conclusion: Among the constructed models, and in terms of the root mean square error of prediction (RMSEP), the cross-validation coefficients (Q²LOO and Q²LGO), and the R² and F-statistic values for the training set, the predictive power of the GA-SVM approach was better. However, compared with MLR and SVM, the statistical parameters for the test set were better with the GA-ANN model.
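A hedged sketch of the validation statistics mentioned above: a support vector regressor fitted on a subset of descriptors is judged by leave-one-out cross-validated Q² on the training set and RMSEP on an external test set. The GA descriptor selection and the PCA-based split are not reproduced; the selected descriptor columns and the SVR settings are assumptions for illustration.

```python
# Sketch: SVR-based QSAR model with Q2(LOO) and RMSEP validation statistics.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.svm import SVR

def q2_loo(model, X_train, y_train):
    """Leave-one-out cross-validated Q2 on the training set."""
    y_pred = cross_val_predict(model, X_train, y_train, cv=LeaveOneOut())
    ss_res = np.sum((y_train - y_pred) ** 2)
    ss_tot = np.sum((y_train - y_train.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def rmsep(model, X_test, y_test):
    """Root mean square error of prediction on the external test set."""
    return float(np.sqrt(np.mean((y_test - model.predict(X_test)) ** 2)))

# selected: indices of GA-chosen descriptors (assumed given); y values are pEC50.
# model = SVR(kernel="rbf", C=10.0).fit(X_train[:, selected], y_train)
# print(q2_loo(SVR(kernel="rbf", C=10.0), X_train[:, selected], y_train))
# print(rmsep(model, X_test[:, selected], y_test))
```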


2020 ◽  
Vol 15 ◽  
Author(s):  
Shuwen Zhang ◽  
Qiang Su ◽  
Qin Chen

Abstract: Major animal diseases pose a great threat to animal husbandry and to human beings. With the deepening of globalization and the abundance of data resources, the prediction and analysis of animal diseases using big data are becoming more and more important. The focus of machine learning is to make computers learn from data and use the learned experience to analyze and predict. This paper first introduces the animal epidemic situation and machine learning, and then briefly reviews the application of machine learning to animal disease analysis and prediction. Machine learning is mainly divided into supervised and unsupervised learning. Supervised learning includes support vector machines, naive Bayes, decision trees, random forests, logistic regression, artificial neural networks, deep learning, and AdaBoost. Unsupervised learning includes the expectation-maximization algorithm, principal component analysis, hierarchical clustering, and maximum entropy (MaxEnt). Through the discussion in this paper, readers gain a clearer concept of machine learning and an understanding of its application prospects for animal diseases.
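Purely as an illustration of the supervised learners listed above, a sketch comparing several of them on a tabular set of animal health records with cross-validation; the data loader is a hypothetical placeholder, and no specific dataset is implied by the abstract.

```python
# Illustrative comparison of the supervised learners named in the abstract.
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

models = {
    "SVM": SVC(),
    "Naive Bayes": GaussianNB(),
    "Decision tree": DecisionTreeClassifier(),
    "Random forest": RandomForestClassifier(),
    "Logistic regression": LogisticRegression(max_iter=1000),
}

# X, y = load_animal_disease_records()   # hypothetical tabular features/labels
# for name, model in models.items():
#     print(name, cross_val_score(model, X, y, cv=5).mean())
```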

