Precise higher-order reflectivity and morphology models for early diagnosis of diabetic retinopathy using OCT images

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
A. Sharafeldeen ◽  
M. Elsharkawy ◽  
F. Khalifa ◽  
A. Soliman ◽  
M. Ghazal ◽  
...  

Abstract This study proposes a novel computer-assisted diagnostic (CAD) system for early diagnosis of diabetic retinopathy (DR) using optical coherence tomography (OCT) B-scans. The CAD system is based on fusing novel OCT markers that describe both the morphology/anatomy and the reflectivity of retinal layers to improve DR diagnosis. The system separates retinal layers automatically using a segmentation approach based on an adaptive appearance model and prior shape information. High-order morphological and novel reflectivity markers are extracted from the individual segmented layers. Namely, the morphological markers are layer thickness and tortuosity, while the reflectivity markers are the first-order reflectivity of the layer in addition to local and global high-order reflectivity based on a Markov-Gibbs random field (MGRF) and the gray-level co-occurrence matrix (GLCM), respectively. The extracted image-derived markers are represented using cumulative distribution function (CDF) descriptors. The constructed CDFs are then described using their statistical measures, i.e., the 10th through 90th percentiles in 10% increments. For individual layer classification, each extracted descriptor of a given layer is fed to a support vector machine (SVM) classifier with a linear kernel. The results of the four classifiers are then fused using a backpropagation neural network (BNN) to diagnose each retinal layer. For global subject diagnosis, the classification outputs (probabilities) of the twelve layers are fused using another BNN to make the final diagnosis of the B-scan. The system is validated and tested on 130 patients, with two scans per patient (one for each eye, i.e., 260 OCT images) and a balanced number of normal and DR subjects, using different validation strategies: 2-fold, 4-fold, 10-fold, and leave-one-subject-out (LOSO) cross-validation. The performance of the proposed system was evaluated using sensitivity, specificity, F1-score, and accuracy metrics. 
The system’s performance after fusing these different markers was better than that of the individual markers and of other machine learning fusion methods. Namely, it achieved a sensitivity, specificity, F1-score, and accuracy of 96.15%, 99.23%, 97.66%, and 97.69%, respectively, using the LOSO cross-validation technique. The reported results, based on the integration of morphology and reflectivity markers and on state-of-the-art machine learning classifiers, demonstrate the ability of the proposed system to diagnose DR early.
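The CDF percentile descriptor described above can be sketched in a few lines. This is a minimal stdlib illustration of the 10th-through-90th-percentile idea only; the segmentation and MGRF/GLCM marker extraction steps are not reproduced, and the sample thickness values are invented for the example.

```python
def cdf_percentile_descriptor(values, percentiles=range(10, 100, 10)):
    """Describe a marker's empirical CDF by its 10th..90th percentiles."""
    xs = sorted(values)
    n = len(xs)
    desc = []
    for p in percentiles:
        # nearest-rank percentile of the empirical distribution
        k = max(0, min(n - 1, round(p / 100 * (n - 1))))
        desc.append(xs[k])
    return desc

# Illustrative thickness samples (arbitrary units) for one segmented layer;
# the nine resulting features would be fed to the per-layer SVM.
thickness = [12, 15, 11, 14, 13, 16, 12, 18, 13]
descriptor = cdf_percentile_descriptor(thickness)
```

A descriptor of fixed length regardless of the number of pixels per layer is what allows one linear-kernel SVM per marker type, as in the study.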

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Mohamed Elsharkawy ◽  
Ahmed Sharafeldeen ◽  
Fatma Taher ◽  
Ahmed Shalaby ◽  
Ahmed Soliman ◽  
...  

Abstract The primary goal of this manuscript is to develop a computer-assisted diagnostic (CAD) system to assess pulmonary function and risk of mortality in patients with coronavirus disease 2019 (COVID-19). The CAD system processes chest X-ray data and provides accurate, objective imaging markers to assist in identifying patients with a higher risk of death, who are thus more likely to require mechanical ventilation and/or more intensive clinical care. To obtain an accurate stochastic model that can detect the severity of lung infection, we develop a second-order Markov-Gibbs random field (MGRF) model invariant under rigid transformation (translation or rotation of the image) as well as scale (i.e., pixel size). The parameters of the MGRF model are learned automatically, given a training set of X-ray images with the affected lung regions labeled. An X-ray input to the system undergoes pre-processing to correct for non-uniformity of illumination and to delimit the boundary of the lung, using either a fully automated segmentation routine or manual delineation provided by the radiologist, prior to the diagnosis. The steps of the proposed methodology are: (i) estimate the Gibbs energy at several different radii to describe the inhomogeneity in lung infection; (ii) compute the cumulative distribution function (CDF) as a new representation to describe the local inhomogeneity in the infected region of the lung; and (iii) input the CDFs to a new neural network-based fusion system to determine whether the severity of lung infection is low or high. This approach is tested on 200 clinical X-rays from 200 COVID-19-positive patients, 100 of whom died and 100 of whom recovered, using multiple training/testing processes including leave-one-subject-out (LOSO), tenfold, fourfold, and twofold cross-validation tests. The Gibbs energy for lung pathology was estimated at three concentric rings of increasing radii. 
The accuracy and Dice similarity coefficient (DSC) of the system steadily improved as the radius increased. The overall CAD system combined the estimated Gibbs energy information from all radii and achieved a sensitivity, specificity, accuracy, and DSC of 100%, 97% ± 3%, 98% ± 2%, and 98% ± 2%, respectively, using twofold cross-validation. Alternative classification algorithms, including the support vector machine, random forest, naive Bayes classifier, K-nearest neighbors, and decision trees, all produced inferior results compared to the proposed neural network used in this CAD system. The experiments demonstrate the feasibility of the proposed system as a novel tool to objectively assess disease severity and predict mortality in COVID-19 patients. The proposed tool can assist physicians in determining which patients might require more intensive clinical care, such as mechanical respiratory support.
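The second-order Gibbs energy at a given radius can be sketched as a sum of pairwise potentials over pixel pairs at that separation. The learned MGRF potentials from the paper are replaced here by a simple agreement potential (+1/-1), so the values are illustrative only; the point is that homogeneous regions yield lower energy than noisy ones.

```python
def gibbs_energy(labels, radius):
    """Second-order energy over horizontal/vertical pixel pairs at `radius`."""
    h, w = len(labels), len(labels[0])
    energy = 0.0
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, radius), (radius, 0)):
                ny, nx = y + dy, x + dx
                if ny < h and nx < w:
                    # lower energy when the paired labels agree (homogeneity)
                    energy += -1.0 if labels[y][x] == labels[ny][nx] else 1.0
    return energy

# A homogeneous "infected" patch versus a maximally inhomogeneous one
uniform = [[1] * 4 for _ in range(4)]
checker = [[(x + y) % 2 for x in range(4)] for y in range(4)]
e_uniform = gibbs_energy(uniform, 1)
e_checker = gibbs_energy(checker, 1)
```

Evaluating the energy at several radii, as in step (i) above, captures inhomogeneity at multiple spatial scales.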


2021 ◽  
Vol 13 (2) ◽  
pp. 1199-1208
Author(s):  
N. Ajaypradeep ◽  
Dr.R. Sasikala

Autism is a developmental disorder that affects the cognitive, social, and behavioural functioning of a person. When a person is affected by autism spectrum disorder, he/she will exhibit peculiar behaviours, and those symptoms begin in the patient's childhood. Early diagnosis of autism is an important and challenging task. Behavioural analysis, a well-known therapeutic practice, can be adopted for earlier diagnosis of autism. Machine learning is a computational methodology that can be applied to a wide range of applications in order to obtain efficient outputs. At present, machine learning is especially applied in medical applications such as disease prediction. In our study, we evaluated various machine learning algorithms (Naive Bayes (NB), Support Vector Machines (SVM), and k-Nearest Neighbours (KNN)) with k-fold cross-validation on 3 datasets retrieved from the UCI repository. Additionally, we validated the effective accuracy of the estimated results using a clustered cross-validation strategy. Clustered cross-validation scrutinises the parameters that contribute most to the dataset, and it induces hyperparameter tuning, which yields trusted results as it involves double validation. On applying clustered cross-validation to an SVM-based model, we obtained an accuracy of 99.6% on the autism child dataset.
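The k-fold protocol used above can be sketched with the standard library alone. The classifier here is a stand-in majority-vote baseline, not the study's SVM, and contiguous folds are an assumed simplification (real evaluations usually shuffle or stratify).

```python
from collections import Counter

def k_fold_splits(n_samples, k):
    """Yield (train_idx, test_idx) pairs for contiguous k-fold CV."""
    fold = n_samples // k
    for i in range(k):
        start = i * fold
        stop = (i + 1) * fold if i < k - 1 else n_samples
        test = list(range(start, stop))
        train = [j for j in range(n_samples) if j < start or j >= stop]
        yield train, test

def majority_baseline_accuracy(labels, k=3):
    """Cross-validated accuracy of predicting each fold's majority class."""
    correct = 0
    for train, test in k_fold_splits(len(labels), k):
        majority = Counter(labels[i] for i in train).most_common(1)[0][0]
        correct += sum(labels[i] == majority for i in test)
    return correct / len(labels)

acc = majority_baseline_accuracy([1, 1, 1, 0, 1, 1], k=3)
```

Every sample is scored exactly once, which is the property that makes k-fold estimates less optimistic than training-set accuracy.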


Diagnostics ◽  
2021 ◽  
Vol 11 (10) ◽  
pp. 1810
Author(s):  
Toby Collins ◽  
Marianne Maktabi ◽  
Manuel Barberio ◽  
Valentin Bencteux ◽  
Boris Jansen-Winkeln ◽  
...  

There are approximately 1.8 million diagnoses of colorectal cancer, 1 million diagnoses of stomach cancer, and 0.6 million diagnoses of esophageal cancer each year globally. An automatic computer-assisted diagnostic (CAD) tool to rapidly detect colorectal and esophagogastric cancer tissue in optical images would be hugely valuable to a surgeon during an intervention. Based on a colon dataset with 12 patients and an esophagogastric dataset of 10 patients, several state-of-the-art machine learning methods have been trained to detect cancer tissue using hyperspectral imaging (HSI), including Support Vector Machines (SVM) with radial basis function kernels, Multi-Layer Perceptrons (MLP), and 3D Convolutional Neural Networks (3DCNN). A leave-one-patient-out cross-validation (LOPOCV) with and without combining these sets was performed. The ROC-AUC score of the 3DCNN was slightly higher than those of the MLP and SVM, with a difference of 0.04 AUC. The best performance was achieved with the 3DCNN for colon cancer and esophagogastric cancer detection, with a high ROC-AUC of 0.93. The 3DCNN also achieved the best Dice scores of 0.49 and 0.41 on the colon and esophagogastric datasets, respectively. These scores improved significantly, to 0.58 and 0.51 respectively, when a patient-specific decision threshold was used. This indicates that, in practical use, an HSI-based CAD system with an interactive decision threshold is likely to be valuable. Experiments were also performed to measure the benefit of combining the colorectal and esophagogastric datasets (22 patients), and this yielded significantly better results with the MLP and SVM models.
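The patient-specific decision threshold idea can be sketched as a sweep over candidate thresholds on per-pixel cancer probabilities, keeping whichever maximizes the Dice score against a reference mask. The probabilities, mask, and candidate grid below are invented for illustration, not taken from the paper's HSI models.

```python
def dice(pred, truth):
    """Dice overlap between two binary masks given as flat sequences."""
    inter = sum(1 for p, t in zip(pred, truth) if p and t)
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0

def best_threshold(probs, truth, candidates=(0.3, 0.4, 0.5, 0.6, 0.7)):
    """Return (threshold, dice) maximizing Dice for this patient."""
    scored = []
    for t in candidates:
        pred = [p >= t for p in probs]
        scored.append((dice(pred, truth), t))
    best_dice, best_t = max(scored)
    return best_t, best_dice

probs = [0.9, 0.65, 0.55, 0.45, 0.2]   # per-pixel cancer probabilities
truth = [1, 1, 1, 0, 0]                 # reference annotation
t, d = best_threshold(probs, truth)
```

In interactive use the surgeon would adjust the threshold rather than optimize it against ground truth, but the mechanism is the same.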


2021 ◽  
Vol 15 ◽  
Author(s):  
Mona Matar ◽  
Suleyman A. Gokoglu ◽  
Matthew T. Prelich ◽  
Christopher A. Gallo ◽  
Asad K. Iqbal ◽  
...  

This research uses machine-learned computational analyses to predict the cognitive performance impairment of rats induced by irradiation. The experimental data in the analyses is from a rodent model exposed to ≤15 cGy of individual galactic cosmic radiation (GCR) ions: 4He, 16O, 28Si, 48Ti, or 56Fe, expected for a Lunar or Mars mission. This work investigates rats at a subject-based level and uses performance scores taken before irradiation to predict impairment in attentional set-shifting (ATSET) data post-irradiation. Here, the worst performing rats of the control group define the impairment thresholds based on population analyses via cumulative distribution functions, leading to the labeling of impairment for each subject. A significant finding is the exhibition of a dose-dependent increasing probability of impairment for 1 to 10 cGy of 28Si or 56Fe in the simple discrimination (SD) stage of the ATSET, and for 1 to 10 cGy of 56Fe in the compound discrimination (CD) stage. On a subject-based level, implementing machine learning (ML) classifiers such as the Gaussian naïve Bayes, support vector machine, and artificial neural networks identifies rats that have a higher tendency for impairment after GCR exposure. The algorithms employ the experimental prescreen performance scores as multidimensional input features to predict each rodent’s susceptibility to cognitive impairment due to space radiation exposure. The receiver operating characteristic and the precision-recall curves of the ML models show a better prediction of impairment when 56Fe is the ion in question in both SD and CD stages. They, however, do not depict impairment due to 4He in SD and 28Si in CD, suggesting no dose-dependent impairment response in these cases. One key finding of our study is that prescreen performance scores can be used to predict the ATSET performance impairments. 
This result is significant to crewed space missions as it supports the potential of predicting an astronaut’s impairment in a specific task before spaceflight through the implementation of appropriately trained ML tools. Future research can focus on constructing ML ensemble methods to integrate the findings from the methodologies implemented in this study for more robust predictions of cognitive decrements due to space radiation exposure.
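The impairment-labeling step described above (worst control performers define the threshold via the empirical CDF) can be sketched as follows. The 10th-percentile cutoff and the score values are assumed for illustration; the study derives its thresholds from its own population analyses.

```python
def percentile(values, p):
    """Nearest-rank percentile of the empirical distribution."""
    xs = sorted(values)
    k = max(0, min(len(xs) - 1, round(p / 100 * (len(xs) - 1))))
    return xs[k]

def label_impaired(control_scores, subject_scores, cutoff_pct=10):
    """Label each subject impaired if it scores below the control cutoff."""
    threshold = percentile(control_scores, cutoff_pct)
    return [score < threshold for score in subject_scores]

# Hypothetical ATSET-style performance scores (higher = better)
control = [70, 75, 80, 82, 85, 88, 90, 92, 95, 98]
labels = label_impaired(control, [60, 74, 91])
```

These binary labels are what the Gaussian naive Bayes, SVM, and neural-network classifiers would then learn to predict from the prescreen features.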


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Yao Huimin

With the development of cloud computing and distributed cluster technology, the concept of big data has been expanded and extended in terms of capacity and value, and machine learning technology has also received unprecedented attention in recent years. Traditional machine learning algorithms cannot be parallelized effectively, so a parallelized support vector machine based on the Spark big data platform is proposed. Firstly, the big data platform is designed with the Lambda architecture, which is divided into three layers: the Batch Layer, the Serving Layer, and the Speed Layer. Secondly, in order to improve the training efficiency of support vector machines on large-scale data, when merging two support vector machines, the "special points" other than support vectors are considered, that is, the points at which the non-support vectors in one subset violate the training results of the other subset, and a cross-validation merging algorithm is proposed. Then, a parallelized support vector machine based on cross-validation is proposed, and the parallelization of the support vector machine is realized on the Spark platform. Finally, experiments on different datasets verify the effectiveness and stability of the proposed method. Experimental results show that the proposed parallelized support vector machine has outstanding performance in speed-up ratio, training time, and prediction accuracy.
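The "special points" check used when merging two sub-models can be sketched for the linear case: points of one subset that fall inside the margin of the other subset's decision function are kept for retraining. The (w, b) pairs here are assumed linear models and the data are invented; the Spark distribution itself is not reproduced.

```python
def violates(point, label, w, b, margin=1.0):
    """True if (point, label) falls inside the other model's margin."""
    score = sum(wi * xi for wi, xi in zip(w, point)) + b
    return label * score < margin

def special_points(subset, other_model):
    """Points of `subset` that violate the other subset's trained model."""
    w, b = other_model
    return [(x, y) for x, y in subset if violates(x, y, w, b)]

# Subset A's labeled points, checked against subset B's model w=[1, 0], b=0
subset_a = [((3.0, 1.0), +1), ((0.5, 2.0), +1), ((-2.0, 0.0), -1)]
kept = special_points(subset_a, ([1.0, 0.0], 0.0))
```

Only the kept points (plus both support-vector sets) need to enter the merged retraining, which is what makes the merge cheaper than training on the full union.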


PLoS ONE ◽  
2021 ◽  
Vol 16 (10) ◽  
pp. e0257901
Author(s):  
Yanjing Bi ◽  
Chao Li ◽  
Yannick Benezeth ◽  
Fan Yang

Phoneme pronunciation is usually considered a basic skill for learning a foreign language. Practicing pronunciation in a computer-assisted way is helpful in a self-directed or long-distance learning environment. Recent research indicates that machine learning is a promising method for building high-performance computer-assisted pronunciation training modalities. Many data-driven classification models, such as support vector machines, back-propagation networks, deep neural networks, and convolutional neural networks, are increasingly widely used for this purpose. Yet, the acoustic waveforms of phonemes are essentially modulated from the base vibrations of the vocal cords, and this fact makes the predictors collinear, distorting the classification models. A commonly used solution to this issue is to suppress the collinearity of the predictors via the partial least squares (PLS) regression algorithm, which obtains high-quality predictor weightings via predictor relationship analysis. However, as linear regressors, classifiers of this type possess very simple topological structures, constraining their universality. To address this, this paper presents a heterogeneous phoneme recognition framework that can further benefit phoneme pronunciation diagnostic tasks by combining partial least squares with support vector machines. A French phoneme dataset containing 4830 samples is established for the evaluation experiments. The experiments demonstrate that the new method improves the accuracy of the phoneme classifiers by 0.21–8.47% compared to the state of the art at different training data densities.
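The collinearity-suppression idea can be sketched with a single PLS component: the first weight vector is proportional to X^T y for mean-centered predictors, and projecting onto it collapses collinear predictors into one informative score before the classifier. This one-component sketch with invented data is a stand-in for the full PLS-SVM pipeline, not its implementation.

```python
def pls_first_direction(X, y):
    """First PLS weight vector for centered X (rows = samples) and labels y."""
    n, d = len(X), len(X[0])
    # w_j proportional to the covariance of predictor j with the labels
    w = [sum(X[i][j] * y[i] for i in range(n)) for j in range(d)]
    norm = sum(v * v for v in w) ** 0.5
    return [v / norm for v in w]

def project(X, w):
    """Latent scores: each sample projected onto the PLS direction."""
    return [sum(xj * wj for xj, wj in zip(row, w)) for row in X]

# Two perfectly collinear predictors; PLS merges them into one score
X = [[1.0, 1.0], [-1.0, -1.0], [2.0, 2.0], [-2.0, -2.0]]
y = [1, -1, 1, -1]
w = pls_first_direction(X, y)
scores = project(X, w)
```

The SVM then operates on the latent scores, so the collinear raw predictors never distort its decision boundary.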


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5362 ◽  
Author(s):  
Luca Antognoli ◽  
Sara Moccia ◽  
Lucia Migliorelli ◽  
Sara Casaccia ◽  
Lorenzo Scalise ◽  
...  

Background: Heartbeat detection is a crucial step in several clinical fields. The Laser Doppler Vibrometer (LDV) is a promising non-contact measurement technique for heartbeat detection. The aim of this work is to assess whether machine learning can be used to detect heartbeats from the carotid LDV signal. Methods: The performance of the Support Vector Machine (SVM), Decision Tree (DT), Random Forest (RF), and K-Nearest Neighbor (KNN) classifiers was compared using leave-one-subject-out cross-validation as the testing protocol on an LDV dataset collected from 28 subjects. The classification was conducted on LDV signal windows, which were labeled as beat if they contained a beat, or no-beat otherwise. The labeling procedure was performed using electrocardiography as the gold standard. Results: For the beat class, the f1-score (f1) values were 0.93, 0.93, 0.95, and 0.96 for RF, DT, KNN, and SVM, respectively. No statistical differences were found between the classifiers. When testing the SVM on the full-length (10 min long) LDV signals, to simulate a real-world application, we achieved a median macro-f1 of 0.76. Conclusions: Using machine learning for heartbeat detection from carotid LDV signals showed encouraging results, representing a promising step in the field of contactless cardiovascular signal analysis.
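The window-labeling step against the ECG gold standard can be sketched as follows: the LDV signal is cut into fixed-length windows, and a window is labeled beat if any ECG R-peak time falls inside it. The window length and peak positions are illustrative, not the study's actual values.

```python
def label_windows(n_samples, window, r_peak_indices):
    """Return one beat/no-beat label per non-overlapping window."""
    labels = []
    for start in range(0, n_samples - window + 1, window):
        has_beat = any(start <= p < start + window for p in r_peak_indices)
        labels.append("beat" if has_beat else "no-beat")
    return labels

# 10-sample windows over a 50-sample signal with R-peaks at samples 7 and 33
labels = label_windows(50, 10, [7, 33])
```

These labels, paired with features extracted from the corresponding LDV windows, form the supervised training set for the four classifiers.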


2017 ◽  
Vol 58 (1) ◽  
pp. 123-134 ◽  
Author(s):  
Koujiro Ikushima ◽  
Hidetaka Arimura ◽  
Ze Jin ◽  
Hidetake Yabu-uchi ◽  
Jumpei Kuwazuru ◽  
...  

Abstract We have proposed a computer-assisted framework for machine-learning-based delineation of gross tumor volumes (GTVs) following an optimum contour selection (OCS) method. The key idea of the proposed framework was to feed image features around GTV contours (determined based on the knowledge of radiation oncologists) into a machine-learning classifier during the training step, after which the classifier produces the ‘degree of GTV’ for each voxel in the testing step. Initial GTV regions were extracted using a support vector machine (SVM) that learned the image features inside and outside each tumor region (determined by radiation oncologists). The leave-one-out-by-patient test was employed for the training and testing steps of the proposed framework. The final GTV regions were determined using the OCS method, which selects a globally optimum object contour from multiple active delineations obtained with a level set method (LSM) around the GTV. The efficacy of the proposed framework was evaluated in 14 lung cancer cases [solid: 6, ground-glass opacity (GGO): 4, mixed GGO: 4] using the 3D Dice similarity coefficient (DSC), which denotes the degree of region similarity between the GTVs contoured by radiation oncologists and those determined using the proposed framework. The proposed framework achieved an average DSC of 0.777 for the 14 cases, whereas the OCS-based framework produced an average DSC of 0.507. The average DSCs for GGO and mixed GGO obtained by the proposed framework were 0.763 and 0.701, respectively. The proposed framework can be employed as a tool to assist radiation oncologists in delineating various GTV regions.


2021 ◽  
Vol 8 (2) ◽  
pp. 311
Author(s):  
Mohammad Farid Naufal

Weather is an important factor considered in various decision-making. Manual weather classification by humans is time-consuming and inconsistent. Computer vision is a branch of science that enables computers to recognize or classify images. This can help develop self-autonomous machines that do not depend on an internet connection and can perform their own calculations in real time. There are several popular image classification algorithms, namely K-Nearest Neighbors (KNN), Support Vector Machine (SVM), and Convolutional Neural Network (CNN). KNN and SVM are machine learning classification algorithms, while CNN is a deep neural network classification algorithm. This study aims to compare the performance of these three algorithms so that the performance gap between them is known. The test architecture uses 5-fold cross-validation. Several parameters are used to configure the KNN, SVM, and CNN algorithms. From the tests conducted, CNN had the best performance, with an accuracy of 0.942, precision of 0.943, recall of 0.942, and F1 score of 0.942.
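Of the three algorithms compared above, KNN is simple enough to sketch fully with the standard library (Euclidean distance, majority vote). The image features and weather labels below are invented placeholders; in the study they would come from the extracted image descriptors.

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label); returns the majority label
    among the k nearest training samples (squared Euclidean distance)."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), label)
        for x, label in train
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical 2-D image features with weather labels
train = [((0.0, 0.0), "cloudy"), ((0.1, 0.2), "cloudy"),
         ((1.0, 1.0), "sunny"), ((0.9, 1.1), "sunny"), ((1.2, 0.9), "sunny")]
pred = knn_predict(train, (1.0, 0.95), k=3)
```

The choice of k is one of the parameters the study tunes; small k follows local structure closely, larger k smooths the decision boundary.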


2021 ◽  
Author(s):  
Jeniffer Luz ◽  
Scenio De Araujo ◽  
Caio Abreu ◽  
Juvenal Silva Neto ◽  
Carlos Gulo

Since the beginning of the COVID-19 outbreak, the scientific community has been making efforts in several areas, either by seeking vaccines or by improving the early diagnosis of the disease, to contribute to the fight against the SARS-CoV-2 virus. The use of X-ray imaging exams becomes an ally in early diagnosis and has been the subject of research by the medical image processing and analysis community. Although the diagnosis of diseases by image is a consolidated research theme, the proposed approach aims to: a) apply state-of-the-art machine learning techniques to X-ray images for COVID-19 diagnosis; b) identify COVID-19 features in imaging examinations; c) develop an Artificial Intelligence model to reduce the disease diagnosis time; in addition to demonstrating the potential of the Artificial Intelligence area as an incentive for the formation of critical mass and for encouraging research in machine learning and in the processing and analysis of medical images in the State of Mato Grosso, Brazil. Initial results were obtained from experiments carried out with the SVM (Support Vector Machine) classifier, induced on a publicly available image dataset from the Kaggle repository. Six attributes suggested by Haralick, calculated on the gray-level co-occurrence matrix, were used to represent the images. The prediction model was able to achieve 82.5% accuracy in recognizing the disease. The next stage of the studies includes the study of deep learning models.
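The gray-level co-occurrence matrix and Haralick-style attributes mentioned above can be sketched for the horizontal neighbor offset. Two features (contrast and angular second moment, often called energy) stand in for the six attributes used in the study; the tiny image is invented for the example.

```python
def glcm(image, levels):
    """Normalized co-occurrence counts for pixel pairs one step to the right."""
    counts = [[0] * levels for _ in range(levels)]
    total = 0
    for row in image:
        for a, b in zip(row, row[1:]):
            counts[a][b] += 1
            total += 1
    return [[c / total for c in row] for row in counts]

def contrast(P):
    """Haralick contrast: weights co-occurrences by squared level difference."""
    n = len(P)
    return sum(P[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))

def energy(P):
    """Angular second moment: high for images with few dominant pairings."""
    return sum(p * p for row in P for p in row)

image = [[0, 0, 1, 1], [0, 0, 1, 1], [2, 2, 3, 3]]  # 4 gray levels
P = glcm(image, 4)
```

Each image is thus reduced to a short feature vector, which is what the SVM classifier is induced on.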

