Improving Accuracy for Diabetes Mellitus Prediction by Using Deepnet

2020 ◽  
Vol 12 (1) ◽  
Author(s):  
Riyad Alshammari ◽  
Noorah Atiyah ◽  
Tahani Daghistani ◽  
Abdulwahhab Alshammari

Diabetes is a salient issue and a significant health care concern for many nations. The forecast for the prevalence of diabetes is on the rise. Hence, building a machine learning prediction model to assist in the identification of diabetic patients is of great interest. This study aims to create a machine learning model that is capable of predicting diabetes with high performance. The study used the BigML platform to train four machine learning algorithms, namely, Deepnet, Models (decision tree), Ensemble, and Logistic Regression, on data sets collected from the Ministry of National Guard Hospital Affairs (MNGHA) in Saudi Arabia between 2013 and 2015. The comparative evaluation criteria for the four algorithms included accuracy, precision, recall, F-measure, and the phi coefficient. Results show that the Deepnet algorithm achieved higher performance than the other machine learning algorithms on these evaluation metrics.
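The five-metric comparison described above can be sketched with stand-in models on synthetic data (the MNGHA records are not public); here "Deepnet" is approximated by a multilayer perceptron, and for binary labels scikit-learn's Matthews correlation coefficient equals the phi coefficient.

```python
# Sketch only: stand-in models and synthetic data, not the study's dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, matthews_corrcoef)
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "Deepnet (MLP stand-in)": MLPClassifier(max_iter=500, random_state=0),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Ensemble (random forest)": RandomForestClassifier(random_state=0),
    "Logistic regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    # matthews_corrcoef on binary labels is the phi coefficient
    print(name,
          f"acc={accuracy_score(y_te, pred):.3f}",
          f"prec={precision_score(y_te, pred):.3f}",
          f"rec={recall_score(y_te, pred):.3f}",
          f"F1={f1_score(y_te, pred):.3f}",
          f"phi={matthews_corrcoef(y_te, pred):.3f}")
```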

Author(s):  
George W Clark ◽  
Todd R Andel ◽  
J Todd McDonald ◽  
Tom Johnsten ◽  
Tom Thomas

Robotic systems are no longer simply built and designed to perform sequential repetitive tasks, primarily in static manufacturing environments. Systems such as autonomous vehicles use intricate machine learning algorithms to adapt their behavior to dynamic conditions in their operating environment. These machine learning algorithms provide an additional attack surface for an adversary to exploit. Since an attack on a robotic system such as an autonomous vehicle has the potential to cause great damage and harm to humans, it is essential that detection of and defenses against these attacks be explored. This paper discusses the plausibility of direct and indirect cyberattacks on a machine learning model through the use of a virtual autonomous vehicle operating in a simulation environment under machine learning control. Using this vehicle, the paper proposes various methods for detecting cyberattacks on its machine learning model and discusses possible defense mechanisms to prevent such attacks.
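One detection idea in this vein can be sketched as a runtime monitor that flags control outputs deviating sharply from the vehicle's recent behaviour; the window size and z-score threshold below are illustrative assumptions, not values from the paper.

```python
# Hypothetical runtime monitor: flag control commands that are statistical
# outliers relative to a sliding window of recent commands.
from collections import deque
import math

class ControlMonitor:
    """Flag steering commands far outside the recent command distribution."""

    def __init__(self, window=50, z_threshold=4.0):
        self.history = deque(maxlen=window)
        self.z_threshold = z_threshold

    def check(self, command: float) -> bool:
        """Return True if `command` looks anomalous (a possible attack)."""
        anomalous = False
        if len(self.history) >= 10:
            mean = sum(self.history) / len(self.history)
            var = sum((x - mean) ** 2 for x in self.history) / len(self.history)
            std = math.sqrt(var) or 1e-9   # avoid division by zero
            anomalous = abs(command - mean) / std > self.z_threshold
        self.history.append(command)
        return anomalous

monitor = ControlMonitor()
for t in range(100):
    monitor.check(0.1 * math.sin(t / 10))   # normal steering trace
print(monitor.check(5.0))                   # a sudden adversarial swerve: True
```

A real system would monitor the model's inputs as well, since indirect attacks (e.g. poisoned training data) may produce commands that look locally plausible.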


Author(s):  
Mouhammad A Jumaa ◽  
Zeinab Zoghi ◽  
Syed Zaidi ◽  
Nils Mueller‐Kronast ◽  
Osama Zaidat ◽  
...  

Introduction: Machine learning algorithms have emerged as powerful predictive tools in the field of acute ischemic stroke. Here, we examine the predictive performance of a machine learning algorithm compared to logistic regression for predicting functional outcomes in the prospective Systematic Evaluation of Patients Treated With Neurothrombectomy Devices for Acute Ischemic Stroke (STRATIS) Registry. Methods: The STRATIS Registry was a prospective, observational study of the use of the Solitaire device in acute ischemic stroke patients. Patients with posterior circulation stroke or missing 90-day mRS were excluded from the analysis. A statistical algorithm (logistic regression) and a machine learning algorithm (decision tree) were implemented on the preprocessed dataset using a 10-fold cross-validation method in which 80% of the data were fed into the models for training and the remaining 20% were used in the test phase to evaluate the models' performance in predicting the 90-day mRS score as a dichotomous output. Results: Of the 938 STRATIS patients, 702 with 90-day mRS were included. The machine learning model outperformed the logistic regression model, with an Area Under the Curve (AUC) score of 0.92±0.026 compared to 0.88±0.028 for logistic regression. Conclusions: Our machine learning model delivered improved performance compared with the statistical model in predicting 90-day functional outcome. More studies are needed to understand and externally validate the predictive capacity of our machine learning model.
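The reported comparison can be sketched as below: a decision tree versus logistic regression scored by ROC AUC under 10-fold cross-validation. Synthetic data stands in for the STRATIS variables, which are not public; only the cohort size (702) is taken from the abstract.

```python
# Sketch: decision tree vs. logistic regression under 10-fold CV, ROC AUC.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=702, n_features=20, random_state=1)

for name, model in [
    ("decision tree", DecisionTreeClassifier(max_depth=5, random_state=1)),
    ("logistic regression", LogisticRegression(max_iter=1000)),
]:
    auc = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
    print(f"{name}: AUC {auc.mean():.2f} ± {auc.std():.3f}")
```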


Polymers ◽  
2021 ◽  
Vol 13 (16) ◽  
pp. 2652
Author(s):  
Matthias Schmid ◽  
Dominik Altmann ◽  
Georg Steinbichler

The optimal machine settings in polymer processing are usually the result of time-consuming and expensive trials. We present a workflow that allows the basic machine settings for the plasticizing process in injection molding to be determined with the help of a simulation-driven machine learning model. Given the material, screw geometry, shot weight, and desired plasticizing time, the model is able to predict the back pressure and screw rotational speed required to achieve good melt quality. We show how data sets can be pre-processed in order to obtain a generalized model that performs well. Various supervised machine learning algorithms were compared, and the best approach was evaluated in experiments on a real machine using the predicted basic machine settings and three different materials. The neural network model that we trained generalized well with an overall absolute mean error of 0.27% and a standard deviation of 0.37% on unseen data (the test set). The experiments showed that the mean absolute errors between the real and desired plasticizing times were sufficiently small, and all predicted operating points achieved good melt quality. Our approach can provide the operators of injection molding machines with predictions of suitable initial operating points and, thus, reduce costs in the planning phase. Further, this approach gives insights into the factors that influence melt quality and can, therefore, increase our understanding of complex plasticizing processes.
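The inverse-prediction setup described above, from material and process inputs to (back pressure, screw rotational speed), can be sketched with a small neural network. The feature names and the smooth stand-in relation below are invented placeholders; the paper's model was trained on simulation data.

```python
# Sketch: multi-output regression from process inputs to machine settings.
import numpy as np
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# inputs: [shot_weight_g, plasticizing_time_s, screw_diameter_mm, melt_index]
X = rng.uniform([10, 2, 20, 5], [200, 20, 60, 40], size=(2000, 4))
# invented smooth relation standing in for the simulation results
y = np.column_stack([
    0.5 * X[:, 0] / X[:, 1] + 0.1 * X[:, 3],   # back pressure (bar)
    60 * X[:, 0] / (X[:, 1] * X[:, 2]),        # screw speed (rpm)
])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(64, 64),
                                   max_iter=2000, random_state=0))
model.fit(X_tr, y_tr)
print("MAE per target:",
      mean_absolute_error(y_te, model.predict(X_te), multioutput="raw_values"))
```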


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Rajat Garg ◽  
Anil Kumar ◽  
Nikunj Bansal ◽  
Manish Prateek ◽  
Shashi Kumar

Abstract Urban area mapping is an important application of remote sensing that aims at both estimating land cover and detecting change in land cover within urban areas. A major challenge in analyzing Synthetic Aperture Radar (SAR) based remote sensing data is the strong similarity between highly vegetated urban areas and oriented urban targets on the one hand and actual vegetation on the other. This similarity leads to misclassification of urban areas as forest cover. The present work is a precursor study for the dual-frequency L- and S-band NASA-ISRO Synthetic Aperture Radar (NISAR) mission and aims at minimizing the misclassification of such highly vegetated and oriented urban targets into the vegetation class with the help of deep learning. In this study, three machine learning algorithms, Random Forest (RF), K-Nearest Neighbour (KNN), and Support Vector Machine (SVM), have been implemented along with a deep learning model, DeepLabv3+, for semantic segmentation of Polarimetric SAR (PolSAR) data. It is a general perception that a large dataset is required for the successful implementation of any deep learning model, but in the field of SAR-based remote sensing a major issue is the unavailability of a large benchmark labeled dataset for implementing deep learning algorithms from scratch. In the current work, it has been shown that a pre-trained deep learning model, DeepLabv3+, outperforms the machine learning algorithms for the land use and land cover (LULC) classification task even with a small dataset, using transfer learning. The highest pixel accuracy of 87.78% and overall pixel accuracy of 85.65% have been achieved with DeepLabv3+; Random Forest performs best among the machine learning algorithms with an overall pixel accuracy of 77.91%, while SVM and KNN trail with overall accuracies of 77.01% and 76.47%, respectively. The highest precision of 0.9228 is recorded for the urban class in the semantic segmentation task with DeepLabv3+, while the machine learning algorithms SVM and RF gave comparable results with precisions of 0.8977 and 0.8958, respectively.
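The per-pixel machine-learning side of the comparison (the RF/KNN/SVM baselines) can be sketched as below; the DeepLabv3+ side additionally needs a segmentation framework and pre-trained weights. The polarimetric channels and labels here are simulated stand-ins, and a real PolSAR pipeline would extract decomposition features per pixel.

```python
# Sketch: per-pixel Random Forest classification of simulated PolSAR channels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
h, w, channels = 64, 64, 3                    # e.g. HH, HV, VV backscatter
image = rng.normal(size=(h, w, channels))
# invented ground truth: urban (1) vs. vegetation (0), noisy function of HH
labels = (image[..., 0] + 0.5 * rng.normal(size=(h, w)) > 0).astype(int)

pixels = image.reshape(-1, channels)          # one feature row per pixel
flat_labels = labels.reshape(-1)

train = rng.random(len(pixels)) < 0.5         # random train/test pixel split
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(pixels[train], flat_labels[train])
pred = rf.predict(pixels[~train])
print("overall pixel accuracy:", accuracy_score(flat_labels[~train], pred))
```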


2020 ◽  
Vol 6 ◽  
Author(s):  
Jaime de Miguel Rodríguez ◽  
Maria Eugenia Villafañe ◽  
Luka Piškorec ◽  
Fernando Sancho Caparrini

Abstract This work presents a methodology for the generation of novel 3D objects resembling wireframes of building types. These result from the reconstruction of interpolated locations within the learnt distribution of variational autoencoders (VAEs), a deep generative machine learning model based on neural networks. The data set used features a scheme for geometry representation based on a ‘connectivity map’ that is especially suited to express the wireframe objects that compose it. Additionally, the input samples are generated through ‘parametric augmentation’, a strategy proposed in this study that creates coherent variations among data by enabling a set of parameters to alter representative features on a given building type. In the experiments that are described in this paper, more than 150 k input samples belonging to two building types have been processed during the training of a VAE model. The main contribution of this paper has been to explore parametric augmentation for the generation of large data sets of 3D geometries, showcasing its problems and limitations in the context of neural networks and VAEs. Results show that the generation of interpolated hybrid geometries is a challenging task. Despite the difficulty of the endeavour, promising advances are presented.
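The latent-space interpolation at the core of the method can be sketched as below: decode points on the line between the latent codes of two building types. The decoder here is an untrained linear stand-in; in the paper it is the trained VAE decoder producing connectivity-map wireframes.

```python
# Sketch: interpolating between two latent codes and decoding the hybrids.
import numpy as np

rng = np.random.default_rng(0)
latent_dim, out_dim = 16, 48                  # e.g. 16 wireframe vertices in 3D
W = rng.normal(size=(latent_dim, out_dim))    # stand-in linear "decoder"

def decode(z):
    return np.tanh(z @ W)                     # placeholder for the VAE decoder

z_a = rng.normal(size=latent_dim)             # latent code of building type A
z_b = rng.normal(size=latent_dim)             # latent code of building type B

# reconstruct hybrid geometries at interpolated latent locations
for t in np.linspace(0.0, 1.0, 5):
    hybrid = decode((1 - t) * z_a + t * z_b)
    print(f"t={t:.2f}  first coords: {hybrid[:3].round(2)}")
```

In a trained VAE, z_a and z_b would come from the encoder's posterior means for samples of each building type, so the interpolated decodings trace a path through the learnt distribution.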


2021 ◽  
Author(s):  
Lukman Ismael ◽  
Pejman Rasti ◽  
Florian Bernard ◽  
Philippe Menei ◽  
Aram Ter Minassian ◽  
...  

BACKGROUND Functional MRI (fMRI) is an essential tool for the presurgical planning of brain tumor removal, allowing the identification of functional brain networks in order to preserve the patient's neurological functions. One fMRI technique used to identify functional brain networks is resting-state fMRI (rsfMRI). However, this technique is not routinely used, because an expert reviewer is needed to identify each functional network manually. OBJECTIVE We aimed to automate the detection of brain functional networks in rsfMRI data using deep learning and machine learning algorithms. METHODS We used the rsfMRI data of 82 healthy patients to test the diagnostic performance of our proposed end-to-end deep learning model against the reference functional networks identified manually by 2 expert reviewers. RESULTS Experimental results show that the proposed deep learning architecture obtained the best performance, with a correct recognition rate of 86%, demonstrating its superiority over the other machine learning algorithms tested on this classification task. CONCLUSIONS The proposed end-to-end deep learning model was the best-performing machine learning algorithm. Using this model to automate functional network detection in rsfMRI may broaden the use of rsfMRI, allowing the presurgical identification of these networks and thus helping to preserve the patient's neurological status. CLINICALTRIAL Comité de protection des personnes Ouest II, decision reference CPP 2012-25
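The classification task can be sketched as below: assign each rsfMRI component map to a functional-network label and score the correct recognition rate against expert labels. The data, map size, network names, and classifier are invented placeholders for the paper's end-to-end deep learning model.

```python
# Sketch: classifying flattened component maps into functional-network labels.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
networks = ["default-mode", "sensorimotor", "language", "visual"]
n_maps, voxels = 400, 200                     # flattened spatial maps (toy size)
centers = rng.normal(size=(len(networks), voxels))   # one prototype per network
y = rng.integers(len(networks), size=n_maps)
X = centers[y] + rng.normal(scale=1.0, size=(n_maps, voxels))  # noisy maps

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)
print("correct recognition rate:", (clf.predict(X_te) == y_te).mean())
```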


Author(s):  
Jia Luo ◽  
Dongwen Yu ◽  
Zong Dai

It is not feasible to process the huge amounts of structured and semi-structured data with manual methods. This study aims to solve the problem of processing huge data volumes through machine learning algorithms. We collected text data on companies' public opinion through crawlers, used the Latent Dirichlet Allocation (LDA) algorithm to extract keywords from the text, and applied fuzzy clustering to group the keywords into topics. The topic keywords were then used as a seed dictionary for new-word discovery. To verify the efficiency of machine learning in new-word discovery, algorithms based on association rules, N-gram, PMI, and Word2vec were used for comparative testing. The experimental results show that the machine-learning-based Word2vec algorithm achieved the highest accuracy, recall, and F-value.


2020 ◽  
pp. 426-429
Author(s):  
Devipriya A ◽  
Brindha D ◽  
Kousalya A

Eye state identification is a kind of common time-series classification problem that has become a hot topic in recent research. Electroencephalography (EEG) is widely used in eye-state classification to recognize human cognitive state. Previous research has validated the feasibility of machine learning and statistical approaches for EEG eye-state classification. This research proposes a novel approach for EEG eye-state identification using Gradual Characteristic Learning (GCL) based on neural networks. GCL is a novel machine learning approach that gradually imports and trains features one by one. Previous studies have verified that such an approach is suitable for solving a number of pattern recognition problems. However, little of that work on GCL focused on its application to time-series problems, so it remained unclear whether GCL could be used to tackle time-series problems such as EEG eye-state classification. Experimental results in this study show that, with proper feature extraction and feature ordering, GCL can not only efficiently cope with time-series classification problems, but also exhibit better classification performance, in terms of classification error rates, than conventional and some other approaches. Eye-state classification is performed and discussed with KNN classification, accuracy is improved, and finally eye-state classification with an ensemble machine learning model is discussed.
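The gradual feature-learning idea can be sketched as below: order features by usefulness, then add them to the model one at a time and track the error rate. A KNN classifier and mutual-information ranking stand in here; the study's GCL method is based on neural networks.

```python
# Sketch: error rate as features are imported one by one, best-ranked first.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=600, n_features=8, n_informative=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# rank features (most informative first), mirroring GCL's feature ordering
order = np.argsort(mutual_info_classif(X_tr, y_tr, random_state=0))[::-1]

errors = []
for k in range(1, len(order) + 1):
    cols = order[:k]                          # gradually grown feature set
    knn = KNeighborsClassifier().fit(X_tr[:, cols], y_tr)
    errors.append(1 - knn.score(X_te[:, cols], y_te))
print("error rate as features are added:", [round(e, 3) for e in errors])
```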

