Detection and defense of cyberattacks on the machine learning control of robotic systems

Author(s):  
George W Clark ◽  
Todd R Andel ◽  
J Todd McDonald ◽  
Tom Johnsten ◽  
Tom Thomas

Robotic systems are no longer simply built and designed to perform sequential repetitive tasks primarily in a static manufacturing environment. Systems such as autonomous vehicles make use of intricate machine learning algorithms to adapt their behavior to dynamic conditions in their operating environment. These machine learning algorithms provide an additional attack surface for an adversary to exploit in order to perform a cyberattack. Since an attack on robotic systems such as autonomous vehicles has the potential to cause great damage and harm to humans, it is essential that the detection of and defense against these attacks be explored. This paper discusses the plausibility of direct and indirect cyberattacks on a machine learning model through the use of a virtual autonomous vehicle operating in a simulation environment under machine learning control. Using this vehicle, the paper proposes various methods of detecting cyberattacks on its machine learning model and discusses possible defense mechanisms to prevent such attacks.
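As a hedged illustration of the kind of direct attack on a learned controller that the abstract alludes to, the sketch below perturbs an input to a toy linear classifier along the sign of the gradient (an FGSM-style adversarial example). The weights, input, and epsilon are invented for illustration and are not taken from the paper.

```python
# Minimal FGSM-style adversarial perturbation against a toy linear
# classifier (score = w . x, predict class 1 when score > 0).
# All numbers here are illustrative, not from the paper.

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def fgsm(w, x, eps):
    # For a linear model the gradient of the score w.r.t. x is just w;
    # step *against* the score to push the input across the boundary.
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

w = [2.0, -1.0]          # toy "learned" controller weights
x = [0.5, 0.2]           # benign sensor input, classified as 1
x_adv = fgsm(w, x, eps=0.6)

print(score(w, x))       # 0.8  -> class 1
print(score(w, x_adv))   # -1.0 -> class 0 (decision flipped)
```

A small, bounded perturbation of the input is enough to flip the decision, which is why the paper argues that detection has to look at more than just the raw input values.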

Author(s):  
Mouhammad A Jumaa ◽  
Zeinab Zoghi ◽  
Syed Zaidi ◽  
Nils Mueller‐Kronast ◽  
Osama Zaidat ◽  
...  

Introduction: Machine learning algorithms have emerged as powerful predictive tools in the field of acute ischemic stroke. Here, we examine the predictive performance of a machine learning algorithm compared to logistic regression for predicting functional outcomes in the prospective Systematic Evaluation of Patients Treated With Neurothrombectomy Devices for Acute Ischemic Stroke (STRATIS) Registry. Methods: The STRATIS Registry was a prospective, observational study of the use of the Solitaire device in acute ischemic stroke patients. Patients with posterior circulation stroke or missing 90-day mRS were excluded from the analysis. A statistical algorithm (logistic regression) and a machine learning algorithm (decision tree) were implemented on the preprocessed dataset using a 10-fold cross-validation method in which 80% of the data were fed into the models for training and the remaining 20% were used in the test phase to evaluate the performance of the models for predicting the 90-day mRS score as a dichotomous output. Results: Of the 938 STRATIS patients, 702 with 90-day mRS were included. The machine learning model outperformed the logistic regression model with a 0.92±0.026 Area Under Curve (AUC) score compared to a 0.88±0.028 AUC score obtained with logistic regression. Conclusions: Our machine learning model delivered improved performance in comparison with the statistical model in predicting 90-day functional outcome. More studies are needed to understand and externally validate the predictive capacity of our machine learning model.
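The comparison described in the Methods (logistic regression vs. a decision tree, cross-validated AUC on a dichotomous outcome) can be sketched as follows. Since the STRATIS registry data are not public, synthetic data stand in here, so the numbers printed are illustrative only.

```python
# Hedged sketch of the abstract's model comparison: logistic regression
# vs. a decision tree, scored by cross-validated AUC. Synthetic data
# stand in for the (non-public) STRATIS registry.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=700, n_features=20, random_state=0)

lr_auc = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         cv=10, scoring="roc_auc")
dt_auc = cross_val_score(DecisionTreeClassifier(random_state=0), X, y,
                         cv=10, scoring="roc_auc")

print(f"logistic regression AUC: {lr_auc.mean():.3f} +/- {lr_auc.std():.3f}")
print(f"decision tree AUC:       {dt_auc.mean():.3f} +/- {dt_auc.std():.3f}")
```

Reporting the mean and standard deviation across folds, as the abstract does (0.92±0.026 vs. 0.88±0.028), makes the comparison less sensitive to any single train/test split.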


2020 ◽  
Vol 12 (1) ◽  
Author(s):  
Riyad Alshammari ◽  
Noorah Atiyah ◽  
Tahani Daghistani ◽  
Abdulwahhab Alshammari

Diabetes is a salient issue and a significant health care concern for many nations. The forecast prevalence of diabetes is on the rise. Hence, building a machine learning model to assist in the identification of diabetic patients is of great interest. This study aims to create a machine learning model capable of predicting diabetes with high performance. The study used the BigML platform to train four machine learning algorithms, namely Deepnet, Models (decision tree), Ensemble, and Logistic Regression, on data sets collected from the Ministry of National Guard Hospital Affairs (MNGHA) in Saudi Arabia between 2013 and 2015. The comparative evaluation criteria for the four algorithms included accuracy, precision, recall, F-measure, and the phi coefficient. Results show that the Deepnet algorithm achieved higher performance than the other machine learning algorithms on the various evaluation metrics.
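All five evaluation criteria the abstract lists can be derived from a single confusion matrix. The sketch below computes them from invented counts (the MNGHA data are not public), just to make the definitions concrete.

```python
# Accuracy, precision, recall, F-measure and phi coefficient from a
# confusion matrix. The counts below are hypothetical, not from MNGHA.
import math

tp, tn, fp, fn = 40, 45, 5, 10   # invented diabetic/non-diabetic counts

accuracy  = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)
f_measure = 2 * precision * recall / (precision + recall)
phi = (tp * tn - fp * fn) / math.sqrt(
    (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))

print(round(accuracy, 3), round(precision, 3), round(recall, 3),
      round(f_measure, 3), round(phi, 3))
```

The phi coefficient is the least familiar of the five: unlike accuracy it stays informative on imbalanced classes, which matters for disease prediction where positives are usually the minority.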


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Rajat Garg ◽  
Anil Kumar ◽  
Nikunj Bansal ◽  
Manish Prateek ◽  
Shashi Kumar

Abstract Urban area mapping is an important application of remote sensing which aims at estimating land cover and land-cover change in urban areas. A major challenge in analyzing Synthetic Aperture Radar (SAR) remote sensing data is the strong similarity between highly vegetated urban areas and oriented urban targets on the one hand and actual vegetation on the other. This similarity leads to misclassification of urban areas as forest cover. The present work is a precursor study for the dual-frequency L- and S-band NASA-ISRO Synthetic Aperture Radar (NISAR) mission and aims at minimizing the misclassification of such highly vegetated and oriented urban targets into the vegetation class with the help of deep learning. In this study, three machine learning algorithms, Random Forest (RF), K-Nearest Neighbour (KNN), and Support Vector Machine (SVM), have been implemented along with a deep learning model, DeepLabv3+, for semantic segmentation of Polarimetric SAR (PolSAR) data. It is a general perception that a large dataset is required for the successful implementation of any deep learning model, but in the field of SAR-based remote sensing a major issue is the unavailability of a large benchmark labeled dataset for implementing deep learning algorithms from scratch. In the current work, it has been shown that a pre-trained DeepLabv3+ model outperforms the machine learning algorithms on the land use and land cover (LULC) classification task even with a small dataset, using transfer learning. The highest pixel accuracy of 87.78% and overall pixel accuracy of 85.65% have been achieved with DeepLabv3+; Random Forest performs best among the machine learning algorithms with an overall pixel accuracy of 77.91%, while SVM and KNN trail with overall accuracies of 77.01% and 76.47%, respectively.
The highest precision of 0.9228 is recorded for the urban class in the semantic segmentation task with DeepLabv3+, while the machine learning algorithms SVM and RF gave comparable results with precisions of 0.8977 and 0.8958, respectively.
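The pixel-level metrics the abstract reports (overall pixel accuracy and per-class precision) reduce to simple counts over the predicted and reference label maps. A minimal sketch on tiny invented maps, where label 1 hypothetically stands for the urban class:

```python
# Overall pixel accuracy and per-class precision from a predicted vs.
# reference segmentation map. The 3x3 label maps are invented; labels
# 0/1/2 might stand for vegetation/urban/water in a real LULC product.
import numpy as np

reference = np.array([[0, 0, 1],
                      [1, 1, 2],
                      [2, 2, 2]])
predicted = np.array([[0, 1, 1],
                      [1, 1, 2],
                      [2, 0, 2]])

overall_acc = (predicted == reference).mean()

urban = 1  # hypothetical label id for the urban class
urban_precision = ((predicted == urban) & (reference == urban)).sum() \
    / (predicted == urban).sum()

print(overall_acc)       # 7 of 9 pixels match
print(urban_precision)   # 3 of 4 urban predictions are correct
```

Per-class precision is the right lens for the paper's problem: it directly measures how often a pixel labeled "urban" really is urban, which is exactly where oriented urban targets get confused with vegetation.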


2021 ◽  
Author(s):  
Lukman Ismael ◽  
Pejman Rasti ◽  
Florian Bernard ◽  
Philippe Menei ◽  
Aram Ter Minassian ◽  
...  

BACKGROUND Functional MRI (fMRI) is an essential tool for the presurgical planning of brain tumor removal, allowing the identification of functional brain networks in order to preserve the patient's neurological functions. One fMRI technique used to identify functional brain networks is resting-state fMRI (rsfMRI). However, this technique is not routinely used because it requires an expert reviewer to manually identify each functional network. OBJECTIVE We aimed to automate the detection of brain functional networks in rsfMRI data using deep learning and machine learning algorithms. METHODS We used the rsfMRI data of 82 healthy subjects to compare the diagnostic performance of our proposed end-to-end deep learning model against the reference functional networks identified manually by 2 expert reviewers. RESULTS Experimental results show a best performance of 86% correct recognition rate, obtained with the proposed deep learning architecture, demonstrating its superiority over the other machine learning algorithms tested on this classification task. CONCLUSIONS The proposed end-to-end deep learning model was the best-performing machine learning algorithm. Using this model to automate functional network detection in rsfMRI may broaden the use of rsfMRI, allowing the presurgical identification of these networks and thus helping to preserve the patient's neurological status. CLINICALTRIAL Comité de protection des personnes Ouest II, decision reference CPP 2012-25


Author(s):  
Jia Luo ◽  
Dongwen Yu ◽  
Zong Dai

It is not feasible to process the huge amount of structured and semi-structured data with manual methods. This study aims to solve the problem of processing such data through machine learning algorithms. We collected text data on companies' public opinion through crawlers, used the Latent Dirichlet Allocation (LDA) algorithm to extract keywords from the text, and used fuzzy clustering to group the keywords into different topics. The topic keywords are then used as a seed dictionary for new word discovery. To verify the efficiency of machine learning in new word discovery, algorithms based on association rules, N-Gram, PMI, and Word2vec were used for comparative testing. The experimental results show that the Word2vec algorithm, based on a machine learning model, achieves the highest accuracy, recall, and F-value.
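Of the baselines the abstract compares, PMI is the simplest to show concretely: adjacent token pairs whose joint frequency is much higher than chance are candidate "new words". The toy corpus and scoring below are invented for illustration.

```python
# Minimal PMI-based new-word discovery, one of the baselines the
# abstract compares: score adjacent token pairs by pointwise mutual
# information. The corpus here is a tiny invented example.
import math
from collections import Counter

tokens = ("machine learning machine learning deep learning "
          "machine learning data mining").split()

unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))
n_uni, n_bi = sum(unigrams.values()), sum(bigrams.values())

def pmi(pair):
    p_xy = bigrams[pair] / n_bi
    p_x = unigrams[pair[0]] / n_uni
    p_y = unigrams[pair[1]] / n_uni
    return math.log(p_xy / (p_x * p_y))

candidates = {pair: pmi(pair) for pair in bigrams}
best = max(candidates, key=candidates.get)
print(best, round(candidates[best], 3))  # ('data', 'mining') scores highest
```

Note the known weakness this exposes: rare pairs whose parts occur nowhere else get the highest PMI, which is why practical systems combine PMI with frequency thresholds, and part of why learned embeddings such as Word2vec can do better.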


2020 ◽  
pp. 426-429
Author(s):  
Devipriya A ◽  
Brindha D ◽  
Kousalya A

Eye state identification is a kind of basic time-series classification problem that is also a hot spot in recent research. Electroencephalography (EEG) is widely used in eye-state classification to recognize a person's perceptual state. Previous research has validated the feasibility of machine learning and statistical approaches for EEG eye-state classification. This research proposes a novel approach for EEG eye-state identification using Gradual Characteristic Learning (GCL) based on neural networks. GCL is a novel machine learning methodology that gradually imports and trains features one by one. Previous studies have confirmed that such an approach is suitable for solving various pattern recognition problems. However, those works contain little research on applying GCL to time-series problems, so it remains unclear whether GCL can be used for time-series problems such as EEG eye-state classification. Experimental results in this study show that, with appropriate feature extraction and feature ordering, GCL can not only efficiently cope with time-series classification problems, but also exhibit better classification performance, in terms of classification error rates, than conventional and some other approaches. Eye-state classification is performed and discussed with KNN classification, the accuracy is improved, and finally eye-state classification with an ensemble machine learning model is discussed.
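The KNN baseline the abstract mentions can be sketched without any ML library: classify a query by the majority label among its nearest training samples. The two-dimensional "EEG feature" vectors and labels (0 = eyes closed, 1 = eyes open) below are invented for illustration.

```python
# Hedged sketch of a k-nearest-neighbour classifier like the KNN
# baseline in the abstract. Feature values and labels are invented
# (0 = eyes closed, 1 = eyes open).

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def knn_predict(train, query, k=1):
    # train: list of (feature_vector, label) pairs
    nearest = sorted(train, key=lambda fx: euclidean(fx[0], query))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)  # majority vote

train = [([4.2, 0.8], 0), ([4.0, 1.1], 0),
         ([6.5, 2.9], 1), ([6.9, 3.2], 1)]

print(knn_predict(train, [6.7, 3.0]))  # nearest samples are eyes-open
print(knn_predict(train, [4.1, 1.0]))  # nearest samples are eyes-closed
```

In a real EEG pipeline the feature vectors would come from the extraction and ordering step the abstract emphasizes; KNN itself has no training phase, which is why it serves as a natural baseline.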


2021 ◽  
Vol 2070 (1) ◽  
pp. 012243
Author(s):  
A Varun ◽  
Mechiri Sandeep Kumar ◽  
Karthik Murumulla ◽  
Tatiparthi Sathvik

Abstract Lathe turning is one of the manufacturing sector's most basic and important operations. From small businesses to large corporations, optimising machining operations is a key priority. Cooling systems in machining play an important role in determining surface roughness. The machine learning model under discussion assesses the surface roughness of lathe-turned surfaces for a variety of materials. To forecast surface roughness, the machine learning model is trained using machining parameters, material characteristics, tool properties, and cooling conditions such as dry, MQL, and hybrid nanoparticle-mixed MQL. Mixing in appropriate nanoparticles such as copper or aluminium may significantly improve the cooling system's heat absorption. To create a data set for training and testing the model, many standard journals and publications were used. Surface roughness varies with work parameter combinations. In MATLAB, a Gaussian Process Regression (GPR) method is utilised to construct a model and predict surface roughness. To improve prediction outcomes and make the model more flexible, data from a variety of publications were included. Some characteristics were omitted in order to minimise data noise. Different statistical factors will be explored to predict surface roughness.
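The abstract fits its GPR model in MATLAB; as a hedged, dependency-light stand-in, the closed-form GP posterior mean with an RBF kernel can be written in a few lines of numpy. The 1-D training data below are invented (a real model would take machining parameters, material and tool properties as inputs and roughness as the target).

```python
# Closed-form Gaussian Process Regression posterior mean with an RBF
# kernel, as a sketch of the GPR approach in the abstract. Data are
# invented: y = sin(x) stands in for measured surface roughness.
import numpy as np

def rbf(a, b, length_scale=1.0):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

x_train = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y_train = np.sin(x_train)            # stand-in for roughness measurements
noise = 1e-6                         # small observation-noise variance

K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
alpha = np.linalg.solve(K, y_train)  # (K + noise*I)^-1 y

def gp_mean(x_query):
    return rbf(np.atleast_1d(x_query), x_train) @ alpha

print(gp_mean(2.0))   # ~ sin(2): GP nearly interpolates training points
```

With near-zero noise the posterior mean passes through the training points; GPR's appeal for this application is that it also yields a predictive variance, i.e. an uncertainty band on the forecast roughness.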


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Max Schneckenburger ◽  
Sven Höfler ◽  
Luis Garcia ◽  
Rui Almeida ◽  
Rainer Börret

Abstract Robot polishing is increasingly being used in the production of high-end glass workpieces such as astronomy mirrors, lithography lenses, laser gyroscopes or high-precision coordinate measuring machines. The quality of optical components such as lenses or mirrors can be described by shape errors and surface roughness. While the trend towards sub-nanometre-level surface finishes and features progresses, matching both form and finish coherently in complex parts remains a major challenge. With increasing optic sizes, the stability of the polishing process becomes more and more important. If not empirically known, the optical surface must be measured after each polishing step. One approach is to mount sensors on the polishing head in order to measure process-relevant quantities. On the basis of these data, machine learning algorithms can be applied for surface value prediction. Due to the modification of the polishing head by the installation of sensors and the resulting process influences, the first machine learning model could only make removal predictions with insufficient accuracy. The aim of this work is to present a polishing head optimised for the sensors, coupled with a machine learning model, in order to predict the material removal and failure of the polishing head during robot polishing. The artificial neural network is developed in the Python programming language using the Keras deep learning library. It starts with a simple network architecture and common training parameters, and the model is then optimised step by step using different methods. The data collected in a design of experiments with the sensor-integrated glass polishing head are used to train the machine learning model and to validate the results. The neural network achieves a prediction accuracy of 99.22% for the material removal.
Article highlights
- First machine learning model application for robot polishing of optical glass ceramics.
- The polishing process is influenced by a large number of different process parameters. Machine learning can be used to adjust any process parameter and predict the change in material removal with a certain probability. For a trained model, empirical experiments are no longer necessary.
- Equipping a polishing head with sensors provides the possibility of 100% process control.
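The abstract's network is built in Keras; as a hedged, dependency-free stand-in, the sketch below trains a one-hidden-layer network with plain gradient descent on an invented (process parameter, material removal) curve, just to show the regression setup. Architecture, data, and learning rate are all assumptions, not the paper's.

```python
# Tiny one-hidden-layer neural network fitted by full-batch gradient
# descent, standing in for the Keras removal-prediction model.
# All data and hyperparameters are invented.
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-1, 1, 32).reshape(-1, 1)   # toy process parameter
y = 0.5 * X + 0.3 * X ** 2                  # toy material-removal curve

W1 = rng.normal(0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, pred = forward(X)
initial_loss = float(np.mean((pred - y) ** 2))

lr = 0.1
for _ in range(2000):
    h, pred = forward(X)
    err = (pred - y) / len(X)           # dMSE/dpred (factor 2 folded into lr)
    dh = err @ W2.T * (1 - h ** 2)      # backprop through tanh
    W2 -= lr * h.T @ err;  b2 -= lr * err.sum(0)
    W1 -= lr * X.T @ dh;   b1 -= lr * dh.sum(0)

_, pred = forward(X)
final_loss = float(np.mean((pred - y) ** 2))
print(initial_loss, final_loss)         # loss drops substantially
```

The paper's step-by-step optimisation of architecture and training parameters corresponds to tuning exactly these pieces (hidden width, learning rate, iterations) against validation data from the design of experiments.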


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Muhammad Muneeb ◽  
Andreas Henschel

Abstract Background Genotype–phenotype predictions are of great importance in genetics. These predictions can help to find genetic mutations causing variation in human beings. There are many approaches for finding such associations, which can be broadly categorized into two classes: statistical techniques and machine learning. Statistical techniques are good for finding the actual SNPs causing variation, whereas machine learning techniques are better suited when we simply want to classify people into different categories. In this article, we examined the eye-color and type-2 diabetes phenotypes. The proposed technique is a hybrid approach, consisting partly of statistical techniques and partly of machine learning. Results The main dataset for the eye-color phenotype consists of 806 people; 404 people have blue-green eyes while 402 people have brown eyes. After preprocessing, we generated 8 different datasets containing different numbers of SNPs, using the mutation difference and thresholding at individual SNPs. We calculated three types of mutation at each SNP: no mutation, partial mutation, and full mutation. After that, the data were transformed for the machine learning algorithms. We used 9 classifiers, Random Forest, Extreme Gradient Boosting, ANN, LSTM, GRU, BILSTM, 1DCNN, ensembles of ANN, and ensembles of LSTM, which gave best accuracies of 0.91, 0.9286, 0.945, 0.94, 0.94, 0.92, 0.95, and 0.96, respectively. Stacked ensembles of LSTM outperformed the other algorithms for 1560 SNPs with an overall accuracy of 0.96, AUC = 0.98 for brown eyes, and AUC = 0.97 for blue-green eyes. The main dataset for type-2 diabetes consists of 107 people, of whom 30 are classified as cases and 74 as controls. We used different linear thresholds to find the optimal number of SNPs for classification. The final model gave an accuracy of 0.97. Conclusion Genotype–phenotype predictions are very useful, especially in forensics.
These predictions can help to identify SNP variants associated with traits and diseases. Given more datasets, the machine learning models' predictions can be improved. Moreover, the non-linearity in the machine learning model and the combination of SNP mutations while training the model improve prediction. We considered binary classification problems, but the proposed approach can be extended to multi-class classification.


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Lingxiao He ◽  
Lei Luo ◽  
Xiaoling Hou ◽  
Dengbin Liao ◽  
Ran Liu ◽  
...  

Abstract Background Venous thromboembolism (VTE) is a common complication in hospitalized trauma patients and has an adverse impact on patient outcomes. However, there is still a lack of appropriate tools for effectively predicting VTE in trauma patients. We attempt to verify the accuracy of the Caprini score for predicting VTE in trauma patients, and to further improve the prediction through machine learning algorithms. Methods We retrospectively reviewed emergency trauma patients who were admitted to a trauma center in a tertiary hospital from September 2019 to March 2020. The data in the patients' electronic health records (EHR) and the Caprini score were extracted and combined with multiple feature screening methods and the random forest (RF) algorithm to construct the VTE prediction model, and we compared the prediction performance of (1) using only the Caprini score; (2) using EHR data to build a machine learning model; and (3) using EHR data and the Caprini score to build a machine learning model. True Positive Rate (TPR), False Positive Rate (FPR), Area Under Curve (AUC), accuracy, and precision are reported. Results The Caprini score shows a good VTE prediction effect on the hospitalized trauma population when the cut-off point is 11 (TPR = 0.667, FPR = 0.227, AUC = 0.773). The best prediction model is the LASSO+RF model combining the Caprini score and five other features extracted from EHR data (TPR = 0.757, FPR = 0.290, AUC = 0.799). Conclusion The Caprini score has good VTE prediction performance in trauma patients, and the use of machine learning methods can further improve the prediction performance.
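Evaluating the Caprini score at a cut-off, as the abstract does (TPR and FPR at cut-off 11), reduces to thresholding the scores and counting the confusion-matrix cells. The scores and outcomes below are invented for illustration, not the study's data.

```python
# TPR/FPR of a score-based rule at a chosen cut-off, as in the
# abstract's Caprini evaluation. Scores and VTE labels are invented.

def tpr_fpr(scores, labels, cutoff):
    preds = [int(s >= cutoff) for s in scores]   # 1 = predicted VTE
    tp = sum(p and l for p, l in zip(preds, labels))
    fn = sum((not p) and l for p, l in zip(preds, labels))
    fp = sum(p and (not l) for p, l in zip(preds, labels))
    tn = sum((not p) and (not l) for p, l in zip(preds, labels))
    return tp / (tp + fn), fp / (fp + tn)

caprini = [5, 12, 9, 15, 8, 13]   # hypothetical Caprini scores
vte     = [0, 1, 0, 1, 1, 0]      # 1 = VTE occurred

tpr, fpr = tpr_fpr(caprini, vte, cutoff=11)
print(tpr, fpr)   # 2/3 of VTE cases caught, 1/3 of non-cases flagged
```

Sweeping the cut-off over all values and plotting TPR against FPR yields the ROC curve whose area is the AUC the study reports for each of its three model variants.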

