Functional verification of cyber-physical systems containing machine-learnt components

2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Farzaneh Moradkhani ◽  
Martin Fränzle

Abstract Functional architectures of cyber-physical systems increasingly comprise components that are generated by training and machine learning rather than by traditional engineering approaches. Verifying such components, as is necessary in safety-critical application domains, poses various unsolved challenges. Commonly used computational structures underlying machine learning, like deep neural networks, still lack scalable automatic verification support. Due to their size, non-linearity, and non-convexity, neural network verification is a challenge for state-of-the-art mixed integer linear programming (MILP) solvers and satisfiability modulo theories (SMT) solvers [2], [3]. In this research, we focus on artificial neural networks with activation functions beyond the Rectified Linear Unit (ReLU). We are thus leaving the realm of piecewise-linear functions supported by the majority of SMT solvers and specialized solvers for Artificial Neural Networks (ANNs), like the successful Reluplex solver [1]. A major part of this research uses the SMT solver iSAT [4], which aims at solving complex Boolean combinations of linear and non-linear constraint formulas (including transcendental functions) and is therefore suitable for verifying the safety properties of a specific kind of neural network known as the Multi-Layer Perceptron (MLP), which contains non-linear activation functions.
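To make the verification problem concrete: the query is whether any input in a given range can drive the network's output past a safety bound, through non-linear activations like tanh and sigmoid. The sketch below is not iSAT; it is a much simpler conservative check by interval propagation through a hypothetical two-layer, one-neuron-per-layer network (all weights illustrative), exploiting the monotonicity of both activations.

```python
import math

# Hypothetical toy network: y = sigmoid(w2 * tanh(w1 * x + b1) + b2).
# Conservative safety check: propagate the input interval through each
# (monotone) operation and test whether the output can exceed `bound`.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def interval_affine(lo, hi, w, b):
    # Affine image of an interval; endpoints swap when w < 0.
    a, c = w * lo + b, w * hi + b
    return (min(a, c), max(a, c))

def verify_output_bound(x_lo, x_hi, w1, b1, w2, b2, bound=0.9):
    lo, hi = interval_affine(x_lo, x_hi, w1, b1)
    lo, hi = math.tanh(lo), math.tanh(hi)      # tanh is monotone
    lo, hi = interval_affine(lo, hi, w2, b2)
    lo, hi = sigmoid(lo), sigmoid(hi)          # sigmoid is monotone
    return hi <= bound                         # True => property proved

# Property y <= 0.9 holds for this parameterization on x in [-1, 1]:
print(verify_output_bound(-1.0, 1.0, w1=0.5, b1=0.0, w2=1.0, b2=0.0))
```

An SMT solver such as iSAT instead searches for a counterexample over the exact constraint system, so it can also refute properties that a coarse interval bound cannot decide.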


2022 ◽  
pp. 1-30
Author(s):  
Arunaben Prahladbhai Gurjar ◽  
Shitalben Bhagubhai Patel

The new era of the world uses artificial intelligence (AI) and machine learning. The combination of AI and machine learning is called an artificial neural network (ANN). Artificial neural networks can be used as hardware- or software-based components. Different topologies and learning algorithms are used in artificial neural networks. An artificial neural network works similarly to the human nervous system. An ANN is a nonlinear computing model that, drawing on previous experience, carries out activities performed by the human brain such as classification, prediction, decision making, and visualization. ANNs are used to solve complex, hard-to-manage problems by accruing knowledge about the environment. There are different types of artificial neural networks available in machine learning. All types of artificial neural networks work on the basis of mathematical operations and require a set of parameters to obtain results. This chapter gives an overview of the various types of neural networks, such as feedforward, recurrent, feedback, and classification-prediction networks.
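The simplest of the network families surveyed above is the feedforward network. A minimal forward pass can be sketched as follows; the weights are random placeholders, not a trained model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, layers):
    # layers: list of (weight_matrix, bias_vector) pairs;
    # each layer applies an affine map followed by a nonlinearity.
    for W, b in layers:
        x = sigmoid(W @ x + b)
    return x

rng = np.random.default_rng(0)
layers = [
    (rng.standard_normal((4, 3)), np.zeros(4)),  # 3 inputs -> 4 hidden
    (rng.standard_normal((2, 4)), np.zeros(2)),  # 4 hidden -> 2 outputs
]
y = forward(np.array([0.5, -0.2, 0.1]), layers)
print(y.shape)  # (2,)
```

A recurrent network would differ only in feeding previous outputs back in as additional inputs at each time step.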


2022 ◽  
pp. 1559-1575
Author(s):  
Mário Pereira Véstias

Machine learning is the study of algorithms and models that let computing systems perform tasks based on pattern identification and inference. When it is difficult or infeasible to develop an algorithm for a particular task, machine learning algorithms can provide an output based on previous training data. A well-known machine learning model is deep learning. The most recent deep learning models are based on artificial neural networks (ANNs). There exist several types of artificial neural networks, including the feedforward neural network, the Kohonen self-organizing neural network, the recurrent neural network, the convolutional neural network, and the modular neural network, among others. This article focuses on convolutional neural networks, with a description of the model, the training and inference processes, and their applicability. It also gives an overview of the most used CNN models and what to expect from the next generation of CNN models.
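The defining operation of a convolutional layer is sliding a small kernel over an input feature map. A bare-bones sketch with "valid" padding and stride 1 (an illustration, not an efficient implementation):

```python
import numpy as np

def conv2d(image, kernel):
    # Valid cross-correlation: the kernel stays fully inside the image.
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
edge_kernel = np.array([[1.0, -1.0]])   # horizontal difference filter
print(conv2d(image, edge_kernel))       # constant -1: rows increase by 1
```

In a full CNN, such layers are stacked with nonlinearities and pooling, and the kernel entries are the learned parameters.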


Author(s):  
Senthil Kumar Arumugasamy ◽  
Zainal Ahmad

Process control in the field of chemical engineering has always been a challenging task for chemical engineers. The majority of processes found in the chemical industries are non-linear, and in these cases the performance of linear models can be inadequate. Recently, a promising alternative modelling technique, artificial neural networks (ANNs), has found numerous applications in representing non-linear functional relationships between variables. A feedforward multi-layered neural network is a highly connected set of elementary non-linear neurons. Model-based control techniques were developed to obtain tighter control, and many model-based control schemes have been proposed to incorporate a process model into a control system. Among them, model predictive control (MPC) is the most common scheme. MPC is a general and mathematically tractable scheme for integrating knowledge about the target process into controller design and operation; it allows flexible and efficient exploitation of our understanding of the target and thus produces optimal performance of a system under various constraints. The need to handle difficult control problems has led to the use of ANNs in MPC, which has recently attracted a great deal of attention. The neural predictive controller performs comparably to the non-linear neural network strategy in both set-point tracking and disturbance rejection while incurring less computational expense. The neural network model predictive control (NNMPC) method also exhibits fewer perturbations and oscillations when dealing with noise, as compared to PI controllers.
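The NNMPC idea can be sketched in a few lines: a one-step process model predicts future outputs, and the controller picks the input that minimizes predicted tracking error over a short horizon. In the paper's setting the model would be a trained feedforward network; here a known linear surrogate stands in, and the optimization is a brute-force search over constant inputs.

```python
import numpy as np

def model(y, u):
    # One-step prediction; stands in for a trained ANN's predict().
    return 0.8 * y + 0.5 * u

def mpc_step(y, setpoint, horizon=5, candidates=np.linspace(-2, 2, 81)):
    # Choose the constant input minimizing squared tracking error
    # over the prediction horizon.
    best_u, best_cost = 0.0, float("inf")
    for u in candidates:
        yk, cost = y, 0.0
        for _ in range(horizon):
            yk = model(yk, u)
            cost += (setpoint - yk) ** 2
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

# Closed loop: drive the process toward setpoint 1.0 from y = 0.
y = 0.0
for _ in range(20):
    y = model(y, mpc_step(y, setpoint=1.0))
print(round(y, 2))
```

Real NNMPC replaces the grid search with a gradient-based optimizer and re-solves the horizon problem at every sampling instant, applying only the first move (receding horizon).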


2013 ◽  
Vol 07 (02) ◽  
pp. 147-155
Author(s):  
JOSEPH R. BARR ◽  
W. KURT DOBSON

Artificial neural networks, due to their ability to find the underlying model even in complex, highly nonlinear, and highly coupled problems, have found significant use as prediction engines in many domains. However, in problems where the input space is of high dimensionality, there is the unsolved problem of reducing dimensionality in some optimal way such that the Shannon information important to the prediction is preserved. The important Shannon information may be a subset of the total information with an unknown partition and unknown coupling, and may be linear or nonlinear in nature. Solving this problem is an important step in classes of machine learning problems and many data mining applications. This paper describes a semi-automatic algorithm that was developed over a 5-year period while solving problems of increasing dimensionality and difficulty in (a) flow prediction for a magnetically levitated artificial heart (13 dimensions), (b) simultaneous chemical identification/concentration in gas chromatography (22 detection dimensions with wavelet-compressed time series of 180,000 points), and finally (c) financial analytics portfolio prediction in credit card and sub-prime debt problems (80 to 300 dimensions of sparse data with a portfolio value of approximately US$300,000,000.00). The algorithm develops a map of input-space combinations and their importance to the prediction. This information is used directly to construct the optimal neural network topology for a given error performance. Importantly, the algorithm also produces information that shows whether the space between input nodes is linear or nonlinear, an important parameter in determining the number of training points required in the reduced dimensionality of the training set. Software was developed in the MATLAB environment using the Artificial Neural Network Toolbox and the Parallel and Distributed Computing toolboxes, and runs on Windows- or Linux-based supercomputers.
Trained neural networks can be compiled, linked to server applications, and run on normal servers or clusters for transaction or web-based processing. In this paper, applications of the algorithm to two separate financial analytics prediction problems with large dimensionality and sparse data sets are shown. The algorithm is an important development in machine learning for an important class of problems in prediction, clustering, image analysis, and data mining. In the first example application, subprime debt portfolio analysis, the neural network provided a 98.4% prediction rate, compared to a 33% rate using traditional linear methods. In the second example application, regarding credit card debt, the algorithm provided a 95% accurate prediction (in terms of match rate), 10% better than the other methods we have compared against, primarily logistic regression.
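The abstract does not disclose the algorithm itself, but the underlying idea of mapping input dimensions by their importance to the prediction can be illustrated with a far simpler stand-in: permutation importance, which shuffles one input column at a time and measures how much prediction error grows. Synthetic data and an ordinary least squares "model" are used below purely for illustration; this is not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 4))
# Only columns 0 and 2 carry signal, with very different strength.
y = 3.0 * X[:, 0] + 0.1 * X[:, 2] + rng.standard_normal(500) * 0.01

# Stand-in "trained model": ordinary least squares fit.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
base_err = np.mean((X @ w - y) ** 2)

importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])   # destroy column j's signal
    importance.append(np.mean((Xp @ w - y) ** 2) - base_err)

print(int(np.argmax(importance)))  # column 0 dominates
```

A map like this, extended to combinations of inputs and nonlinear couplings, is what lets the dimensionality of the training set be reduced before the network topology is fixed.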


2019 ◽  
Vol 1 (1) ◽  
pp. p8
Author(s):  
Jamilu Auwalu Adamu

One of the objectives of this paper is to incorporate fat-tail effects into activation functions such as the Sigmoid, in order to introduce transparency and stability into the existing stochastic activation functions. Secondly, according to the literature reviewed, the existing set of activation functions was introduced into deep learning artificial neural networks through the "window" rather than through the "legitimate door", since they rest on "trial and error" and "arbitrary assumptions"; thus, the author proposes "scientific facts", "definite rules: Jameel's stochastic ANNAF criterion", and a "lemma" to supplement, not necessarily replace, the existing set of stochastic activation functions such as the Sigmoid. This research is expected to open the "black box" of deep learning artificial neural networks. The author proposes a new set of advanced, optimized, fat-tailed stochastic activation functions derived from AI-ML-purified stock data, namely the Log-Logistic (3P) probability distribution (1st), Cauchy probability distribution (2nd), Pearson 5 (3P) probability distribution (3rd), Burr (4P) probability distribution (4th), Fatigue Life (3P) probability distribution (5th), Inv. Gaussian (3P) probability distribution (6th), Dagum (4P) probability distribution (7th), and Lognormal (3P) probability distribution (8th), for the successful conduct of both forward and backward propagation in deep learning artificial neural networks. However, this paper did not check the monotone differentiability of the proposed distributions. Appendices A, B, and C present and test the performance of the stressed Sigmoid and the optimized activation functions using stock data (1991-2014) of Microsoft Corporation (MSFT), Exxon Mobil (XOM), Chevron Corporation (CVX), Honda Motor Corporation (HMC), General Electric (GE), and U.S. fundamental macroeconomic parameters; the results were found fascinating.
Thus, it is argued that the first three distributions are excellent activation functions for successfully conducting any stock deep learning artificial neural network, and that distributions 4 to 8 are also good advanced optimized activation functions. More generally, this research revealed that whether the advanced optimized activation functions satisfy Jameel's ANNAF stochastic criterion depends on the referenced purified AI data set, the time change, and the area of application, in contrast to the existing "trial and error" and "arbitrary assumptions" behind Sigmoid, Tanh, Softmax, ReLU, and Leaky ReLU.
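The idea of a distribution CDF as a sigmoid-like activation has a simple closed form for the paper's second candidate, the Cauchy distribution (standard location and scale assumed here for illustration): F(x) = arctan(x)/pi + 1/2. Like the logistic sigmoid it maps the real line to (0, 1) with F(0) = 0.5, but its tails decay polynomially rather than exponentially.

```python
import math

def cauchy_activation(x, loc=0.0, scale=1.0):
    # CDF of the Cauchy distribution, used as a squashing function.
    return math.atan((x - loc) / scale) / math.pi + 0.5

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

print(cauchy_activation(0.0))                  # 0.5, same midpoint as logistic
print(cauchy_activation(6.0) < logistic(6.0))  # True: fatter tail nears 1 more slowly
```

The slower saturation is exactly the fat-tail effect the paper wants: far-from-center inputs still produce usable gradients instead of vanishing ones.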


This chapter explains the artificial neural network (ANN), one of the machine learning tools applied for medical purposes. The biological and mathematical definitions of a neural network are provided, and the activation functions effective for processing are listed. Some figures are included for better understanding.
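The activation functions most commonly listed in such overviews have short closed forms; which ones the chapter actually covers is not stated in this summary, so the three below are the usual suspects.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))   # squashes to (0, 1)

def tanh(x):
    return math.tanh(x)                  # squashes to (-1, 1)

def relu(x):
    return max(0.0, x)                   # passes positives, zeroes negatives

print(sigmoid(0.0), tanh(0.0), relu(-3.0))  # 0.5 0.0 0.0
```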


2018 ◽  
Vol 8 (2) ◽  
pp. 121-132 ◽  
Author(s):  
Esra Akdeniz ◽  
Erol Egrioglu ◽  
Eren Bas ◽  
Ufuk Yolcu

Abstract Real-life time series have complex and non-linear structures. Artificial neural networks have frequently been used in the literature to analyze non-linear time series. High-order artificial neural networks, compared with other artificial neural network types, are more adaptable to the data because of their expandable model order. In this paper, a new recurrent architecture for Pi-Sigma artificial neural networks is proposed. A learning algorithm based on particle swarm optimization is used as the tool for training the proposed neural network. The proposed new high-order artificial neural network is applied to three real-life time series data sets, and a simulation study is also performed on the Istanbul Stock Exchange data set.
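A Pi-Sigma unit gets its high order from structure rather than extra weights: several linear summing units ("Sigma") feed a product ("Pi"), yielding a polynomial of the inputs with only linearly many parameters. A forward-pass sketch with illustrative weights (the paper's recurrent variant would additionally feed previous outputs back in as inputs):

```python
import numpy as np

def pi_sigma(x, W, b):
    sums = W @ x + b               # one linear sum per summing unit
    return np.tanh(np.prod(sums))  # product of sums, then squashing

x = np.array([0.5, -1.0])
W = np.array([[1.0, 0.5],          # 2 summing units over 2 inputs
              [0.3, -0.2]])
b = np.array([0.1, 0.0])
print(round(float(pi_sigma(x, W, b)), 4))
```

Because the product makes the error surface multimodal, derivative-free trainers such as the particle swarm optimization used in the paper are a natural fit.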

