Rational approximation techniques for analysis of neural networks

1994 ◽  
Vol 40 (2) ◽  
pp. 455-466 ◽  
Author(s):  
Kai-Yeung Siu ◽  
V.P. Roychowdhury ◽  
T. Kailath
Algorithms ◽  
2020 ◽  
Vol 13 (3) ◽  
pp. 63 ◽  
Author(s):  
Krzysztof Ropiak ◽  
Piotr Artiemjew

The set of heuristics constituting deep learning has proved very efficient in complex problems of artificial intelligence, such as pattern recognition and speech recognition, solving them with better accuracy than previously applied methods. Our aim in this work has been to integrate the concept of the rough set into the repository of tools applied in deep learning, in the form of rough mereological granular computing. In our previous research we demonstrated the high efficiency of our decision system approximation techniques (creating granular reflections of systems), which, despite a large reduction in the size of the training systems, maintained the internal knowledge of the original data. The current research has led us to the question of whether granular reflections of decision systems can be effectively learned by neural networks and whether deep learning can extract the knowledge from the approximated decision systems. Our results show that granulated datasets perform well when mined with deep learning tools. We have performed exemplary experiments using data from the UCI repository; the PyTorch and TensorFlow libraries were used to build the neural networks and run the classification process. It turns out that deep learning works effectively on the reduced training sets. Approximating decision systems before neural network training can thus be an important step toward learning in reasonable time.
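The granulation idea can be sketched in a few lines of plain Python. The following is a simplified illustration, not the authors' exact rough-mereological procedure: each row joins the granule of the first representative it matches on at least a fraction r (the granulation radius) of attributes, and each granule is then replaced by a single representative carrying the majority-vote label. The toy decision system below is invented for illustration.

```python
from collections import Counter

def granular_reflection(rows, radius):
    """Reduce a training set by granulation: a row joins the granule of
    the first representative it matches on at least `radius` fraction of
    attributes; one representative per granule survives, labelled by
    majority vote. A simplified sketch of the general idea."""
    reps = []  # list of (representative_features, labels_in_granule)
    n_attrs = len(rows[0][0])
    for features, label in rows:
        for rep, labels in reps:
            match = sum(a == b for a, b in zip(features, rep)) / n_attrs
            if match >= radius:
                labels.append(label)
                break
        else:
            reps.append((features, [label]))
    return [(rep, Counter(labels).most_common(1)[0][0]) for rep, labels in reps]

# toy symbolic decision system: 4 attributes + a class label (made up)
data = [
    (("a", "x", 1, 0), "yes"),
    (("a", "x", 1, 1), "yes"),
    (("b", "y", 0, 0), "no"),
    (("b", "y", 0, 1), "no"),
]

reduced = granular_reflection(data, radius=0.75)
print(len(data), "->", len(reduced), "rows")  # 4 -> 2 rows
```

The reduced set, rather than the full one, would then be fed to the neural network, which is the step the abstract argues keeps training time reasonable.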


Author(s):  
Ansgar Rössig ◽  
Milena Petkovic

Abstract We consider the problem of verifying linear properties of neural networks. Despite their success in many classification and prediction tasks, neural networks may return unexpected results for certain inputs. This is highly problematic with respect to the application of neural networks for safety-critical tasks, e.g. in autonomous driving. We provide an overview of algorithmic approaches that aim to provide formal guarantees on the behaviour of neural networks. Moreover, we present new theoretical results with respect to the approximation of ReLU neural networks. Building on these, we implement a solver for verification of ReLU neural networks which combines mixed integer programming with specialized branching and approximation techniques. To evaluate its performance, we conduct an extensive computational study using test instances based on the ACAS Xu system and the MNIST handwritten digit data set. The results indicate that our approach is very competitive with others: it outperforms the solvers of Bunel et al. (in: Bengio, Wallach, Larochelle, Grauman, Cesa-Bianchi, Garnett (eds) Advances in neural information processing systems (NIPS 2018), 2018) and Reluplex (Katz et al. in: Computer aided verification—29th international conference, CAV 2017, Heidelberg, Germany, July 24–28, 2017, Proceedings, 2017). In comparison to the solvers ReluVal (Wang et al. in: 27th USENIX security symposium (USENIX Security 18), USENIX Association, Baltimore, 2018a) and Neurify (Wang et al. in: 32nd Conference on neural information processing systems (NIPS), Montreal, 2018b), the number of necessary branchings is much smaller. Our solver is publicly available and able to solve the verification problem for instances which do not have independent bounds for each input neuron.
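The approximation side of such verifiers can be illustrated with the simplest sound over-approximation: interval bound propagation, which pushes an input box through the affine and ReLU layers using interval arithmetic. This is a much coarser relaxation than the MIP encoding the authors use, and the tiny network below uses made-up weights, but the output bounds it produces are sound in the same sense: any property that holds for the whole output interval is formally verified.

```python
def interval_forward(layers, lo, hi):
    """Propagate an input box [lo, hi] through an affine(+ReLU) network,
    returning sound element-wise output bounds via interval arithmetic."""
    for W, b, relu in layers:
        new_lo, new_hi = [], []
        for row, bias in zip(W, b):
            # each weight picks the worst/best corner of the box independently
            l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
            h = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
            new_lo.append(max(l, 0.0) if relu else l)
            new_hi.append(max(h, 0.0) if relu else h)
        lo, hi = new_lo, new_hi
    return lo, hi

# tiny 2-2-1 ReLU net with illustrative (invented) weights
layers = [
    ([[1.0, -1.0], [0.5, 2.0]], [0.0, -1.0], True),   # hidden layer, ReLU
    ([[1.0, 1.0]], [0.0], False),                     # linear output
]
lo, hi = interval_forward(layers, [0.0, 0.0], [1.0, 1.0])
print(lo, hi)  # [0.0] [2.5]: e.g. the property "output <= 3" is verified
```

The looseness of such bounds is exactly what motivates the tighter MIP-based relaxations and the specialized branching studied in the paper.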


Sensors ◽  
2019 ◽  
Vol 19 (23) ◽  
pp. 5148
Author(s):  
Mohieddine Benammar ◽  
Abdulrahman Alassi ◽  
Adel Gastli ◽  
Lazhar Ben-Brahim ◽  
Farid Touati

Fast and accurate arctangent approximations are used in several contemporary applications, including embedded systems, signal processing, radar, and power systems. Three main approximation techniques are well-established in the literature, varying in their accuracy and resource utilization levels: the iterative coordinate rotation digital computer (CORDIC), the lookup table (LUT)-based, and the rational formulae techniques. This paper presents a novel technique that combines the advantages of both the rational formulae and LUT approximation methods. The new algorithm exploits the pseudo-linear region around the tangent function zero point to estimate a reduced-input arctangent through a modified rational approximation before referring this estimate back to its original value using miniature LUTs. A new second-order rational approximation formula is introduced in this work and benchmarked against existing alternatives, as it improves the performance of the new algorithm. The eZDSP-F28335 platform has been used for practical implementation and validation of the proposed technique. The contributions of this work are summarized as follows: (1) introducing a new approximation algorithm with high precision and application-based flexibility; (2) introducing a new rational approximation formula that outperforms alternatives from the literature when the algorithm is run at higher accuracy requirements; and (3) presenting a practical evaluation index for rational approximations in the literature.
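The general scheme of range reduction plus a rational formula can be shown with a classical first-order approximation from the literature (the paper's new second-order formula is not reproduced in the abstract, so it is not used here):

```python
import math

def atan_approx(x):
    """Arctangent via range reduction plus the classical first-order
    rational approximation atan(x) ~ x*(pi/4 + 0.273*(1 - x)) on [0, 1].
    This is NOT the paper's new second-order formula, just the same scheme."""
    if x < 0:
        return -atan_approx(-x)       # atan is odd
    if x > 1:
        # fold |x| > 1 back into [0, 1]: atan(x) = pi/2 - atan(1/x)
        return math.pi / 2 - atan_approx(1.0 / x)
    return x * (math.pi / 4 + 0.273 * (1.0 - x))

# worst-case error over a sweep of [-5, 5]
err = max(abs(atan_approx(t / 100) - math.atan(t / 100)) for t in range(-500, 501))
print(f"max |error| = {err:.4f} rad")
```

In the paper's algorithm, the rational formula is applied only to the small reduced-input region, and miniature LUTs map the result back to the full range, which is how it reaches higher accuracy than a single formula over the whole domain.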


Data Mining ◽  
2013 ◽  
pp. 366-394
Author(s):  
Siddhartha Bhattacharyya ◽  
Ujjwal Maulik ◽  
Sanghamitra Bandyopadhyay

Soft Computing is a relatively new computing paradigm bestowed with tools and techniques for handling real world problems. The main components of this computing paradigm are neural networks, fuzzy logic and evolutionary computation. Each component of the soft computing paradigm operates either independently or in coalition with the other components for addressing problems related to modeling, analysis and processing of data. An overview of the essentials and applications of the soft computing paradigm is presented in this chapter with reference to the functionalities and operations of its constituent components. Neural networks are made up of interconnected processing nodes/neurons, which operate on numeric data. These networks possess the capabilities of adaptation and approximation. The varied amounts of uncertainty and ambiguity in real world data are handled in a linguistic framework by means of fuzzy sets and fuzzy logic. Hence, this component is efficient in handling vagueness and imprecision in real world knowledge bases. Genetic algorithms, simulated annealing and ant colony optimization are representative evolutionary computation techniques, which are efficient in deducing an optimum solution to a problem, thanks to the inherent search methodologies adopted. Of late, rough sets have evolved to improve upon the performances of either of these components by way of approximation techniques. These soft computing techniques have been put to use in a wide variety of problems ranging from scientific to industrial applications. Notable among these applications are image processing, pattern recognition, Kansei information processing, data mining, web intelligence, etc.
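Of the evolutionary techniques mentioned above, a genetic algorithm is the easiest to write down. The following is a textbook sketch on the "one-max" toy benchmark, not tied to any specific system from the chapter; population size, tournament size, and mutation rate are illustrative choices.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def genetic_max(fitness, n_bits=16, pop_size=30, generations=60):
    """Minimal generational GA: tournament selection, one-point
    crossover, and bit-flip mutation. A textbook sketch."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            # tournament selection of two parents (tournament size 3)
            p1 = max(random.sample(pop, 3), key=fitness)
            p2 = max(random.sample(pop, 3), key=fitness)
            # one-point crossover
            cut = random.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:]
            # bit-flip mutation with probability 0.02 per bit
            child = [b ^ (random.random() < 0.02) for b in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# maximise the number of 1-bits ("one-max" benchmark)
best = genetic_max(fitness=sum)
print(sum(best), "of 16 bits set")
```

Unlike exhaustive search, the GA only ever evaluates a small fraction of the 2^16 candidate strings, which is the trade-off the chapter alludes to.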


1994 ◽  
Vol 02 (03) ◽  
pp. 247-281 ◽  
Author(s):  
H. ABDI

Neural networks are composed of basic units somewhat analogous to neurons. These units are linked to each other by connections whose strength is modifiable as a result of a learning process or algorithm. Each of these units independently integrates (in parallel) the information provided by its synapses in order to evaluate its state of activation. The unit response is then a linear or nonlinear function of its activation. Linear algebra concepts are used, in general, to analyze linear units, with eigenvectors and eigenvalues being the core concepts involved. This analysis makes clear the strong similarity between linear neural networks and the general linear model developed by statisticians. The linear models presented here are the perceptron and the linear associator. The behavior of nonlinear networks can be described within the framework of optimization and approximation techniques with dynamical systems (e.g., those used to model spin glasses). One of the main notions used with networks of nonlinear units is the notion of attractor. When the task of the network is to associate a response with some specific input patterns, the most popular nonlinear technique consists of using hidden layers of neurons trained with back-propagation of error. The nonlinear models presented are the Hopfield network, the Boltzmann machine, the back-propagation network and the radial basis function network.
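The linear associator mentioned above fits in a few lines: with orthonormal input patterns, Hebbian learning sets the weight matrix to the sum of outer products of targets with inputs, and recall of each stored output is then exact. The patterns below are chosen for illustration only.

```python
def matvec(W, x):
    """Each unit integrates its weighted inputs: one row of W per unit."""
    return [sum(w * xj for w, xj in zip(row, x)) for row in W]

# two orthonormal input patterns paired with target outputs (invented)
patterns = [([1.0, 0.0, 0.0], [1.0, -1.0]),
            ([0.0, 1.0, 0.0], [0.5, 2.0])]

# Hebbian learning: W = sum over k of (target_k outer input_k)
n_out, n_in = 2, 3
W = [[0.0] * n_in for _ in range(n_out)]
for p, t in patterns:
    for i in range(n_out):
        for j in range(n_in):
            W[i][j] += t[i] * p[j]

# recall: because the inputs are orthonormal, W p_k reproduces t_k exactly
print(matvec(W, [1.0, 0.0, 0.0]))  # -> [1.0, -1.0]
```

This is the linear-algebraic picture the abstract refers to: recall quality is governed by the eigen-structure of the input correlations, and it degrades gracefully once the inputs are merely linearly independent rather than orthonormal.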

