Complexity Analysis of Neural Network Based on Rational Spline Weight Functions

2014 ◽  
Vol 644-650 ◽  
pp. 1658-1661
Author(s):  
Dai Yuan Zhang ◽  
Hai Nan Yang

This paper aims to obtain the time complexity of a new kind of neural network using rational spline weight functions. We introduce the architecture of the neural network and analyze its time complexity in detail. Finally, some examples are given to verify the theoretical analysis. The results show that the time complexity depends on the number of patterns and on the input and output dimensions of the neural network.

2015 ◽  
Vol 713-715 ◽  
pp. 1708-1711
Author(s):  
Dai Yuan Zhang ◽  
Shan Jiang Hou

As is well known, artificial neural networks can be used in environmental quality assessment. To improve the accuracy and rigor of such assessment, this paper presents a method based on spline weight function (SWF) neural networks. The weight functions of the neural network are rational spline functions with a cubic numerator and a linear denominator (3/1 rational SWF). The simulation results show that, compared with conventional BP neural networks, this method achieves very high precision and accuracy. This case demonstrates that SWF neural networks offer a promising tool for environmental quality assessment.
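The "3/1 rational SWF" form named above can be made concrete with a small sketch: a weight that is a function of the input, with a cubic polynomial over a linear polynomial. The coefficient values here are arbitrary placeholders, not taken from the paper.

```python
# Illustrative sketch of a "3/1" rational spline weight function:
# cubic numerator over linear denominator, as described in the abstract.
# The coefficient values below are arbitrary placeholders, not the paper's.

def rational_weight(x, num_coeffs=(1.0, 0.5, -0.25, 0.1), den_coeffs=(2.0, 0.5)):
    """Evaluate w(x) = (a0 + a1*x + a2*x^2 + a3*x^3) / (b0 + b1*x)."""
    a0, a1, a2, a3 = num_coeffs
    b0, b1 = den_coeffs
    numerator = a0 + a1 * x + a2 * x ** 2 + a3 * x ** 3
    denominator = b0 + b1 * x
    return numerator / denominator

# In a weight-function network, a neuron combines weight *functions*
# evaluated on each input, rather than fixed scalar weights times inputs.
print(rational_weight(0.0))  # 1.0 / 2.0 = 0.5
```

The key contrast with a conventional BP network is that training adjusts the spline coefficients of each weight function rather than a single scalar per connection.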


2014 ◽  
Vol 989-994 ◽  
pp. 4437-4440 ◽  
Author(s):  
Dai Yuan Zhang ◽  
Jian Hui Zhan

To describe the performance of a new kind of neural network, the complexity of training a neural network with orthogonal weight functions is analysed. Full adders are used as the neurons of the network, and the weight functions are orthogonal functions. We derive the relationship between iteration time and the number of input dimensions, output dimensions, and training patterns. Finally, some simulation examples verify the theoretical results obtained in this paper.


2014 ◽  
Vol 989-994 ◽  
pp. 2659-2662
Author(s):  
Dai Yuan Zhang ◽  
Ran Zhao

The weight function neural network is a new kind of neural network developed in recent years. It has many advantages, such as finding global minima directly, good generalization performance, and the ability to extract useful information inherent in a problem. Time complexity is an important measure of an algorithm. This paper studies the complexity of a neural network using orthogonal weight functions of the second class. The results indicate that the training time has a linear relationship with the dimensions of the input and output layers and an O(n³) relationship with the number of samples. Finally, some simulation experiments on time complexity are given.
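The stated scaling (linear in input and output dimensions, cubic in the number of samples) can be illustrated with a toy cost model; the constant factor and function name are assumptions for illustration only.

```python
# Toy cost model for the scaling reported in the abstract: training time
# grows linearly with input/output dimensions and cubically with the
# number of training samples. The constant factor c is arbitrary.

def training_cost(d_in, d_out, n_samples, c=1.0):
    return c * d_in * d_out * n_samples ** 3

# Doubling the input dimension doubles the cost ...
print(training_cost(8, 4, 100) / training_cost(4, 4, 100))   # 2.0
# ... while doubling the sample count multiplies it by 2**3 = 8.
print(training_cost(4, 4, 200) / training_cost(4, 4, 100))   # 8.0
```

Checking these ratios empirically (timing runs at several sample counts) is exactly the kind of simulation experiment the abstract describes.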


2019 ◽  
Vol 2019 (02) ◽  
pp. 89-98
Author(s):  
Vijayakumar T

Predicting the category of a tumor and the type of cancer at an early stage remains an essential process for identifying the depth of the disease and the treatment available for it. Neural networks, which function similarly to the human nervous system, are widely used in tumor investigation and cancer prediction. This paper analyses the performance of neural networks such as FNNs (Feed Forward Neural Networks), RNNs (Recurrent Neural Networks), and CNNs (Convolutional Neural Networks) in investigating tumors and predicting cancer. The results obtained by evaluating the neural networks on the Breast Cancer Wisconsin (Original) dataset show that the CNN provides 43% better prediction than the FNN and 25% better prediction than the RNN.


Author(s):  
Daniela Danciu

Neural networks, both natural and artificial, are characterized by two kinds of dynamics. The first is what we would call "learning dynamics". The second is the intrinsic dynamics of the neural network viewed as a dynamical system after the weights have been established via learning. This chapter deals with the second kind. More precisely, since the emergent computational capabilities of a recurrent neural network can be achieved provided it has suitable dynamical properties when viewed as a system with several equilibria, the chapter deals with the qualitative properties connected to the achievement of such dynamical properties as global asymptotics and gradient-like behavior. In the case of neural networks with delays, these aspects are reformulated in accordance with the state of the art of the theory of time-delay dynamical systems.


2007 ◽  
Vol 11 (6) ◽  
pp. 1883-1896 ◽  
Author(s):  
A. Piotrowski ◽  
S. G. Wallis ◽  
J. J. Napiórkowski ◽  
P. M. Rowiński

Abstract. The prediction of temporal concentration profiles of a transported pollutant in a river is still a subject of ongoing research efforts worldwide. The present paper is aimed at studying the possibility of using Multi-Layer Perceptron Neural Networks to evaluate the whole concentration versus time profile at several cross-sections of a river under various flow conditions, using as little information about the river system as possible. In contrast to the earlier neural-network-based work on longitudinal dispersion coefficients, this new approach relies more heavily on measurements of concentration collected during tracer tests over a range of flow conditions, but fewer hydraulic and morphological data are needed. The study is based upon 26 tracer experiments performed in a small river in Edinburgh, UK (Murray Burn) at various flow rates in a 540 m long reach. The only data used in this study were concentration measurements collected at 4 cross-sections, distances between the cross-sections and the injection site, time, as well as flow rate and water velocity, derived from the data measured at the 1st and 2nd cross-sections. The four main features of concentration versus time profiles at a particular cross-section, namely the peak concentration, the arrival time of the peak at the cross-section, and the shapes of the rising and falling limbs of the profile, are modeled, and for each of them a separately designed neural network was used. There was also a variant investigated in which the conservation of the injected mass was assured by adjusting the predicted peak concentration. The neural network methods were compared with the unit peak attenuation curve concept. In general, the neural networks predicted the main features of the concentration profiles satisfactorily.
The predicted peak concentrations were generally better than those obtained using the unit peak attenuation method, and the method with mass-conservation assured generally performed better than the method that did not account for mass-conservation. Predictions of peak travel time were also better using the neural networks than the unit peak attenuation method. Including more data into the neural network training set clearly improved the prediction of the shapes of the concentration profiles. Similar improvements in peak concentration were less significant and the travel time prediction appeared to be largely unaffected.
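The mass-conservation variant described above can be sketched as a post-processing correction: rescale a predicted concentration profile so that the recovered mass (the integral of concentration times flow rate over time) equals the injected mass. The function names, the uniform rescaling, and the trapezoidal integration are assumptions for illustration; the paper achieves the same end by adjusting the predicted peak concentration.

```python
# Sketch of a mass-conservation correction for a predicted concentration
# profile: rescale the profile so that the recovered mass
# (flow rate * integral of concentration over time) equals the injected mass.

def trapezoid(ys, dt):
    """Trapezoidal integral of regularly sampled values."""
    return dt * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

def enforce_mass_balance(concentrations, dt, flow_rate, injected_mass):
    recovered = flow_rate * trapezoid(concentrations, dt)
    scale = injected_mass / recovered
    return [c * scale for c in concentrations]

# Hypothetical predicted profile: mg/L sampled at 10 s intervals.
profile = [0.0, 2.0, 5.0, 3.0, 1.0, 0.0]
adjusted = enforce_mass_balance(profile, dt=10.0, flow_rate=0.2, injected_mass=30.0)
```

After the correction, the integral of the adjusted profile times the flow rate reproduces the injected mass exactly, which is what the mass-conservation variant guarantees.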


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Fereshteh Mataeimoghadam ◽  
M. A. Hakim Newton ◽  
Abdollah Dehzangi ◽  
Abdul Karim ◽  
B. Jayaram ◽  
...  

Abstract Protein structure prediction is a grand challenge. Prediction of protein structures via representations using backbone dihedral angles has recently achieved significant progress along with the ongoing surge of deep neural network (DNN) research in general. However, we observe that in protein backbone angle prediction research there is an overall trend to employ more and more complex neural networks and then to throw more and more features at them. While more features might add more predictive power to a neural network, we argue that redundant features could rather clutter the scenario, and more complex neural networks may then be needed just to counterbalance the noise. From artificial intelligence and machine learning perspectives, problem representations and solution approaches do mutually interact and thus affect performance. We also argue that comparatively simpler predictors can more easily be reconstructed than more complex ones. With these arguments in mind, we present a deep learning method named Simpler Angle Predictor (SAP) to train simpler DNN models that enhance protein backbone angle prediction. We then empirically show that SAP can significantly outperform existing state-of-the-art methods on well-known benchmark datasets: for some types of angles, the differences are 6–8 in terms of mean absolute error (MAE). The SAP program along with its data is available from the website https://gitlab.com/mahnewton/sap.
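Since backbone dihedral angles are periodic, the MAE figures quoted above are normally computed on the wrapped angular difference rather than the raw numeric difference. The sketch below shows that standard wrapping; it is a generic illustration, not the evaluation code used by SAP.

```python
# Dihedral angles are periodic, so the error between a predicted and a
# true angle is the wrapped difference, mapped into (-180, 180] degrees.
# Generic sketch, not SAP's actual evaluation code.

def angular_error(pred_deg, true_deg):
    """Smallest absolute difference between two angles, in degrees."""
    diff = (pred_deg - true_deg + 180.0) % 360.0 - 180.0
    return abs(diff)

def angular_mae(preds, trues):
    return sum(angular_error(p, t) for p, t in zip(preds, trues)) / len(preds)

# 175 and -175 degrees are only 10 degrees apart once wrapped.
print(angular_error(175.0, -175.0))  # 10.0
```

Without the wrap, the 175 vs -175 case would register a 350-degree error and badly distort the reported MAE.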


2015 ◽  
Vol 713-715 ◽  
pp. 1716-1720
Author(s):  
Dai Yuan Zhang ◽  
Lei Lei Wang

In order to describe the generalization ability, this paper discusses the error analysis of a neural network with multiple neurons using rational spline weight functions. We use a cubic numerator polynomial and a linear denominator polynomial as the rational spline weight functions. We derive the error formula for the approximation; the results can be used in algorithms for training neural networks.


Author(s):  
V. N. Gridin ◽  
I. A. Evdokimov ◽  
B. R. Salem ◽  
V. I. Solodovnikov

The analysis of the key stages, implementation features, and functioning principles of neural networks, including deep neural networks, has been carried out. The problems of choosing the number of hidden elements, selecting the internal topology, and setting parameters are considered. It is shown that during training and validation it is possible to control the capacity of a neural network and evaluate the qualitative characteristics of the constructed model. The automation of the construction process and the optimization of hyperparameters of neural network structures are considered, depending on the user's tasks and the available source data. A number of approaches based on probabilistic programming, evolutionary algorithms, and recurrent neural networks are presented.
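Of the approaches listed, the evolutionary one is easy to sketch: mutate a candidate hyperparameter, keep the better of parent and child, repeat. Everything here is a toy stand-in (the "validation loss" is a synthetic function, not an actual trained network), illustrating only the shape of such a search.

```python
import random

# Minimal (1+1) evolutionary search over a single hyperparameter
# (hidden-layer size), in the spirit of the evolutionary approaches
# mentioned in the abstract. The "validation loss" is a toy stand-in
# for actually training and validating a network.

def toy_validation_loss(hidden_units):
    # Pretend 64 units is the ideal capacity; fewer underfits, more overfits.
    return (hidden_units - 64) ** 2

def evolve_hidden_units(generations=200, seed=0):
    rng = random.Random(seed)
    parent = 8
    parent_loss = toy_validation_loss(parent)
    for _ in range(generations):
        child = max(1, parent + rng.choice([-8, -4, -2, -1, 1, 2, 4, 8]))
        child_loss = toy_validation_loss(child)
        if child_loss <= parent_loss:   # keep the better (or equal) candidate
            parent, parent_loss = child, child_loss
    return parent

best = evolve_hidden_units()
```

In a real setting, `toy_validation_loss` would be replaced by training the network with the candidate topology and measuring its validation error, which is why such searches are expensive and benefit from the automation the abstract discusses.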


2001 ◽  
Vol 11 (06) ◽  
pp. 561-572 ◽  
Author(s):  
ROSELI A. FRANCELIN ROMERO ◽  
JANUSZ KACPRYZK ◽  
FERNANDO GOMIDE

An artificial neural network with a two-layer feedback topology and generalized recurrent neurons is developed for solving nonlinear discrete dynamic optimization problems. A direct method for assigning the weights of the neural network is presented. The method is based on Bellman's Optimality Principle and on the interchange of information that occurs during synaptic chemical processing among neurons. The neural-network-based algorithm is an advantageous approach to dynamic programming due to the inherent parallelism of neural networks; furthermore, it reduces the severity of the computational problems that can occur in conventional methods. Some illustrative application examples, including shortest path and fuzzy decision-making problems, are presented to show how this approach works.
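The Bellman recursion underlying the weight-assignment method can be written as plain dynamic programming, V(s) = min over successors s' of [cost(s, s') + V(s')]; the sketch below shows that recursion on a small shortest-path instance, not the paper's neural implementation.

```python
# Bellman recursion for shortest-path costs, written as plain dynamic
# programming rather than as a neural network:
#   V(s) = min over successors s' of [cost(s, s') + V(s')]

def shortest_path_costs(graph, target):
    """Bellman-Ford style relaxation; graph maps node -> {successor: cost}."""
    nodes = set(graph) | {s for edges in graph.values() for s in edges} | {target}
    value = {node: float("inf") for node in nodes}
    value[target] = 0.0
    for _ in range(len(nodes) - 1):    # enough passes absent negative cycles
        for node, edges in graph.items():
            for succ, cost in edges.items():
                value[node] = min(value[node], cost + value[succ])
    return value

# Hypothetical four-node instance.
graph = {
    "A": {"B": 1.0, "C": 4.0},
    "B": {"C": 2.0, "D": 5.0},
    "C": {"D": 1.0},
}
print(shortest_path_costs(graph, "D")["A"])  # 1 + 2 + 1 = 4.0
```

The paper's contribution is to realize this relaxation through the parallel dynamics of a recurrent network with directly assigned weights, rather than through the sequential loop shown here.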

