Multidimensional Wavelet Neural Networks Based on Polynomial Powers of Sigmoid

DAT Journal ◽  
2016 ◽  
Vol 1 (2) ◽  
pp. 106-123
Author(s):  
João Fernando Marar ◽  
Aron Bordin

Wavelet functions have been used as activation functions in feed-forward neural networks, and a substantial body of R&D has been produced in the wavelet neural network area. Several successful algorithms and applications of wavelet neural networks have been developed and reported in the literature. However, most of these reports impose many restrictions on the classical back-propagation algorithm, such as low dimensionality, tensor products of wavelets, parameter initialization and, in general, one-dimensional output. In order to remove some of these restrictions, a family of polynomial wavelets generated from powers of sigmoid functions is presented. We describe how multidimensional wavelet neural networks based on these functions can be constructed, trained and applied to pattern recognition tasks. As an example application of the proposed method, a framework for face verification is presented.
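A minimal sketch of the underlying idea (an illustration, not the paper's full family of wavelets): the derivatives of the sigmoid are polynomials in powers of the sigmoid itself, and already the second derivative is a localized, zero-mean, wavelet-like function.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_power_wavelet(x):
    # Second derivative of the sigmoid, written as a polynomial in
    # powers of sigma(x): sigma - 3*sigma^2 + 2*sigma^3.
    # It is odd, zero-mean and localized -- a wavelet-like shape that
    # can serve as a hidden-unit activation.
    s = sigmoid(x)
    return s - 3.0 * s**2 + 2.0 * s**3
```

Higher powers of the sigmoid yield further members of such a polynomial family, each with more oscillations.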

2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Florian Stelzer ◽  
André Röhm ◽  
Raul Vicente ◽  
Ingo Fischer ◽  
Serhiy Yanchuk

Abstract: Deep neural networks are among the most widely applied machine learning tools, showing outstanding performance in a broad range of tasks. We present a method for folding a deep neural network of arbitrary size into a single neuron with multiple time-delayed feedback loops. This single-neuron deep neural network comprises only a single nonlinearity and appropriately adjusted modulations of the feedback signals. The network states emerge in time as a temporal unfolding of the neuron’s dynamics. By adjusting the feedback modulations within the loops, we adapt the network’s connection weights. These connection weights are determined via a back-propagation algorithm in which both the delay-induced and local network connections must be taken into account. Our approach can fully represent standard deep neural networks (DNN), encompasses sparse DNNs, and extends the DNN concept toward dynamical-systems implementations. The new method, which we call Folded-in-time DNN (Fit-DNN), exhibits promising performance on a set of benchmark tasks.
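A toy illustration of the time-multiplexing idea (the actual Fit-DNN uses a continuous delay-dynamical system; the weights `W1`, `W2` here are hypothetical): a single nonlinearity, applied sequentially to modulated copies of earlier signals, reproduces a conventional two-layer network.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # hypothetical layer weights
W2 = rng.normal(size=(2, 4))

def feedforward(x):
    # Ordinary two-layer network: many neurons evaluated "in space".
    return np.tanh(W2 @ np.tanh(W1 @ x))

def folded_in_time(x):
    # The same computation with ONE tanh nonlinearity applied
    # sequentially: node j of each layer is emulated at "time step" j,
    # fed by delayed, weight-modulated copies of earlier states.
    hidden = np.array([np.tanh(W1[j] @ x) for j in range(W1.shape[0])])
    return np.array([np.tanh(W2[k] @ hidden) for k in range(W2.shape[0])])
```

Because the network states are laid out in time rather than in space, both outputs agree exactly.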


Mathematics ◽  
2021 ◽  
Vol 9 (6) ◽  
pp. 626
Author(s):  
Svajone Bekesiene ◽  
Rasa Smaliukiene ◽  
Ramute Vaicaitiene

The present study aims to elucidate the main variables that increase the level of stress at the beginning of military conscription service, using an artificial neural network (ANN)-based prediction model. Random sample data were obtained from one battalion of the Lithuanian Armed Forces, and a survey was conducted to generate data for training and testing the ANN models. Exploiting the nonlinearity inherent in stress research, numerous ANN structures were constructed and verified to determine the optimal number of neurons, hidden layers, and transfer functions. The highest accuracy was obtained by a multilayer perceptron neural network (MLPNN) with a 6-2-2 partition. A standardized rescaling method was used for the covariates. The hyperbolic tangent was used as the activation function, with 20 units in one hidden layer, together with the back-propagation algorithm. The best ANN model was selected as the one showing the smallest cross-entropy error, the highest correct classification rate, and the largest area under the ROC curve. These findings show, with high precision, that cohesion in a team and adaptation to military routines are the two critical elements with the greatest impact on the stress level of conscripts.
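The ingredients named above (standardized covariates, one tanh hidden layer, cross-entropy as the selection criterion) can be sketched as follows; layer sizes and weights here are placeholders, not the study's fitted model.

```python
import numpy as np

def standardize(X):
    # "Standardized rescaling" of the covariates: zero mean, unit variance.
    return (X - X.mean(axis=0)) / X.std(axis=0)

def mlp_forward(X, W1, b1, W2, b2):
    # One hidden layer of tanh units (the study used 20), softmax output.
    H = np.tanh(X @ W1 + b1)
    Z = H @ W2 + b2
    E = np.exp(Z - Z.max(axis=1, keepdims=True))
    return E / E.sum(axis=1, keepdims=True)

def cross_entropy(P, y):
    # The model-selection criterion: smaller is better.
    return -np.log(P[np.arange(len(y)), y]).mean()
```

In practice the weights would be fitted by back-propagation and candidate architectures compared on this cross-entropy error.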


Author(s):  
Şahin Yildirim ◽  
Sertaç Savaş

The goal of this chapter is to enable a nonholonomic mobile robot to track a specified trajectory with minimum tracking error. To that end, an adaptive P controller is designed whose gain parameters are tuned using two feed-forward neural networks. The back-propagation algorithm is chosen for the online learning process, and posture-tracking errors are used as the error signals for adjusting the weights of the neural networks. The tracking performance of the controller is illustrated for different trajectories through computer simulation in Matlab/Simulink. In addition, the open-loop response of an experimental mobile robot is investigated for these trajectories. Finally, the performance of the proposed controller is compared with that of a standard PID controller. The simulation results show that the “adaptive P controller using neural networks” has superior tracking performance in adapting to large disturbances acting on the mobile robot.
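A toy stand-in for this scheme (all numbers hypothetical, and a simple error-driven gain update plays the role of the neural-network tuner): a proportional controller whose gain is adapted online from the tracking error.

```python
# First-order "robot" plant x' = u tracking a constant reference.
# The gain Kp grows while tracking error persists -- an MIT-rule-style
# adaptation standing in for the chapter's NN-based gain tuning.
dt, ref = 0.05, 1.0
x, Kp, gamma = 0.0, 0.2, 0.5
errors = []
for _ in range(200):
    e = ref - x                # posture-tracking error
    u = Kp * e                 # adaptive P control law
    x += dt * u                # integrate the plant one step
    Kp += gamma * e * e * dt   # raise the gain while error persists
    errors.append(abs(e))
```

The error shrinks as the gain adapts; the chapter's actual tuner is a pair of feed-forward networks trained by back-propagation rather than this scalar rule.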


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jingwei Liu ◽  
Peixuan Li ◽  
Xuehan Tang ◽  
Jiaxin Li ◽  
Jiaming Chen

Abstract: Artificial neural networks (ANN), which include deep learning neural networks (DNN), suffer from problems such as the local-minimum problem of the back-propagation neural network (BPNN), the instability problem of the radial basis function neural network (RBFNN), and the limited maximum precision of the convolutional neural network (CNN). The performance (training speed, precision, etc.) of BPNN, RBFNN and CNN is therefore expected to be improved. The main contributions are as follows. Firstly, based on the existing BPNN and RBFNN, a wavelet neural network (WNN) is implemented in order to obtain better performance and further improve CNN: WNN adopts the network structure of BPNN for faster training, and adopts a wavelet function as the activation function, whose form is similar to the radial basis function of RBFNN, in order to avoid the local-minimum problem. Secondly, a WNN-based convolutional wavelet neural network (CWNN) is proposed, in which the fully connected layers (FCL) of CNN are replaced by a WNN. Thirdly, comparative simulations on the MNIST and CIFAR-10 datasets among BPNN, RBFNN, CNN and CWNN are implemented and analyzed. Fourthly, a wavelet-based convolutional neural network (WCNN) is proposed, in which the wavelet transformation is adopted as the activation function in the convolutional pool neural network (CPNN) part of CNN. Fifthly, simulations of WCNN are implemented and analyzed on the MNIST dataset. The effects are as follows. Firstly, WNN solves the problems of BPNN and RBFNN and achieves better performance. Secondly, the proposed CWNN reduces the mean square error and the error rate of CNN, i.e., CWNN has better maximum precision than CNN. Thirdly, the proposed WCNN reduces the mean square error and the error rate of CWNN, i.e., WCNN has better maximum precision than CWNN.
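The layer CWNN substitutes for the CNN's fully connected layers can be sketched as a dense layer with a wavelet activation. The Morlet mother wavelet used here is a common WNN choice; the shapes and the absence of per-neuron scale/shift parameters are simplifications.

```python
import numpy as np

def morlet(x):
    # Morlet mother wavelet, a typical WNN activation function.
    return np.cos(1.75 * x) * np.exp(-0.5 * x**2)

def wavelet_dense(x, W, b):
    # A fully connected layer whose activation is a wavelet rather than
    # a sigmoid/ReLU -- the kind of layer that replaces the CNN's FCL.
    return morlet(W @ x + b)
```

Stacking such layers after convolutional feature extraction gives the CWNN structure described above.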


2020 ◽  
Vol 34 (15) ◽  
pp. 2050161
Author(s):  
Vipin Tiwari ◽  
Ashish Mishra

This paper designs a novel neural network (NN)-based classification hardware framework. It utilizes the COordinate Rotation DIgital Computer (CORDIC) algorithm to implement the activation function of the NN. Training was performed in software using an error back-propagation algorithm (EBPA) implemented in C++; the final weights were then loaded into the hardware framework to perform classification. The hardware framework was developed in the Xilinx 9.2i environment using VHDL as the programming language. Classification tests were performed on benchmark datasets obtained from the UCI machine learning repository. The results are compared with competitive classification approaches on the same datasets. Extensive analysis reveals that the proposed hardware framework provides more efficient results than the existing classifiers.
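CORDIC evaluates hyperbolic functions with shifts and adds only, which is why it suits an FPGA activation unit. A software sketch of hyperbolic-rotation CORDIC computing tanh (the paper's specific activation and fixed-point details may differ): the common gain on x and y cancels in the ratio y/x, so no gain constant is needed.

```python
import math

def cordic_tanh(theta, n=16):
    # Hyperbolic-rotation CORDIC: iterations i = 1..n, with i = 4 and
    # i = 13 repeated (required for convergence). In hardware the
    # multiplications by 2**-i are plain bit shifts and the atanh
    # values come from a small lookup table. Valid for |theta| < ~1.11.
    x, y, z = 1.0, 0.0, theta
    seq, i = [], 1
    while len(seq) < n:
        seq.append(i)
        if i in (4, 13):           # classic repeated iterations
            seq.append(i)
        i += 1
    for i in seq[:n]:
        d = 1.0 if z >= 0 else -1.0
        x, y, z = (x + d * y * 2.0**-i,
                   y + d * x * 2.0**-i,
                   z - d * math.atanh(2.0**-i))
    return y / x                   # the CORDIC gain cancels here
```

With 16 iterations the result agrees with `math.tanh` to roughly 4 decimal places inside the convergence range.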


Author(s):  
Xiaoqiang Wen ◽  
Shuguang Jian

In this paper, two wavelet neural network (WNN) frameworks, based on the Morlet wavelet function and the Gaussian wavelet function respectively, were established. In order to improve the efficiency of model training, a momentum term was applied when modifying the weights and thresholds, and the network output was obtained by summing the function transformations of the output-layer nodes. When the Gaussian wavelet neural network (GWNN) and the Morlet wavelet neural network (MWNN) were applied to coal consumption rate (CCR) estimation in a thermal power plant, the results confirmed their effectiveness in function approximation. In addition, the influence of the learning rate on the models was discussed through an orthogonal experiment.
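The two wavelet activations and the momentum-modified weight update can be sketched as follows (learning rate and momentum coefficient are illustrative values, not those tuned in the paper's orthogonal experiment).

```python
import numpy as np

def morlet(x):
    # Morlet mother wavelet.
    return np.cos(1.75 * x) * np.exp(-0.5 * x**2)

def gaussian_wavelet(x):
    # First derivative of a Gaussian ("Gaussian wavelet").
    return -x * np.exp(-0.5 * x**2)

def momentum_step(w, grad, velocity, lr=0.01, mu=0.9):
    # Momentum term: a fraction of the previous update is re-applied,
    # smoothing the trajectory and speeding up training.
    velocity = mu * velocity - lr * grad
    return w + velocity, velocity
```

Each hidden node applies one of the wavelets to its weighted input, and the output node sums the transformed values, as described above.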


2017 ◽  
Vol 43 (4) ◽  
pp. 26-32 ◽  
Author(s):  
Sinan Mehmet Turp

Abstract: This study investigates the adsorption efficiency of Nickel (II) ions onto perlite in an aqueous solution, estimated using artificial neural networks based on 140 experimental data sets. The prediction covers Nickel (II) ion adsorption with initial concentrations ranging from 0.1 mg/L to 10 mg/L, adsorbent dosages ranging from 0.1 mg to 2 mg, and contact times ranging from 5 to 30 min. This study presents an artificial neural network that predicts the adsorption efficiency of Nickel (II) ions on perlite. The best training algorithm was determined to be the quasi-Newton back-propagation algorithm. The performance of the artificial neural network is measured by the coefficient of determination (R²), and its architecture is 3-12-1. The predictions show an outstanding agreement between the experimental data and the predicted values.
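The performance measure used to score the 3-12-1 network, the coefficient of determination, is computed as one minus the ratio of residual to total sum of squares:

```python
import numpy as np

def r_squared(y_true, y_pred):
    # Coefficient of determination R^2: 1 means a perfect fit,
    # 0 means no better than predicting the mean.
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot
```

Here `y_true` would hold the 140 measured adsorption efficiencies and `y_pred` the network's predictions.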


Author(s):  
M. HARLY ◽  
I. N. SUTANTRA ◽  
H. P. MAURIDHI

Fixed order neural networks (FONN), such as the high order neural network (HONN), whose architecture is developed from a zero-order activation function and joint weights, regulate only the number of weights and their values. As a result, such a network produces only a fixed-order model or control level. These obstacles, which affect the preceding architectures, limit the ability to adapt to the uncertain character of a real-world plant, such as driving dynamics and its desired control performance. This paper introduces a new concept of neural network neuron, in which a discrete z-function is exploited to build the neuron activation. Instead of zero-order joint weight matrices, discrete z-function weight matrices are provided to model uncertain or undetermined real-world plants and a desired adaptive control system whose order may be changing. Instead of a bias, an initial-condition value is used. A neural network built from the new neurons is called a Varied Order Neural Network (VONN). For the optimization process, the order, coefficients and initial values of the node activation functions are updated using a GA, while the joint weights are updated using both back propagation (combined LSE-Gauss-Newton) and NPSO. To estimate the number of hidden layers, constructive back propagation (CBP) was also applied. Thorough simulations were conducted to compare the control performance of FONN and VONN. For control, vehicle stability was equipped with an electronic stability program (ESP), electronic four-wheel steering (4-EWS), and active suspension (AS). Data sets of 2000, 4000, 6000 and 8000 samples from TODS, with one hidden layer, 3 input nodes and 3 output nodes, were provided to train and test the networks of both the uncertainty model and its adaptive control system.
The simulation results show that stability parameters such as the yaw rate error, the vehicle side-slip error and the rolling angle error achieve better control performance, in the form of a smaller performance index, with VONN than with FONN.
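The two-level optimization described above can be caricatured as follows (everything here is hypothetical: a polynomial order stands in for the varied-order activation, an exhaustive candidate loop stands in for the GA, and plain gradient descent stands in for the LSE-Gauss-Newton/NPSO weight updates).

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(64, 1))
y = np.sin(X[:, 0])                      # hypothetical plant data

def predict(w, order, X):
    # Polynomial-order model standing in for a varied-order activation.
    powers = np.hstack([X**p for p in range(1, order + 1)])
    return powers @ w

def fit_weights(order, steps=200, lr=0.05):
    # Inner loop: back-propagation-style weight updates at fixed order.
    w = np.zeros(order)
    for _ in range(steps):
        r = predict(w, order, X) - y
        powers = np.hstack([X**p for p in range(1, order + 1)])
        w -= lr * powers.T @ r / len(X)
    return w, np.mean((predict(w, order, X) - y) ** 2)

# Outer loop: GA-style search over the model order.
best_order, (best_w, best_mse) = 1, fit_weights(1)
for order in (2, 3, 4, 5):               # candidate ("mutated") orders
    w, mse = fit_weights(order)
    if mse < best_mse:
        best_order, best_w, best_mse = order, w, mse
```

The point of the sketch is the division of labour: structural parameters (the order) are searched combinatorially, while the weights inside each candidate are fitted by gradient-based training.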


2011 ◽  
Vol 219-220 ◽  
pp. 1077-1080
Author(s):  
Dong Yan Cui ◽  
Zai Xing Xie

In this paper, an integrated wavelet neural network fault diagnosis system is established based on information fusion technology. The effective combination of fault characteristic information shows that integrated wavelet neural networks make better use of multiple sources of characteristic information than a single wavelet neural network, resolving difficulties and problems that a single network cannot handle.
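A minimal sketch of decision-level fusion (the paper's actual fusion scheme may differ): the class-probability outputs of several wavelet networks, each fed by a different kind of fault characteristic information, are combined by a weighted average.

```python
import numpy as np

def fuse(outputs, weights=None):
    # Combine the class-probability vectors of several networks into
    # one diagnosis. A (optionally weighted) average is the simplest
    # information-fusion rule; renormalize to keep a distribution.
    outputs = np.asarray(outputs)
    if weights is None:
        weights = np.full(len(outputs), 1.0 / len(outputs))
    fused = np.tensordot(weights, outputs, axes=1)
    return fused / fused.sum()
```

The fused distribution's argmax then gives the joint diagnosis, drawing on all the characteristic information at once.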


Author(s):  
Maria Sivak ◽  
Vladimir Timofeev

The paper considers the problem of building robust neural networks using different robust loss functions. Applying such neural networks is reasonable when working with noisy data, and it can serve as an alternative to data preprocessing or to making the neural network architecture more complex. In order to work adequately, the error back-propagation algorithm requires the loss function to be once or twice continuously differentiable. According to this requirement, five robust loss functions were chosen (Andrews, Welsch, Huber, Ramsey and Fair). Using these functions in the error back-propagation algorithm instead of the quadratic one yields an entirely new class of neural networks. To investigate the properties of the resulting networks, a number of computational experiments were carried out, considering different outlier fractions and various numbers of epochs. The first stage consisted of tuning the obtained neural networks, i.e., choosing the values of the internal loss-function parameters that achieve the highest neural network accuracy; a preliminary study was pursued to determine the ranges of the parameter values. The results of the first stage allowed giving recommendations on the best parameter values for each of the loss functions under study. The second stage compared the investigated robust networks with each other and with the classical one. The analysis of the results shows that using the robust technique leads to a significant increase in neural network accuracy and in the learning rate.
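Three of the named loss functions can be sketched as follows; the tuning constants are the conventional robust-statistics defaults, not necessarily the values recommended by the paper's first-stage study.

```python
import numpy as np

def huber(r, c=1.345):
    # Huber loss: quadratic near zero, linear in the tails, so large
    # residuals (outliers) contribute bounded gradients in back-prop.
    a = np.abs(r)
    return np.where(a <= c, 0.5 * r**2, c * a - 0.5 * c**2)

def welsch(r, c=2.985):
    # Welsch loss: the gradient decays to zero for large residuals,
    # effectively ignoring gross outliers.
    return (c**2 / 2.0) * (1.0 - np.exp(-(r / c) ** 2))

def fair(r, c=1.4):
    # Fair loss: smooth and everywhere twice differentiable, meeting
    # the back-propagation differentiability requirement directly.
    a = np.abs(r) / c
    return c**2 * (a - np.log(1.0 + a))
```

Substituting one of these for the quadratic loss changes only the residual-to-gradient mapping in back-propagation; the rest of the training loop is unchanged.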

