ON THE CLASSIFICATION CAPABILITY OF A DYNAMIC THRESHOLD NEURAL NETWORK

1994 ◽  
Vol 05 (02) ◽  
pp. 103-114
Author(s):  
CHENG-CHIN CHIANG ◽  
HSIN-CHIA FU

This paper proposes a new type of neural network called the Dynamic Threshold Neural Network (DTNN), which is theoretically and experimentally superior to a conventional sigmoidal multilayer neural network in classification capability. Given a training set containing 4k+1 patterns in ℜ^n, the upper bound on the number of free parameters a DTNN needs to successfully learn this training set is (k+1)(n+2)+2(k+1), while the upper bound for a sigmoidal network is 2k(n+1)+(2k+1). We also derive a learning algorithm for the DTNN in a manner similar to the derivation of the backpropagation learning algorithm. In simulations on the Two-Spirals problem, our DTNN with 30 neurons in one hidden layer takes only 3200 epochs on average to successfully learn the whole training set, while single-hidden-layer feedforward sigmoidal neural networks have never been reported to learn this training set successfully, even with more hidden neurons.
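The abstract does not spell out the DTNN activation, so the sketch below is only one plausible reading: a unit with two trainable thresholds whose sigmoid responses are subtracted, giving a band-like response that a single fixed threshold cannot produce. The function names, the threshold pair, and the subtractive form are illustrative assumptions, not the authors' definition.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dtnn_neuron(x, w, t_low, t_high):
    """Hypothetical dynamic-threshold unit: two trainable thresholds
    carve out a response band, firing strongly when w.x lies between
    t_low and t_high and weakly outside that band."""
    net = np.dot(w, x)
    return sigmoid(net - t_low) - sigmoid(net - t_high)

# A single unit with two thresholds separates an "inside vs. outside"
# 1-D pattern that one sigmoid threshold cannot.
w = np.array([1.0])
inside = dtnn_neuron(np.array([0.5]), w, 0.0, 1.0)   # net within the band
outside = dtnn_neuron(np.array([5.0]), w, 0.0, 1.0)  # net far above t_high
```

The band-like response is one intuition for why such units can need fewer parameters than plain sigmoid units on interleaved classes like the Two-Spirals data.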

2013 ◽  
Vol 765-767 ◽  
pp. 1854-1857
Author(s):  
Feng Wang ◽  
Jin Lin Ding ◽  
Hong Sun

Neural network generalized inverse (NNGI) can realize synchronous decoupling control of two motors, but traditional neural networks (NNs) have many shortcomings. The regular extreme learning machine (RELM) has fast learning and good generalization ability, which makes it an ideal approach for approximating the inverse system, but it is difficult to specify a reasonable number of hidden neurons in advance. Building on an analysis of the RELM learning algorithm, an improved incremental RELM (IIRELM) is proposed that automatically determines the optimal network structure by gradually adding new hidden-layer neurons. A prediction model based on IIRELM is applied to two-motor closed-loop control based on NNGI, realizing decoupling control between velocity and tension. The experimental results show that the system has excellent performance.
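As a rough illustration of the incremental idea, the sketch below fits a regularized ELM and grows the hidden layer until the training error stops improving. It simplifies IIRELM in one important way: each step redraws all random hidden weights rather than keeping previously added neurons, and the toy target merely stands in for the inverse-system mapping.

```python
import numpy as np

rng = np.random.default_rng(0)

def relm_fit(X, T, n_hidden, lam=1e-3):
    """Regularized ELM: random hidden layer, ridge solution for output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def relm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Illustrative incremental loop: enlarge the hidden layer until the
# training error improves by less than a tolerance.
X = rng.uniform(-1, 1, size=(200, 2))
T = np.sin(X[:, :1]) + 0.5 * X[:, 1:]  # toy stand-in for the inverse system
best_err, n = np.inf, 0
while n < 100:
    n += 5
    W, b, beta = relm_fit(X, T, n)
    err = np.mean((relm_predict(X, W, b, beta) - T) ** 2)
    if best_err - err < 1e-6:
        break
    best_err = err
```

A true incremental ELM would append neurons to the existing hidden layer and update the output weights incrementally instead of refitting from scratch.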


2016 ◽  
Vol 5 (4) ◽  
pp. 126 ◽  
Author(s):  
I MADE DWI UDAYANA PUTRA ◽  
G. K. GANDHIADI ◽  
LUH PUTU IDA HARINI

Weather information plays an important role in many fields of human life, such as agriculture, marine operations, and aviation, and accurate weather forecasts are needed to improve performance in these fields. This study uses an artificial neural network with the backpropagation learning algorithm to create a weather forecasting model for the South Bali area. The aims are to determine the effect of the number of neurons in the hidden layer and to determine the accuracy of this method in weather forecast models. The model's inputs are factors that influence the weather: air temperature, dew point, wind speed, visibility, and barometric pressure. Tests with different numbers of neurons in the hidden layer show that increasing the number of hidden neurons is not directly proportional to forecast accuracy: adding neurons does not necessarily increase or decrease accuracy. The best accuracy rate, 51.6129%, was obtained with a network model with three neurons in the hidden layer.
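The study's finding, that accuracy is not monotone in hidden-layer size, can be reproduced in miniature by sweeping hidden sizes on synthetic data. The network and data below are illustrative stand-ins, not the study's model or its weather dataset.

```python
import numpy as np

rng = np.random.default_rng(42)

def train_mlp(X, y, n_hidden, epochs=200, lr=0.5):
    """Tiny one-hidden-layer sigmoid network trained with plain backprop;
    returns training accuracy."""
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    W1 = rng.normal(0, 0.5, (X.shape[1], n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.5, (n_hidden, 1));          b2 = np.zeros(1)
    for _ in range(epochs):
        H = sig(X @ W1 + b1)
        out = sig(H @ W2 + b2)
        d_out = (out - y[:, None]) * out * (1 - out)   # squared-error gradient
        d_H = (d_out @ W2.T) * H * (1 - H)
        W2 -= lr * H.T @ d_out / len(X); b2 -= lr * d_out.mean(0)
        W1 -= lr * X.T @ d_H / len(X);   b1 -= lr * d_H.mean(0)
    preds = (sig(sig(X @ W1 + b1) @ W2 + b2) > 0.5).ravel()
    return (preds == y.astype(bool)).mean()

# Five synthetic features stand in for temperature, dew point, wind speed,
# visibility, and pressure; sweep the hidden-layer size.
X = rng.normal(size=(300, 5))
y = (X[:, 0] + X[:, 1] * X[:, 2] > 0).astype(float)
accuracy = {h: train_mlp(X, y, h) for h in (3, 5, 10, 20)}
```

In runs like this the largest hidden layer is often not the most accurate, echoing the abstract's observation.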


2014 ◽  
Vol 2014 ◽  
pp. 1-12
Author(s):  
Shao Jie ◽  
Wang Li ◽  
Zhao WeiSong ◽  
Zhong YaQin ◽  
Reza Malekian

A model based on the improved Elman neural network (IENN) is proposed to analyze nonlinear circuits with memory effects. In this model, the hidden-layer neurons are activated by a group of Chebyshev orthogonal basis functions instead of sigmoid functions. The error curves of the sum of squared errors (SSE), as functions of the number of hidden neurons and the iteration step, are studied to determine the number of hidden-layer neurons. Simulation results for the half-bridge class-D power amplifier (CDPA), with two-tone and broadband signals as input, show that the proposed behavioral model reconstructs the CDPA system accurately and depicts its memory effect well. Compared with the Volterra-Laguerre (VL), Chebyshev neural network (CNN), and basic Elman neural network (BENN) models, the proposed model has better performance.
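The Chebyshev basis used in place of sigmoid activations can be generated with the standard three-term recurrence; the function below is a generic sketch, not the paper's exact hidden-layer construction.

```python
import numpy as np

def chebyshev_basis(x, order):
    """Evaluate Chebyshev polynomials T_0..T_order at x (x in [-1, 1])
    via the recurrence T_{k+1}(x) = 2x*T_k(x) - T_{k-1}(x)."""
    x = np.asarray(x, dtype=float)
    T = [np.ones_like(x), x]
    for _ in range(2, order + 1):
        T.append(2 * x * T[-1] - T[-2])
    return np.stack(T[: order + 1], axis=-1)

# A hidden layer would activate with these bases instead of sigmoids,
# e.g. T_2(0.5) = 2*0.5**2 - 1 = -0.5.
basis = chebyshev_basis(0.5, 3)
```

Orthogonality of the basis is what makes the hidden-layer outputs well-conditioned for fitting the amplifier's nonlinear response.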


1994 ◽  
Vol 05 (01) ◽  
pp. 67-75 ◽  
Author(s):  
BYOUNG-TAK ZHANG

Much previous work on training multilayer neural networks has attempted to speed up the backpropagation algorithm using more sophisticated weight modification rules, whereby all the given training examples are used in a random or predetermined sequence. In this paper we investigate an alternative approach in which the learning proceeds on an increasing number of selected training examples, starting with a small training set. We derive a measure of criticality of examples and present an incremental learning algorithm that uses this measure to select a critical subset of given examples for solving the particular task. Our experimental results suggest that the method can significantly improve training speed and generalization performance in many real applications of neural networks. This method can be used in conjunction with other variations of gradient descent algorithms.
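One simple proxy for the paper's criticality measure is the current model's absolute error on each candidate example; the sketch below grows a small training set by repeatedly absorbing the highest-error pool examples. The polynomial model and this particular selection rule are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def select_critical(model_predict, X, y, pool_idx, k):
    """Pick the k pool examples the current model gets most wrong --
    a simple stand-in for a criticality measure."""
    errs = np.abs(model_predict(X[pool_idx]) - y[pool_idx])
    return pool_idx[np.argsort(errs)[-k:]]

# Toy 1-D regression: start from a sparse training set and repeatedly
# absorb the most critical points from the unused pool.
X = np.linspace(-1, 1, 100)[:, None]
y = np.sin(3 * X).ravel()
train = list(range(0, 100, 20))          # small initial training set
pool = np.array([i for i in range(100) if i not in train])
for _ in range(4):
    coef = np.polyfit(X[train].ravel(), y[train], deg=3)
    predict = lambda Xs: np.polyval(coef, Xs.ravel())
    picked = select_critical(predict, X, y, pool, k=5)
    train = sorted(set(train) | set(picked.tolist()))
    pool = np.array([i for i in pool if i not in picked])
```

Because each round retrains only on the selected subset, the cost per epoch stays low while the training set concentrates on the hard examples.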


1993 ◽  
pp. 47-56
Author(s):  
Mohamed Othman ◽  
Mohd. Hassan Selamat ◽  
Zaiton Muda ◽  
Lili Norliya Abdullah

This paper discusses the modeling of the Tower of Hanoi using neural network concepts. The basic idea of the backpropagation learning algorithm in Artificial Neural Systems is then described. While similar in some ways, Artificial Neural System learning deviates from tradition in its dependence on the modification of individual weights to bring about changes in a knowledge representation distributed across the connections of a network. This unique form of learning is analyzed from two aspects: the selection of an appropriate network architecture for representing the problem, and the choice of a suitable learning rule capable of reproducing the desired function within the given network. Keywords: Tower of Hanoi; Backpropagation Algorithm; Knowledge Representation


1992 ◽  
Vol 03 (01) ◽  
pp. 19-30 ◽  
Author(s):  
AKIRA NAMATAME ◽  
YOSHIAKI TSUKAMOTO

We propose a new learning algorithm, structural learning with complementary coding, for concept learning problems. We introduce a new grouping measure that forms the similarity matrix over the training set and show that this similarity matrix provides a sufficient condition for the linear separability of the set. Using this sufficient condition, one can figure out a suitable composition of linearly separable threshold functions that exactly classifies the set of labeled vectors. In the case of nonlinear separability, the internal representation of the connectionist network, i.e., the number of hidden units and the value-space of these units, is pre-determined before learning based on the structure of the similarity matrix. A three-layer neural network is then constructed in which each linearly separable threshold function is computed by a linear-threshold unit whose weights are determined by a one-shot learning algorithm requiring a single presentation of the training set. The structural learning algorithm then determines the connection weights so as to realize the pre-determined internal representation. The pre-structured internal representation, the activation value spaces at the hidden layer, defines intermediate-concepts, and the target-concept is then learned as a combination of those intermediate-concepts. The ability to create the pre-structured internal representation based on the grouping measure distinguishes structural learning from earlier methods such as backpropagation.


2014 ◽  
Vol 556-562 ◽  
pp. 6081-6084
Author(s):  
Qian Huang ◽  
Wen Long Li ◽  
Jian Kang ◽  
Jun Yang

In this paper, based on an analysis of a variety of neural networks, a new type of pulse neural network is implemented on an FPGA [1]. The network adopts the sigmoid function as the nonlinear excitation function of its hidden layer; at the same time, to reduce ROM table storage space and improve the efficiency of the look-up table [2], it adopts STAM-algorithm-based nonlinear storage. Altera's Quartus II EDA tool was chosen as the compilation and simulation platform, and the pulse neural network was realized on a Cyclone II series EP2C20F484C6 device. Finally, the XOR problem is used as an example for hardware simulation, and the simulation results are consistent with the theoretical values. This neural network provides a new way to improve the reliability and security of complex, nonlinear, time-varying, and uncertain systems.
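A software sketch of the ROM-table idea: store sigmoid samples only for non-negative inputs and exploit the symmetry sigmoid(-x) = 1 - sigmoid(x) to halve storage. This uses a plain uniform table and does not reproduce the paper's STAM-based nonlinear storage scheme; table size and input range are assumptions.

```python
import numpy as np

def build_sigmoid_lut(n_entries=256, x_max=8.0):
    """ROM-style lookup table: sigmoid sampled uniformly on [0, x_max]."""
    xs = np.linspace(0.0, x_max, n_entries)
    return 1.0 / (1.0 + np.exp(-xs))

def lut_sigmoid(x, lut, x_max=8.0):
    """Approximate sigmoid(x) by table lookup; negative inputs reuse the
    table via sigmoid(-x) = 1 - sigmoid(x), halving ROM storage."""
    idx = min(int(abs(x) / x_max * (len(lut) - 1)), len(lut) - 1)
    y = lut[idx]
    return y if x >= 0 else 1.0 - y

lut = build_sigmoid_lut()
approx = lut_sigmoid(1.3, lut)
exact = 1.0 / (1.0 + np.exp(-1.3))
```

With 256 entries over [0, 8] the step is about 0.031, so the worst-case error stays below roughly 0.008 (sigmoid's slope never exceeds 0.25), which is typically adequate for fixed-point hardware.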


Author(s):  
JIANJUN WANG ◽  
WEIHUA XU ◽  
BIN ZOU

For three-layer artificial neural networks with trigonometric weight coefficients, upper and lower bounds on the approximation of 2π-periodic pth-order Lebesgue integrable functions are obtained in this paper. The theorems obtained provide explicit equational representations of these approximating networks, the specification of their numbers of hidden-layer units, the lower bound estimation of approximation, and the essential order of approximation. The obtained results not only characterize the intrinsic approximation property of neural networks, but also uncover the implicit relationship between the precision (speed) and the number of hidden neurons.


Author(s):  
Untari Novia Wisesty

Eye state detection is one of several tasks toward a Brain Computer Interface system, as the eye state can be read from brain signals. This paper uses the EEG Eye State dataset (Rosler, 2013) from the UCI Machine Learning Repository, consisting of 14 continuous EEG measurements over 117 seconds. The eye states were marked as "1" or "0": "1" indicates the eye-closed and "0" the eye-open state. The proposed scheme uses a multilayer neural network with the Levenberg-Marquardt optimization learning algorithm as the classification method. The Levenberg-Marquardt method is used to optimize the learning algorithm because the standard algorithm has a weak convergence rate and needs many iterations to reach minimum error. Based on the analysis of the experiments on the EEG dataset, it can be concluded that the proposed scheme can detect the eye state. The best accuracy was obtained with the combination of a sigmoid activation function, data normalization, and 31 hidden neurons (95.71%) for one hidden layer, and 98.912% for two hidden layers with 39 and 47 neurons and a linear activation function.
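The Levenberg-Marquardt update that replaces plain gradient descent is w ← w − (JᵀJ + μI)⁻¹Jᵀr, interpolating between gradient descent (large μ) and Gauss-Newton (small μ). The sketch below applies one such update loop to a toy curve fit rather than an EEG network, so the model and data are assumptions.

```python
import numpy as np

def lm_step(w, residual_fn, jacobian_fn, mu):
    """One Levenberg-Marquardt update: w -= (J^T J + mu*I)^{-1} J^T r."""
    r = residual_fn(w)
    J = jacobian_fn(w)
    A = J.T @ J + mu * np.eye(len(w))
    return w - np.linalg.solve(A, J.T @ r)

# Toy fit: y = w0 * exp(w1 * x) on noiseless synthetic data.
x = np.linspace(0, 1, 30)
y = 2.0 * np.exp(-1.5 * x)
residual = lambda w: w[0] * np.exp(w[1] * x) - y
jac = lambda w: np.stack([np.exp(w[1] * x),
                          w[0] * x * np.exp(w[1] * x)], axis=1)

w = np.array([1.0, 0.0])
for _ in range(20):
    w = lm_step(w, residual, jac, mu=1e-3)
loss = float(np.sum(residual(w) ** 2))
```

Because the step uses second-order curvature information through JᵀJ, convergence typically takes far fewer iterations than first-order backpropagation, which is the motivation cited in the abstract.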

