DYSLEXIC BEHAVIOUR OF FEEDFORWARD NEURAL NETWORKS

1990 ◽  
Vol 01 (03) ◽  
pp. 237-245 ◽  
Author(s):  
Edgardo A. Ferrán ◽  
Roberto P. J. Perazzo

A model is proposed in which the synaptic efficacies of a feedforward neural network are adapted with a cost function that vanishes if the Boolean function represented by the network has the same symmetry properties as the target one. The function chosen by this procedure is thus taken as an archetype of the whole symmetry class. Several examples show how this type of partial learning can produce network behaviour reminiscent of that of dyslexic persons.
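The paper's cost is defined over symmetry classes rather than individual Boolean functions. As a rough illustration of the underlying idea (a toy cost of my own construction, not the authors' actual one), a cost can be made to vanish exactly when two truth tables coincide under some permutation of the inputs:

```python
from itertools import permutations, product

def symmetry_class_cost(f, g, n):
    """Toy cost that is zero iff Boolean functions f and g (truth tables over
    n inputs) lie in the same symmetry class under input permutations; otherwise
    it is the fewest disagreeing entries over all permutations."""
    best = None
    for perm in permutations(range(n)):
        # count inputs on which f disagrees with g composed with this permutation
        errs = sum(f[x] != g[tuple(x[p] for p in perm)]
                   for x in product((0, 1), repeat=n))
        best = errs if best is None else min(best, errs)
    return best

# Truth tables as dicts from input tuples to 0/1.
AND = {x: x[0] & x[1] for x in product((0, 1), repeat=2)}
PROJ0 = {x: x[0] for x in product((0, 1), repeat=2)}
PROJ1 = {x: x[1] for x in product((0, 1), repeat=2)}
```

Here the two projections fall in the same symmetry class (each is the other with inputs swapped), while AND and a projection do not, so only the former pair yields zero cost.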

1997 ◽  
Vol 9 (1) ◽  
pp. 185-204 ◽  
Author(s):  
Rudy Setiono

This article proposes the use of a penalty function for pruning feedforward neural networks by weight elimination. The proposed penalty function consists of two terms: the first discourages the use of unnecessary connections, and the second prevents the connection weights from taking excessively large values. Simple criteria for eliminating weights from the network are also given. The effectiveness of this penalty function is tested on three well-known problems: the contiguity problem, the parity problems, and the MONK's problems. For many of these problems, the resulting pruned networks have fewer connections than previously reported in the literature.
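A minimal sketch of such a two-term penalty (the constants and the threshold below are illustrative choices, not necessarily the paper's): a saturating term drives small weights toward zero so their connections can be pruned, while an ordinary weight-decay term bounds the surviving weights.

```python
import numpy as np

def pruning_penalty(w, eps1=1e-1, eps2=1e-4, beta=10.0):
    """Two-term weight-elimination penalty (sketch; eps1, eps2, beta illustrative).
    Term 1 saturates for large |w|, so it mainly pressures small weights to zero;
    term 2 is standard weight decay, keeping remaining weights from growing large."""
    w = np.asarray(w, dtype=float)
    term1 = eps1 * np.sum(beta * w**2 / (1.0 + beta * w**2))
    term2 = eps2 * np.sum(w**2)
    return term1 + term2

def prunable(w, tol=0.05):
    """Simple elimination criterion: flag weights whose magnitude is below tol."""
    return np.abs(np.asarray(w)) < tol
```

The penalty vanishes only at zero weights and grows with weight magnitude, so minimizing it alongside the data error pushes unnecessary connections toward the prunable region.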


2007 ◽  
Vol 16 (01) ◽  
pp. 111-120 ◽  
Author(s):  
MANISH MANGAL ◽  
MANU PRATAP SINGH

This paper describes the application of two evolutionary algorithms to feedforward neural networks used in classification problems. Besides a simple backpropagation-trained feedforward network, the paper considers the genetic algorithm and a random search algorithm. The objective is to analyze the performance of GAs relative to simple backpropagation in terms of accuracy and speed on this problem. The experiments considered feedforward neural networks trained with the genetic algorithm or the random search algorithm over 39 network structures and artificial data sets. In most cases, the evolutionary feedforward neural networks achieved better or equal accuracy compared with the original backpropagation-trained feedforward neural network. We found few differences in the accuracy of the networks produced by the EAs, but ample differences in execution time. The results suggest that the feedforward neural network evolved with the random search algorithm might be the best algorithm on the data sets we tested.
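Training a feedforward network's weights by random search, as opposed to gradient-based backpropagation, can be sketched as follows (the XOR data, network size, and step size are my own toy choices): keep an incumbent weight vector and accept any Gaussian perturbation that lowers the loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, a classic test that a linear model cannot fit.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def forward(w, X):
    """One-hidden-layer net; w packs a 2x3 hidden layer and a 3-weight output layer."""
    W1, b1 = w[:6].reshape(2, 3), w[6:9]
    W2, b2 = w[9:12], w[12]
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2

def loss(w):
    return float(np.mean((forward(w, X) - y) ** 2))

# Random search: keep the best of many Gaussian perturbations around the incumbent.
best_w = rng.normal(0, 1, size=13)
best_loss = init_loss = loss(best_w)
for _ in range(2000):
    cand = best_w + rng.normal(0, 0.3, size=13)
    if loss(cand) < best_loss:
        best_w, best_loss = cand, loss(cand)
```

Because only improving candidates are accepted, the final loss can never exceed the initial one; what the abstract compares is how quickly such derivative-free trainers reach a given accuracy versus backpropagation.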


1999 ◽  
Vol 121 (4) ◽  
pp. 724-729 ◽  
Author(s):  
C. James Li ◽  
Yimin Fan

This paper describes a method to diagnose the most frequent faults of a screw compressor and assess the magnitude of these faults by tracking changes in the compressor's dynamics. To determine the condition of the compressor, a feedforward neural network model is first employed to identify the dynamics of the compressor. A recurrent neural network is then used to classify the model into one of three conditions: baseline, gaterotor wear, and excessive friction. Finally, another recurrent neural network estimates the magnitude of a fault from the model. The method's ability to generalize was evaluated, and experimental validation was also performed. The results show significant improvement over the previous method, which used only feedforward neural networks.


2018 ◽  
Vol 8 (4) ◽  
pp. 20180011 ◽  
Author(s):  
Junkyung Kim ◽  
Matthew Ricci ◽  
Thomas Serre

The advent of deep learning has recently led to great successes in various engineering applications. As a prime example, convolutional neural networks, a type of feedforward neural network, now approach human accuracy on visual recognition tasks like image classification and face recognition. However, here we show that feedforward neural networks struggle to learn abstract visual relations that are effortlessly recognized by non-human primates, birds, rodents and even insects. We systematically study the ability of feedforward neural networks to learn to recognize a variety of visual relations and demonstrate that same–different visual relations pose a particular strain on these networks. Networks fail to learn same–different visual relations when stimulus variability makes rote memorization difficult. Further, we show that learning same–different problems becomes trivial for a feedforward network that is fed with perceptually grouped stimuli. This demonstration and the comparative success of biological vision in learning visual relations suggest that feedback mechanisms such as attention, working memory and perceptual grouping may be the key components underlying human-level abstract visual reasoning.
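The claimed effect of perceptual grouping can be illustrated with a toy sketch (the stimulus format here is hypothetical, not the paper's image stimuli): once the two items of a same–different pair arrive as an aligned, grouped input, the task reduces to detecting any elementwise mismatch, which a feedforward readout handles trivially regardless of item variability.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_pair(dim=16, same=True):
    """Toy same-different stimulus: a pair of random binary item vectors."""
    a = rng.integers(0, 2, size=dim)
    b = a.copy()
    if not same:
        b[rng.integers(0, dim)] ^= 1  # flip one bit so the items differ
    return a, b

def grouped_readout(a, b):
    """Stand-in for perceptual grouping: the items arrive aligned as a pair,
    so the readout only has to detect an elementwise mismatch."""
    return int(np.any(a != b))  # 1 = different, 0 = same
```

Without grouping, a network sees one undifferentiated pixel array and must memorize item identities; with grouping, the mismatch feature makes the decision linearly separable by construction.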


2005 ◽  
Vol 15 (05) ◽  
pp. 323-338 ◽  
Author(s):  
RALF KRETZSCHMAR ◽  
NICOLAOS B. KARAYIANNIS ◽  
FRITZ EGGIMANN

This paper proposes a framework for training feedforward neural network models capable of handling class overlap and imbalance by minimizing an error function that compensates for such imperfections of the training set. A special case of the proposed error function can be used for training variance-controlled neural networks (VCNNs), which are developed to handle class overlap by minimizing an error function involving the class-specific variance (CSV) computed at their outputs. Another special case of the proposed error function can be used for training class-balancing neural networks (CBNNs), which are developed to handle class imbalance by relying on class-specific correction (CSC). VCNNs and CBNNs are compared with conventional feedforward neural networks (FFNNs), quantum neural networks (QNNs), and resampling techniques. The properties of VCNNs and CBNNs are illustrated by experiments on artificial data. Various experiments involving real-world data reveal the advantages offered by VCNNs and CBNNs in the presence of class overlap and class imbalance.
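A sketch of an error function with a class-specific variance (CSV) term (the weighting and exact functional form here are illustrative, not the paper's): the usual squared error is augmented with the variance of the network outputs computed within each class, which penalizes overlapping, spread-out class responses.

```python
import numpy as np

def csv_error(outputs, targets, labels, lam=0.5):
    """Squared error plus a class-specific variance term (lam is illustrative).
    The CSV term vanishes when every class produces a constant output."""
    outputs = np.asarray(outputs, dtype=float)
    labels = np.asarray(labels)
    mse = np.mean((outputs - np.asarray(targets, dtype=float)) ** 2)
    csv = sum(np.var(outputs[labels == c]) for c in np.unique(labels))
    return mse + lam * csv
```

When outputs are constant within each class the CSV term is zero and the error reduces to plain MSE; within-class scatter at the outputs, a symptom of class overlap, is what the extra term charges for.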


2019 ◽  
Author(s):  
Md. Shoaibur Rahman

This article presents an overview of generalized formulations of the computations, optimization, and tuning of a deep feedforward neural network. A small network is used to explain the computing steps systematically; these are then used to establish the generalized forms of the forward- and backward-propagation computations for larger networks. Additionally, some commonly used cost functions, activation functions, optimization algorithms, and hyper-parameter tuning approaches are discussed.
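The forward and backward passes can be sketched for a small two-layer network, with a central-difference numerical check confirming the backpropagated gradient (the shapes and squared-error cost are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))                      # 5 samples, 3 inputs
y = rng.normal(size=(5, 1))                      # regression targets
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)    # hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)    # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(W1):
    h = sigmoid(X @ W1 + b1)     # hidden activations
    yhat = h @ W2 + b2           # linear output
    return h, yhat, 0.5 * np.sum((yhat - y) ** 2)

# Backward pass: apply the chain rule layer by layer.
h, yhat, L = forward(W1)
d_yhat = yhat - y                # dL/d(yhat)
d_h = d_yhat @ W2.T              # dL/dh
d_z1 = d_h * h * (1 - h)         # sigmoid'(z) = h * (1 - h)
dW1 = X.T @ d_z1                 # dL/dW1

# Central-difference check of every entry of dW1.
num, eps = np.zeros_like(W1), 1e-6
for i in range(W1.shape[0]):
    for j in range(W1.shape[1]):
        Wp, Wm = W1.copy(), W1.copy()
        Wp[i, j] += eps
        Wm[i, j] -= eps
        num[i, j] = (forward(Wp)[2] - forward(Wm)[2]) / (2 * eps)
```

The analytic and numerical gradients agree to within finite-difference error, which is the standard sanity check for a hand-written backward pass.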


1992 ◽  
Vol 03 (03) ◽  
pp. 291-299 ◽  
Author(s):  
MO-YUEN CHOW ◽  
SUE OI YEE

The relative robustness of artificial neural networks subject to small input perturbations (e.g. measurement noise) is an important issue in real-world applications. This paper uses the concept of input-output sensitivity analysis to derive a relative network robustness measure for different feedforward neural network configurations. For illustration, this measure is used to compare different neural network configurations designed for detecting incipient faults in induction motors. Analytical and simulation results show that the relative network robustness measure derived in this paper is an effective indicator of the relative performance of different feedforward neural network configurations in noisy environments, and that this measure should be considered in the design of neural networks for real-time applications. The input-output sensitivity analysis and relative network robustness measure presented here can be extended to other neural networks designed for on-line applications.
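One natural form of such a sensitivity measure can be sketched as the average norm of the network's input-output Jacobian, estimated here by finite differences over sample inputs (a sketch of the idea; the paper's exact measure may differ):

```python
import numpy as np

def net(x, W1, W2):
    """Small feedforward net: tanh hidden layer, linear output layer."""
    return np.tanh(x @ W1) @ W2

def sensitivity(W1, W2, n_samples=20, eps=1e-4, seed=0):
    """Input-output sensitivity: mean Frobenius norm of the finite-difference
    Jacobian over random inputs. Larger values = less robust to input noise."""
    rng = np.random.default_rng(seed)  # fixed seed: all configurations see the same inputs
    dim_in = W1.shape[0]
    total = 0.0
    for _ in range(n_samples):
        x = rng.normal(size=dim_in)
        J = np.stack([(net(x + eps * e, W1, W2) - net(x - eps * e, W1, W2)) / (2 * eps)
                      for e in np.eye(dim_in)])
        total += np.linalg.norm(J)
    return total / n_samples

rng = np.random.default_rng(2)
W1 = 0.1 * rng.normal(size=(3, 5))
W2 = rng.normal(size=(5, 2))
```

Because the output is linear in the last layer, scaling up the output weights scales the measure proportionally, so configurations with larger effective gain register as less robust to input noise.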


1992 ◽  
Vol 26 (9-11) ◽  
pp. 2461-2464 ◽  
Author(s):  
R. D. Tyagi ◽  
Y. G. Du

A steady-state mathematical model of an activated sludge process with a secondary settler was developed. With a limited number of training samples obtained from the steady-state simulation, a feedforward neural network was established that exhibits excellent capability for operational prediction and determination.


2016 ◽  
Vol 25 (06) ◽  
pp. 1650033 ◽  
Author(s):  
Hossam Faris ◽  
Ibrahim Aljarah ◽  
Nailah Al-Madi ◽  
Seyedali Mirjalili

Evolutionary neural networks have proven beneficial on challenging datasets, mainly due to their high local optima avoidance. Stochastic operators in such techniques reduce the probability of stagnation in local solutions and help them supersede conventional training algorithms such as Back Propagation (BP) and Levenberg-Marquardt (LM). According to the No-Free-Lunch (NFL) theorem, however, there is no single optimization technique for solving all optimization problems. This means that a neural network trained by a new algorithm has the potential to solve a new set of problems or outperform current techniques on existing problems. This motivates our investigation of the efficiency of the recently proposed evolutionary algorithm called the Lightning Search Algorithm (LSA) in training neural networks, for the first time in the literature. The LSA-based trainer is benchmarked on 16 popular medical diagnosis problems and compared to BP, LM, and 6 other evolutionary trainers. The quantitative and qualitative results show that the LSA algorithm achieves not only better local optima avoidance but also faster convergence compared with the other algorithms. In addition, the statistical tests conducted show that the LSA-based trainer is significantly superior to the current algorithms on the majority of datasets.


2002 ◽  
Vol 12 (01) ◽  
pp. 31-43 ◽  
Author(s):  
GARY YEN ◽  
HAIMING LU

In this paper, we propose a genetic-algorithm-based design procedure for a multi-layer feedforward neural network. A hierarchical genetic algorithm is used to evolve both the neural network's topology and its weighting parameters. Compared with traditional genetic-algorithm-based designs for neural networks, the hierarchical approach addresses several deficiencies, including a feasibility check highlighted in the literature. A multi-objective cost function is used to optimize the performance and topology of the evolved neural network simultaneously. In predicting the Mackey–Glass chaotic time series, the networks designed by the proposed approach prove competitive with, or even superior to, traditional learning algorithms for multi-layer perceptron networks and radial-basis-function networks. Based on the chosen cost function, a linear weight combination decision-making approach is applied to derive an approximate Pareto-optimal solution set. Designing a set of neural networks can therefore be considered as solving a two-objective optimization problem.
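The linear weight combination step can be sketched as follows (candidate networks and their objective values are invented for illustration): sweep a scalarization weight over the two objectives, network error and topology size, and collect each minimizer; the collected set approximates the Pareto front.

```python
import numpy as np

# Candidate networks summarized by two objectives: (prediction error, connections).
candidates = {"net_a": (0.10, 50), "net_b": (0.05, 120),
              "net_c": (0.20, 20), "net_d": (0.12, 60)}  # net_d: dominated by net_a

# Linear weight combination: sweep the trade-off weight alpha and keep each minimizer.
approx_pareto = set()
for alpha in np.linspace(0, 1, 21):
    best = min(candidates,
               key=lambda k: alpha * candidates[k][0]
                             + (1 - alpha) * candidates[k][1] / 120.0)  # normalize size
    approx_pareto.add(best)
```

Dominated candidates (worse on both objectives, like net_d here) are never selected for any weighting, so the sweep returns only non-dominated designs; note that a linear scalarization can miss Pareto points lying in non-convex regions of the front.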

