Emergence of Learning Rule in Neural Networks Using Genetic Programming Combined with Decision Trees

Author(s):  
Noboru Matsumoto ◽  
Kenneth J. Mackin ◽  
Eiichiro Tazaki

Genetic Programming (GP) combined with decision trees is used to evolve both the structure and the weights of Artificial Neural Networks (ANNs). The learning rule of the decision tree is defined as a function of global information using a divide-and-conquer strategy. Learning rules with lower fitness values are replaced by new ones generated by GP techniques, and the reciprocal connection between the decision tree and GP emerges from the coordination of learning rules. Since no constraint is placed on the initial network, a network better suited to a given task can be found. Fitness values are further improved using a hybrid GP technique that combines GP with back-propagation. The proposed method is applied to medical diagnosis, and the results demonstrate that effective learning rules evolve.
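The abstract does not specify how candidate learning rules are encoded; the minimal Python sketch below assumes rules are expressions over (weight, gradient, error), scored by training a toy one-layer network, with low-fitness rules replaced each generation as a stand-in for GP crossover and mutation on tree-shaped rules. The primitive set and all names are illustrative, not the paper's.

```python
import random
import numpy as np

# A candidate "learning rule" maps (weight, gradient, error) -> weight update.
# The paper evolves expression trees; for brevity we sample from a small
# hypothetical primitive set of closures instead.
PRIMITIVES = [
    lambda w, g, e: -0.1 * g,             # plain gradient descent
    lambda w, g, e: -0.1 * g - 0.01 * w,  # gradient descent + weight decay
    lambda w, g, e: -0.1 * e * g,         # error-scaled update
]

def fitness(rule, n_steps=200):
    """Train a one-layer net on a toy task with the candidate rule; return accuracy."""
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)
    w = rng.normal(size=2)
    for _ in range(n_steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid output
        err = p - y                        # prediction error
        grad = X.T @ err / len(y)          # mean gradient
        w = w + rule(w, grad, err.mean())  # apply the evolved rule
    return ((p > 0.5) == y).mean()

# Generational loop: the weaker half of the population is replaced by fresh
# rules (standing in for GP crossover/mutation on expression trees).
population = [random.choice(PRIMITIVES) for _ in range(10)]
for gen in range(5):
    scored = sorted(population, key=fitness, reverse=True)
    population = scored[:5] + [random.choice(PRIMITIVES) for _ in range(5)]
print("best fitness:", fitness(population[0]))
```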

1995 ◽  
Vol 03 (04) ◽  
pp. 1177-1191 ◽  
Author(s):  
HÉLÈNE PAUGAM-MOISY

This article is a survey of recent advances on multilayer neural networks. The first section is a short summary of multilayer neural networks: their history, their architecture, and their learning rule, the well-known back-propagation. In the following section, several theorems are cited which present one-hidden-layer neural networks as universal approximators. The next section points out that two hidden layers are often required for exactly realizing d-dimensional dichotomies. Defining the frontier between one-hidden-layer and two-hidden-layer networks is still an open problem. Several bounds on the size of a multilayer network which learns from examples are presented, and we emphasize the fact that, even if everything can be done with only one hidden layer, things can often be done better with two or more hidden layers. Finally, this assertion is supported by the behaviour of multilayer neural networks in two applications: prediction of pollution and odor recognition modelling.
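The one-hidden-layer universal-approximation results the survey cites take, in Cybenko/Hornik form, roughly the following shape (a paraphrase, not the article's exact statement):

```latex
% One-hidden-layer network with n units and activation \sigma:
f_n(x) \;=\; \sum_{i=1}^{n} \alpha_i \,\sigma\!\left(w_i^{\top} x + b_i\right)
% Universal approximation (Cybenko 1989; Hornik et al. 1989, paraphrased):
% for any continuous f on a compact K \subset \mathbb{R}^d and any \varepsilon > 0,
% there exist n, \alpha_i, w_i, b_i such that
\sup_{x \in K} \bigl| f(x) - f_n(x) \bigr| \;<\; \varepsilon .
```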


Author(s):  
HENRY WAI-KIT CHIA ◽  
CHEW-LIM TAN

Neural Logic Networks, or Neulonets, are hybrids of neural networks and expert systems capable of representing complex human logic in decision making. Each neulonet is composed of rudimentary net rules which themselves depict a wide variety of fundamental human logic rules. An early methodology employed in neulonet learning for pattern classification involved weight adjustments during back-propagation training, which ultimately rendered the net rules incomprehensible. A new technique is now developed that allows the neulonet to learn by composing net rules using genetic programming, without imposing weight modifications, thereby preserving the inherent logic of the net rules. Experimental results are presented to illustrate this new capability in capturing human decision logic from examples. The extraction and analysis of human-logic net rules from an evolved neulonet is discussed, and the extracted net rules are shown to provide an alternative perspective on the breadth of knowledge that can be expressed and discovered. Comparisons are also made to demonstrate the added advantage of using net rules, against standard Boolean logic of negation, disjunction and conjunction, in the realm of evolutionary computation.
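The abstract does not reproduce the net-rule formalism; the sketch below assumes the usual neural-logic three-valued encoding (signals as ordered pairs for true/false/unknown, weights as pairs), and the conjunction-like weights are illustrative.

```python
# Three-valued signals as ordered pairs: (1,0)=true, (0,1)=false, (0,0)=unknown.
TRUE, FALSE, UNKNOWN = (1, 0), (0, 1), (0, 0)

def net_rule(inputs, weights):
    """Fire true if weighted evidence >= 1, false if <= -1, else unknown.
    Weights are pairs (alpha, beta) applied to the true/false components."""
    s = sum(a * alpha - b * beta for (a, b), (alpha, beta) in zip(inputs, weights))
    if s >= 1:
        return TRUE
    if s <= -1:
        return FALSE
    return UNKNOWN

# Conjunction-like rule over two inputs: both must be true to conclude true,
# and any false input forces false (weights chosen to make this hold).
w_and = [(0.5, 2), (0.5, 2)]
print(net_rule([TRUE, TRUE], w_and))     # (1, 0): true
print(net_rule([TRUE, FALSE], w_and))    # (0, 1): false
print(net_rule([TRUE, UNKNOWN], w_and))  # (0, 0): unknown
```

Genetic programming then composes such rules structurally, so the evolved network stays readable instead of dissolving the rules into opaque weight adjustments.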


2019 ◽  
Author(s):  
David Rotermund ◽  
Klaus R. Pawelzik

Neural networks are important building blocks in technical applications. These artificial neural networks (ANNs) rely on noiseless continuous signals, in stark contrast to the discrete action potentials stochastically exchanged among the neurons in real brains. A promising approach towards bridging this gap is offered by Spike-by-Spike (SbS) networks, which represent a compromise between non-spiking and spiking versions of generative models that perform inference on their inputs. What is still missing are algorithms for finding weight sets that would optimize the output performance of deep SbS networks with many layers. Here, a learning rule for hierarchically organized SbS networks is derived. The properties of this approach are investigated and its functionality is demonstrated by simulations. In particular, a deep convolutional SbS network for classifying handwritten digits (MNIST) is presented. When applied together with an optimizer, this learning method achieves a classification performance of roughly 99.3% on the MNIST test data, thereby approaching the benchmark results of ANNs without extensive parameter optimization. We envision that with this learning rule SbS networks will provide a new basis for research in neuroscience and for technical applications, especially once they are implemented on specialized computational hardware.
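The deep learning rule itself is derived in the paper and not reproduced in the abstract. For orientation, here is a hedged Python sketch of the underlying spike-by-spike inference update in its commonly cited form; the exact normalization and notation here are assumptions, not the paper's derivation.

```python
import numpy as np

def sbs_inference(spikes, W, eps=0.1, n_hidden=10):
    """One SbS inference pass: update latent vector h after each observed spike s_t.

    W[s, i] ~ p(input channel s | hidden cause i); h stays a probability vector.
    Assumed update (Ernst/Rotermund/Pawelzik-style):
        h_i <- (h_i + eps * h_i * W[s_t, i] / sum_j h_j W[s_t, j]) / (1 + eps)
    """
    h = np.full(n_hidden, 1.0 / n_hidden)  # uniform initial latent estimate
    for s_t in spikes:
        likelihood = W[s_t] * h            # h_j * p(s_t | j), elementwise
        h = (h + eps * likelihood / likelihood.sum()) / (1.0 + eps)
    return h                               # stays normalized up to float error

# Toy usage: 5 input channels, 10 hidden causes, random generative weights.
rng = np.random.default_rng(1)
W = rng.random((5, 10))
W /= W.sum(axis=0, keepdims=True)          # columns are distributions over channels
h = sbs_inference(spikes=[0, 2, 2, 4, 1], W=W)
print(h.round(3), h.sum())
```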


Author(s):  
Petr Svec ◽  
Max Schwartz ◽  
Atul Thakur ◽  
Davinder K. Anand ◽  
Satyandra K. Gupta

This paper describes a computational framework for automatically synthesizing planning logic for unmanned surface vehicles (USVs). The basic idea behind our approach is as follows. The USV explores the virtual environment by randomly trying different moves. USV moves are simulated in the virtual environment and evaluated based on their ability to make progress towards the mission goal. If a successful action is identified as part of this random exploration, it is integrated into the logic driving the USV. The planning logic is represented as a decision tree that uses high-level controllers as building blocks, together with conditionals and other program constructs. We used a strongly typed, GP-based evolutionary framework to automatically generate planning logic for blocking the advancement of a computer-driven intruder boat toward a valuable target. Our results show that a genetic-programming-based synthesis framework is capable of generating decision trees expressing useful logic for blocking the advancement of an enemy boat.
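As a rough illustration of the representation (not the paper's actual primitive set), a strongly typed decision-tree program can be grown and executed as below; the controller and sensor names are hypothetical.

```python
import random

# Hypothetical high-level controllers (leaves) and boolean sensors (conditions).
CONTROLLERS = ["block_left", "block_right", "intercept", "hold_position"]
SENSORS = ["intruder_port_side", "intruder_closing", "target_exposed"]

def random_tree(depth=3, rng=random):
    """Grow a typed decision tree: internal nodes test a boolean sensor, leaves
    invoke a controller. Typing guarantees conditions never end up as leaves."""
    if depth == 0 or rng.random() < 0.3:
        return rng.choice(CONTROLLERS)
    return {"if": rng.choice(SENSORS),
            "then": random_tree(depth - 1, rng),
            "else": random_tree(depth - 1, rng)}

def execute(tree, state):
    """Walk the tree against the current (simulated) world state."""
    while isinstance(tree, dict):
        tree = tree["then"] if state[tree["if"]] else tree["else"]
    return tree  # name of the controller to run this decision cycle

state = {"intruder_port_side": True, "intruder_closing": True, "target_exposed": False}
plan = random_tree()
print(plan)
print("action:", execute(plan, state))
```

Evolution would then apply crossover and mutation to such trees, keeping the ones whose simulated blocking behaviour scores best against the intruder.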


2020 ◽  
Vol 34 (04) ◽  
pp. 3882-3889
Author(s):  
Wolfgang Fuhl ◽  
Gjergji Kasneci ◽  
Wolfgang Rosenstiel ◽  
Enkeljda Kasneci

We present an alternative to convolution layers in convolutional neural networks (CNNs). Our approach reduces the complexity of convolutions by replacing them with binary decisions. These binary decisions serve as indices into conditional distributions in which each weight represents a leaf of a decision tree. Only the indices into the weights need to be determined once, which reduces the complexity of convolutions by the depth of the output tensor. Index computation is performed by simple binary decisions that require fewer cycles than the multiplications used conventionally. We show how these decision-based indices replace 2D weight matrices as well as 3D weight tensors. The new layers can be trained like convolution layers in CNNs with the backpropagation algorithm, for which we provide a formalization. Our results on multiple publicly available data sets show that our approach performs similarly to conventional neural networks. Beyond the formalized reduction of complexity and the improved qualitative performance, we demonstrate the runtime improvement over convolution layers empirically.
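The abstract does not give the precise indexing scheme; the NumPy sketch below shows the general idea under assumed shapes: a few threshold comparisons form a bit index that selects a leaf weight vector per spatial position, replacing the multiply-accumulate of a convolution.

```python
import numpy as np

def binary_decision_layer(x, thresholds, leaf_weights):
    """Replace a convolution's multiply-accumulate with tree-style indexing.

    For each spatial position, k binary comparisons of input channels against
    thresholds form a k-bit index selecting one leaf weight vector per output.
    x: (H, W, C_in); thresholds: (k, 2) rows of (channel, threshold);
    leaf_weights: (2**k, C_out) -- one weight vector per leaf of the tree.
    """
    H, W, _ = x.shape
    idx = np.zeros((H, W), dtype=int)
    for bit, (c, t) in enumerate(thresholds):
        idx |= (x[:, :, int(c)] > t).astype(int) << bit  # cheap binary decision
    return leaf_weights[idx]                              # (H, W, C_out) lookup

# Toy usage: 8x8 input with 3 channels, 2 decisions -> 4 leaves, 5 output channels.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8, 3))
thresholds = np.array([[0, 0.0], [2, 0.5]])
leaf_weights = rng.normal(size=(4, 5))
y = binary_decision_layer(x, thresholds, leaf_weights)
print(y.shape)  # (8, 8, 5)
```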


2020 ◽  
Vol 08 (03) ◽  
pp. 203-210
Author(s):  
Han Xiao ◽  
Ge Xu

Though traditional algorithms can be embedded into neural architectures with the principle proposed in [H. Xiao, Hungarian layer: Logics empowered neural architecture, arXiv: 1712.02555], variables that occur only in the condition of a branch cannot be updated, as a special case. To tackle this issue, we multiply the conditioned branches by the Dirac symbol (i.e., δ(x)) and then approximate the Dirac symbol with continuous functions. In this way, the gradients of condition-specific variables can be worked out approximately in the back-propagation process, making a fully functional neural graph. Within this novel principle, we propose the neural decision tree (NDT), which takes simplified neural networks as the decision function in each branch and employs complex neural networks to generate the output in each leaf. Extensive experiments verify our theoretical analysis and demonstrate the effectiveness of our model.
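A minimal sketch of the idea, assuming a narrow Gaussian as the stand-in for δ(x) and a sigmoid gate for the branch condition; the abstract does not specify the exact smoothing function, so both choices here are illustrative.

```python
import numpy as np

def delta_approx(x, eps=0.1):
    """Continuous stand-in for the Dirac symbol: a narrow normalized Gaussian
    (assumed here; the paper's exact approximation is not given in the abstract)."""
    return np.exp(-(x / eps) ** 2) / (eps * np.sqrt(np.pi))

def soft_branch(cond_value, then_fn, else_fn, x, eps=0.5):
    """Differentiable if/else: both branches are computed and mixed by a smooth
    gate, so gradients also reach variables that appear only in the condition."""
    gate = 1.0 / (1.0 + np.exp(-cond_value / eps))  # sigmoid gate ~ step function
    return gate * then_fn(x) + (1.0 - gate) * else_fn(x)

print(delta_approx(0.0), delta_approx(0.5))  # tall spike at 0, ~0 away from 0
# As eps -> 0 the gate hardens back into the original discrete branch.
print(soft_branch(+2.0, lambda x: x * 3, lambda x: -x, x=1.0))  # ~= 3 (then-branch)
print(soft_branch(-2.0, lambda x: x * 3, lambda x: -x, x=1.0))  # ~= -1 (else-branch)
```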


2020 ◽  
Vol 34 (04) ◽  
pp. 6413-6421
Author(s):  
Mike Wu ◽  
Sonali Parbhoo ◽  
Michael Hughes ◽  
Ryan Kindle ◽  
Leo Celi ◽  
...  

The lack of interpretability remains a barrier to adopting deep neural networks across many safety-critical domains. Tree regularization was recently proposed to encourage a deep neural network's decisions to resemble those of a globally compact, axis-aligned decision tree. However, it is often unreasonable to expect a single tree to predict well across all possible inputs; in practice, doing so can lead to optima that are neither interpretable nor performant. To address this issue, we propose regional tree regularization, a method that encourages a deep model to be well approximated by several separate decision trees specific to predefined regions of the input space. Across many datasets, including two healthcare applications, we show that our approach delivers simpler explanations than other regularization schemes without compromising accuracy. Specifically, our regional regularizer finds many more "desirable" optima than its global analogue.
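A hedged sketch of the quantity being penalized, assuming scikit-learn trees and per-region average decision-path length as the complexity measure; the actual method trains a differentiable surrogate of such a penalty rather than computing it directly inside the loss.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def regional_tree_complexity(model_fn, X, region_ids):
    """In each predefined input region, fit an axis-aligned tree to mimic the
    network and measure the average decision-path length (a proxy for how hard
    the region's behaviour is for a human to simulate)."""
    penalties = []
    for r in np.unique(region_ids):
        Xr = X[region_ids == r]
        yr = (model_fn(Xr) > 0.5).astype(int)  # network's hard decisions
        if len(np.unique(yr)) < 2:             # degenerate region: trivial tree
            penalties.append(0.0)
            continue
        tree = DecisionTreeClassifier(max_depth=8).fit(Xr, yr)
        path_lengths = tree.decision_path(Xr).sum(axis=1)
        penalties.append(float(np.mean(path_lengths)))
    return float(np.mean(penalties))           # added, scaled, to the training loss

# Toy usage: a "network" that is simple on region 0 and wiggly on region 1.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 4))
regions = (X[:, 0] > 0).astype(int)
net = lambda Z: 1 / (1 + np.exp(-(Z[:, 1] + np.sin(3 * Z[:, 2]) * (Z[:, 0] > 0))))
print("mean path length:", regional_tree_complexity(net, X, regions))
```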

