EXAMINING THE CHIR ALGORITHM PERFORMANCE FOR MULTILAYER NETWORKS AND CONTINUOUS INPUT VECTORS
Learning by Choice of Internal Representations (CHIR) is a training algorithm presented by Grossman et al. [1], based on modifying the Internal Representations (IR) alongside the direct weight-matrix modification performed by conventional training methods. The algorithm has been presented in several versions, aimed at the various training problems of nets with continuous and binary weights, multilayer and multi-output-neuron nets, and training without storing the Internal Representations. This paper examines the capability of one of these versions, the CHIR2 algorithm, to handle multilayer training tasks with continuous input vectors. A comparison between the performance of this algorithm and that of the Backpropagation (BP) algorithm [2] is carried out via extensive computer simulations on the “two-spirals” problem, which requires classifying two classes of points forming two intertwined spirals. The CHIR2 algorithm [4] shows a rapid convergence rate on this problem, an order of magnitude faster than the results reported for the BP training algorithm (as well as those obtained by us) for the same training problem and network architecture [11]. Moreover, the CHIR2 algorithm finds solution nets for this problem with reduced architectures, which have been reported as hard to solve with the BP training algorithm [11].
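For reference, the two-spirals benchmark mentioned above can be generated with a standard parameterization (the one popularized by Lang and Witbrock's formulation of the task); the exact point set used in the simulations here may differ, so the constants below (97 points per spiral, maximum radius 6.5) are assumptions, not taken from this paper:

```python
import math

def two_spirals(n_points=97):
    """Generate the two-spirals benchmark: two intertwined spirals,
    one per class, each spiral point-symmetric to the other."""
    spiral_a, spiral_b = [], []
    for i in range(n_points):
        angle = i * math.pi / 16.0
        radius = 6.5 * (104 - i) / 104.0
        x = radius * math.sin(angle)
        y = radius * math.cos(angle)
        spiral_a.append((x, y, +1))    # class +1
        spiral_b.append((-x, -y, -1))  # class -1: the mirrored twin spiral
    return spiral_a + spiral_b

points = two_spirals()
print(len(points))  # 194 labelled points, 97 per spiral
```

The task is hard for layered nets because the two classes are not linearly separable anywhere in the plane: along any ray from the origin the class labels alternate as the spirals wind around each other.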