On the learning machine with compensatory aggregation based neurons in quaternionic domain

2018 ◽  
Vol 6 (1) ◽  
pp. 33-48 ◽  
Author(s):  
Sushil Kumar ◽  
Bipin Kumar Tripathi

Abstract The nonlinear spatial grouping process of synapses is one of the fascinating methodologies through which neuro-computing researchers seek to achieve the computational power of a neuron. Researchers generally use neuron models based on summation (linear), product (linear), or radial basis (nonlinear) aggregation of synapses to construct multi-layered feed-forward neural networks, but each of these neuron models and its corresponding network has its own advantages and disadvantages. A multi-layered network is generally used to accomplish a global approximation of the input–output mapping but sometimes gets stuck in local minima, while the nonlinear radial basis function (RBF) network, built on an exponentially decaying kernel, is used for local approximation of the input–output mapping. These trade-offs motivated the design of two new artificial neuron models based on compensatory aggregation functions in the quaternionic domain. The net internal potentials of these neuron models are built from compositions of the basic summation (linear) and radial basis (nonlinear) operations on quaternionic-valued input signals. The neuron models based on these aggregation functions ensure faster convergence and better training and prediction accuracy. The learning and generalization capabilities of these neurons are verified through various three-dimensional transformations and time series predictions as benchmark problems. Highlights: Two new CSU and CPU neuron models for quaternionic signals are proposed. Their net potentials are based on compositions of summation and radial basis functions. The nonlinear grouping of synapses achieves the computational power of the proposed neurons. The neuron models ensure faster convergence and better training and prediction accuracy. The learning and generalization capabilities of CSU/CPU are verified on various benchmark problems.
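To make the idea of a compensatory net potential concrete, here is a minimal Python sketch of one plausible reading of the abstract: a weighted quaternionic summation composed with a Gaussian radial-basis term, blended additively (CSU-like) or multiplicatively (CPU-like). The blending factor gamma, the parameter names, and both composition rules are assumptions for illustration, not the paper's exact formulas.

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions given as length-4 arrays [w, x, y, z]."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def net_potentials(inputs, weights, centers, sigma, gamma=0.5):
    """Hypothetical CSU/CPU net potentials over quaternionic inputs."""
    summ = np.zeros(4)
    for w, x in zip(weights, inputs):           # linear part: weighted quaternionic sum
        summ += qmul(w, x)
    dist2 = sum(np.sum((x - c) ** 2) for x, c in zip(inputs, centers))
    rbf = np.exp(-dist2 / (2.0 * sigma ** 2))   # nonlinear radial-basis part
    csu = summ + gamma * rbf * np.array([1.0, 0.0, 0.0, 0.0])  # additive blend
    cpu = (rbf ** gamma) * summ                                # multiplicative blend
    return csu, cpu
```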

Author(s):  
Ahmed Kawther Hussein

Arabic calligraphy is a form of Arabic writing art in which letters can be written in various curved or segmented styles. Efforts to automate the identification of Arabic calligraphy using artificial intelligence have been fewer than for other languages. Hence, this article proposes using four types of features and a single-hidden-layer neural network to train on Arabic calligraphy and predict the type of calligraphy used. For the neural networks, we compared the case of non-connected input and output layers in the extreme learning machine (ELM) with the case of connected input-output layers in the fast learning network (FLN). The prediction accuracy of the FLN was superior to that of the ELM, which showed variation in the accuracy obtained.
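The architectural difference the abstract compares can be shown in a few lines. Below is a minimal sketch of ELM-style training in Python: random, untrained hidden weights with output weights solved in closed form; setting connect_io=True approximates the FLN idea of direct input-output connections by appending the raw inputs to the hidden features. Function and parameter names are illustrative, not from the article.

```python
import numpy as np

def train_single_hidden_layer(X, Y, n_hidden=100, connect_io=False, seed=0):
    """ELM-style training: random hidden layer, closed-form output weights.
    connect_io=True mimics the FLN's direct input-to-output links."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)            # random hidden-layer features
    if connect_io:
        H = np.hstack([X, H])         # inputs also feed the output layer
    beta = np.linalg.pinv(H) @ Y      # least-squares output weights
    return W, b, beta
```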


Author(s):  
William C. Carpenter ◽  
Margery E. Hoffman

Abstract This paper examines the architecture of back-propagation neural networks used as approximators by addressing the interrelationship between the number of training pairs and the number of input, output, and hidden layer nodes required for a good approximation. It concentrates on nets with an input layer, one hidden layer, and one output layer. It shows that many of the currently proposed schemes for selecting network architecture for such nets are deficient. It demonstrates in numerous examples that overdetermined neural networks tend to give good approximations over a region of interest, while underdetermined networks give approximations which can satisfy the training pairs but may give poor approximations over that region of interest. A scheme is presented that adjusts the number of hidden layer nodes in a neural network so as to give an overdetermined approximation. The advantages and disadvantages of using multiple output nodes are discussed. Guidelines for selecting the number of output nodes are presented.
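The overdetermined/underdetermined distinction reduces to counting: a one-hidden-layer net is overdetermined when its weight count does not exceed the number of training equations (training pairs times output nodes). The sketch below formalizes that counting rule in Python; it is an illustrative check in the spirit of the paper's scheme, not its exact algorithm.

```python
def max_hidden_for_overdetermined(n_in, n_out, n_pairs):
    """Largest hidden-layer size keeping a one-hidden-layer net overdetermined.
    Weights: (n_in + 1) * h input-to-hidden (with bias) plus (h + 1) * n_out
    hidden-to-output; equations: n_pairs * n_out target values."""
    h = (n_pairs * n_out - n_out) // (n_in + 1 + n_out)
    return max(h, 1)

# Example: 3 inputs, 1 output, 50 training pairs -> at most 9 hidden nodes.
print(max_hidden_for_overdetermined(3, 1, 50))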


2012 ◽  
Vol 12 (6) ◽  
pp. 1787-1800 ◽  
Author(s):  
Francisco Fernández-Navarro ◽  
César Hervás-Martínez ◽  
Roberto Ruiz ◽  
Jose C. Riquelme

Author(s):  
WEN-BO ZHAO ◽  
DE-SHUANG HUANG ◽  
JI-YAN DU ◽  
LI-MING WANG

This paper discusses using genetic algorithms (GAs) to optimize the structure of radial basis probabilistic neural networks (RBPNNs), including how to select the hidden centers of the first hidden layer and how to determine the controlling parameter of the Gaussian kernel functions. In constructing the genetic algorithm, a novel encoding method is proposed for optimizing the RBPNN structure. This encoding method not only makes the selected hidden centers reflect the key distribution characteristics of the training sample space while keeping the number of hidden centers as small as possible, but also simultaneously determines the optimal controlling parameters of the Gaussian kernel functions matching the selected hidden centers. Additionally, we propose a new fitness function that makes the designed RBPNN as structurally simple as possible without losing network performance. Finally, we take two benchmark problems, the two-spiral discrimination problem and iris data classification, to test and evaluate the designed GA. The experimental results illustrate that our GA can significantly reduce the number of required hidden centers compared with the recursive orthogonal least squares algorithm (ROLSA) and the modified K-means algorithm (MKA). In particular, statistical experiments show that the RBPNN optimized by our GA still has better generalization performance than those obtained with ROLSA and MKA, despite the greatly reduced network scale. Our experimental results also demonstrate that the designed GA is suitable for optimizing radial basis function neural networks (RBFNNs).
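A minimal sketch of what such an encoding and fitness function could look like in Python appears below. It assumes a chromosome holding a binary mask that selects hidden centers from the training samples plus one Gaussian width per candidate center, and a fitness that trades accuracy against network size; the specific encoding, constants, and fitness form in the paper may differ.

```python
import numpy as np

def make_chromosome(n_samples, rng):
    """Hypothetical encoding: which training samples become hidden centers,
    plus a candidate controlling parameter (width) for each one."""
    mask = rng.random(n_samples) < 0.1           # binary center-selection mask
    widths = rng.uniform(0.1, 2.0, n_samples)    # Gaussian kernel parameters
    return mask, widths

def fitness(mask, error):
    """Assumed fitness: reward low error, penalize large networks, so simpler
    RBPNN structures win when accuracy is comparable."""
    size_penalty = mask.sum() / mask.size
    return 1.0 / (error + 0.1 * size_penalty + 1e-9)
```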


2022 ◽  
pp. 58-79
Author(s):  
Son Nguyen ◽  
Matthew Quinn ◽  
Alan Olinsky ◽  
John Quinn

In recent years, with the growth of computational power and the explosion of data available for analysis, deep neural networks, particularly convolutional neural networks, have emerged as one of the default models for image classification, outperforming most classical machine learning models on this task. On the other hand, gradient boosting, a classical model, has been widely used for tabular structured data and leads data competitions such as those on Kaggle. In this study, the authors compare the performance of deep neural networks with gradient boosting models for detecting pneumonia in chest x-rays. The authors implement several popular deep neural network architectures, such as ResNet50, InceptionV3, Xception, and MobileNetV3, along with variants of a gradient boosting model, and then evaluate these two classes of models in terms of prediction accuracy. The computation in this study is done using the cloud computing services offered by Google Colab Pro.
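For readers unfamiliar with how such architecture comparisons are typically set up, here is a minimal Keras sketch of one common approach: a pretrained backbone with a small binary head for pneumonia vs. normal. The frozen-backbone setup, input size, and head are assumptions for illustration; the study's actual preprocessing and training settings may differ.

```python
import tensorflow as tf

def build_classifier(name="ResNet50", input_shape=(224, 224, 3)):
    """Transfer-learning sketch: pretrained backbone + sigmoid binary head."""
    backbones = {
        "ResNet50": tf.keras.applications.ResNet50,
        "InceptionV3": tf.keras.applications.InceptionV3,
        "Xception": tf.keras.applications.Xception,
        "MobileNetV3": tf.keras.applications.MobileNetV3Large,
    }
    base = backbones[name](include_top=False, weights="imagenet",
                           input_shape=input_shape, pooling="avg")
    base.trainable = False                       # freeze pretrained features
    out = tf.keras.layers.Dense(1, activation="sigmoid")(base.output)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```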


Author(s):  
Volodymyr Shymkovych ◽  
Sergii Telenyk ◽  
Petro Kravets

Abstract This article introduces a method for realizing the Gaussian activation function of radial basis function (RBF) neural networks in hardware on field-programmable gate arrays (FPGAs). Results of modeling the Gaussian function on FPGA chips of different families are presented, and RBF neural networks of various topologies have been synthesized and investigated. The hardware component implemented by this algorithm is an RBF neural network with four hidden-layer neurons and one output neuron with a sigmoid activation function, realized on an FPGA with 16-bit fixed-point numbers, which required 1193 lookup tables (LUTs). Each hidden-layer neuron of the RBF network is designed on the FPGA as a separate computing unit. The total delay of the combinational circuit of the RBF network block was 101.579 ns. The implementation of the Gaussian activation functions of the hidden layer occupies 106 LUTs, their delay is 29.33 ns, and the absolute error is ±0.005. The Spartan 3 family of chips was used to obtain these results; modeling on chips of other series is also presented in the article. Hardware implementation of RBF neural networks at such speeds allows them to be used in real-time control systems for high-speed objects.
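One standard way to realize a Gaussian activation in 16-bit fixed point is a precomputed lookup table, which the Python sketch below models in software. The table size, fractional-bit format, and input range are assumptions for illustration; the article's actual circuit and error budget may be derived differently.

```python
import numpy as np

FRAC_BITS = 12               # assumed format: 16-bit word, 12 fractional bits
SCALE = 1 << FRAC_BITS

def build_gaussian_table(n_entries=512, x_max=4.0):
    """Sample exp(-x^2) on [0, x_max] and quantize to 16-bit fixed point,
    as the values might be stored in FPGA lookup tables."""
    x = np.linspace(0.0, x_max, n_entries)
    return np.round(np.exp(-x * x) * SCALE).astype(np.int16)

def gaussian_fixed(x, table, x_max=4.0):
    """Evaluate the activation by table lookup; |x| beyond x_max saturates."""
    idx = min(int(abs(x) / x_max * (len(table) - 1)), len(table) - 1)
    return table[idx] / SCALE
```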


2021 ◽  
Vol 26 (1) ◽  
pp. 200-215
Author(s):  
Muhammad Alam ◽  
Jian-Feng Wang ◽  
Cong Guangpei ◽  
LV Yunrong ◽  
Yuanfang Chen

Abstract In recent years, the success of deep learning in natural scene image processing has boosted its application to the analysis of remote sensing images. In this paper, we apply convolutional neural networks (CNNs) to the semantic segmentation of remote sensing images. We improve the encoder-decoder CNN structures SegNet, with its index pooling, and U-Net to make them suitable for multi-target semantic segmentation of remote sensing images. The results show that the two models have their own advantages and disadvantages in the segmentation of different objects. In addition, we propose an integrated algorithm that combines the two models. Experimental results show that the integrated algorithm exploits the advantages of both models for multi-target segmentation and achieves better segmentation than either model alone.
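As one illustration of how two segmentation models can be integrated, the Python sketch below averages the per-pixel class probabilities of the two networks, with optional per-class weights so each model can dominate on the objects it handles better. This is a generic fusion rule under stated assumptions, not the paper's exact integration algorithm.

```python
import numpy as np

def integrate_predictions(prob_segnet, prob_unet, class_weights=None):
    """Fuse two per-pixel probability maps of shape (H, W, n_classes).
    class_weights, if given, has shape (2, n_classes): one weight vector
    per model, emphasizing the classes each model segments better."""
    stacked = np.stack([prob_segnet, prob_unet])        # (2, H, W, n_classes)
    if class_weights is not None:
        stacked = stacked * class_weights[:, None, None, :]
    fused = stacked.sum(axis=0)
    return fused.argmax(axis=-1)                        # per-pixel class labels
```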


Author(s):  
Pragati Priyadarshini Sahu ◽  
Abhilas Swain ◽  
Radha Kanta Sarangi
