The Cramming, Softening and Integrating Learning Algorithm with Parametric ReLU Activation Function for Binary Input/Output Problems

Author(s):  
Yu-Han Tsai ◽  
Yu-Jie Jheng ◽  
Rua-Huan Tsaih
Author(s):  
EDGE C. YEH ◽  
SHAO HOW LU

In this paper, hysteresis is characterized in fuzzy spaces by means of a fuzzy learning algorithm that generates fuzzy rules automatically from numerical data. The hysteresis phenomenon is first described in order to analyze its underlying mechanism. A fuzzy learning algorithm is then presented, trained on the hysteresis phenomenon, and used to predict a simple hysteresis phenomenon. The learning results are illustrated by mesh plots and input-output relation plots. Furthermore, the dependence of the prediction accuracy on the number of fuzzy sets is studied. The method provides a useful tool for modeling hysteresis in fuzzy spaces.
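
A minimal sketch of generating fuzzy rules automatically from numerical input/output data, assuming a Wang-Mendel-style table-lookup scheme with triangular membership functions (both assumptions for illustration, not necessarily the algorithm used in the paper):

```python
import numpy as np

# Wang-Mendel-style rule generation from numerical input/output data
# (an illustrative stand-in; the paper's own algorithm and membership
# functions may differ).
def triangular_memberships(v, lo, hi, n_sets):
    """Membership of scalar v in n_sets evenly spaced triangular fuzzy sets."""
    centers = np.linspace(lo, hi, n_sets)
    width = centers[1] - centers[0]
    return np.clip(1.0 - np.abs(v - centers) / width, 0.0, 1.0)

def generate_rules(x, y, lo, hi, n_sets=5):
    """Map each sample to its strongest antecedent/consequent fuzzy sets,
    keeping only the strongest rule when antecedents conflict."""
    rules = {}                              # antecedent set -> (consequent set, strength)
    for xi, yi in zip(x, y):
        mu_x = triangular_memberships(xi, lo, hi, n_sets)
        mu_y = triangular_memberships(yi, lo, hi, n_sets)
        a, c = int(mu_x.argmax()), int(mu_y.argmax())
        strength = mu_x.max() * mu_y.max()
        if a not in rules or strength > rules[a][1]:
            rules[a] = (c, strength)
    return rules

x = np.linspace(0.0, 1.0, 50)
print(generate_rules(x, np.sin(np.pi * x), lo=0.0, hi=1.0))
```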


2010 ◽  
Vol 20 (01) ◽  
pp. 75-86 ◽  
Author(s):  
R. BELMONTE-IZQUIERDO ◽  
S. CARLOS-HERNANDEZ ◽  
E. N. SANCHEZ

In this paper, a recurrent high-order neural observer (RHONO) for anaerobic processes is proposed. The main objective is to estimate key variables of methanogenesis: biomass, substrate, and inorganic carbon in a completely stirred tank reactor (CSTR). The recurrent high-order neural network (RHONN) structure uses the hyperbolic tangent as its activation function, and the learning algorithm is based on an extended Kalman filter (EKF). The applicability of the proposed scheme is illustrated via simulation, and a validation using real data from a lab-scale process is included, showing that the observer can be successfully implemented for control purposes.
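
For context, an EKF-based weight update of the generic form used to train such neural observers (a sketch under assumed notation, following the standard EKF training scheme rather than the authors' exact equations):

```python
import numpy as np

# Generic EKF weight update for a neural network / neural observer
# (standard form; notation is assumed, not taken from the paper).
# w: weight vector (n,),  P: weight covariance (n, n),
# H: Jacobian of the network output w.r.t. the weights (m, n),
# e: output estimation error (m,),  R: measurement noise (m, m),  Q: process noise (n, n).
def ekf_update(w, P, H, e, R, Q):
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    w_new = w + K @ e                     # correct the weights with the output error
    P_new = P - K @ H @ P + Q             # propagate the weight covariance
    return w_new, P_new
```

In the observer, the tanh-based high-order network produces the state estimates whose output error e drives this correction at each sampling instant.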


2011 ◽  
Vol 21 (03) ◽  
pp. 247-263 ◽  
Author(s):  
J. P. FLORIDO ◽  
H. POMARES ◽  
I. ROJAS

In function approximation problems, one of the most common ways to evaluate a learning algorithm is to partition the original data set (input/output data) into two sets: a learning set, used for building models, and a test set, used for genuine out-of-sample evaluation. When the partition into learning and test sets does not take into account the variability and geometry of the original data, it can lead to unbalanced and unrepresentative learning and test sets and, thus, to wrong conclusions about the accuracy of the learning algorithm. How the partitioning is made is therefore a key issue, and it becomes even more important when the data set is small, because the pessimistic effects caused by removing instances from the original data set must be kept low. In this work, we propose a deterministic data mining approach for distributing a data set (input/output data) into two representative and balanced sets of roughly equal size, taking the variability of the data into consideration, with the purpose of allowing both a fair evaluation of the learning algorithm's accuracy and the reproducibility of machine learning experiments that are usually based on random distributions. The sets are generated by combining a clustering procedure, especially suited for function approximation problems, with a distribution algorithm that splits the data within each cluster between the two sets using a nearest-neighbor approach. In the experiments section, the performance of the proposed methodology is reported in a variety of situations through an ANOVA-based statistical study of the results.
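
A minimal sketch of this cluster-then-distribute idea, assuming k-means clustering and a simple nearest-pair assignment (both assumptions chosen for illustration, not the exact procedure of the paper):

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

def split_balanced(X, y, n_clusters=5, seed=0):
    """Cluster the joint input/output data, then, inside each cluster,
    repeatedly pick the closest remaining pair of points and send one
    to the learning set and the other to the test set."""
    Z = np.hstack([X, y.reshape(len(y), -1)])        # cluster in input/output space
    labels = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(Z)
    learn_idx, test_idx = [], []
    for c in range(n_clusters):
        idx = list(np.where(labels == c)[0])
        while len(idx) > 1:
            D = cdist(Z[idx], Z[idx])
            np.fill_diagonal(D, np.inf)
            i, j = np.unravel_index(np.argmin(D), D.shape)   # nearest pair in the cluster
            learn_idx.append(idx[i])
            test_idx.append(idx[j])
            for k in sorted((i, j), reverse=True):
                idx.pop(k)
        if idx:                                              # odd leftover point
            learn_idx.append(idx[0])
    return np.array(learn_idx), np.array(test_idx)

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(40, 2))
y = np.sin(X[:, 0]) + X[:, 1] ** 2
learn, test = split_balanced(X, y)
print(len(learn), len(test))          # roughly equal halves, both covering every cluster
```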


1999 ◽  
Vol 11 (5) ◽  
pp. 1069-1077 ◽  
Author(s):  
Danilo P. Mandic ◽  
Jonathon A. Chambers

A relationship is provided between the learning rate η in the learning algorithm and the slope β in the nonlinear activation function, for a class of recurrent neural networks (RNNs) trained by the real-time recurrent learning algorithm. It is shown that an arbitrary RNN can be obtained from a referent RNN by imposing deterministic rules on its weights and learning rate. Such relationships reduce the number of degrees of freedom in the nonlinear optimization task of finding the optimal RNN parameters.
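
As a numerical illustration of the kind of weight and learning-rate scaling involved, the following sketch checks, for a single logistic neuron (a first-order simplification and an assumption, not the paper's recurrent RTRL derivation), that a neuron with slope β and learning rate η follows the same trajectory as a referent neuron with slope 1, weights scaled by β, and learning rate β²η:

```python
import numpy as np

# Numerical check of the weight/learning-rate scaling for a single logistic
# neuron.  Claim checked: slope beta with rate eta is equivalent to the
# referent slope-1 neuron with weights beta*w and rate eta * beta**2.
def sigmoid(v, slope=1.0):
    return 1.0 / (1.0 + np.exp(-slope * v))

rng = np.random.default_rng(0)
x, d = rng.normal(size=3), 0.7            # input pattern and target
beta, eta = 2.5, 0.05

w = rng.normal(size=3)                    # weights of the slope-beta neuron
w_ref = beta * w                          # weights of the referent slope-1 neuron

for _ in range(100):
    y = sigmoid(w @ x, beta)              # slope-beta neuron, rate eta
    w -= eta * (y - d) * beta * y * (1.0 - y) * x
    y_r = sigmoid(w_ref @ x, 1.0)         # referent neuron, rate eta * beta**2
    w_ref -= eta * beta**2 * (y_r - d) * y_r * (1.0 - y_r) * x

print(np.allclose(beta * w, w_ref))       # True: the two trajectories coincide
```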


2021 ◽  
Vol 502 (3) ◽  
pp. 3200-3209
Author(s):  
Young-Soo Jo ◽  
Yeon-Ju Choi ◽  
Min-Gi Kim ◽  
Chang-Ho Woo ◽  
Kyoung-Wook Min ◽  
...  

We constructed a far-ultraviolet (FUV) all-sky map based on observations from the Far Ultraviolet Imaging Spectrograph (FIMS) aboard the Korean microsatellite Science and Technology SATellite-1. For the ~20 per cent of the sky not covered by FIMS observations, predictions from a deep artificial neural network were used. Seven data sets were chosen as input parameters, including five all-sky maps of Hα, E(B − V), N(H I), and two X-ray bands, together with Galactic longitudes and latitudes. 70 per cent of the pixels of the observed FIMS data set were randomly selected for training as target parameters and the remaining 30 per cent were used for validation. A simple four-layer neural network architecture, consisting of three convolution layers and a dense layer at the end, was adopted, with an individual activation function for each convolution layer; each convolution layer was followed by a dropout layer. The predicted FUV intensities exhibited good agreement with Galaxy Evolution Explorer observations made in a similar FUV wavelength band for high Galactic latitudes. As a sample application of the constructed map, a dust scattering simulation was conducted with model optical parameters and a Galactic dust model for a region that included observed and predicted pixels. Overall, FUV intensities in the observed and predicted regions were reproduced well.
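
A minimal sketch of the architecture as described (three convolution layers, each with its own activation and followed by dropout, then a final dense layer); the framework, patch size, filter counts, specific activations, and dropout rate are assumptions for illustration, not the authors' published configuration:

```python
import tensorflow as tf

# Three convolution layers, each with its own activation and followed by a
# dropout layer, then a dense output layer.  Patch size, filter counts,
# activations, and dropout rate are placeholders, not the published setup.
def build_model(patch=32, channels=7):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(patch, patch, channels)),   # 7 input maps
        tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Conv2D(64, 3, padding="same", activation="tanh"),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Conv2D(64, 3, padding="same", activation="elu"),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1),                                 # predicted FUV intensity
    ])

model = build_model()
model.compile(optimizer="adam", loss="mse")
```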


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Yumin Dong ◽  
Xiang Li ◽  
Wei Liao ◽  
Dong Hou

In this paper, a quantum neural network with a multilayer activation function is proposed, built from a superposition of multiple sigmoid functions, together with a learning algorithm that adjusts the quantum intervals. On this basis, the quasi-uniform stability of fractional-order quantum neural networks with mixed delays is studied. For two different cases of the fractional order, conditions for quasi-uniform stability of the networks are derived using linear matrix inequality techniques, and the sufficiency of these conditions is proved. Finally, the feasibility of the conclusions is verified by experiments.
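
A minimal sketch of such a multilevel activation as a superposition of shifted sigmoids, where the shift positions play the role of the quantum intervals adjusted by learning (slope, number of levels, and interval values are illustrative assumptions):

```python
import numpy as np

# Multilevel activation built as a superposition of shifted sigmoids; the
# shift positions (thetas) play the role of the quantum intervals adjusted
# by the learning algorithm.  Slope and interval values are illustrative.
def multilevel_sigmoid(x, thetas, slope=10.0):
    s = np.zeros_like(x, dtype=float)
    for theta in thetas:
        s += 1.0 / (1.0 + np.exp(-slope * (x - theta)))
    return s / len(thetas)            # output saturates at len(thetas) + 1 levels

x = np.linspace(-1.0, 2.0, 7)
print(multilevel_sigmoid(x, thetas=[0.0, 0.5, 1.0]))
```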


2020 ◽  
Vol 9 (2) ◽  
pp. e188922128
Author(s):  
Fábio Nogueira da Silva ◽  
João Viana Fonseca Neto

A heuristic for tuning and convergence analysis of a reinforcement learning algorithm for output-feedback control, using only input/output data generated by a model, is presented. To support the convergence analysis, the parameters of the algorithms used for data generation must be adjusted and the control problem solved iteratively. A heuristic is therefore proposed to adjust the data-generator parameters, creating surfaces that assist in analyzing the convergence and robustness of the online optimal control methodology. The algorithm tested is the discrete linear quadratic regulator (DLQR) with output feedback, based on reinforcement learning through temporal-difference learning in a policy iteration scheme that determines the optimal policy using input/output data only. Within the policy iteration algorithm, recursive least squares (RLS) is used to estimate online the parameters associated with the output-feedback DLQR. After applying the proposed tuning heuristic, the influence of the parameters could be seen clearly and the convergence analysis was facilitated.
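
A minimal sketch of the recursive least squares (RLS) estimator of the kind used in the policy-evaluation step; the construction of the regressor vector from input/output data and the policy-improvement step are omitted, and the notation is assumed rather than taken from the authors' implementation:

```python
import numpy as np

# Standard RLS recursion with a forgetting factor (assumed notation).
class RLS:
    def __init__(self, n, delta=1e3, lam=1.0):
        self.theta = np.zeros(n)          # estimated parameter vector
        self.P = delta * np.eye(n)        # inverse-covariance-like matrix
        self.lam = lam                    # forgetting factor

    def update(self, phi, target):
        """One recursion: phi is the regressor vector, target the measured value."""
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)                    # gain vector
        self.theta = self.theta + k * (target - phi @ self.theta)
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.theta

# Example: estimate a 3-parameter linear model from noisy samples.
rng = np.random.default_rng(0)
true_theta = np.array([1.0, -2.0, 0.5])
rls = RLS(3)
for _ in range(200):
    phi = rng.normal(size=3)
    rls.update(phi, phi @ true_theta + 0.01 * rng.normal())
print(rls.theta)                          # close to [1.0, -2.0, 0.5]
```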

