A One-Layer Recurrent Neural Network for Interval-Valued Optimization Problem with Linear Constraints

Author(s):  
Yueqiu Li ◽  
Chunna Zeng ◽  
Bing Li ◽  
Jin Hu
2008 ◽  
Vol 20 (3) ◽  
pp. 844-872 ◽  
Author(s):  
Youshen Xia ◽  
Mohamed S. Kamel

The constrained L1 estimation is an attractive alternative to both unconstrained L1 estimation and least squares estimation. In this letter, we propose a cooperative recurrent neural network (CRNN) for solving L1 estimation problems with general linear constraints. The proposed CRNN model combines four individual neural network models automatically and is suitable for parallel implementation. As a special case, the proposed CRNN includes two existing neural networks for solving unconstrained and constrained L1 estimation problems, respectively. Unlike existing penalty-parameter-based neural networks for the constrained L1 estimation problem, the proposed CRNN is guaranteed to converge globally to the exact optimal solution without any additional condition. Compared with conventional numerical algorithms, the proposed CRNN has low computational complexity and can handle L1 estimation problems with degeneracy. Several applied examples show that the proposed CRNN obtains more accurate estimates than several existing algorithms.
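The abstract's underlying problem can be illustrated with a minimal sketch, not the paper's four-network CRNN: a continuous-time subgradient flow for unconstrained L1 estimation, min_x ||Ax - b||_1, integrated with forward-Euler steps. The function name and constants are illustrative.

```python
import numpy as np

# Hypothetical sketch (not the paper's CRNN): a subgradient flow for
# unconstrained L1 estimation, min_x ||A x - b||_1, with Euler integration.
def l1_estimate(A, b, step=0.01, n_iters=3000):
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        # a subgradient of ||Ax - b||_1 is A^T sign(Ax - b)
        x -= step * A.T @ np.sign(A @ x - b)
    return x

# With A a column of ones, the L1 estimate of a scalar is the sample median,
# which makes the fixed point easy to check by hand.
A = np.ones((3, 1))
b = np.array([1.0, 2.0, 10.0])
x = l1_estimate(A, b)
print(x)  # close to the median, 2.0
```

The small fixed step keeps the state oscillating in a narrow band around the optimum; the paper's CRNN instead converges exactly, without penalty parameters.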


2020 ◽  
Vol 42 (13) ◽  
pp. 2382-2395
Author(s):  
Armita Fatemimoghadam ◽  
Hamid Toshani ◽  
Mohammad Manthouri

In this paper, a novel approach is proposed for adjusting the position of a magnetic levitation system using projection recurrent neural network-based adaptive backstepping control (PRNN-ABC). The principles of designing magnetic levitation systems have widespread applications in industry, including the production of magnetic bearings and maglev trains. Levitating a ball in space is carried out via the surrounding attracting or repelling magnetic forces. In such systems, the permissible operating range of the actuator is a significant constraint, especially in practical applications. In the proposed scheme, the backstepping control laws are first designed based on the nonlinear state-space model. Then, a constrained optimization problem is formed by defining a performance index and taking the control limits into account. To formulate the recurrent neural network (RNN), the optimization problem is first converted into a constrained quadratic programming (QP) problem. Then, the dynamic model of the RNN is derived from the Karush-Kuhn-Tucker (KKT) optimality conditions and variational inequality theory. The convergence analysis of the neural network and the stability analysis of the closed-loop system are performed using Lyapunov stability theory. The performance of the closed-loop system is assessed with respect to tracking error and control feasibility.
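The KKT-based projection dynamics described above can be sketched minimally, under the assumption of a box-constrained QP (the paper's actual constraints and gains may differ, and `prnn_qp` is an illustrative name):

```python
import numpy as np

# Hypothetical sketch (not the paper's exact PRNN): a projection recurrent
# network for a box-constrained QP, min 0.5 x'Qx + c'x s.t. lo <= x <= hi.
# Its equilibrium satisfies the KKT conditions via the projection equation
# x = P(x - alpha*(Qx + c)); the state follows Euler-discretized dynamics
# dx/dt = P(x - alpha*(Qx + c)) - x.
def prnn_qp(Q, c, lo, hi, alpha=0.2, dt=0.5, n_iters=300):
    x = np.zeros_like(c)
    for _ in range(n_iters):
        proj = np.clip(x - alpha * (Q @ x + c), lo, hi)  # projection onto the box
        x += dt * (proj - x)
    return x

Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -6.0])
x = prnn_qp(Q, c, lo=0.0, hi=2.0)
print(x)  # approx [1.0, 2.0]: interior optimum in x1, active upper bound in x2
```

The projection operator is what enforces the control limits: the unconstrained minimizer of the second coordinate lies at 3, outside the box, so the equilibrium sits on the bound at 2.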


2002 ◽  
Vol 12 (03n04) ◽  
pp. 203-218 ◽  
Author(s):  
GURSEL SERPEN ◽  
JOEL CORRA

This paper proposes a non-recurrent training algorithm, resilient propagation, for the Simultaneous Recurrent Neural Network operating in relaxation mode to compute high-quality solutions of static optimization problems. Implementation details related to adapting the recurrent neural network weights through the non-recurrent training algorithm, resilient backpropagation, are formulated through an algebraic approach. Performance of the proposed neuro-optimizer on a well-known static combinatorial optimization problem, the Traveling Salesman Problem, is evaluated on the basis of computational complexity measures and subsequently compared to the performance of the Simultaneous Recurrent Neural Network trained with standard backpropagation and with recurrent backpropagation for the same static optimization problem. Simulation results indicate that the Simultaneous Recurrent Neural Network trained with the resilient backpropagation algorithm is able to locate superior-quality solutions with a comparable amount of computational effort for the Traveling Salesman Problem.
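The resilient-propagation (Rprop) update named above can be sketched as follows; this is a simplified per-weight rule (no weight backtracking on a sign flip), and all names and constants are illustrative rather than the paper's:

```python
import numpy as np

# Hypothetical minimal sketch of a resilient-propagation (Rprop) update:
# each weight keeps its own step size, grown when the gradient sign is
# stable and shrunk when it flips; only the gradient's sign is used.
def rprop_step(w, grad, prev_grad, step,
               eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=1.0):
    sign_change = np.sign(grad) * np.sign(prev_grad)
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    w = w - np.sign(grad) * step  # move by the adaptive step, not the magnitude
    return w, step

# Minimize f(w) = ||w||^2 as a toy objective.
w = np.array([3.0, -2.0])
step = np.full_like(w, 0.1)
prev_grad = np.zeros_like(w)
for _ in range(100):
    grad = 2.0 * w
    w, step = rprop_step(w, grad, prev_grad, step)
    prev_grad = grad
print(w)  # near [0, 0]
```

Because only gradient signs enter the update, Rprop is insensitive to the gradient scaling problems that plague standard backpropagation in relaxation-mode recurrent networks.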


Author(s):  
Jaime Duque Domingo ◽  
Jaime Gómez-García-Bermejo ◽  
Eduardo Zalama

Gaze control represents an important issue in the interaction between a robot and humans. Specifically, deciding whom to pay attention to in a multi-party conversation is one way to improve the naturalness of a robot in human-robot interaction. This control can be carried out by means of two different models that receive the stimuli produced by the participants in an interaction: either an on-center off-surround competitive network or a recurrent neural network. A system based on a competitive neural network is able to decide whom to look at, with a smooth transition in the focus of attention when significant changes in stimuli occur. An important aspect of this process is the configuration of the different parameters of such a neural network. The weights of the different stimuli have to be computed to achieve human-like behavior. This article explains how these weights can be obtained by solving an optimization problem. In addition, a new model using a recurrent neural network with LSTM layers is presented. This model uses the same set of stimuli but does not require weighting them. The new model is easier to train, avoids manual configuration, and offers promising results in robot gaze control. The experiments carried out and their results are also presented.
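The on-center off-surround idea can be illustrated with a minimal shunting-network sketch, not the authors' exact model: each unit is excited by its own stimulus and inhibited by the sum of the others, so the strongest stimulus dominates the focus of attention. Names, gains, and step sizes are all assumptions.

```python
import numpy as np

# Hypothetical sketch of an on-center off-surround shunting network:
# dx_i/dt = -decay*x_i + (ceiling - x_i)*I_i - x_i * sum_{j != i} I_j
def competitive_focus(stimuli, decay=1.0, ceiling=1.0, dt=0.01, n_iters=2000):
    I = np.asarray(stimuli, dtype=float)
    x = np.zeros_like(I)
    for _ in range(n_iters):
        inhibition = I.sum() - I          # off-surround input to each unit
        dx = -decay * x + (ceiling - x) * I - x * inhibition
        x += dt * dx
    return x

activity = competitive_focus([0.2, 0.9, 0.4])   # e.g. three speakers' stimuli
print(int(np.argmax(activity)))  # 1: the strongest stimulus wins
```

The shunting terms keep every activity bounded in [0, ceiling], which is what lets the focus shift smoothly rather than switch abruptly when stimuli change.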


Author(s):  
Rafael Yuste

The mammalian neocortex has distributed excitatory and inhibitory connectivity that, together with the integrative properties of pyramidal cells and their strong synaptic plasticity, makes it ideally suited to implement a neural network design. This chapter summarizes results from the author’s research, consistent with the hypothesis that the neocortical microcircuit is a recurrent neural network that builds dynamical attractors. According to this paradigm, the units of function of the cortex would be groups of neurons forming ensembles or assemblies through Hebbian synaptic plasticity. The canonical cortical microcircuit would thus be a general-purpose neural network, fine-tuned by experience to solve any optimization problem.
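The attractor-through-Hebbian-plasticity idea can be illustrated with a classic Hopfield-style toy model (an illustration of the paradigm, not the chapter's biological circuit): weights set by a Hebbian outer-product rule make a stored activity pattern a fixed point that a corrupted input falls back into.

```python
import numpy as np

# Illustrative sketch of the attractor idea: a Hopfield-style recurrent
# network whose weights follow a Hebbian rule, so a stored pattern becomes
# a fixed point that partially corrupted activity converges back to.
rng = np.random.default_rng(0)
pattern = rng.choice([-1, 1], size=64)          # one cell-assembly pattern

W = np.outer(pattern, pattern) / 64.0           # Hebbian weights
np.fill_diagonal(W, 0.0)                        # no self-connections

state = pattern.copy()
state[:10] *= -1                                # corrupt part of the pattern
for _ in range(5):                              # recurrent sign updates
    state = np.sign(W @ state)

print(np.array_equal(state, pattern))  # True: the attractor restores it
```

The corrupted state still overlaps the stored pattern strongly enough that one recurrent update already projects it back onto the assembly, which is the pattern-completion behavior attributed to cortical ensembles.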


2020 ◽  
Vol 2020 ◽  
pp. 1-27
Author(s):  
Delara Karbasi ◽  
Mohammad Reza Rabiei ◽  
Alireza Nazemi

Bridge regression is a special family of penalized regressions using the penalty function ∑_j |A_j|^γ with γ ≥ 1, which yields lasso for γ = 1 and ridge regression for γ = 2. For the case where the output variable in the regression model is imprecise, we develop a bridge regression model in a fuzzy environment and exhibit penalized fuzzy estimates for this model when the input variables are crisp. The resulting optimization problem leads to a multiobjective program, from which we also determine the shrinkage parameter and the tuning parameter. To estimate the fuzzy coefficients of the proposed model, we introduce a hybrid scheme based on recurrent neural networks. The suggested neural network model is constructed from concepts of convex optimization and stability theory, which guarantees finding the approximate parameters of the proposed model. A simulation study depicts the performance of the proposed bridge technique in the presence of multicollinear data, and a real data analysis further demonstrates the method. A comparison between the fuzzy bridge regression model and several other shrinkage models is made with three different well-known fuzzy criteria. We visualize the performance of the model by Taylor diagram and bubble plot, and we examine its predictive ability via cross-validation. The numerical results clearly show the higher accuracy of the proposed fuzzy bridge method compared to the other existing fuzzy regression models. Keywords: fuzzy bridge regression model, multiobjective optimization, recurrent neural network, stability convergence, goodness-of-fit measure.
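The crisp (non-fuzzy) bridge objective underlying the abstract can be sketched as follows; this toy version uses plain gradient descent for the smooth case γ > 1 and checks γ = 2 against the ridge closed form, and all names and parameters are illustrative rather than the paper's:

```python
import numpy as np

# Hypothetical sketch of the crisp bridge objective,
#   ||y - X beta||^2 + lam * sum_j |beta_j|^gamma,  gamma >= 1,
# fitted by gradient descent (valid for the smooth case gamma > 1).
def bridge_fit(X, y, lam=1.0, gamma=2.0, step=0.005, n_iters=2000):
    beta = np.zeros(X.shape[1])
    for _ in range(n_iters):
        grad = -2.0 * X.T @ (y - X @ beta)                     # data-fit term
        grad += lam * gamma * np.sign(beta) * np.abs(beta) ** (gamma - 1.0)
        beta -= step * grad
    return beta

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, 0.0, -2.0]) + 0.1 * rng.normal(size=50)

# gamma = 2 is ridge regression, which has a closed form to check against.
beta_gd = bridge_fit(X, y, lam=1.0, gamma=2.0)
beta_ridge = np.linalg.solve(X.T @ X + 1.0 * np.eye(3), X.T @ y)
print(np.allclose(beta_gd, beta_ridge, atol=1e-4))  # True
```

The paper's contribution is the fuzzy extension of this objective and its recurrent-neural-network solver; the sketch only shows how γ interpolates between the lasso and ridge penalties.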

