Online Learning Algorithm for Distributed Convex Optimization With Time-Varying Coupled Constraints and Bandit Feedback

2020, pp. 1-12
Author(s):  
Jueyou Li ◽  
Chuanye Gu ◽  
Zhiyou Wu ◽  
Tingwen Huang
2018, Vol 2018, pp. 1-22
Author(s):  
Wei Guo ◽  
Tao Xu ◽  
Keming Tang ◽  
Jianjiang Yu ◽  
Shuangshuang Chen

Many real-world applications are time-varying in nature, and an online learning algorithm is preferred for tracking the real-time changes of a time-varying system. The online sequential extreme learning machine (OSELM) is an effective online learning algorithm, and several improved OSELM variants incorporating a forgetting mechanism have been developed to model and predict time-varying systems. However, the existing algorithms carry a potential risk of instability due to an intrinsic ill-posed problem, and their adaptive tracking ability for complex time-varying systems remains weak. To overcome these two problems, this paper proposes a novel OSELM algorithm with generalized regularization and an adaptive forgetting factor (AFGR-OSELM). In AFGR-OSELM, a new generalized regularization approach replaces the traditional exponential forgetting regularization so that the regularization effect remains constant; consequently, the potential ill-posed problem is avoided entirely and persistent stability is guaranteed. Moreover, AFGR-OSELM adjusts the forgetting factor dynamically and automatically during the online learning process, so it can better track the dynamic changes of a time-varying system and promptly reduce the adverse influence of outdated data; it therefore tends to provide desirable prediction results in time-varying environments. Detailed performance comparisons of AFGR-OSELM with other representative algorithms are carried out on artificial and real-world data sets. The experimental results show that the proposed AFGR-OSELM achieves higher prediction accuracy and better stability than its counterparts when predicting time-varying systems.
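The abstract does not give the update equations, but the recursion OSELM builds on is the standard regularized recursive-least-squares update over a fixed random hidden layer. Below is a minimal Python sketch of that base recursion with a forgetting factor `lam` and a ridge-style initialization; the generalized-regularization scheme and the adaptive forgetting rule are the paper's contribution and are not reproduced here, so the simple error-driven `lam` heuristic in the code is an assumption for illustration only, not the AFGR-OSELM rule.

```python
import numpy as np

class ForgettingOSELM:
    """OSELM-style recursive update with a forgetting factor.

    A minimal sketch: ridge-regularized initialization plus a
    recursive-least-squares update with forgetting factor `lam`.
    The adaptive-lam heuristic below is illustrative only, NOT
    the AFGR-OSELM rule from the paper.
    """

    def __init__(self, n_hidden, n_inputs, reg=1e-3, lam=0.98, rng=None):
        rng = np.random.default_rng(rng)
        # Random hidden-layer parameters, fixed after initialization (ELM).
        self.W = rng.standard_normal((n_hidden, n_inputs))
        self.b = rng.standard_normal(n_hidden)
        self.beta = np.zeros(n_hidden)        # output weights
        self.P = np.eye(n_hidden) / reg       # inverse covariance (ridge start)
        self.lam = lam

    def _hidden(self, X):
        # Sigmoid hidden-layer activations H(X).
        return 1.0 / (1.0 + np.exp(-(np.atleast_2d(X) @ self.W.T + self.b)))

    def predict(self, X):
        return self._hidden(X) @ self.beta

    def partial_fit(self, X, y):
        """Update on one data chunk (X, y) with forgetting factor lam."""
        H = self._hidden(X)
        y = np.atleast_1d(y)
        err = y - H @ self.beta
        # RLS gain with forgetting: older data decay geometrically.
        S = self.lam * np.eye(len(y)) + H @ self.P @ H.T
        K = np.linalg.solve(S, H @ self.P).T      # K = P H^T S^-1
        self.beta = self.beta + K @ err
        self.P = (self.P - K @ H @ self.P) / self.lam
        # Illustrative adaptive forgetting (assumption): shrink lam
        # when the chunk error is large, relax it back when small.
        mse = float(np.mean(err ** 2))
        self.lam = float(np.clip(0.999 - 0.05 * np.tanh(mse), 0.9, 0.999))
        return self
```

Streaming usage would call `partial_fit` on each incoming chunk and `predict` for the next step; the forgetting factor controls how quickly old chunks lose influence.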


Author(s):  
Weilin Nie ◽  
Cheng Wang

Abstract Online learning is a classical algorithm for optimization problems. Owing to its low computational cost, it has been widely used in many areas of machine learning and statistical learning, and its convergence performance depends heavily on the step size. In this paper, a two-stage step size is proposed for the unregularized online learning algorithm based on reproducing kernels. Theoretically, we prove that such an algorithm can achieve a nearly minimax convergence rate, up to a logarithmic term, without any capacity condition.
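The algorithm described here is online gradient descent in a reproducing kernel Hilbert space without a regularization term, with the step size as the key tuning knob. Below is a minimal Python sketch for least-squares loss with a Gaussian kernel; the `two_stage_step` schedule (a constant step for an initial phase, then polynomial decay) is one plausible reading of "two-stage step size" and is an assumption, not the exact schedule from the paper.

```python
import numpy as np

def gaussian_kernel(x, z, sigma=1.0):
    """Gaussian (RBF) reproducing kernel K(x, z)."""
    return np.exp(-np.sum((x - z) ** 2) / (2 * sigma ** 2))

def two_stage_step(t, T, eta0=0.5, switch=0.25, decay=0.5):
    """Illustrative two-stage schedule (assumption, not the paper's):
    constant eta0 for the first `switch` fraction of rounds, then
    polynomial decay eta0 * (t - t0 + 1) ** (-decay)."""
    t0 = int(switch * T)
    return eta0 if t < t0 else eta0 * (t - t0 + 1) ** (-decay)

def online_kernel_ls(X, y, kernel=gaussian_kernel):
    """Unregularized online least squares in an RKHS.

    Maintains f_t = sum_i a_i K(x_i, .); each round takes one gradient
    step f_{t+1} = f_t - eta_t (f_t(x_t) - y_t) K(x_t, .).
    """
    T = len(X)
    coefs, points = [], []
    for t in range(T):
        x_t, y_t = X[t], y[t]
        pred = sum(a * kernel(x_i, x_t) for a, x_i in zip(coefs, points))
        eta = two_stage_step(t, T)
        # Each gradient step adds one new kernel expansion term.
        coefs.append(-eta * (pred - y_t))
        points.append(x_t)

    def f(x):
        return sum(a * kernel(x_i, x) for a, x_i in zip(coefs, points))
    return f

# Toy usage: learn y = sin(x) from a noisy stream of 200 samples.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
f = online_kernel_ls(X, y)
print(round(f(np.array([1.0])), 3))  # roughly sin(1) ~ 0.841
```

Because there is no regularizer, the step-size schedule alone controls the bias-variance trade-off, which is why the abstract emphasizes that the convergence rate hinges on choosing it well.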

