Revisiting Projection-Free Optimization for Strongly Convex Constraint Sets

Author(s):  
Jarrid Rector-Brooks ◽  
Jun-Kun Wang ◽  
Barzan Mozafari

We revisit Frank-Wolfe (FW) optimization over strongly convex constraint sets. We provide a faster convergence rate for FW without line search, showing that a previously overlooked variant of FW is indeed faster than the standard variant. With line search, we show that FW can converge to the global optimum, even for smooth functions that are not convex, but are quasi-convex and locally Lipschitz. We also show that, for the general case of (smooth) non-convex functions, FW with line search converges with high probability to a stationary point at a rate of O(1/t), as long as the constraint set is strongly convex—one of the fastest convergence rates in non-convex optimization.
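The projection-free iteration the abstract studies can be sketched for the simplest strongly convex set, a Euclidean ball, where the linear minimization oracle has a closed form. This is a minimal illustrative sketch with the standard open-loop step size, not the paper's accelerated variants; the function names are my own.

```python
import numpy as np

def frank_wolfe_ball(grad, x0, radius=1.0, steps=200):
    """Frank-Wolfe over the Euclidean ball {x : ||x|| <= radius}.

    The linear minimization oracle over a ball has the closed form
    s = -radius * g / ||g||, so no projection is ever needed.
    """
    x = x0.copy()
    for t in range(steps):
        g = grad(x)
        s = -radius * g / (np.linalg.norm(g) + 1e-12)  # LMO over the ball
        gamma = 2.0 / (t + 2.0)                        # open-loop step, no line search
        x = x + gamma * (s - x)
    return x

# minimize f(x) = 0.5 * ||x - b||^2 over the unit ball; the optimum is b / ||b||
b = np.array([3.0, 4.0])
x_star = frank_wolfe_ball(lambda x: x - b, np.zeros(2), radius=1.0)
```

Because the constraint set here is strongly convex and the gradient at the optimum is nonzero, the iterates settle on the boundary point b/‖b‖, the regime in which the abstract's faster rates apply.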

2015 ◽  
Vol 23 (3) ◽  
pp. 129-149 ◽  
Author(s):  
Stefania Petra

Abstract: We show that the Sparse Kaczmarz method is a particular instance of the coordinate gradient method applied to an unconstrained dual problem corresponding to a regularized ℓ1-minimization problem subject to linear constraints. Based on this observation and recent theoretical work concerning the convergence analysis and corresponding convergence rates for the randomized block coordinate gradient descent method, we derive block versions and consider randomized ordering of blocks of equations. Convergence in expectation is thus obtained as a byproduct. By smoothing the ℓ1-objective we obtain a strongly convex dual which opens the way to various acceleration schemes.
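The single-row (non-block) randomized Sparse Kaczmarz iteration behind this abstract can be sketched as follows: a dual variable takes a Kaczmarz step and the primal iterate is its soft-thresholding. This is a hedged sketch of the basic method only, not the block or accelerated versions the abstract derives; the test problem and parameter values are illustrative.

```python
import numpy as np

def sparse_kaczmarz(A, b, lam=0.1, iters=20000, seed=0):
    """Randomized Sparse Kaczmarz for a consistent system Ax = b.

    Keeps x = shrink(z), which targets the solution of
    min lam*||x||_1 + 0.5*||x||^2  subject to  Ax = b.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_sq = np.einsum('ij,ij->i', A, A)          # squared row norms
    shrink = lambda v: np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
    z = np.zeros(n)
    x = shrink(z)
    for _ in range(iters):
        i = rng.integers(m)                        # randomized row ordering
        z -= (A[i] @ x - b[i]) / row_sq[i] * A[i]  # Kaczmarz step on the dual
        x = shrink(z)                              # soft-thresholding
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 8))
x_true = np.zeros(8)
x_true[[1, 5]] = [2.0, -3.0]                       # sparse ground truth
b = A @ x_true
x_hat = sparse_kaczmarz(A, b)
```

The block versions in the abstract replace the single row `A[i]` with a randomly chosen block of rows per step.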


Author(s):  
José Luis Gracia ◽  
Martin Stynes

Abstract: Finite difference methods for approximating fractional derivatives are often analyzed by determining their order of consistency when applied to smooth functions, but the relationship between this measure and their actual numerical performance is unclear. Thus in this paper several well-known difference schemes are tested numerically on simple Riemann-Liouville and Caputo boundary value problems posed on the interval [0, 1] to determine their orders of convergence (in the discrete maximum norm) in two unexceptional cases: (i) when the solution of the boundary value problem is a polynomial; (ii) when the data of the boundary value problem is smooth. In many cases these tests reveal gaps between a method’s theoretical order of consistency and its actual order of convergence. In particular, numerical results show that the popular shifted Grünwald-Letnikov scheme fails to converge for a Riemann-Liouville example with a polynomial solution p(x), and a rigorous proof is given that this scheme (and some other schemes) cannot yield a convergent solution when p(0) ≠ 0.
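The Grünwald-Letnikov approximation at the heart of the schemes being tested can be sketched at a single point: the weights follow a simple recurrence, and for a function vanishing at the lower terminal (so the convergence failure the abstract proves does not arise) the approximation matches the exact Riemann-Liouville derivative. The unshifted one-point version below is an illustrative sketch, not any of the paper's boundary-value-problem schemes.

```python
import numpy as np
from math import gamma

def gl_weights(alpha, n):
    """Grünwald-Letnikov weights g_k = (-1)^k * binom(alpha, k), by recurrence."""
    g = np.empty(n + 1)
    g[0] = 1.0
    for k in range(1, n + 1):
        g[k] = g[k - 1] * (1.0 - (alpha + 1.0) / k)
    return g

def gl_derivative(u, x, alpha, h):
    """One-point Grünwald-Letnikov approximation of the Riemann-Liouville
    derivative of order alpha at x, with lower terminal 0 and step h."""
    n = int(round(x / h))
    g = gl_weights(alpha, n)
    nodes = x - h * np.arange(n + 1)   # x, x-h, ..., down to 0
    return h ** (-alpha) * np.dot(g, u(nodes))

alpha, h = 0.5, 1e-3
approx = gl_derivative(lambda t: t, 1.0, alpha, h)
exact = 1.0 / gamma(2.0 - alpha)      # RL derivative of u(t) = t at t = 1
```

Here u(t) = t satisfies u(0) = 0; the abstract's negative result concerns precisely the case p(0) ≠ 0, where no choice of h rescues the scheme.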


2020 ◽  
Vol 34 (04) ◽  
pp. 6162-6169
Author(s):  
Guanghui Wang ◽  
Shiyin Lu ◽  
Yao Hu ◽  
Lijun Zhang

We aim to design universal algorithms for online convex optimization, which can handle multiple common types of loss functions simultaneously. The previous state-of-the-art universal method has achieved the minimax optimality for general convex, exponentially concave and strongly convex loss functions. However, it remains an open problem whether smoothness can be exploited to further improve the theoretical guarantees. In this paper, we provide an affirmative answer by developing a novel algorithm, namely UFO, which achieves O(√L*), O(d log L*) and O(log L*) regret bounds for the three types of loss functions respectively under the assumption of smoothness, where L* is the cumulative loss of the best comparator in hindsight, and d is dimensionality. Thus, our regret bounds are much tighter when the comparator has a small loss, and ensure the minimax optimality in the worst case. In addition, it is worth pointing out that UFO is the first to achieve the O(log L*) regret bound for strongly convex and smooth functions, which is tighter than the existing small-loss bound by an O(d) factor.
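The regret notion the abstract's bounds are stated in can be made concrete with a baseline sketch: online gradient descent on strongly convex quadratic losses, comparing the learner's cumulative loss against the best fixed comparator in hindsight (the L* of the bounds). This is only an illustration of the setting, not the UFO algorithm; the losses and step-size rule are my own choices.

```python
import numpy as np

def ogd(z, sigma=1.0):
    """Online gradient descent with step 1/(sigma*t) on the sigma-strongly
    convex losses f_t(x) = 0.5 * (x - z_t)^2.

    Returns the cumulative loss and the regret against the best fixed
    comparator in hindsight (which here is simply the mean of z).
    """
    x, total = 0.0, 0.0
    for t, zt in enumerate(z, start=1):
        total += 0.5 * (x - zt) ** 2       # suffer loss before seeing more data
        x -= (x - zt) / (sigma * t)        # gradient step, eta_t = 1/(sigma*t)
    best = np.mean(z)                       # comparator minimizing the total loss
    best_loss = sum(0.5 * (best - zt) ** 2 for zt in z)
    return total, total - best_loss

rng = np.random.default_rng(0)
total, regret = ogd(rng.normal(size=500))
```

For strongly convex losses this step-size schedule already gives O(log T) regret; the abstract's contribution is improving such worst-case bounds to small-loss bounds like O(log L*) when smoothness also holds.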


2006 ◽  
Vol 23 (01) ◽  
pp. 107-122 ◽  
Author(s):  
MIN SUN ◽  
ZHEN-JUN SHI

In this paper, by using a modified smoothing function, we propose a new continuation method for complementarity problems with R0-functions and P0-functions in the absence of strict complementarity. At each iteration, the continuation method solves one linear system of equations and performs one line search. When the underlying mapping is both a P0-function and an R0-function and its Hessian is Lipschitz continuous, we prove the global convergence of the new method. The new method also has global Q-linear and local Q-quadratic convergence rates under the same conditions.
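The continuation idea can be sketched on a scalar complementarity problem: replace the complementarity conditions by a smoothed Fischer-Burmeister equation, apply Newton's method at each smoothing level, and drive the smoothing parameter to zero. This is a hedged illustrative sketch using the standard smoothed Fischer-Burmeister function, not the paper's modified smoothing function, and it omits the line search.

```python
import numpy as np

def smoothed_fb(a, b, mu):
    """Smoothed Fischer-Burmeister function: its zeros approach the
    complementarity conditions a >= 0, b >= 0, a*b = 0 as mu -> 0."""
    return a + b - np.sqrt(a * a + b * b + 2.0 * mu * mu)

def solve_ncp(F, dF, x0=0.0, mu=1.0, outer=30, inner=20):
    """Continuation sketch for the scalar problem: find x >= 0 with
    F(x) >= 0 and x * F(x) = 0, via Newton on phi_mu(x, F(x)) = 0
    while halving the smoothing parameter mu."""
    x = x0
    for _ in range(outer):
        for _ in range(inner):                     # Newton at fixed mu
            fx = F(x)
            r = smoothed_fb(x, fx, mu)
            d = np.sqrt(x * x + fx * fx + 2.0 * mu * mu)
            dr = 1.0 + dF(x) - (x + fx * dF(x)) / d   # d/dx of phi_mu(x, F(x))
            x -= r / dr                               # one linear solve per step
            if abs(r) < 1e-12:
                break
        mu *= 0.5                                  # continuation step
    return x

# F(x) = x - 1: the unique complementary solution is x = 1 (with F(x) = 0)
x = solve_ncp(F=lambda x: x - 1.0, dF=lambda x: 1.0)
```

In the vector case the scalar Newton division becomes the single linear system per iteration that the abstract refers to.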


Author(s):  
Bram Demeulenaere ◽  
Jan Swevers ◽  
Joris De Schutter

The designer’s main challenge when counterweight balancing a linkage is to determine the counterweights that realize an optimal trade-off between the dynamic forces of interest. This problem is often formulated as an optimization problem that is generally nonlinear and therefore suffers from local optima. It has been shown earlier, however, that, through a proper parametrization of the counterweights, a convex program can be obtained. Convex programs are nonlinear optimization problems for which the global optimum is guaranteed to be found with great efficiency. The present paper extends this previous work in two respects: (i) the methodology is generalized from four-bar to planar N-bar (rigid) linkages and (ii) it is shown that requiring the counterweights to be realizable in practice can be cast as a convex constraint. Numerical results for a Watt six-bar linkage suggest much more balancing potential for six-bar linkages than for four-bar linkages.


1992 ◽  
Vol 114 (2) ◽  
pp. 245-250 ◽  
Author(s):  
S. Krishnamurty ◽  
David A. Turcic

The paper presents the development of a sub-Jacobian based method for the identification and elimination of branch defects during synthesis in nondyadic planar multiloop mechanisms. Branching occurrences in a given mechanism can be recognized by the resulting changes in the configurations of one or more of the sets of three constraints that form the mechanism. Further, any such change in a constraint set configuration will be characterized by a corresponding change in the determinant sign of its sub-Jacobian matrix. Applying this method, branching can be eliminated during synthesis by identifying all such constraint sets for the mechanism and using the determinant signs of their sub-Jacobian matrices to maintain the mechanism’s original configuration.
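The determinant-sign test described above reduces to a simple check once the sub-Jacobians are available: the mechanism stays on its original branch only if every constraint set's sub-Jacobian determinant keeps a constant (nonzero) sign across all synthesis positions. The helper below is an illustrative sketch; in the paper the matrices come from the mechanism's constraint equations, whereas here they are arbitrary examples.

```python
import numpy as np

def same_branch(sub_jacobians):
    """Return True iff det(J) keeps one constant nonzero sign over all
    positions, i.e. no constraint set passes through a singular
    configuration (a branch change)."""
    signs = [np.sign(np.linalg.det(J)) for J in sub_jacobians]
    return all(s != 0 and s == signs[0] for s in signs)

# two positions that keep det > 0 (same branch) ...
ok = same_branch([np.array([[1.0, 0.2], [0.1, 1.0]]),
                  np.array([[0.9, -0.3], [0.2, 1.1]])])
# ... versus a position pair where the determinant flips sign
flipped = same_branch([np.eye(2), np.array([[0.0, 1.0], [1.0, 0.0]])])
```

During synthesis, candidate designs for which this check fails at any prescribed position would be rejected, which is how branching is eliminated.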

