convex objective function
Recently Published Documents


TOTAL DOCUMENTS: 33 (FIVE YEARS: 13)
H-INDEX: 5 (FIVE YEARS: 1)

Author(s):  
Jingyan Xu ◽  
Frédéric Noo

Abstract We are interested in learning the hyperparameters in a convex objective function in a supervised setting. The complex relationship between the input data to the convex problem and the desirable hyperparameters can be modeled by a neural network; the hyperparameters and the data then drive the convex minimization problem, whose solution is compared to training labels. In our previous work [1], we evaluated a prototype of this learning strategy in an optimization-based sinogram smoothing plus FBP reconstruction framework. A question arising in this setting is how to efficiently compute (backpropagate) the gradient from the solution of the optimization problem to the hyperparameters, so as to enable end-to-end training. In this work, we first develop general formulas for gradient backpropagation for a subset of convex problems, namely the proximal mapping. To illustrate the value of the general formulas and to demonstrate how to use them, we consider the specific instance of 1-D quadratic smoothing (denoising), whose solution admits a dynamic programming (DP) algorithm. The general formulas lead to another DP algorithm for exact computation of the gradient with respect to the hyperparameters. Our numerical studies demonstrate a 55%-65% computation time saving when a custom gradient is provided instead of relying on automatic differentiation in deep learning libraries. While our discussion focuses on 1-D quadratic smoothing, our initial results (not presented) support the statement that the general formulas and the computational strategy apply equally well to TV or Huber smoothing problems on simple graphs whose solutions can be computed exactly via DP.
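For this particular 1-D quadratic smoothing instance, the gradient with respect to the hyperparameter can also be written down by implicit differentiation of the optimality condition. The numpy sketch below illustrates that idea only; it is not the authors' DP algorithm, and the single smoothing weight lam, the first-difference penalty, and the downstream loss are illustrative assumptions.

```python
# Hedged sketch: backpropagating through the solution of a 1-D quadratic
# smoothing (denoising) problem
#     x(lam) = argmin_x 0.5*||x - y||^2 + 0.5*lam*||D x||^2,
# where D is the first-difference operator. The optimality condition
# (I + lam*D^T D) x = y gives, by implicit differentiation,
#     dx/dlam = -(I + lam*D^T D)^{-1} (D^T D) x.
import numpy as np

def smooth_and_grad(y, lam):
    n = y.size
    D = np.diff(np.eye(n), axis=0)                 # first-difference operator
    A = np.eye(n) + lam * D.T @ D                  # system matrix of the prox problem
    x = np.linalg.solve(A, y)                      # denoised signal
    dx_dlam = -np.linalg.solve(A, D.T @ D @ x)     # gradient of the solution w.r.t. lam
    return x, dx_dlam

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.linspace(0, 3, 50)
    y = np.sin(t) + 0.1 * rng.standard_normal(t.size)
    x, g = smooth_and_grad(y, lam=2.0)
    # chain rule: for a downstream loss L(x), dL/dlam = <dL/dx, dx/dlam>
    print((x - np.sin(t)) @ g)                     # gradient of 0.5*||x - clean||^2 w.r.t. lam
```

A banded (tridiagonal) solver, or the authors' DP recursion, would replace the dense solves in practice; the sketch only shows the shape of the backpropagated gradient.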


Author(s):  
Youssef Hami ◽  
Chakir Loqman

This research addresses the optimal allocation of tasks to processors so as to minimize the total execution and communication costs, a problem known as the Task Assignment Problem (TAP) with nonuniform communication costs. To solve it, the first step is to reformulate the problem as an equivalent zero-one quadratic program with a convex objective function, using a convexification technique based on the smallest eigenvalue. The second step applies the Continuous Hopfield Network (CHN) to the resulting problem. Computational results are presented for instances from the literature and compared with solutions obtained both by the CPLEX solver and by a heuristic genetic algorithm; they show an improvement in the results obtained by applying the CHN algorithm alone. The proposed approach thus confirms the efficiency of the theoretical results and reaches optimal solutions in a short computation time.
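As a side note, the smallest-eigenvalue convexification mentioned above can be illustrated in a few lines: for binary variables x_i^2 = x_i, so shifting the quadratic form by its smallest eigenvalue and compensating in the linear term leaves the objective unchanged on feasible points while making it convex. The sketch below is a generic illustration under those assumptions, not the paper's TAP formulation or its CHN solver.

```python
# Generic illustration of smallest-eigenvalue convexification for a 0-1
# quadratic objective x^T Q x + c^T x (not the paper's TAP model or CHN solver).
# For binary x, x_i^2 = x_i, so for any shift mu,
#     x^T Q x = x^T (Q - mu*I) x + mu * sum_i x_i,
# and choosing mu = min(0, lambda_min(Q)) makes Q - mu*I positive semidefinite.
import numpy as np

def convexify(Q, c):
    mu = min(0.0, np.linalg.eigvalsh(Q).min())   # smallest eigenvalue (if negative)
    Q_cvx = Q - mu * np.eye(Q.shape[0])          # now positive semidefinite
    c_cvx = c + mu * np.ones(Q.shape[0])         # compensate via the linear term
    return Q_cvx, c_cvx

Q = np.array([[0.0, 3.0], [3.0, 0.0]])           # an indefinite coupling matrix (made up)
c = np.array([1.0, 2.0])
Qc, cc = convexify(Q, c)
x = np.array([1.0, 0.0])                         # a binary assignment vector
assert np.isclose(x @ Q @ x + c @ x, x @ Qc @ x + cc @ x)   # same value on binary points
```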


2021 ◽  
Vol 5 (3) ◽  
pp. 110
Author(s):  
Shashi Kant Mishra ◽  
Predrag Rajković ◽  
Mohammad Esmael Samei ◽  
Suvra Kanti Chakraborty ◽  
Bhagwat Ram ◽  
...  

We present an algorithm for solving unconstrained optimization problems based on the q-gradient vector. The main idea used in the algorithm construction is the approximation of the classical gradient by a q-gradient vector. For a convex objective function, the quasi-Fejér convergence of the algorithm is proved. The proposed method does not require the boundedness assumption on any level set. Further, numerical experiments are reported to show the performance of the proposed method.
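A minimal sketch of a q-gradient step is given below; it is not the authors' algorithm (no quasi-Fejér machinery, a fixed step size, and an ad hoc fallback at zero coordinates), but it shows how the classical partial derivative is replaced by the q-derivative, which recovers the ordinary gradient as q approaches 1.

```python
# Illustrative q-gradient descent (not the authors' algorithm). The classical
# partial derivative is replaced by the q-derivative
#     D_q f(x)_i = ( f(x with x_i -> q*x_i) - f(x) ) / ((q - 1) * x_i),  x_i != 0,
# which tends to the ordinary partial derivative as q -> 1.
import numpy as np

def q_gradient(f, x, q=0.9, eps=1e-12):
    g = np.empty_like(x)
    fx = f(x)
    for i in range(x.size):
        if abs(x[i]) < eps:                      # ad hoc fallback: finite difference at 0
            xp = x.copy(); xp[i] += eps
            g[i] = (f(xp) - fx) / eps
        else:
            xq = x.copy(); xq[i] *= q
            g[i] = (f(xq) - fx) / ((q - 1.0) * x[i])
    return g

f = lambda v: 0.5 * np.sum(v ** 2)               # simple convex test function
x = np.array([2.0, -1.0])
for _ in range(50):
    x = x - 0.5 * q_gradient(f, x)               # fixed step size, illustration only
print(x)                                         # approaches the minimizer at the origin
```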


Author(s):  
Bo Jiang ◽  
Haoyue Wang ◽  
Shuzhong Zhang

This paper is concerned with finding an optimal algorithm for minimizing a composite convex objective function. The basic setting is that the objective is the sum of two convex functions: the first function is smooth with up to the dth-order derivative information available, and the second function is possibly nonsmooth, but its proximal tensor mappings can be computed approximately in an efficient manner. The problem is to find—in that setting—the best possible (optimal) iteration complexity for convex optimization. Along that line, for the smooth case (without the second nonsmooth part in the objective), Nesterov proposed an optimal algorithm for the first-order methods ([Formula: see text]) with iteration complexity [Formula: see text], whereas high-order tensor algorithms (using up to general dth-order tensor information) with iteration complexity [Formula: see text] were recently established. In this paper, we propose a new high-order tensor algorithm for the general composite case, with the iteration complexity of [Formula: see text], which matches the lower bound for the dth-order methods as previously established and hence is optimal. Our approach is based on the accelerated hybrid proximal extragradient (A-HPE) framework proposed by Monteiro and Svaiter, where a bisection procedure is installed for each A-HPE iteration. At each bisection step, a proximal tensor subproblem is approximately solved, and the total number of bisection steps per A-HPE iteration is shown to be bounded by a logarithmic factor in the precision required.


Author(s):  
Saeed Ketabchi ◽  
Hossein Moosaei ◽  
Milan Hladik

We discuss some basic concepts and present a numerical procedure for finding the minimum-norm solution of convex quadratic programs (QPs) subject to linear equality and inequality constraints. Our approach is based on a theorem of alternatives and on a convenient characterization of the solution set of convex QPs. We show that this problem can be reduced to a simple constrained minimization problem with a once-differentiable convex objective function. We use finite termination of an appropriate Newton's method to solve this problem. Numerical results show that the proposed method is efficient.
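For orientation, the sketch below computes a minimum-norm solution of a small convex QP by the generic two-stage route (solve the QP, then minimize the norm over its solution set); it is not the authors' alternatives-based reduction or their finite Newton method, and the problem data are made up.

```python
# Two-stage illustration (not the authors' procedure): compute the optimal value
# of a small convex QP, then minimize the norm over its solution set. Problem
# data are made up so that the solution set is not a single point.
import cvxpy as cp
import numpy as np

Q = np.array([[1.0, 0.0], [0.0, 0.0]])           # positive semidefinite Hessian
c = np.array([0.0, 0.0])
A = np.array([[1.0, 1.0]]); b = np.array([1.0])

x = cp.Variable(2)
obj = 0.5 * cp.quad_form(x, Q) + c @ x
cons = [A @ x >= b, x >= 0]
opt = cp.Problem(cp.Minimize(obj), cons).solve() # stage 1: optimal value

cons_min_norm = cons + [obj <= opt + 1e-8]       # stage 2: restrict to the solution set
cp.Problem(cp.Minimize(cp.norm(x)), cons_min_norm).solve()
print(np.round(x.value, 4))                      # the minimum-norm solution, ~(0, 1)
```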


Author(s):  
Haitham Khedr ◽  
James Ferlez ◽  
Yasser Shoukry

Abstract Neural Networks (NNs) have increasingly apparent safety implications commensurate with their proliferation in real-world applications: both unanticipated as well as adversarial misclassifications can result in fatal outcomes. As a consequence, techniques of formal verification have been recognized as crucial to the design and deployment of safe NNs. In this paper, we introduce a new approach to formally verify the most commonly considered safety specifications for ReLU NNs – i.e. polytopic specifications on the input and output of the network. Like some other approaches, ours uses a relaxed convex program to mitigate the combinatorial complexity of the problem. However, unique in our approach is the way we use a convex solver not only as a linear feasibility checker, but also as a means of penalizing the amount of relaxation allowed in solutions. In particular, we encode each ReLU by means of the usual linear constraints, and combine this with a convex objective function that penalizes the discrepancy between the output of each neuron and its relaxation. This convex function is further structured to force the largest relaxations to appear closest to the input layer; this provides the further benefit that the most “problematic” neurons are conditioned as early as possible, when conditioning layer by layer. This paradigm can be leveraged to create a verification algorithm that is not only faster in general than competing approaches, but is also able to verify considerably more safety properties; we evaluated our algorithm, PEREGRiNN, on a standard MNIST robustness verification suite to substantiate these claims.
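The sketch below shows, for a single neuron, the standard triangle relaxation of a ReLU together with a convex penalty on the relaxed output; minimizing the penalty drives the relaxation back to the exact ReLU value. It is only meant to illustrate the idea of penalizing relaxation in the objective; PEREGRiNN's actual encoding, layer-ordered weighting, and solver interaction are not reproduced here, and the bounds l, u are assumed.

```python
# Single-neuron illustration: the standard triangle relaxation of y = max(x, 0)
# under assumed pre-activation bounds l <= x <= u with l < 0 < u, and a convex
# penalty on the relaxed output. Minimizing the penalty makes the relaxation
# tight (y returns to the exact ReLU value). PEREGRiNN's full encoding and its
# layer-ordered penalty structure are not reproduced here.
import cvxpy as cp

l, u = -1.0, 2.0                       # assumed pre-activation bounds
x = cp.Parameter(value=0.7)            # a fixed pre-activation, for illustration
y = cp.Variable()                      # relaxed ReLU output

cons = [y >= 0, y >= x,                # lower faces of the triangle relaxation
        y <= u * (x - l) / (u - l)]    # upper face joining (l, 0) and (u, u)
cp.Problem(cp.Minimize(y), cons).solve()
print(y.value)                         # ~0.7 == max(0.7, 0): the relaxation is tight
```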


Author(s):  
Lei Wang ◽  
Hui Huang

Image reconstruction in fluorescence molecular tomography involves seeking stable and meaningful solutions via the inversion of a highly under-determined and severely ill-posed linear mapping. An attractive scheme consists of minimizing a convex objective function that includes a quadratic error term added to a convex and nonsmooth sparsity-promoting regularizer. Choosing the ℓ1-norm as a particular case of a vast class of nonsmooth convex regularizers, our paper proposes a low per-iteration complexity gradient-based first-order optimization algorithm for the ℓ1-regularized least squares inverse problem of image reconstruction. Our algorithm relies on a combination of two ideas applied to the nonsmooth convex objective function: Moreau–Yosida regularization and inertial dynamics-based acceleration. We also incorporate into our algorithm a gradient-based adaptive restart strategy to further enhance the practical performance. Extensive numerical experiments illustrate that in several representative test cases (covering different depths of small fluorescent inclusions, different noise levels and different separation distances between small fluorescent inclusions), our algorithm can significantly outperform three state-of-the-art algorithms in terms of the CPU time taken by reconstruction, while all four algorithms produce almost the same reconstructed images.
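The sketch below combines the two ingredients named above, Moreau-Yosida smoothing of the ℓ1 term and inertial (Nesterov-type) acceleration with a gradient-based restart, on a toy ℓ1-regularized least squares problem. It is an illustrative reimplementation under assumed parameter choices (smoothing level mu, step size 1/L), not the authors' algorithm or their fluorescence tomography setup.

```python
# Toy reimplementation of the two ideas on l1-regularized least squares
#     min_x 0.5*||A x - b||^2 + lam*||x||_1:
# the l1 term is replaced by lam times its Moreau envelope with parameter mu
# (gradient: (x - soft_threshold(x, mu)) / mu), and the smoothed problem is
# solved by an inertial (Nesterov-type) gradient method with a gradient-based
# adaptive restart. mu, lam and the step size 1/L are illustrative choices.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def solve(A, b, lam, mu=1e-2, iters=500):
    L = np.linalg.norm(A, 2) ** 2 + lam / mu          # Lipschitz constant of the smoothed gradient
    x = y = np.zeros(A.shape[1]); t = 1.0
    for _ in range(iters):
        grad = A.T @ (A @ y - b) + lam * (y - soft_threshold(y, mu)) / mu
        x_new = y - grad / L
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y_new = x_new + (t - 1.0) / t_new * (x_new - x)
        if grad @ (x_new - x) > 0:                    # gradient-based adaptive restart
            t_new, y_new = 1.0, x_new
        x, y, t = x_new, y_new, t_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
b = A @ x_true
print(np.round(solve(A, b, lam=0.1)[:8], 2))          # first entries ~1, the rest ~0
```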


2020 ◽  
Vol 32 (3) ◽  
pp. 531-546 ◽  
Author(s):  
Lingxun Kong ◽  
Christos T Maravelias

We propose mixed-integer programming models for fitting univariate discrete data points with continuous piecewise linear (PWL) functions. The number of approximating function segments and the locations of break points are optimized simultaneously. The proposed models include linear constraints and a convex objective function and are therefore computationally more efficient than previously proposed mixed-integer nonlinear programming models. We also show how the proposed models can be extended to approximate univariate functions with PWL functions with the minimum number of segments subject to bounds on the pointwise error.
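For contrast with the mixed-integer models, the sketch below fits a continuous PWL function by least squares with fixed, assumed breakpoints, which is the easy convex core of the problem; choosing the number of segments and the breakpoint locations, as the proposed models do, requires the binary variables that this sketch deliberately omits.

```python
# Convex core only: least-squares fit of a continuous PWL function with FIXED,
# assumed break points, using a hinge basis so the fit is linear in its
# coefficients. The paper's MIP models additionally choose the number of
# segments and the break point locations through binary variables.
import numpy as np

x = np.linspace(0.0, 4.0, 60)
y = np.abs(x - 1.5) + 0.05 * np.sin(7.0 * x)          # toy univariate data
breaks = np.array([1.0, 2.0, 3.0])                    # assumed, fixed break points
B = np.column_stack([np.ones_like(x), x] + [np.maximum(x - b, 0.0) for b in breaks])
coef, *_ = np.linalg.lstsq(B, y, rcond=None)          # convex (least-squares) problem
print(np.round(coef, 3))                              # intercept, slope, slope changes at breaks
```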


2020 ◽  
Vol 36 (1) ◽  
pp. 141-146
Author(s):  
SIMEON REICH ◽  
ALEXANDER J. ZASLAVSKI

"Given a Lipschitz and convex objective function of an unconstrained optimization problem, defined on a Banach space, we revisit the class of regular vector fields which was introduced in our previous work on descent methods. We study, in particular, the asymptotic behavior of the sequence of values of the objective function for a certain inexact process generated by a regular vector field when the sequence of computational errors converges to zero and show that this sequence of values converges to the infimum of the given objective function of the unconstrained optimization problem."


2020 ◽  
Vol 11 (1) ◽  
pp. 19-34 ◽  
Author(s):  
Stefania Bellavia ◽  
Nataša Krklec Jerinkić ◽  
Greta Malaspina

Abstract This paper deals with subsampled spectral gradient methods for minimizing finite sums. Subsampled function and gradient approximations are employed in order to reduce the overall computational cost of the classical spectral gradient methods. Global convergence is enforced by a nonmonotone line search procedure and is proved provided that the functions and gradients are approximated with increasing accuracy. R-linear convergence and worst-case iteration complexity are investigated in the case of a strongly convex objective function. Numerical results on well-known binary classification problems are given to show the effectiveness of this framework and to analyze the effect of different spectral coefficient approximations arising from the variable sample nature of this procedure.
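The sketch below puts the described ingredients together on a toy logistic-regression finite sum: subsampled gradients with an increasing sample size, a Barzilai-Borwein (spectral) step length, and a nonmonotone Armijo line search. All constants (sample growth, safeguards, memory length, ridge term) are illustrative assumptions rather than the paper's choices.

```python
# Toy assembly of the described ingredients on a regularized logistic-regression
# finite sum (illustrative constants throughout; not the paper's parameters).
import numpy as np

rng = np.random.default_rng(0)
N, d = 1000, 5
A = rng.standard_normal((N, d))
y = np.sign(A @ rng.standard_normal(d))
reg = 1e-2                                                    # small ridge term -> strongly convex

def loss(w, idx):
    return np.mean(np.log1p(np.exp(-y[idx] * (A[idx] @ w)))) + 0.5 * reg * (w @ w)

def grad(w, idx):
    s = -y[idx] / (1.0 + np.exp(y[idx] * (A[idx] @ w)))
    return A[idx].T @ s / idx.size + reg * w

w = np.zeros(d); w_old = None; g_old = None; alpha = 1.0; hist = []
for k in range(1, 60):
    idx = rng.choice(N, size=min(N, 50 * k), replace=False)   # increasing subsample size
    g = grad(w, idx)
    if g_old is not None:                                     # Barzilai-Borwein (spectral) step
        s, z = w - w_old, g - g_old
        alpha = min(max(s @ s / abs(s @ z + 1e-12), 1e-3), 1e3)
    f_ref = max([loss(w, idx)] + hist[-5:])                   # nonmonotone reference value
    t = alpha
    for _ in range(30):                                       # backtracking Armijo search
        if loss(w - t * g, idx) <= f_ref - 1e-4 * t * (g @ g):
            break
        t *= 0.5
    w_old, g_old = w.copy(), g
    w = w - t * g
    hist.append(loss(w, idx))
print(loss(w, np.arange(N)))                                  # full-sample loss after training
```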

