trust region
Recently Published Documents


TOTAL DOCUMENTS

1348
(FIVE YEARS 275)

H-INDEX

55
(FIVE YEARS 6)

Information ◽  
2022 ◽  
Vol 13 (1) ◽  
pp. 38
Author(s):  
Jijun Tong ◽  
Shuai Xu ◽  
Fangliang Wang ◽  
Pengjia Qi

This paper presents a novel method for vessel matching based on a curve descriptor and projection geometry constraints. First, a Levenberg–Marquardt (LM) algorithm is proposed to optimize the geometric transformation matrix. Combined with parameter adjustment and the trust-region method, it minimizes the error between the projection of the 3D reconstructed vessel and the actual vessel. Then, a curvature and brightness order curve descriptor (CBOCD) is proposed to indicate the degree of self-occlusion of blood vessels during angiography. Next, an error matrix constructed from epipolar-matching errors is used for point-pair matching of the vasculature via dynamic programming. Finally, the recorded vessel radii are used to construct elliptical cross-sections, which are sampled to obtain a point set around the centerline; this point set is converted into a mesh to reconstruct the vessel surface. The validity and applicability of the proposed methods are verified through experiments showing a significant improvement in 3D reconstruction accuracy in terms of average back-projection error. At the same time, precise point-pair matching guarantees the smoothness of the reconstructed 3D coronary artery.
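The interplay between LM damping and the trust-region idea can be illustrated on a toy 2D point-set registration problem. The following is a hedged sketch only: the transform parametrization, function names, and data are invented for illustration and are not the paper's implementation.

```python
import numpy as np

def residuals(params, src, dst):
    """Reprojection residuals of a hypothetical 2D similarity transform
    parametrized as [scale, rotation, tx, ty]."""
    s, th, tx, ty = params
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    return (s * src @ R.T + np.array([tx, ty]) - dst).ravel()

def numeric_jacobian(f, x, eps=1e-6):
    # forward-difference Jacobian of the residual vector
    f0 = f(x)
    J = np.empty((f0.size, x.size))
    for i in range(x.size):
        xp = x.copy()
        xp[i] += eps
        J[:, i] = (f(xp) - f0) / eps
    return J

def levenberg_marquardt(f, x0, iters=60, lam=1e-2):
    """Minimize the sum of squared residuals; the damping factor lam plays
    the trust-region role (small lam ~ Gauss-Newton, large lam ~ gradient)."""
    x = x0.astype(float)
    for _ in range(iters):
        r, J = f(x), numeric_jacobian(f, x)
        step = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -J.T @ r)
        if np.sum(f(x + step) ** 2) < np.sum(r ** 2):
            x, lam = x + step, lam * 0.5   # good step: enlarge the trust region
        else:
            lam *= 2.0                      # bad step: shrink the trust region
    return x

# toy data: recover a known transform from matched point pairs
rng = np.random.default_rng(0)
src = rng.normal(size=(10, 2))
true = np.array([1.3, 0.4, 0.5, -0.2])
dst = 1.3 * src @ np.array([[np.cos(0.4), np.sin(0.4)],
                            [-np.sin(0.4), np.cos(0.4)]]) + np.array([0.5, -0.2])
x_hat = levenberg_marquardt(lambda p: residuals(p, src, dst),
                            np.array([1.0, 0.0, 0.0, 0.0]))
```

The accept/reject rule on `lam` is the essence of the damping strategy: a successful step loosens the implicit step-size restriction, a failed step tightens it.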


Author(s):  
Nikita Doikov ◽  
Yurii Nesterov

In this paper, we develop new affine-invariant algorithms for solving composite convex minimization problems with bounded domain. We present a general framework of Contracting-Point methods, which at each iteration solve an auxiliary subproblem that restricts the smooth part of the objective function onto a contraction of the initial domain. This framework provides a systematic way of developing optimization methods of different order, endowed with global complexity bounds. We show that, using an appropriate affine-invariant smoothness condition, one iteration of the Contracting-Point method can be implemented by one step of the pure tensor method of degree $p \ge 1$. The resulting global rate of convergence in functional residual is then $\mathcal{O}(1/k^p)$, where $k$ is the iteration counter. Importantly, all constants in our bounds are affine-invariant. For $p = 1$, our scheme recovers the well-known Frank–Wolfe algorithm, giving it a new interpretation from the general perspective of tensor methods. Finally, within our framework, we present an efficient implementation and a total complexity analysis of the inexact second-order scheme ($p = 2$), called the Contracting Newton method. It can be seen as a proper implementation of the trust-region idea. Preliminary numerical results confirm its good practical performance, both in the number of iterations and in computational time.
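The $p = 1$ case recovered by this framework is the classical Frank–Wolfe method. A minimal sketch on a toy quadratic over the probability simplex (the example objective and oracle are invented for illustration and are not taken from the paper):

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, iters=2000):
    """Classical Frank-Wolfe, the p = 1 instance of the contracting-point
    framework: each step minimizes a linear model over the domain."""
    x = x0.astype(float)
    for k in range(iters):
        s = lmo(grad(x))            # linear minimization oracle over the domain
        gamma = 2.0 / (k + 2.0)     # standard step size, giving the O(1/k) rate
        x = (1.0 - gamma) * x + gamma * s
    return x

# toy example: minimize ||x - c||^2 over the probability simplex
c = np.array([0.2, 0.5, 0.3])
grad = lambda x: 2.0 * (x - c)

def lmo_simplex(g):
    # over the simplex, the LMO returns the vertex of the smallest gradient entry
    s = np.zeros_like(g)
    s[np.argmin(g)] = 1.0
    return s

x_star = frank_wolfe(grad, lmo_simplex, np.array([1.0, 0.0, 0.0]))
```

Since `c` lies inside the simplex, the iterates approach it at the advertised $\mathcal{O}(1/k)$ rate in function value; the higher-order schemes in the paper replace the linear model by a tensor model of degree $p$.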


Author(s):  
Merve Bodur ◽  
Timothy C. Y. Chan ◽  
Ian Yihang Zhu

Inverse optimization—determining parameters of an optimization problem that render a given solution optimal—has received increasing attention in recent years. Although significant inverse optimization literature exists for convex optimization problems, there have been few advances for discrete problems, despite the ubiquity of applications that fundamentally rely on discrete decision making. In this paper, we present a new set of theoretical insights and algorithms for the general class of inverse mixed integer linear optimization problems. Specifically, a general characterization of optimality conditions is established and leveraged to design new cutting plane solution algorithms. Through an extensive set of computational experiments, we show that our methods provide substantial improvements over existing methods in solving the largest and most difficult instances to date.
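The cutting-plane idea can be illustrated on a tiny inverse problem in which the forward "MILP" is solved by enumeration. This is a simplified first-order sketch, not the authors' algorithm: the gap-shrinking update rule and all names below are invented for illustration.

```python
import itertools
import numpy as np

def forward_opt(c, feasible):
    # forward "MILP" stand-in: brute-force minimization over an enumerated set
    return min(feasible, key=lambda x: float(c @ x))

def inverse_adjust(c0, x0, feasible, iters=100, alpha=0.5):
    """Each forward solve x* certifies the violated cut c.x0 <= c.x*;
    stepping c along (x* - x0) shrinks that violation by alpha*||x* - x0||^2."""
    c = np.asarray(c0, dtype=float).copy()
    x0 = np.asarray(x0, dtype=float)
    for _ in range(iters):
        xs = np.asarray(forward_opt(c, feasible), dtype=float)
        gap = float(c @ (x0 - xs))
        if gap <= 1e-9:                 # x0 is (tied-)optimal under c
            break
        c += alpha * (xs - x0)
    return c

# tiny instance: all 3-bit binary vectors are feasible
feasible = [np.array(v) for v in itertools.product([0, 1], repeat=3)]
c0 = np.array([1.0, 2.0, 3.0])
x0 = np.array([1, 0, 0])                # solution we want to render optimal
c_new = inverse_adjust(c0, x0, feasible)
```

A full cutting-plane method would instead accumulate the cuts `c @ x0 <= c @ x*` in a master problem minimizing the distance from `c0`; the loop above only conveys the interplay of forward solves and cost corrections.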


Author(s):  
Mirko Hahn ◽  
Sven Leyffer ◽  
Sebastian Sager

We present a trust-region steepest descent method for dynamic optimal control problems with binary-valued integrable control functions. Our method interprets the control function as the indicator function of a measurable set and makes set-valued adjustments derived from the sublevel sets of a topological gradient function. By combining this type of update with a trust-region framework, we show theoretically that our method achieves asymptotic stationarity despite possible discretization and truncation errors during step determination. To demonstrate the practical applicability of our method, we solve two optimal control problems constrained by ordinary and partial differential equations, respectively, and one topological optimization problem.
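A heavily simplified finite-dimensional analogue of the set-valued trust-region update can be sketched as follows. In this toy stand-in, the "topological gradient" is just the pointwise derivative of a smooth objective over a binary vector, and the trust radius caps the number of flipped entries; everything here is illustrative and far simpler than the authors' function-space method.

```python
import numpy as np

def trust_region_binary_descent(f, grad_f, w0, radius0=4, iters=20):
    """Toy analogue of a set-valued trust-region step: flip at most `radius`
    entries of a binary vector, chosen from the most negative predicted
    changes (a stand-in for sublevel sets of the topological gradient)."""
    w, radius = w0.copy(), radius0
    for _ in range(iters):
        g = grad_f(w)
        gain = g * (1 - 2 * w)            # first-order change of f if w_i flips
        idx = np.argsort(gain)[:radius]   # candidate flips within the radius
        idx = idx[gain[idx] < 0]          # keep only predicted-descent flips
        if idx.size == 0:
            break                          # first-order stationary
        trial = w.copy()
        trial[idx] = 1 - trial[idx]
        if f(trial) < f(w):
            w, radius = trial, min(2 * radius, w.size)   # accept, expand radius
        else:
            radius = max(1, radius // 2)                  # reject, shrink radius
    return w

# toy objective: recover a target binary pattern
target = np.array([1, 0, 1, 1, 0, 0, 1, 0])
f = lambda w: float(np.sum((w - target) ** 2))
grad_f = lambda w: 2.0 * (w - target)
w_hat = trust_region_binary_descent(f, grad_f, np.zeros(8, dtype=int))
```

The accept/reject logic on the radius mirrors a standard trust-region loop; in the paper, the flipped index set is replaced by a measurable set chosen from sublevel sets of the topological gradient.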


Author(s):  
Daniel Adrian Maldonado ◽  
Emil M Constantinescu ◽  
Hong Zhang ◽  
Vishwas Rao ◽  
Mihai Anitescu

2022 ◽  
Vol 7 (4) ◽  
pp. 5534-5562
Author(s):  
B. El-Sobky ◽  
G. Ashry

<abstract><p>In this paper, a nonlinear bilevel programming (NBLP) problem is transformed into an equivalent smooth single-objective nonlinear programming (SONP) problem using slack variables and the Karush-Kuhn-Tucker (KKT) conditions. To solve the equivalent smooth SONP problem effectively, an interior-point Newton's method with a Das scaling matrix is used. Since this method is only locally convergent, a trust-region strategy is employed to guarantee convergence from any starting point. The proposed algorithm is proved to be stable and capable of generating an approximate optimal solution to the nonlinear bilevel programming problem.</p> <p>A global convergence theory for the proposed algorithm is introduced, and applications to mathematical programs with equilibrium constraints are given to demonstrate the effectiveness of the proposed approach.</p></abstract>
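The reformulation pipeline can be illustrated on a one-dimensional toy bilevel problem. This is a sketch only: the complementarity smoothing, the damping rule, and the problem itself are invented for illustration and are much simpler than the Das-scaled interior-point method of the paper.

```python
import numpy as np

# toy bilevel problem:  min_x (x-1)^2 + (y(x)-1)^2,
# where  y(x) = argmin_y (y-x)^2  subject to  y >= 0.
# Lower-level KKT: 2(y - x) - mu = 0,  mu * y = 0,  y, mu >= 0;
# smoothing the complementarity to mu * y = eps yields a smooth system.

def solve_lower(x, eps=1e-8, iters=50):
    y, mu = 1.0, 1.0                     # strictly interior starting point
    for _ in range(iters):
        r = np.array([2.0 * (y - x) - mu, mu * y - eps])
        J = np.array([[2.0, -1.0],       # Jacobian w.r.t. (y, mu)
                      [mu,   y]])
        dy, dmu = np.linalg.solve(J, -r)
        t = 1.0                          # crude step damping keeps the iterates
        while y + t * dy <= 0 or mu + t * dmu <= 0:   # interior -- a stand-in
            t *= 0.5                                   # for the trust-region safeguard
        y, mu = y + t * dy, mu + t * dmu
    return y

# outer problem solved by coarse search over the single-level reformulation
xs = np.linspace(-2.0, 2.0, 401)
F = [(x - 1.0) ** 2 + (solve_lower(x) - 1.0) ** 2 for x in xs]
x_best = float(xs[int(np.argmin(F))])
```

Here the lower level collapses to `y(x) = max(x, 0)`, so the outer optimum is at `x = 1`; in the paper, both levels are handled jointly by the Newton iteration with a trust-region globalization rather than by the grid search used in this sketch.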


2022 ◽  
Vol 0 (0) ◽  
pp. 0
Author(s):  
Esmail Abdul Fattah ◽  
Janet Van Niekerk ◽  
Håvard Rue

<p style='text-indent:20px;'>Computing the gradient of a function provides fundamental information about its behavior. This information is essential for many applications and algorithms across various fields. Common applications that require gradients are optimization techniques such as stochastic gradient descent, Newton's method and trust-region methods. However, these methods usually require a numerical computation of the gradient at every iteration, which is prone to numerical error. We propose a simple limited-memory technique for improving the accuracy of a numerically computed gradient in this gradient-based optimization framework by exploiting (1) a coordinate transformation of the gradient and (2) the history of previously taken descent directions. The method is verified empirically by extensive experimentation on both test functions and real data applications. The proposed method is implemented in the R package smartGrad and in C++.</p>
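The core idea, a numerical gradient assembled from finite differences taken along a transformed coordinate system built from past descent directions, can be sketched as follows. This is a hedged illustration of the general principle, not the smartGrad implementation; the function names are invented.

```python
import numpy as np

def central_diff_grad(f, x, h=1e-5):
    """Plain axis-aligned central-difference gradient, the baseline that
    a history-aware coordinate transformation aims to improve on."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def directional_grad(f, x, directions, h=1e-5):
    """Sketch of the coordinate-transformation idea: difference along an
    orthonormalized basis of previous descent directions (the columns of
    `directions`), then map the directional derivatives back."""
    Q, _ = np.linalg.qr(directions)      # orthonormal basis from the history
    d = np.array([(f(x + h * Q[:, j]) - f(x - h * Q[:, j])) / (2 * h)
                  for j in range(Q.shape[1])])
    return Q @ d                          # gradient in the original coordinates

# toy check on a quadratic, where both variants recover 2x
f = lambda x: float(np.sum(x ** 2))
x = np.array([1.0, -2.0, 0.5])
g_axis = central_diff_grad(f, x)
hist = np.array([[1.0, 0.0, 1.0],        # columns: hypothetical past
                 [1.0, 1.0, 0.0],        # descent directions
                 [0.0, 1.0, 1.0]])
g_hist = directional_grad(f, x, hist)
```

When the history spans the full space, the transformed gradient is exact up to finite-difference error; the benefit argued in the paper comes from aligning the differencing directions with the geometry the optimizer actually explores.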

