First-Order Optimality Conditions
Recently Published Documents


TOTAL DOCUMENTS: 28 (five years: 4)

H-INDEX: 10 (five years: 1)

2021
Author(s): Tobias Sproll, Anton Schiela

Abstract: In medical treatment, it can be necessary to know the position of a motor unit in a muscle. Recent advances in high-density surface electromyography (EMG) measurement have opened the possibility of extracting information about single motor units. We present a mathematical approach to identifying these motor units. Based on an electrostatic forward model, we introduce an adjoint approach to efficiently simulate a surface EMG measurement, and an optimal control approach to identify the motor units. We show basic results on the existence of solutions and first-order optimality conditions.
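The adjoint approach mentioned above can be illustrated on a toy inverse source problem. The sketch below is a simplifying assumption, not the paper's model: a 1D discrete Laplacian stands in for the electrostatic forward operator, and the adjoint state gives the gradient of the measurement misfit with respect to the source, verified against a finite difference.

```python
import numpy as np

# Hedged sketch: adjoint-based gradient for a toy 1D source-identification
# problem, loosely analogous to the electrostatic forward model described
# above. The discretization and all names are illustrative assumptions.

n = 50
h = 1.0 / (n + 1)
# 1D Laplacian with Dirichlet boundary, standing in for the forward operator
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

def forward(q):
    """Solve A u = q: the 'potential' generated by source q."""
    return np.linalg.solve(A, q)

x = np.linspace(h, 1 - h, n)
q_true = np.exp(-200 * (x - 0.3) ** 2)   # localized source ("motor unit")
d = forward(q_true)                       # synthetic surface measurement

def misfit(q):
    r = forward(q) - d
    return 0.5 * h * (r @ r)

def gradient(q):
    # Adjoint state p solves A^T p = u - d; the misfit gradient is h * p.
    p = np.linalg.solve(A.T, forward(q) - d)
    return h * p

# Check the adjoint gradient against a central finite difference
q0 = np.zeros(n)
dq = np.random.default_rng(0).standard_normal(n)
eps = 1e-6
fd = (misfit(q0 + eps * dq) - misfit(q0 - eps * dq)) / (2 * eps)
print(abs(fd - gradient(q0) @ dq))  # agreement to rounding error
```

The point of the adjoint formulation is that one extra linear solve yields the full gradient, instead of one forward solve per source component.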


Author(s): Martin Burger, Lisa Maria Kreusser, Claudia Totzeck

We propose a mean-field optimal control problem for the parameter identification of a given pattern. The cost functional is based on the Wasserstein distance between the probability measures of the modeled and the desired patterns. The first-order optimality conditions corresponding to the optimal control problem are derived using a Lagrangian approach on the mean-field level. Based on these conditions, we propose a gradient descent method to identify relevant parameters such as the angle of rotation and the force scaling, which may be spatially inhomogeneous. We discretize the first-order optimality conditions in order to employ the algorithm on the particle level. Moreover, we prove a rate for the convergence of the controls as the number of particles used for the discretization tends to infinity. Numerical results for the spatially homogeneous case demonstrate the feasibility of the approach.
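The parameter-identification-by-gradient-descent idea can be sketched on a deliberately simple particle model. Everything below is an illustrative assumption: a scalar force-scaling parameter, a mean-squared distance instead of the Wasserstein distance, and a finite-difference gradient in place of the gradient the paper derives from the first-order optimality conditions.

```python
import numpy as np

# Hedged illustration: recover a scalar force-scaling parameter of a toy
# interacting-particle model by gradient descent. The model and loss are
# simplifying stand-ins for the mean-field formulation described above.

rng = np.random.default_rng(1)
x0 = rng.uniform(-1, 1, size=(30, 2))    # initial particle positions

def simulate(alpha, steps=20, dt=0.05):
    """Particles drift toward the origin with strength alpha."""
    x = x0.copy()
    for _ in range(steps):
        x = x - dt * alpha * x
    return x

alpha_true = 1.5
target = simulate(alpha_true)            # the "desired pattern"

def loss(alpha):
    d = simulate(alpha) - target
    return np.mean(d ** 2)

# Plain gradient descent with a central finite-difference gradient
alpha, lr, eps = 0.2, 2.0, 1e-6
for _ in range(200):
    g = (loss(alpha + eps) - loss(alpha - eps)) / (2 * eps)
    alpha -= lr * g
print(alpha)  # close to alpha_true = 1.5
```

In the paper's setting the descent direction comes from the adjoint (Lagrangian) system rather than finite differences, which is what makes the method tractable when the parameters are spatially inhomogeneous.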


Author(s): Lipu Zhou, Yi Yang, Montiel Abello, Michael Kaess

This paper proposes a novel algorithm for the pose estimation problem from 2D/3D line correspondences, known as the Perspective-n-Line (PnL) problem. It is widely known that minimizing a geometric distance generally yields more accurate results than minimizing an algebraic distance. However, the rational form of the line's reprojection distance yields a complicated cost function, which makes solving the first-order optimality conditions infeasible. Furthermore, iterative algorithms based on the reprojection distance are time-consuming for large-scale problems. In contrast to previous works, which minimize a cost function based on an algebraic distance that may not approximate the reprojection distance of the line, we design two simple algebraic distances that gradually approximate the reprojection distance. This speeds up the computation while maintaining the robustness of the geometric distance. The two algebraic distances result in two polynomial cost functions, which can be solved efficiently. We directly solve the first-order optimality conditions of the first problem with a novel hidden-variable method. This algorithm exploits the specific structure of the resulting polynomial system and is therefore more stable than a general Gröbner-basis polynomial solver. We then minimize the second polynomial cost function by damped Newton iteration, starting from the solution of the first cost function. Experimental results show that the first step of our algorithm is already superior to state-of-the-art algorithms in terms of accuracy and applicability, and faster than algorithms based on Gröbner-basis polynomial solvers. The second step yields results comparable to those from minimizing the reprojection distance, but is much more efficient. Owing to its speed, our algorithm is suitable for real-time applications.
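The damped Newton refinement used in the second step can be sketched in one dimension. The quartic cost below is an assumed stand-in for the paper's polynomial reprojection surrogate; the rotation parametrization and the hidden-variable solver that supplies the starting point are omitted.

```python
# Hedged sketch of damped Newton iteration on a smooth polynomial cost:
# take the Newton step, halving it until the cost decreases. The cost is
# an illustrative quartic, not the paper's PnL cost function.

def cost(t):
    return (t - 2.0) ** 4 + 0.5 * (t - 2.0) ** 2

def grad(t):
    return 4 * (t - 2.0) ** 3 + (t - 2.0)

def hess(t):
    return 12 * (t - 2.0) ** 2 + 1.0

def damped_newton(t, iters=50):
    for _ in range(iters):
        step = -grad(t) / hess(t)           # Newton direction
        s = 1.0
        while cost(t + s * step) > cost(t) and s > 1e-8:
            s *= 0.5                         # damping: halve until decrease
        t = t + s * step
    return t

t_star = damped_newton(10.0)
print(t_star)  # converges to the minimizer t = 2
```

The damping guard matters in practice: a plain Newton step on a non-convex polynomial cost can increase the cost or diverge, whereas step halving guarantees monotone decrease from the first step's initialization.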


2015
Vol. 14 (04), pp. 747-767
Author(s): Vsevolod I. Ivanov

In this paper, we obtain first- and second-order optimality conditions of Kuhn–Tucker and Fritz John type for weak efficiency in the vector problem with inequality constraints. In the necessary conditions, we suppose that the objective function and the active constraints are continuously differentiable. We introduce the notions of a KTSP-invex problem and a second-order KTSP-invex one. We show that the vector problem is (second-order) KTSP-invex if and only if, for every triple [Formula: see text] with Lagrange multipliers [Formula: see text] and [Formula: see text] for the objective function and the constraints, respectively, that satisfies the (second-order) necessary optimality conditions, the pair [Formula: see text] is a saddle point of the scalar Lagrange function with a fixed multiplier [Formula: see text]. We further introduce the notions of second-order KT-pseudoinvex-I, second-order KT-pseudoinvex-II, and second-order KT-invex problems. We prove that every second-order Kuhn–Tucker stationary point is a weak global Pareto minimizer (global Pareto minimizer) if and only if the problem is second-order KT-pseudoinvex-I (KT-pseudoinvex-II), and that every second-order Kuhn–Tucker stationary point is a global solution of the weighting problem if and only if the vector problem is second-order KT-invex.
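For readers unfamiliar with the setting, the first-order Kuhn–Tucker conditions referred to throughout these abstracts can be written as follows. The notation is an assumption on our part, since the abstract's formulas were elided in extraction.

```latex
% Hedged sketch of the first-order Kuhn--Tucker conditions for weak
% efficiency; notation assumed, not taken from the paper.
% Vector problem: minimize f(x) = (f_1(x), \dots, f_p(x))
%                 subject to g_j(x) \le 0, \ j = 1, \dots, m.
\[
  \sum_{i=1}^{p} \lambda_i \nabla f_i(\bar{x})
  + \sum_{j=1}^{m} \mu_j \nabla g_j(\bar{x}) = 0, \qquad
  \mu_j \, g_j(\bar{x}) = 0 \ \ (j = 1, \dots, m), \qquad
  \lambda \ge 0,\ \lambda \ne 0,\ \mu \ge 0.
\]
```

The invexity notions in the paper characterize exactly when points satisfying such (first- or second-order) stationarity conditions are in fact global (weak) Pareto minimizers.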

