A Dynamics-Based Optimal Trajectory Generation for Controlling an Automated Excavator

Author(s):  
S Yoo ◽  
C-G Park ◽  
S-H You ◽  
B Lim

This article presents a new methodology for generating optimal trajectories for controlling an automated excavator. By parameterizing all actuator displacements with B-splines of the same order and the same number of control points, the coupled actuator limits associated with the maximum pump flowrate are expressed as a finite-dimensional set of linear constraints on the motion optimization problem. Several weighting functions are introduced on the generalized actuator torque so that the solution to each optimization problem has a clear physical meaning. Numerical results are presented showing that the generated motions of the excavator are fairly smooth and effectively save energy, which can reduce mechanical wear and possibly lower fuel consumption. A typical operator's manoeuvre recorded in experiments is used as a reference to highlight the distinguishing features of the optimized motion.
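For illustration, the minimal Python sketch below shows the parameterization idea: when every actuator displacement is a B-spline with shared knots, the spline velocities are linear in the control points, so a pump flowrate limit becomes a set of linear inequality constraints. All numerical values (piston areas, flowrate limit, number of control points) are assumptions for the example and are not taken from the article.

```python
# Minimal sketch (not the authors' implementation): actuator displacements are
# parameterized by cubic B-splines with shared knots, so spline velocities are
# linear in the control points and a pump-flowrate limit becomes a set of
# linear inequality constraints. Names and values below are illustrative.
import numpy as np
from scipy.interpolate import BSpline

k = 3                       # spline order (cubic)
n_ctrl = 8                  # control points per actuator
n_act = 3                   # boom, arm, bucket cylinders (assumed)
T = 5.0                     # motion duration [s]
# Clamped uniform knot vector shared by all actuators.
knots = np.concatenate(([0.0] * k, np.linspace(0.0, T, n_ctrl - k + 1), [T] * k))

# Velocity basis matrix: row i holds d/dt of the basis functions at time t_i,
# so actuator velocity = B_dot @ c is linear in the control points c.
t_samples = np.linspace(0.0, T, 50)
B_dot = np.column_stack([
    BSpline(knots, np.eye(n_ctrl)[j], k).derivative()(t_samples)
    for j in range(n_ctrl)
])

areas = np.array([0.012, 0.010, 0.008])   # piston areas [m^2] (assumed values)
q_max = 2.5e-3                            # max pump flowrate [m^3/s] (assumed)

# Coupled flowrate limit (extension phase assumed): sum_j A_j * v_j(t_i) <= q_max.
# Stacking the per-actuator control points into one vector c of length
# n_act * n_ctrl gives the finite-dimensional linear constraint G @ c <= q_max.
G = np.hstack([areas[j] * B_dot for j in range(n_act)])
print(G.shape)   # (50, 24): one linear inequality per sampled time instant
```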

Author(s):  
Ixshel Jhoselyn Foster-Vázquez ◽  
Rogelio De Jesús Portillo-Vélez ◽  
Eduardo Vazquez-Santacruz

In the engineering design process, the statement of the problem to be solved is of particular relevance for guaranteeing an optimal design. There is no general rule for this, and in the particular case of the synthesis of planar mechanisms, the solution strongly depends on how the design or mechanism synthesis problem is stated. The objective of this paper is to present a proposal for the synthesis of a planar four-bar mechanism for Cartesian trajectory tracking. The mechanism synthesis problem is stated as a nonlinear optimization problem with nonlinear constraints. Four different approaches are considered in order to demonstrate the impact of the chosen statement of the optimization problem on its solution. The four optimization problems are solved numerically by means of genetic algorithms. The numerical results of the four optimization problem statements are compared under fair conditions, and they show the strong influence of the initial problem statement on the solution.
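The sketch below illustrates, under assumed target points and bounds, how such a synthesis task can be stated as a constrained nonlinear optimization problem; scipy's differential evolution is used here as an off-the-shelf evolutionary stand-in for the genetic algorithms employed in the paper.

```python
# Minimal sketch of stating four-bar path synthesis as a nonlinear optimization
# problem: link lengths and coupler-point offsets are the design variables, the
# objective is the squared tracking error at prescribed crank angles, and
# assembly failures are penalized. Target points and bounds are illustrative.
import numpy as np
from scipy.optimize import differential_evolution

thetas = np.linspace(0.5, 2.5, 8)                      # prescribed crank angles [rad]
targets = np.column_stack((np.linspace(1.0, 2.0, 8),   # desired coupler-point path
                           0.8 + 0.2 * np.sin(thetas)))

def coupler_point(x, theta):
    a, b, c, d, px, py = x                    # crank, coupler, rocker, ground, offsets
    A = np.array([a * np.cos(theta), a * np.sin(theta)])
    O4 = np.array([d, 0.0])
    v = O4 - A
    dist = np.linalg.norm(v)
    if dist > b + c or dist < abs(b - c) or dist < 1e-9:
        return None                           # linkage cannot be assembled
    m = (b**2 - c**2 + dist**2) / (2.0 * dist)
    h2 = b**2 - m**2
    if h2 < 0.0:
        return None
    u = v / dist
    B = A + m * u + np.sqrt(h2) * np.array([-u[1], u[0]])   # open configuration
    w = (B - A) / b                           # unit vector along the coupler
    return A + px * w + py * np.array([-w[1], w[0]])

def objective(x):
    err = 0.0
    for theta, tgt in zip(thetas, targets):
        P = coupler_point(x, theta)
        if P is None:
            return 1e6                        # penalty for assembly failure
        err += np.sum((P - tgt) ** 2)
    return err

bounds = [(0.2, 1.5), (0.5, 3.0), (0.5, 3.0), (0.5, 3.0), (0.0, 2.0), (-1.0, 1.0)]
result = differential_evolution(objective, bounds, seed=1, maxiter=200)
print(result.x, result.fun)
```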


Algorithms ◽  
2020 ◽  
Vol 13 (6) ◽  
pp. 143 ◽  
Author(s):  
Piotr M. Marusak

In Model Predictive Control (MPC) algorithms, control signals are generated by solving optimization problems. If the model used for prediction is linear, the optimization problem is a standard, easy-to-solve quadratic programming problem with linear constraints. However, such an algorithm may offer insufficient performance if applied to a nonlinear control plant. On the other hand, if the model used for prediction is nonlinear, a non-convex optimization problem must be solved at each algorithm iteration. Numerical problems may then occur during its solution, and the time needed to calculate the control signals cannot be determined in advance. Therefore, approaches based on linearized models are preferred in practical applications. A fuzzy algorithm with an advanced generation of the prediction is proposed in this article. The prediction is obtained in such a way that the algorithm reduces to a quadratic optimization problem but offers performance very close to that of the MPC algorithm with nonlinear optimization. The efficiency of the proposed approach is demonstrated in the control system of a nonlinear chemical plant, a CSTR (Continuous Stirred-Tank Reactor) with the van de Vusse reaction.
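As a point of reference, the following minimal sketch shows why the linear-model case reduces to an easily solved quadratic program: with a linear prediction model the cost is quadratic in the future control moves and the actuator limits are simple bounds. The plant matrices, horizon, and weights are illustrative assumptions, and the sketch does not reproduce the fuzzy prediction scheme of the article.

```python
# Minimal sketch of linear MPC as a quadratic program: the cost is quadratic in
# the sequence of future control moves and the input limits are box constraints.
# Plant matrices, horizon and weights below are assumptions.
import numpy as np
from scipy.optimize import minimize

A = np.array([[0.9, 0.1], [0.0, 0.8]])      # linear(ized) state model x+ = A x + B u
B = np.array([[0.0], [0.2]])
C = np.array([[1.0, 0.0]])                  # measured output
N = 10                                      # prediction horizon
lam = 0.1                                   # move-suppression weight
x0 = np.array([1.0, 0.0])
ref = 0.5                                   # output set-point

def cost(u_seq):
    """Quadratic MPC cost: tracking error plus penalized control effort."""
    x, J = x0.copy(), 0.0
    for u in u_seq:
        x = A @ x + B.flatten() * u
        y = (C @ x).item()
        J += (y - ref) ** 2 + lam * u ** 2
    return J

bounds = [(-1.0, 1.0)] * N                  # actuator limits as simple bounds
res = minimize(cost, np.zeros(N), bounds=bounds, method="L-BFGS-B")
u_now = res.x[0]                            # receding horizon: apply first move only
print(u_now)
```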


2015 ◽  
Vol 23 (3) ◽  
Author(s):  
Daniel Ševčovič ◽  
Mária Trnovská

Abstract We propose a novel method for resolving the optimal anisotropy function. The idea is to construct the optimal anisotropy function as a solution to the inverse Wulff problem, i.e. as a minimizer of the anisoperimetric ratio for a given Jordan curve in the plane. This leads to a nonconvex quadratic optimization problem with linear matrix inequalities. In order to solve it we propose the so-called enhanced semidefinite relaxation method, which is based on the solution of a convex semidefinite problem obtained by a semidefinite relaxation of the original problem augmented by quadratic-linear constraints. We show that the sequence of finite-dimensional approximations of the optimal anisoperimetric ratio converges to the optimal anisoperimetric ratio, which is a solution to the inverse Wulff problem. Several computational examples, including ones corresponding to boundaries of real snowflakes, and a discussion of the rate of convergence of the numerical method are also presented in this paper.
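For orientation, the sketch below shows the standard semidefinite relaxation of a generic nonconvex quadratic program in cvxpy: the quadratic objective is lifted to a trace of a matrix variable, the rank-one condition is relaxed to positive semidefiniteness of the lifted block, and a quadratic norm constraint becomes linear in the lifted variable. The problem data are random placeholders, and the enhanced relaxation with additional quadratic-linear constraints proposed in the paper is not reproduced.

```python
# Generic sketch of a semidefinite relaxation of a nonconvex quadratic program
# (illustrative; not the paper's enhanced relaxation for the inverse Wulff
# problem). The objective x^T Q x becomes trace(Q X) with X standing in for
# x x^T, and the rank-one condition is relaxed to PSD-ness of the lifted block.
# Problem data are random placeholders; cvxpy with the SCS solver is assumed.
import numpy as np
import cvxpy as cp

n = 4
rng = np.random.default_rng(0)
M = rng.standard_normal((n, n))
Q = M + M.T                                   # symmetric but indefinite: nonconvex objective
A = rng.standard_normal((2, n))
b = np.ones((2, 1))                           # chosen so that x = 0 is feasible
r = 3.0                                       # radius of the quadratic constraint ||x|| <= r

Z = cp.Variable((n + 1, n + 1), PSD=True)     # lifted block [[X, x], [x^T, 1]] >= 0
X = Z[:n, :n]                                 # stands in for x x^T
x = Z[:n, n:]                                 # the original decision vector

constraints = [
    Z[n, n] == 1,                             # bottom-right corner fixed to 1
    A @ x <= b,                               # original linear constraints
    cp.trace(X) <= r ** 2,                    # quadratic constraint, now linear in X
]
prob = cp.Problem(cp.Minimize(cp.trace(Q @ X)), constraints)
prob.solve(solver=cp.SCS)
print(prob.value, x.value.ravel())
```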


1991 ◽  
Vol 15 (3-4) ◽  
pp. 357-379
Author(s):  
Tien Huynh ◽  
Leo Joskowicz ◽  
Catherine Lassez ◽  
Jean-Louis Lassez

We address the problem of building intelligent systems to reason about linear arithmetic constraints. We develop, along the lines of Logic Programming, a unifying framework based on the concept of Parametric Queries and a quasi-dual generalization of the classical Linear Programming optimization problem. Variable (quantifier) elimination is the key underlying operation which provides an oracle to answer all queries and plays a role similar to Resolution in Logic Programming. We discuss three methods for variable elimination, compare their feasibility, and establish their applicability. We then address practical issues of solvability and canonical representation, as well as dynamical updates and feedback. In particular, we show how the quasi-dual formulation can be used to achieve the discriminating characteristics of the classical Fourier algorithm regarding solvability, detection of implicit equalities and, in case of unsolvability, the detection of minimal unsolvable subsets. We illustrate the relevance of our approach with examples from the domain of spatial reasoning and demonstrate its viability with empirical results from two practical applications: computation of canonical forms and convex hull construction.
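The classical Fourier algorithm referred to above eliminates one variable from a system of linear inequalities by combining every pair of inequalities that bound it from above and from below. A minimal, self-contained sketch of this elimination step is given below; the example system is illustrative.

```python
# Minimal sketch of Fourier-Motzkin variable elimination over a system of
# linear inequalities A x <= b (illustrative; the paper compares several
# elimination methods, of which the classical Fourier algorithm is one).
import numpy as np

def eliminate(A, b, j):
    """Project the polyhedron {x : A x <= b} onto the coordinates other than j."""
    pos = [i for i in range(len(b)) if A[i, j] > 1e-12]
    neg = [i for i in range(len(b)) if A[i, j] < -1e-12]
    zero = [i for i in range(len(b)) if abs(A[i, j]) <= 1e-12]

    new_rows, new_rhs = [], []
    # Rows not mentioning x_j survive unchanged.
    for i in zero:
        new_rows.append(np.delete(A[i], j))
        new_rhs.append(b[i])
    # Every (positive, negative) pair combines into one new inequality.
    for p in pos:
        for q in neg:
            row = A[p] / A[p, j] - A[q] / A[q, j]
            rhs = b[p] / A[p, j] - b[q] / A[q, j]
            new_rows.append(np.delete(row, j))
            new_rhs.append(rhs)
    if not new_rows:                       # x_j was unconstrained from one side
        return np.zeros((0, A.shape[1] - 1)), np.zeros(0)
    return np.array(new_rows), np.array(new_rhs)

# Example: project {x + y <= 4, -x <= 0, -y <= 0, x - y <= 1} onto y.
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0], [1.0, -1.0]])
b = np.array([4.0, 0.0, 0.0, 1.0])
A1, b1 = eliminate(A, b, 0)
print(A1, b1)    # remaining inequalities in y alone
```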


2009 ◽  
Vol 26 (04) ◽  
pp. 479-502 ◽  
Author(s):  
BIN LIU ◽  
TEQI DUAN ◽  
YONGMING LI

In this paper, a novel genetic algorithm, the dynamic ring-like agent genetic algorithm (RAGA), is proposed for solving global numerical optimization problems. RAGA combines a ring-like agent structure with dynamic neighboring genetic operators to obtain better optimization capability. An agent in the ring-like structure represents a candidate solution to the optimization problem and evolves by interacting with its neighboring agents. With the dynamic neighboring genetic operators, agents compete and cooperate with their neighbors, and they can also use knowledge to increase their energy. Global numerical optimization problems are among the most important benchmarks for verifying the performance of evolutionary algorithms, particularly genetic algorithms, and are of great interest to researchers in the field. In the experiments, several complex benchmark functions were optimized and several popular GAs were used for comparison. In order to better compare the two agent-based GAs (MAGA, the multi-agent genetic algorithm, and RAGA), experiments were carried out over a range of problem dimensions, from low to high. The experimental results show that RAGA is not only suitable for these optimization problems but also produces more precise and more stable results.
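A generic sketch of the ring-like agent idea follows: agents sit on a ring, interact only with their immediate neighbours, and are updated through neighbourhood competition plus a small mutation. The benchmark function and all parameters are assumptions, and the exact RAGA operators are not reproduced here.

```python
# Minimal sketch of a ring-like agent structure with neighbourhood competition
# (illustrative only; not the exact RAGA operators). Each agent interacts with
# its two ring neighbours, losers move toward the winning neighbour, and a
# small Gaussian mutation keeps diversity.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, iters = 40, 10, 200
sphere = lambda x: np.sum(x ** 2, axis=-1)          # simple benchmark function

agents = rng.uniform(-5.0, 5.0, size=(n_agents, dim))
for _ in range(iters):
    fitness = sphere(agents)
    new_agents = agents.copy()
    for i in range(n_agents):
        left, right = (i - 1) % n_agents, (i + 1) % n_agents   # ring neighbours
        best = min((i, left, right), key=lambda k: fitness[k])
        if best != i:
            # Neighbourhood competition: losers move toward the winning neighbour.
            new_agents[i] = agents[i] + rng.uniform(0, 1) * (agents[best] - agents[i])
        # Small Gaussian mutation keeps diversity along the ring.
        new_agents[i] += rng.normal(0.0, 0.05, size=dim)
    agents = new_agents

print(sphere(agents).min())                          # best value found
```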


Author(s):  
T. E. Potter ◽  
K. D. Willmert ◽  
M. Sathyamoorthy

Abstract Mechanism path generation problems that use link deformations to improve the design lead to optimization problems involving a nonlinear sum-of-squares objective function subject to a set of linear and nonlinear constraints. Inclusion of the deformation analysis makes the objective function evaluation computationally expensive. An optimization method is presented which requires relatively few objective function evaluations. The algorithm, based on the Gauss method for unconstrained problems, is developed as an extension of the Gauss constrained technique for linear constraints and revises the Gauss nonlinearly constrained method for quadratic constraints. The derivation of the algorithm, using a Lagrange multiplier approach, is based on the Kuhn-Tucker conditions, so that when the iteration process terminates these conditions are automatically satisfied. Although the technique was developed for mechanism problems, it is applicable to any optimization problem with a sum-of-squares objective function subject to nonlinear constraints.
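For context, the sketch below shows the unconstrained Gauss (Gauss-Newton) iteration that the paper extends: for a sum-of-squares objective each step solves a linear least-squares system built from the residuals and their Jacobian, so the expensive residual evaluations are kept to a minimum. The residual function is a placeholder, not the mechanism deformation model.

```python
# Minimal sketch of the unconstrained Gauss (Gauss-Newton) method: for
# f(x) = sum_i r_i(x)^2 the Hessian is approximated by J^T J, so each step
# solves a linear least-squares problem J dx = -r. The residuals below are an
# illustrative placeholder for the paper's costly deformation analysis.
import numpy as np

def residuals(x):
    return np.array([x[0] + 2.0 * x[1] - 7.0,
                     2.0 * x[0] + x[1] - 5.0,
                     x[0] * x[1] - 3.0])

def jacobian(x, h=1e-6):
    r0 = residuals(x)
    J = np.zeros((r0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += h
        J[:, j] = (residuals(xp) - r0) / h       # forward-difference Jacobian
    return J

x = np.array([0.0, 0.0])
for _ in range(20):
    r, J = residuals(x), jacobian(x)
    step, *_ = np.linalg.lstsq(J, -r, rcond=None)  # Gauss-Newton step
    x = x + step
    if np.linalg.norm(step) < 1e-10:
        break
print(x, residuals(x))
```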


2021 ◽  
Vol 78 (1) ◽  
pp. 139-156
Author(s):  
Antonio Boccuto

Abstract We give some versions of Hahn-Banach, sandwich, duality, Moreau-Rockafellar-type theorems, optimality conditions and a formula for the subdifferential of composite functions for order continuous vector lattice-valued operators, invariant or equivariant with respect to a fixed group G of homomorphisms. As applications to optimization problems with both convex and linear constraints, we present some Farkas and Kuhn-Tucker-type results.
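For reference, the finite-dimensional, scalar-valued prototype of the Farkas-type results mentioned above can be stated as follows (classical Farkas lemma; not the vector lattice-valued generalization proved in the paper).

```latex
% Classical Farkas lemma: for A in R^{m x n} and b in R^m, exactly one of the
% following two systems has a solution.
\[
\exists\, x \ge 0 :\; A x = b
\qquad\text{or (exclusively)}\qquad
\exists\, y :\; A^{\mathsf T} y \ge 0,\;\; b^{\mathsf T} y < 0 .
\]
```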


2021 ◽  
Vol 12 (4) ◽  
pp. 81-100
Author(s):  
Yao Peng ◽  
Zepeng Shen ◽  
Shiqi Wang

A multimodal optimization problem has multiple global optima and many local optima. The difficulty in solving such problems lies in finding as many locally optimal peaks as possible while preserving the precision of the global optima. This article presents adaptive grouping brainstorm optimization (AGBSO) for solving these problems. An adaptive grouping strategy is proposed that achieves grouping without requiring users to provide any prior knowledge. To enhance the diversity and accuracy of the algorithm, an elite reservation strategy is proposed that places central particles into an elite pool, and a peak detection strategy is proposed that deletes particles in the elite pool that lie far from optimal peaks. Finally, test functions of different dimensions are used to compare the convergence, accuracy, and diversity of AGBSO with those of BSO. The experiments verify that AGBSO has a strong ability to locate local optima while maintaining the accuracy of the global optimal solutions.
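As a generic illustration of the grouping idea (not AGBSO's adaptive strategy, elite reservation, or peak detection rules), the sketch below splits a population into niches around its fittest individuals using a simple distance threshold, so that each group can converge towards a different peak. The landscape and radius are assumed.

```python
# Generic sketch of a distance-based grouping step for multimodal optimization:
# each individual is assigned to the nearest niche seed within a radius, and
# individuals without a nearby seed start a new niche. Illustrative only.
import numpy as np

def group_population(pop, fitness, radius):
    """Assign each individual to the nearest niche seed within `radius`."""
    order = np.argsort(fitness)              # best (lowest) first
    seeds, groups = [], {}
    for idx in order:
        for s in seeds:
            if np.linalg.norm(pop[idx] - pop[s]) < radius:
                groups[s].append(idx)        # joins an existing niche
                break
        else:
            seeds.append(idx)                # becomes the seed of a new niche
            groups[idx] = [idx]
    return groups

rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(30, 2))
fitness = np.sum(np.sin(pop) ** 2, axis=1)   # placeholder multimodal landscape
print({s: len(m) for s, m in group_population(pop, fitness, radius=2.0).items()})
```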


2021 ◽  
Vol 12 (4) ◽  
pp. 98-116
Author(s):  
Noureddine Boukhari ◽  
Fatima Debbat ◽  
Nicolas Monmarché ◽  
Mohamed Slimane

Evolution strategies (ES) are a family of powerful stochastic methods for global optimization and have proved more capable of avoiding local optima than many other optimization methods. Many researchers have investigated different versions of the original evolution strategy, with good results on a variety of optimization problems. However, the convergence rate of the algorithm toward the global optimum remains only asymptotic. In order to accelerate convergence, a hybrid approach is proposed that uses the nonlinear simplex method (Nelder-Mead) together with an adaptive scheme to control when the local search is applied, and the authors demonstrate that this combination yields significantly better convergence. The proposed method has been tested on 15 complex benchmark functions, applied to the bi-objective portfolio optimization problem, and compared with other state-of-the-art techniques. Experimental results show that the hybridization improves performance in terms of solution quality and robustness of convergence.
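A minimal sketch of this kind of hybridization is given below: a simple (mu, lambda) evolution strategy explores globally, and every few generations the current best individual is polished with scipy's Nelder-Mead simplex search. The benchmark function, schedule, and parameters are illustrative and do not reproduce the authors' adaptive control scheme.

```python
# Minimal sketch of hybridizing an evolution strategy with Nelder-Mead local
# search (illustrative; not the authors' adaptive scheme): a (mu, lambda)-ES
# explores globally, and periodically the best individual is refined locally.
import numpy as np
from scipy.optimize import minimize

def rastrigin(x):                             # standard multimodal benchmark
    return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

rng = np.random.default_rng(0)
mu, lam, dim, sigma = 5, 20, 10, 0.5
parents = rng.uniform(-5, 5, size=(mu, dim))

for gen in range(100):
    # (mu, lambda)-ES: each offspring mutates a randomly chosen parent.
    offspring = parents[rng.integers(mu, size=lam)] + sigma * rng.standard_normal((lam, dim))
    scores = np.array([rastrigin(o) for o in offspring])
    parents = offspring[np.argsort(scores)[:mu]]          # truncation selection
    if gen % 10 == 0:
        # Periodic local search: polish the current best with Nelder-Mead.
        res = minimize(rastrigin, parents[0], method="Nelder-Mead")
        parents[0] = res.x

print(rastrigin(parents[0]))
```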

