STRUCTURAL SHAKEDOWN: A NEW METHODOLOGY FOR ESTIMATING THE RESIDUAL DISPLACEMENTS

2016 ◽  
Vol 22 (8) ◽  
pp. 1055-1065 ◽  
Author(s):  
Liudas LIEPA ◽  
Gediminas BLAŽEVIČIUS ◽  
Dovilė MERKEVIČIŪTĖ ◽  
Juozas ATKOČIŪNAS

A vector of residual forces of an ideally elastic-plastic structure at shakedown is obtained by solving the static analysis problem. A unique distribution of the residual forces is determined if the analysis is based on the minimum complementary deformation energy principle. However, the residual displacements that develop during the shakedown process of ideally elastic-plastic structures under variable repeated loads can vary non-monotonically. Nevertheless, mathematical models for the optimization problems of steel structures at shakedown must include conditions for both strength (safety) and stiffness (serviceability). Residual displacements, which are determined by the plastic deformations, enter the stiffness conditions; therefore, to improve the optimal solution it is necessary to determine the upper and lower bounds of the residual displacement variations. This paper describes an improved methodology for estimating the variation bounds of the residual displacements at shakedown.
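A sketch of the kind of extremum problem the abstract alludes to (the symbols here are generic illustration, not necessarily the authors' notation): the residual force distribution at shakedown follows from minimizing the complementary deformation energy subject to equilibrium and yield conditions at every vertex of the load locus:

```latex
\begin{aligned}
\min_{\mathbf{S}_r}\quad & \tfrac{1}{2}\,\mathbf{S}_r^{\mathsf{T}}\mathbf{D}\,\mathbf{S}_r \\
\text{s.t.}\quad & \mathbf{A}\,\mathbf{S}_r = \mathbf{0}, \\
& \boldsymbol{\varphi}_j\bigl(\mathbf{S}_{e,j} + \mathbf{S}_r\bigr) \le \mathbf{0},
\qquad j = 1,\dots,p,
\end{aligned}
```

where \(\mathbf{S}_r\) is the residual force vector, \(\mathbf{D}\) the flexibility matrix, \(\mathbf{A}\) the equilibrium matrix, \(\mathbf{S}_{e,j}\) the elastic forces at the \(j\)-th vertex of the load locus, and \(\boldsymbol{\varphi}_j\) the yield functions.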

2014 ◽  
Vol 6 (5) ◽  
pp. 461-467 ◽  
Author(s):  
Liudas Liepa ◽  
Agnė Gervytė ◽  
Ela Jarmolajeva ◽  
Juozas Atkočiūnas

This paper focuses on the shakedown behaviour of an ideally elastic-plastic beam system under variable repeated load. The mathematical models of the analysis problems are created using numerical methods, extremum energy principles and mathematical programming. It is shown that during the shakedown process the residual displacements can vary non-monotonically. By solving an analysis problem in which the load locus is progressively expanded, it is possible to determine the upper and lower bounds of the residual displacements. The suggested method is illustrated by a multi-supported beam example problem. The results are obtained under the assumption of small displacements.


2019 ◽  
Vol 61 (4) ◽  
pp. 177-185
Author(s):  
Moritz Mühlenthaler ◽  
Alexander Raß

Abstract A discrete particle swarm optimization (PSO) algorithm is a randomized search heuristic for discrete optimization problems. A fundamental question about randomized search heuristics is how long it takes, in expectation, until an optimal solution is found. We give an overview of recent developments related to this question for discrete PSO algorithms. In particular, we give a comparison of known upper and lower bounds of expected runtimes and briefly discuss the techniques used to obtain these bounds.


Author(s):  
Quentin Cappart ◽  
Emmanuel Goutierre ◽  
David Bergman ◽  
Louis-Martin Rousseau

Finding tight bounds on the optimal solution is a critical element of practical solution methods for discrete optimization problems. In the last decade, decision diagrams (DDs) have brought a new perspective on obtaining upper and lower bounds that can be significantly better than classical bounding mechanisms, such as linear relaxations. It is well known that the quality of the bounds achieved through this flexible bounding method is highly reliant on the ordering of variables chosen for building the diagram, and finding an ordering that optimizes standard metrics is an NP-hard problem. In this paper, we propose an innovative and generic approach based on deep reinforcement learning for obtaining an ordering for tightening the bounds obtained with relaxed and restricted DDs. We apply the approach to both the Maximum Independent Set Problem and the Maximum Cut Problem. Experimental results on synthetic instances show that the deep reinforcement learning approach, by achieving tighter objective function bounds, generally outperforms ordering methods commonly used in the literature when the distribution of instances is known. To the best of the authors' knowledge, this is the first paper to apply machine learning to directly improve relaxation bounds obtained by general-purpose bounding mechanisms for combinatorial optimization problems.
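A minimal sketch of a relaxed decision-diagram bound for the Maximum Independent Set Problem under a given variable ordering. The state formulation and the union-merge heuristic below are illustrative choices, not the paper's implementation; merging states can only enlarge the feasible region, so the result is a valid upper bound whose tightness depends on the ordering.

```python
def relaxed_dd_bound(n, edges, order, max_width):
    """Upper bound on the maximum independent set via a relaxed DD.
    A state is the set of still-eligible vertices; when a layer exceeds
    max_width, the two weakest states are merged by union, which keeps
    the bound valid but possibly loosens it."""
    adj = {v: set() for v in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    layer = {frozenset(range(n)): 0}   # state -> best value so far
    for v in order:
        nxt = {}
        for state, val in layer.items():
            s0 = state - {v}                        # arc: exclude v
            nxt[s0] = max(nxt.get(s0, 0), val)
            if v in state:                          # arc: include v
                s1 = frozenset(state - {v} - adj[v])
                nxt[s1] = max(nxt.get(s1, 0), val + 1)
        while len(nxt) > max_width:                 # relaxation step
            a, b = sorted(nxt, key=nxt.get)[:2]
            merged, val = a | b, max(nxt.pop(a), nxt.pop(b))
            nxt[merged] = max(nxt.get(merged, 0), val)
        layer = nxt
    return max(layer.values())

# 5-cycle: the exact maximum independent set has size 2.
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
exact = relaxed_dd_bound(5, c5, order=[0, 1, 2, 3, 4], max_width=1 << 10)
loose = relaxed_dd_bound(5, c5, order=[0, 1, 2, 3, 4], max_width=2)
```

With a width limit large enough that no merging occurs, the diagram is exact; shrinking the width trades bound quality for diagram size, which is where the choice of ordering becomes decisive.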


2021 ◽  
Author(s):  
Moritz Mühlenthaler ◽  
Alexander Raß ◽  
Manuel Schmitt ◽  
Rolf Wanka

Abstract Meta-heuristics are powerful tools for solving optimization problems whose structural properties are unknown or cannot be exploited algorithmically. We propose such a meta-heuristic for a large class of optimization problems over discrete domains based on the particle swarm optimization (PSO) paradigm. We provide a comprehensive formal analysis of the performance of this algorithm on certain “easy” reference problems in a black-box setting, namely the sorting problem and the problem OneMax. In our analysis we use a Markov model of the proposed algorithm to obtain upper and lower bounds on its expected optimization time. Our bounds are essentially tight with respect to the Markov model. We show that for a suitable choice of algorithm parameters the expected optimization time is comparable to that of known algorithms and, furthermore, for other parameter regimes, the algorithm behaves less greedily and more exploratively, which can be desirable in practice in order to escape local optima. Our analysis provides precise insight into the tradeoff between optimization time and exploration. To obtain our results we introduce the notion of indistinguishability of states of a Markov chain and provide bounds on the solution of a recurrence equation with non-constant coefficients by integration.
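The notion of expected optimization time can be illustrated empirically. The sketch below uses randomized local search on OneMax as a simple stand-in (not the authors' PSO variant): averaging hitting times over independent runs estimates the expected runtime, which for RLS on OneMax is Θ(n log n).

```python
import random

def onemax(x):
    return sum(x)

def rls_hitting_time(n, rng, budget=100_000):
    """Randomized local search: flip one uniformly random bit and
    keep the offspring iff it is at least as good as the parent.
    Returns the number of iterations until the optimum is reached."""
    x = [rng.randrange(2) for _ in range(n)]
    for t in range(budget):
        if onemax(x) == n:
            return t
        y = x[:]
        y[rng.randrange(n)] ^= 1
        if onemax(y) >= onemax(x):
            x = y
    return budget

rng = random.Random(0)
n = 10
runs = [rls_hitting_time(n, rng) for _ in range(200)]
avg = sum(runs) / len(runs)   # empirical estimate of the expected runtime
```

For a formal analysis one would instead track the distribution over states (here, fitness levels) as a Markov chain and bound the expected hitting time of the optimal state, which is the approach the abstract describes.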


2013 ◽  
Vol 2013 ◽  
pp. 1-10
Author(s):  
Hamid Reza Erfanian ◽  
M. H. Noori Skandari ◽  
A. V. Kamyad

We present a new approach, based on the generalized derivative, for solving nonsmooth optimization problems and systems of nonsmooth equations. For this purpose, we introduce a first-order generalized Taylor expansion of nonsmooth functions and replace the nonsmooth functions with smooth ones. In other words, a nonsmooth function is approximated by a piecewise-linear function based on the generalized derivative. In the next step, we solve a smooth linear optimization problem whose optimal solution is an approximate solution of the main problem. We then apply the results to solving systems of nonsmooth equations. Finally, numerical examples are presented to demonstrate the efficiency of our approach.
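A minimal sketch of the underlying idea, under the simplifying assumption of one variable and a uniform grid (the paper's method is more general): replace the nonsmooth objective by its piecewise-linear interpolant, whose minimum lies at a breakpoint, so minimizing the surrogate reduces to scanning grid vertices.

```python
def piecewise_linear_minimize(f, a, b, n):
    """Approximate a nonsmooth function on [a, b] by the piecewise-linear
    interpolant on a uniform grid with n segments and minimize the
    interpolant. A piecewise-linear function attains its minimum at a
    breakpoint, so it suffices to evaluate the grid vertices."""
    xs = [a + i * (b - a) / n for i in range(n + 1)]
    i_best = min(range(n + 1), key=lambda i: f(xs[i]))
    return xs[i_best], f(xs[i_best])

# Nonsmooth test objective: minimum value 3, attained on [-2, 1].
f = lambda x: abs(x - 1) + abs(x + 2)
x_star, f_star = piecewise_linear_minimize(f, -5.0, 5.0, 1000)
```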


2012 ◽  
Vol 215-216 ◽  
pp. 592-596
Author(s):  
Li Gao ◽  
Rong Rong Wang

To handle complex product design optimization problems with both discrete and continuous variables, a mixed-variable collaborative design optimization algorithm is put forward based on collaborative optimization, which is an efficient way to solve mixed-variable design optimization problems. Following the "divide and conquer" principle, the algorithm decouples the problem into several relatively simple subsystems. The optimal solution is then obtained through a collaborative mechanism. Finally, a case study demonstrates the feasibility and effectiveness of the new algorithm.


1995 ◽  
Vol 117 (1) ◽  
pp. 155-157 ◽  
Author(s):  
F. C. Anderson ◽  
J. M. Ziegler ◽  
M. G. Pandy ◽  
R. T. Whalen

We have examined the feasibility of using massively-parallel and vector-processing supercomputers to solve large-scale optimization problems for human movement. Specifically, we compared the computational expense of determining the optimal controls for the single support phase of gait using a conventional serial machine (SGI Iris 4D25), a MIMD parallel machine (Intel iPSC/860), and a parallel-vector-processing machine (Cray Y-MP 8/864). With the human body modeled as a 14 degree-of-freedom linkage actuated by 46 musculotendinous units, computation of the optimal controls for gait could take up to 3 months of CPU time on the Iris. Both the Cray and the Intel are able to reduce this time to practical levels. The optimal solution for gait can be found with about 77 hours of CPU on the Cray and with about 88 hours of CPU on the Intel. Although the overall speeds of the Cray and the Intel were found to be similar, the unique capabilities of each machine are better suited to different portions of the computational algorithm used. The Intel was best suited to computing the derivatives of the performance criterion and the constraints whereas the Cray was best suited to parameter optimization of the controls. These results suggest that the ideal computer architecture for solving very large-scale optimal control problems is a hybrid system in which a vector-processing machine is integrated into the communication network of a MIMD parallel machine.


2018 ◽  
Vol 763 ◽  
pp. 295-300 ◽  
Author(s):  
Khaled Saif ◽  
Chin Long Lee ◽  
Trevor Yeow ◽  
Gregory A. MacRae

Nonlinear time-history analyses of SDOF bridge columns with elasto-plastic flexural behaviour subject to eccentric gravity loading are conducted to quantify the effect of ratchetting. Peak and residual displacements are used as indicators of the degree of ratchetting. The effects of member axial loads and design force reduction factors are also investigated. It is shown that displacement demands increase with increasing eccentric moment. For an eccentric moment of 30% of the yield moment, the average maximum and residual displacements are, respectively, 4.2 and 3.8 times the maximum displacement that engineers would calculate using static methods that neglect the ratchetting effect. Design curves for estimating the displacement demands for different eccentric moments are also developed. The current NZ1170.5 (2016) provisions were found to be inadequate for estimating the maximum displacement of steel structures; hence, new provisions for steel structures should be developed.
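The ratchetting mechanism the study quantifies can be sketched with a minimal elastic-perfectly-plastic SDOF model: a constant (eccentric-gravity-like) force offset superimposed on a cyclic load produces one-sided yielding, so plastic drift accumulates cycle by cycle. All parameter values below are arbitrary illustration values, not those of the paper.

```python
import math

def sdof_final_displacement(F0, Fa, cycles=3, Fy=1.0, k=1.0, m=1.0,
                            c=1.0, omega=0.1, dt=0.01):
    """Elastic-perfectly-plastic SDOF under F(t) = F0 + Fa*sin(omega*t),
    integrated with semi-implicit Euler; returns the final displacement."""
    u = v = up = 0.0   # displacement, velocity, plastic offset
    steps = int(cycles * 2 * math.pi / omega / dt)
    for i in range(steps):
        F = F0 + Fa * math.sin(omega * i * dt)
        fs = k * (u - up)              # trial spring force
        if fs > Fy:                    # yielding in the positive sense
            up, fs = u - Fy / k, Fy
        elif fs < -Fy:                 # yielding in the negative sense
            up, fs = u + Fy / k, -Fy
        a = (F - c * v - fs) / m
        v += a * dt
        u += v * dt
    return u

u_sym = sdof_final_displacement(F0=0.0, Fa=0.9)  # symmetric: stays elastic
u_rat = sdof_final_displacement(F0=0.3, Fa=0.9)  # offset: ratchets
```

With the offset, the positive load peak (1.2·Fy here) exceeds the yield force while the negative peak does not, so plastic deformation accumulates in one direction only and the final displacement drifts far beyond the symmetric case.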


2021 ◽  
Vol 1 (2) ◽  
pp. 1-23
Author(s):  
Arkadiy Dushatskiy ◽  
Tanja Alderliesten ◽  
Peter A. N. Bosman

Surrogate-assisted evolutionary algorithms have the potential to be of high value for real-world optimization problems when fitness evaluations are expensive, limiting the number of evaluations that can be performed. In this article, we consider the domain of pseudo-Boolean functions in a black-box setting. Moreover, instead of using a surrogate model as an approximation of a fitness function, we propose to precisely learn the coefficients of the Walsh decomposition of a fitness function and use the Walsh decomposition as a surrogate. If the coefficients are learned correctly, then the Walsh decomposition values perfectly match the fitness function, and, thus, the optimal solution to the problem can be found by optimizing the surrogate without any additional evaluations of the original fitness function. It is known that the Walsh coefficients can be efficiently learned for pseudo-Boolean functions with k-bounded epistasis and known problem structure. We propose to learn dependencies between variables first and, thereby, substantially reduce the number of Walsh coefficients to be calculated. After the accurate Walsh decomposition is obtained, the surrogate model is optimized using GOMEA, which is considered to be a state-of-the-art binary optimization algorithm. We compare the proposed approach with standard GOMEA and two other Walsh decomposition-based algorithms. The benchmark functions in the experiments are well-known trap functions, NK-landscapes, MaxCut, and MAX3SAT problems. The experimental results demonstrate that the proposed approach is scalable, requiring O(ℓ log ℓ) function evaluations when the number of subfunctions is O(ℓ) and all subfunctions are k-bounded, and that it outperforms all considered algorithms.
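For small ℓ, the Walsh coefficients of a pseudo-Boolean function can be computed directly; for a k-bounded function, every coefficient on a subset of more than k variables vanishes, which is what makes sparse learning feasible. A minimal exhaustive sketch (illustrative only; the paper's algorithm learns the coefficients from far fewer evaluations):

```python
from itertools import combinations, product

def walsh_coefficients(f, ell):
    """w_S = 2^-ell * sum_x f(x) * (-1)^{sum_{i in S} x_i}
    over all subsets S of variables, by exhaustive enumeration."""
    coeffs = {}
    points = list(product([0, 1], repeat=ell))
    for r in range(ell + 1):
        for S in combinations(range(ell), r):
            w = sum(f(x) * (-1) ** sum(x[i] for i in S) for x in points)
            coeffs[S] = w / 2 ** ell
    return coeffs

def surrogate(coeffs, x):
    """Evaluate the Walsh decomposition at a point x."""
    return sum(w * (-1) ** sum(x[i] for i in S) for S, w in coeffs.items())

# A 2-bounded fitness: a sum of subfunctions over adjacent bit pairs.
ell = 4
f = lambda x: sum(x[i] * x[i + 1] for i in range(ell - 1))
coeffs = walsh_coefficients(f, ell)
```

Because f is 2-bounded, all coefficients on subsets of size greater than 2 are zero, and the surrogate reproduces f exactly at every point of the domain, so optimizing the surrogate optimizes f.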


2022 ◽  
Vol 0 (0) ◽  
Author(s):  
Fouzia Amir ◽  
Ali Farajzadeh ◽  
Jehad Alzabut

Abstract Multiobjective optimization is optimization with several conflicting objective functions. It is generally difficult, however, to find a single optimal solution that satisfies all objectives from a mathematical frame of reference. The main objective of this article is to present an improved proximal method involving a quasi-distance for constrained multiobjective optimization problems under a locally Lipschitz condition on the cost functions. The motivation for studying the proximal method with quasi-distances comes from the widespread applications of quasi-distances in computer theory. To study the convergence results, Fritz John's necessary optimality condition for weak Pareto solutions is used. Suitable conditions are provided to guarantee that the cluster points of the generated sequences are Pareto–Clarke critical points.
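One common form of such a proximal iteration, shown here as a generic sketch (the symbols are illustrative, not necessarily the authors' exact scheme): given the current iterate \(x^k\), the next iterate solves a regularized scalarized subproblem in which the usual squared distance is replaced by a quasi-distance \(q\), which need not be symmetric in its arguments:

```latex
x^{k+1} \in \operatorname*{arg\,min}_{x \in \Omega_k}
\left\{ \max_{1 \le i \le m} f_i(x)
 + \frac{\alpha_k}{2}\, q\!\left(x, x^k\right)^{2} \right\},
\qquad
\Omega_k = \bigl\{\, x : f_i(x) \le f_i(x^k),\ i = 1,\dots,m \,\bigr\},
```

where \(f_1,\dots,f_m\) are the objective functions and \(\alpha_k > 0\) a regularization parameter; the constraint set \(\Omega_k\) enforces that no objective deteriorates from one iterate to the next.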

