approximation technique
Recently Published Documents

TOTAL DOCUMENTS: 548 (five years: 120)
H-INDEX: 27 (five years: 3)

Author(s):  
Xiao Ma ◽  
Bo Zhou ◽  
Shifeng Xue

Piezoelectric materials play an important role in industry due to a number of beneficial properties. However, most numerical methods for piezoelectric materials require a mesh, and mesh generation and remeshing are prominent difficulties. This paper proposes a Hermite interpolation element-free Galerkin method (HIEFGM) for piezoelectric materials, which combines the Hermite approximation approach with the interpolation element-free Galerkin method (IEFGM). Based on the constitutive equation, the geometric equation, and the Galerkin integral weak form, the HIEFGM formulation for piezoelectric materials is established. In the proposed method, the problem domain is discretized by nodes rather than meshes, so the pre-processing of the numerical computation is simplified. Furthermore, a new approximation technique based on the moving least squares method and the Hermite approximation approach is used to derive the approximation function of the field quantities. The derived approximation function has the Kronecker delta property and accounts for the normal derivatives of the field quantities at boundary nodes, which avoids the difficulty of imposing essential boundary conditions and improves the accuracy of the meshless approximation. The effects of the scaling factor, node density, and node arrangement on the accuracy of the proposed method are investigated. Numerical examples are given to assess the proposed method, and the results uniformly demonstrate that it performs excellently in analyzing piezoelectric materials.
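
As a point of reference, the sketch below shows the plain moving least squares ingredient of such an approximation in one dimension, with a linear basis and a cubic spline weight (common but illustrative choices, not necessarily those of the paper); the Hermite terms that give HIEFGM its Kronecker delta property are omitted.

```python
import numpy as np

def mls_shape_functions(x, nodes, support_radius):
    """1D moving least squares shape functions with a linear basis and a
    cubic spline weight (illustrative choices)."""
    p = lambda s: np.array([1.0, s])                 # linear basis [1, x]
    r = np.abs(x - nodes) / support_radius           # normalized distances
    w = np.where(r <= 0.5, 2/3 - 4*r**2 + 4*r**3,    # cubic spline weight,
        np.where(r <= 1.0, 4/3 - 4*r + 4*r**2 - 4*r**3/3, 0.0))  # zero outside support
    A = np.zeros((2, 2))
    B = np.zeros((2, len(nodes)))
    for i, xi in enumerate(nodes):
        Pi = p(xi)
        A += w[i] * np.outer(Pi, Pi)                 # moment matrix
        B[:, i] = w[i] * Pi
    return p(x) @ np.linalg.solve(A, B)              # shape functions phi_i(x)

# usage: approximate u(x) = sin(x) at x = 1 from scattered nodal values
nodes = np.linspace(0.0, np.pi, 11)
u_nodes = np.sin(nodes)
phi = mls_shape_functions(1.0, nodes, support_radius=0.6)
print(phi @ u_nodes, np.sin(1.0))                    # MLS estimate vs exact value
```

Plain MLS shape functions built this way generally do not interpolate the nodal values, which is precisely the limitation the Hermite interpolation variant is designed to remove.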


2021 ◽  
Vol 12 (1) ◽  
pp. 407
Author(s):  
Tianshan Dong ◽  
Shenyan Chen ◽  
Hai Huang ◽  
Chao Han ◽  
Ziqi Dai ◽  
...  

Truss size and topology optimization problems have recently been solved mainly with metaheuristic methods, which usually require a large number of structural analyses owing to their population-based evolution. A branched multipoint approximation technique has been introduced to decrease the number of structural analyses by substituting approximate functions for the structural analyses within a Genetic Algorithm (GA) that handles continuous size variables and discrete topology variables. For large-scale trusses with many design variables, large changes in the topology variables during the GA search cause a loss of approximation accuracy and make convergence difficult. In this paper, a label-clip-splice method is proposed to remedy this problem in the above hybrid approach. It gradually reduces the current search domain of the GA by clipping and splicing the labeled variables from the chromosomes, and it optimizes the mixed-variable model efficiently with the approximation technique for large-scale trusses. The number of structural analyses required by the proposed method is greatly reduced compared with single metaheuristic methods. Numerical examples are presented to verify the efficacy and advantages of the proposed technique.
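
The abstract does not spell out the labeling rules, so the following is only a hypothetical sketch of a clip-and-splice step on binary topology chromosomes: genes on which the population has (nearly) converged are labeled and frozen, the GA keeps evolving the remaining genes, and a splice function rebuilds the full chromosome before each approximate structural evaluation. The function names and the agreement threshold are invented for illustration.

```python
import numpy as np

def clip_and_splice(population, threshold=0.95):
    """Hypothetical label-clip-splice style reduction: topology genes on which
    the population has (nearly) agreed are labeled and clipped out of the
    evolving chromosome; a splice map restores them before each evaluation.
    The paper's actual rules may differ."""
    agreement = np.mean(population, axis=0)                 # fraction of 1s per gene
    fixed = (agreement >= threshold) | (agreement <= 1 - threshold)
    fixed_values = np.round(agreement).astype(int)

    clipped = population[:, ~fixed]                         # reduced GA search domain

    def splice(reduced_chromosome):
        full = fixed_values.copy()
        full[~fixed] = reduced_chromosome                   # re-insert evolving genes
        return full

    return clipped, splice

# usage on a toy population of 0/1 topology variables
rng = np.random.default_rng(0)
pop = rng.integers(0, 2, size=(20, 12))
pop[:, 3] = 1                                               # a gene the population agrees on
clipped, splice = clip_and_splice(pop)
print(clipped.shape, splice(clipped[0]))
```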


2021 ◽  
Author(s):  
Hayder F.N. Al-Shuka ◽  
Burkhard Corves ◽  
Ehab N. Abbas

Abstract This work deals with the control of rigid-link robotic manipulators equipped with flexible joints. Due to the presence of flexible-joint dynamics, additional degrees of freedom and underactuation arise that complicate the control design. Besides, model uncertainties, unmodeled dynamics, and disturbances should be considered in robot modeling and control. Therefore, this paper proposes a cascade position-torque control strategy based on the function approximation technique (FAT). The key idea is to design two nested loops: 1) an outer position control loop for tracking the reference trajectory, and 2) an inner joint torque control loop that tracks the desired joint torque produced by the outer position loop. The torque control loop makes the robot system more adaptable and compliant to sudden disturbances, and it increases the perception capability of the target robot mechanisms. Adaptive approximation control (AAC) is used as a strong tool for dealing with time-varying uncertain parameters and disturbances. A sliding mode term is easily integrated into the control law structure; however, a constraint on the feedback gains is established to compensate for the modeling (approximation) error. The proposed control architecture can easily be applied to high-degree-of-freedom robotic systems due to the decentralized behavior of the AAC. A two-link manipulator is used for simulation experiments. The simulated robot is commanded to move from rest to desired step references, considering three cases depending on the selected value of the sliding mode time constant. It is shown that selecting a large time constant for the position loop leads to a slow response. Besides, an inherent issue of the inner torque loop is the presence of the derivative of the desired joint torque, which makes the control input jump abruptly at the beginning of the dynamic response. To address this, the derivative of the desired joint torque is approximated using a low-pass filter with a time constant selected carefully such that a feasible dynamic response is ensured. The results show the effectiveness of the proposed controller.
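
The last point can be illustrated with a minimal sketch of a first-order low-pass filtered derivative, which is the kind of approximation the abstract describes for the desired joint torque (the time constant and signal here are illustrative, not taken from the paper):

```python
import numpy as np

def filtered_derivative(signal, dt, tau):
    """Approximate a signal's derivative through a first-order low-pass filter
    with time constant tau (illustrative values)."""
    est = 0.0                                        # filtered derivative estimate
    prev = signal[0]
    out = np.empty_like(signal)
    for k, s in enumerate(signal):
        raw = (s - prev) / dt if k > 0 else 0.0      # backward difference
        est += dt / tau * (raw - est)                # first-order low-pass update
        out[k] = est
        prev = s
    return out

# usage: the derivative of a step-like desired torque is smoothed instead of spiking
t = np.arange(0.0, 2.0, 1e-3)
tau_d = np.where(t < 0.5, 0.0, 1.0)                  # step in the desired torque
dtau_d = filtered_derivative(tau_d, dt=1e-3, tau=0.05)
print(dtau_d.max())                                  # peak stays small compared with the raw 1/dt spike
```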


Machines ◽  
2021 ◽  
Vol 9 (12) ◽  
pp. 372
Author(s):  
Iván González Castillo ◽  
Igor Loboda ◽  
Juan Luis Pérez Ruiz

The lack of gas turbine field data, especially faulty-engine data, and the complexity of embedding faults into gas turbines on test benches make it difficult to represent healthy and faulty engines in diagnostic algorithms. Instead, different gas turbine models are often used. The available models fall into two main categories: physics-based and data-driven. Given the models' importance and necessity, a variety of simulation tools have been developed with different levels of complexity, fidelity, accuracy, and computational requirements. Physics-based models constitute a diagnostic approach known as Gas Path Analysis (GPA). To compute fault parameters within GPA, this paper proposes to employ a nonlinear data-driven model and the theory of inverse problems, which drastically simplifies gas turbine diagnosis. To choose the best approximation technique for such a novel model, the paper evaluates polynomials and neural networks. The necessary data were generated in the GasTurb software for turboshaft and turbofan engines. These input data for creating a nonlinear data-driven model of fault parameters cover the full range of operating conditions and possible performance losses of the engine components. Multiple configurations of a multilayer perceptron network and of polynomials are evaluated to find the best data-driven model configuration. The best perceptron-based and polynomial models are then compared. The accuracy achieved by the most adequate model variation confirms the viability of simple and accurate models for estimating gas turbine health conditions.
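
A minimal sketch of the polynomial variant of such an inverse, data-driven model is given below; it fits a full second-order polynomial from measured gas-path deviations to fault parameters by least squares. The data are synthetic stand-ins, since the paper's data come from GasTurb simulations.

```python
import numpy as np

# Illustrative only: synthetic stand-in for simulator-generated data, mapping
# measured gas-path deviations (inputs) to component fault parameters (outputs).
rng = np.random.default_rng(1)
deviations = rng.normal(size=(500, 4))                        # measured parameter deviations
true_map = rng.normal(size=(4, 2))
faults = deviations @ true_map + 0.1 * deviations[:, :2]**2   # mildly nonlinear dependence
faults += 0.01 * rng.normal(size=faults.shape)                # measurement noise

def quadratic_features(x):
    """Full second-order polynomial basis: 1, x_i, and x_i * x_j."""
    n = x.shape[1]
    cross = [x[:, i] * x[:, j] for i in range(n) for j in range(i, n)]
    return np.column_stack([np.ones(len(x)), x, *cross])

# Fit the inverse (data-driven) model by least squares and check its accuracy.
Phi = quadratic_features(deviations)
coeffs, *_ = np.linalg.lstsq(Phi, faults, rcond=None)
rmse = np.sqrt(np.mean((Phi @ coeffs - faults) ** 2))
print(f"polynomial inverse-model RMSE: {rmse:.4f}")
```

A multilayer perceptron alternative could be fitted to the same data with a library such as scikit-learn and compared on the same error metric, mirroring the comparison reported in the paper.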


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Saira Javed ◽  
F. H. H. Al Mukahal

This research applies higher-order shear deformation theory to analyse the free vibration of composite annular circular plates using the spline approximation technique. Equilibrium equations are derived, and differential equations in terms of displacement and rotational functions are obtained. A cubic or quintic spline is used to approximate the displacement and rotational functions, depending on the order of these functions. A generalized eigenvalue problem is obtained and solved numerically for the eigenfrequency parameter and the associated eigenvector of spline coefficients. The frequencies of annular circular plates with different numbers of layers, each layer consisting of a different material, are analysed. The effect of geometric and material parameters on the frequency is investigated for the simply supported condition. A comparative study with existing results confirms the validity of the present results, which are reported in graphs and tables. Some figures are drawn using Autodesk Maya and MATLAB.
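
The final step described above reduces to a generalized eigenvalue problem of the form K q = lambda M q; a minimal sketch with placeholder matrices (not the paper's spline-derived ones) is:

```python
import numpy as np
from scipy.linalg import eigh

# Placeholder symmetric "stiffness" and "mass" matrices standing in for the
# matrices produced by the spline approximation of the governing equations.
n = 8
rng = np.random.default_rng(0)
A = rng.normal(size=(n, n))
K = A @ A.T + n * np.eye(n)                     # symmetric positive definite stiffness
M = np.eye(n) + 0.1 * np.diag(rng.random(n))    # mass matrix

eigvals, eigvecs = eigh(K, M)                   # generalized symmetric eigenproblem K q = lambda M q
frequencies = np.sqrt(eigvals)                  # frequency parameters from the eigenvalues
print(frequencies[:3])                          # lowest few frequency parameters
```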


2021 ◽  
Author(s):  
◽  
Yiming Peng

Reinforcement Learning (RL) problems appear in diverse real-world applications and are gaining substantial attention in academia and industry. Policy Direct Search (PDS) is widely recognized as an effective approach to RL problems. However, existing PDS algorithms have some major limitations. First, many step-wise Policy Gradient Search (PGS) algorithms cannot effectively utilize informative historical gradients to accurately estimate policy gradients. Second, although evolutionary PDS algorithms do not rely on accurate policy gradient estimations and can explore learning environments effectively, they are not sample efficient when learning policies in the form of deep neural networks. Third, existing PGS algorithms often diverge easily due to the lack of reliable and flexible techniques for value function learning. Fourth, existing PGS algorithms have not provided suitable mechanisms to learn proper state features automatically.

To address these limitations, the overall goal of this thesis is to develop effective policy direct search algorithms for tackling challenging RL problems through technical innovations in four key areas. First, the thesis aims to improve the accuracy of policy gradient estimation by utilizing historical gradients through a Primal-Dual Approximation technique. Second, the thesis aims to surpass state-of-the-art performance by properly balancing the exploration-exploitation trade-off via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) and Proximal Policy Optimization (PPO). Third, the thesis seeks to stabilize value function learning via a self-organized Sandpile Model (SM) while generalizing the compatible condition to support flexible value function learning. Fourth, the thesis endeavors to develop innovative evolutionary feature learning techniques that are capable of automatically extracting useful state features so as to enhance various cutting-edge PGS algorithms.

In the thesis, we explore the four key technical areas by studying policies of increasing complexity. We start from a simple linear policy representation and then proceed to a complex neural-network-based policy representation. Next, we consider a more complicated situation where policy learning is coupled with value function learning. Subsequently, we consider policies modeled as a concatenation of two interrelated networks, one for feature learning and one for action selection.

To achieve the first goal, this thesis proposes a new policy gradient learning framework in which a series of historical gradients are jointly exploited to obtain accurate policy gradient estimations via the Primal-Dual Approximation technique. Under this framework, three new PGS algorithms for step-wise policy training have been derived from three widely used PGS algorithms, and the convergence properties of the new algorithms have been theoretically analyzed. Empirical results on several benchmark control problems further show that the newly proposed algorithms can significantly outperform their base algorithms.

To achieve the second goal, this thesis develops a new sample-efficient evolutionary deep policy optimization algorithm based on CMA-ES and PPO. The algorithm has a layer-wise learning mechanism to improve computational efficiency in comparison to CMA-ES. Additionally, it uses a surrogate model based on a performance lower bound for fitness evaluation, which reduces the sample cost to the state-of-the-art level. More importantly, the best policy found by CMA-ES at every generation is further improved by PPO to properly balance exploration and exploitation. The experimental results confirm that the proposed algorithm outperforms various cutting-edge algorithms on many benchmark continuous control problems.

To achieve the third goal, this thesis develops new value function learning methods that are both reliable and flexible so as to further enhance the effectiveness of policy gradient search. Two Actor-Critic (AC) algorithms have been developed from a commonly used PGS algorithm, namely Regular Actor-Critic (RAC). The first algorithm adopts the SM to stabilize value function learning, and the second generalizes the logarithm function used by the compatible condition to provide a flexible family of new compatible functions. The experimental results show that, with the help of reliable and flexible value function learning, the newly developed algorithms are more effective than RAC on several benchmark control problems.

To achieve the fourth goal, this thesis develops innovative NeuroEvolution algorithms for automated feature learning to enhance various cutting-edge PGS algorithms. The newly developed algorithms not only extract useful state features but also learn good policies. The experimental analysis demonstrates that the newly proposed algorithms achieve better performance on large-scale RL problems in comparison to both well-known PGS algorithms and NeuroEvolution techniques. Our experiments also confirm that the state features learned by NeuroEvolution on one RL task can easily be transferred to boost learning performance on similar but different tasks.
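
As a flavour of the evolutionary side of this work, the toy sketch below runs a deliberately simplified (mu, lambda) evolution strategy, without covariance adaptation, PPO refinement, or the surrogate model, to tune a one-parameter linear policy on a trivial regulation task; everything here is illustrative rather than the thesis algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def episode_return(w, steps=50):
    """Reward a linear policy u = -w * x for driving a scalar state to zero."""
    x, total = 1.0, 0.0
    for _ in range(steps):
        u = float(np.clip(-w[0] * x, -1.0, 1.0))     # saturated linear policy
        x = x + 0.1 * u                              # trivial integrator dynamics
        total -= x * x                               # penalize distance from the target
    return total

mean, sigma, lam, mu = np.zeros(1), 0.5, 16, 4
for gen in range(30):
    samples = mean + sigma * rng.normal(size=(lam, 1))        # sample offspring policies
    returns = np.array([episode_return(w) for w in samples])
    elite = samples[np.argsort(returns)[-mu:]]                # keep the best mu offspring
    mean = elite.mean(axis=0)                                 # recombine into the new mean
print("learned gain:", mean, "return:", episode_return(mean))
```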


Symmetry ◽  
2021 ◽  
Vol 13 (12) ◽  
pp. 2284
Author(s):  
Xuemiao Chen ◽  
Ziwen Wu ◽  
Jing Li ◽  
Qianjin Zhao

In this paper, an adaptive neural dynamic surface control (DSC) method is proposed for a class of uncertain stochastic nonlinear systems with time-varying input delays. The neural network approximation technique is applied to approximate the unknown continuous functions online, and the desired controller is constructed based on the DSC scheme. A compensation system is introduced to counteract the effect of the input delay, and Lyapunov–Krasovskii functionals (LKFs) are employed to compensate for the effect of the state delay. Compared with existing works, by using the DSC scheme with a nonlinear filter and stochastic Barbalat's lemma, the asymptotic regulation performance of the closed-loop system can be guaranteed under the developed controller. Simulation results are presented to verify the effectiveness of the designed control method.
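
For intuition, the sketch below shows the kind of online radial-basis-function approximation commonly used in adaptive neural DSC designs, with a gradient-type weight adaptation law run on a toy target function; the centers, width, and adaptation gain are illustrative assumptions, not the paper's.

```python
import numpy as np

def rbf_features(x, centers, width):
    """Gaussian radial-basis features, a usual building block of adaptive
    neural approximators (illustrative centers and width)."""
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

# Online weight adaptation W_dot = gamma * phi(x) * e, a common adaptive law,
# exercised here on a toy regression task to show the approximation behaviour.
centers = np.linspace(-2.0, 2.0, 9)
W = np.zeros_like(centers)
gamma, width, dt = 5.0, 0.5, 0.01
for step in range(20000):
    x = 2.0 * np.sin(0.001 * step)                   # slowly sweeping, persistently exciting input
    f_true = x ** 2 + np.sin(2.0 * x)                # "unknown" continuous function
    phi = rbf_features(x, centers, width)
    error = f_true - W @ phi                         # instantaneous approximation error
    W += dt * gamma * phi * error                    # gradient-type weight adaptation
print("approximation at x=1:", W @ rbf_features(1.0, centers, width),
      "true value:", 1.0 + np.sin(2.0))              # the two should roughly match
```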


Robotica ◽  
2021 ◽  
pp. 1-19
Author(s):  
Brahim Brahmi ◽  
Maarouf Saad ◽  
Claude El-Bayeh ◽  
Mohammad Habibur Rahman ◽  
Abdelkrim Brahmi

Abstract In this paper, a new adaptive control strategy, based on the Modified Function Approximation Technique, is proposed for a manipulator robot with unknown dynamics. This novel strategy benefits from the backstepping control approach and the use of state and output feedback. Unlike the conventional Function Approximation Technique, the proposed scheme completely eliminates the use of basis functions to approximate the dynamic parameters. Another improvement is eliminating the need to measure velocity by integrating a high-order sliding mode observer. Furthermore, using Lyapunov function theory, it is demonstrated that all controller signals are uniformly ultimately bounded in closed loop. Lastly, simulation and comparative studies are carried out to validate the effectiveness of the proposed control approach.
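
The velocity-observer idea can be illustrated with a super-twisting (first-order sliding mode) differentiator, used here as a minimal stand-in for the high-order sliding mode observer mentioned above; it recovers joint velocity from position measurements alone, with illustrative gains.

```python
import numpy as np

# Super-twisting robust differentiator: z0 tracks the measured position q,
# and z1 converges to its derivative (the joint velocity). Gains are illustrative.
dt, lam0, lam1 = 1e-3, 6.0, 8.0
z0, z1 = 0.0, 0.0                                    # position and velocity estimates
t = np.arange(0.0, 3.0, dt)
q = np.sin(2.0 * t)                                  # measured joint position
for qk in q:
    e = z0 - qk                                      # observation error
    z0 += dt * (-lam0 * np.sqrt(abs(e)) * np.sign(e) + z1)
    z1 += dt * (-lam1 * np.sign(e))
print("estimated velocity:", z1, " true velocity:", 2.0 * np.cos(2.0 * t[-1]))
```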

