Optimal low-level control strategies for a high-performance hybrid electric power unit

2020 ◽  
Vol 276 ◽  
pp. 115248 ◽  
Author(s):  
Camillo Balerna ◽  
Nicolas Lanzetti ◽  
Mauro Salazar ◽  
Alberto Cerofolini ◽  
Christopher Onder
Author(s):  
Erik Chumacero-Polanco ◽  
James Yang

Abstract People who have suffered a transtibial amputation show diminished ambulation and impaired quality of life. Powered ankle foot prostheses (AFPs) are used to recover some mobility for transtibial amputees (TTAs). The powered AFP is an emerging technology with great potential to improve the quality of life of TTAs and with important avenues for research and development in different fields. This paper presents a survey of sensing systems and control strategies applied to powered AFPs. Sensing kinematic and kinetic information in powered AFPs is critical for control. Ankle joint angle is commonly obtained via potentiometers and encoders installed directly on the joint, velocities can be estimated using numerical differentiation, and accelerations are normally obtained via inertial measurement units (IMUs). Kinetic information, in contrast, is usually obtained via strain gauges and torque sensors. Control strategies are classified as high- and low-level. The high-level control sets torque or position references based on pattern generators, recognition of the user’s intent of motion, or finite-state machines. The low-level control usually consists of linear controllers that drive the ankle joint’s position, velocity, or torque to follow an imposed reference signal. The most widely used strategy combines a finite-state machine for the high-level control with a proportional-derivative torque controller for the low-level control. Most designs have been experimentally assessed with acceptable results in terms of walking speed. However, drawbacks related to the weight and battery autonomy of powered AFPs remain to be overcome. Future research should focus on reducing powered AFP size and weight, increasing energy efficiency, and improving both the high- and low-level controllers in terms of efficiency and performance.
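The hierarchy the survey describes can be sketched in a few lines. This is not code from any surveyed prosthesis; the gait phases, reference torques, and PD gains below are hypothetical placeholders chosen only to illustrate the finite-state-machine-plus-PD-torque structure.

```python
# Illustrative sketch of the surveyed control hierarchy: a finite-state
# machine (high level) maps the gait phase to a torque reference, and a
# proportional-derivative loop (low level) tracks it. All numbers are
# hypothetical.

PHASE_TORQUE = {"stance": 40.0, "push_off": 90.0, "swing": 0.0}  # N*m, illustrative

def high_level(phase: str) -> float:
    """Finite-state machine: select the torque reference for a gait phase."""
    return PHASE_TORQUE[phase]

def pd_torque_control(tau_ref: float, tau_meas: float, tau_rate: float,
                      kp: float = 5.0, kd: float = 0.1) -> float:
    """PD low-level control: command proportional to torque error,
    damped by the measured torque rate."""
    return kp * (tau_ref - tau_meas) - kd * tau_rate

# One control step during push-off with a measured torque of 60 N*m:
cmd = pd_torque_control(high_level("push_off"), tau_meas=60.0, tau_rate=2.0)
```

In a real device the FSM transitions would be triggered by the IMU and load-cell signals described above, and the PD output would feed the motor driver.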


2021 ◽  
Vol 5 (4) ◽  
pp. 1-24
Author(s):  
Siddharth Mysore ◽  
Bassel Mabsout ◽  
Kate Saenko ◽  
Renato Mancuso

We focus on the problem of reliably training Reinforcement Learning (RL) models (agents) for stable low-level control in embedded systems, and test our methods on a high-performance, custom-built quadrotor platform. A common but often under-studied problem in developing RL agents for continuous control is that the resulting control policies are not always smooth. This lack of smoothness can be a major problem when learning controllers, as it can result in control instability and hardware failure. Issues of noisy control are further accentuated when training RL agents in simulation, because simulators are ultimately imperfect representations of reality—what is known as the reality gap. To combat instability in RL agents, we propose a systematic framework, REinforcement-based transferable Agents through Learning (RE+AL), for designing simulated training environments that preserve the quality of trained agents when transferred to real platforms. RE+AL is an evolution of the Neuroflight infrastructure detailed in technical reports prepared by members of our research group. Neuroflight is a state-of-the-art framework for training RL agents for low-level attitude control. RE+AL improves and completes Neuroflight by solving a number of important limitations that hindered its deployment on real hardware. We benchmark RE+AL on the NF1 racing quadrotor developed as part of Neuroflight. We demonstrate that RE+AL significantly mitigates the previously observed smoothness issues in RL agents. Additionally, RE+AL consistently trains agents that are flight-capable, with minimal degradation in controller quality upon transfer. RE+AL agents also learn to outperform a tuned PID controller, achieving lower tracking error, smoother control, and reduced power consumption. To the best of our knowledge, RE+AL agents are the first RL-based controllers trained in simulation to outperform a well-tuned PID controller on a real-world controls problem that is solvable with classical control.
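The abstract does not spell out how RE+AL shapes its rewards, but a common way to encourage the smoothness it targets is to penalize the change between consecutive actions alongside the tracking term. The function below is a generic sketch of that idea, not RE+AL's actual objective; the weights are arbitrary.

```python
# Generic sketch of a smoothness-regularized reward for continuous control
# (not RE+AL's actual reward). Penalizing |a_t - a_{t-1}| discourages the
# oscillatory, "noisy" motor commands that damage hardware.

def smooth_reward(tracking_error: float,
                  action: list[float],
                  prev_action: list[float],
                  w_err: float = 1.0,
                  w_smooth: float = 0.1) -> float:
    """Reward = -(tracking penalty) - (action-change penalty)."""
    jerk = sum(abs(a - p) for a, p in zip(action, prev_action))
    return -w_err * abs(tracking_error) - w_smooth * jerk
```

During training, each transition's reward would be computed from the attitude-rate error and the last two motor commands; raising `w_smooth` trades tracking aggressiveness for smoother actuation.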


2018 ◽  
Vol 5 (5) ◽  
pp. 33-38
Author(s):  
Konstantin L. KOVALEV ◽  
Vladimir T. PENKIN ◽  
Valery S. SEMENIKHIN ◽  
Yekaterina Ye TULINOVA ◽  
...  

1993 ◽  
Vol 28 (11-12) ◽  
pp. 531-538 ◽  
Author(s):  
B. Teichgräber

A nitrification/denitrification process was applied to the treatment of reject water from sludge dewatering at the Bottrop central sludge treatment facilities of the Emschergenossenschaft. On-line monitoring of influent and effluent turbidity, closed-loop control of DO and pH, and on-line monitoring of nitrogen compounds were combined into a three-level control scheme. Although the on-line measurement of substrate and product showed a substantial response time, it could be used to operate nitrification/denitrification within process boundaries.
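The three-level scheme can be pictured as a supervisory layer adjusting setpoints for a closed control loop. The sketch below is not the plant's actual logic; the ammonium threshold, DO setpoints, and hysteresis band are hypothetical values chosen only to show the layering.

```python
# Illustrative sketch of a layered control pattern for nitrification:
# an on-line nitrogen measurement (supervisory level) adjusts the DO
# setpoint, and a simple on/off loop (closed-loop level) holds DO near
# it. All thresholds are hypothetical.

def do_setpoint(ammonium_mg_l: float, base: float = 2.0) -> float:
    """Supervisory level: raise the DO setpoint when ammonium is high,
    pushing nitrification harder."""
    return base + (1.0 if ammonium_mg_l > 5.0 else 0.0)

def aeration_on(do_meas: float, setpoint: float, hysteresis: float = 0.2) -> bool:
    """Closed-loop level: switch aeration on when DO falls below the
    setpoint minus a hysteresis band, to avoid rapid cycling."""
    return do_meas < setpoint - hysteresis
```

The substantial sensor response time noted in the abstract is exactly why such a scheme keeps the fast on/off loop on DO and relegates the slow nitrogen measurement to setpoint adjustment.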


Author(s):  
Breno A. de Melo Menezes ◽  
Nina Herrmann ◽  
Herbert Kuchen ◽  
Fernando Buarque de Lima Neto

Abstract Parallel implementations of swarm intelligence algorithms such as ant colony optimization (ACO) have been widely used to shorten the execution time when solving complex optimization problems. When aiming for a GPU environment, developing efficient parallel versions of such algorithms in CUDA can be a difficult and error-prone task even for experienced programmers. To overcome this issue, the parallel programming model of algorithmic skeletons simplifies parallel programs by abstracting from low-level features. This is realized by defining common programming patterns (e.g. map, fold and zip) that are later converted into efficient parallel code. In this paper, we show how algorithmic skeletons formulated in the domain-specific language Musket can cope with the development of a parallel implementation of ACO, and how that compares to a low-level implementation. Our experimental results show that Musket suits the development of ACO. Besides making it easier for the programmer to deal with the parallelization aspects, Musket generates high-performance code with execution times similar to low-level implementations.
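The map, fold, and zip patterns named above can be illustrated in plain Python. This is not Musket code; a skeleton framework would compile these same patterns into parallel GPU kernels, whereas here they run sequentially. The ACO-flavored data (pheromone and heuristic values) are invented for illustration.

```python
# Sequential sketch of the map/fold/zip skeletons, using an ACO-style
# computation: combine pheromone and heuristic values into edge
# attractiveness, then normalize into selection probabilities.
from functools import reduce

pheromone = [0.5, 1.0, 2.0]   # illustrative pheromone levels per edge
heuristic = [2.0, 1.0, 0.5]   # illustrative heuristic desirability per edge

# zip + map skeleton: element-wise combination of two input lists
attractiveness = [p * h for p, h in zip(pheromone, heuristic)]

# fold skeleton: reduce the list to a single normalization constant
total = reduce(lambda acc, x: acc + x, attractiveness, 0.0)

# map skeleton again: normalize to edge-selection probabilities
probabilities = [a / total for a in attractiveness]
```

Because each pattern's iterations are independent (map/zip) or associatively combinable (fold), a skeleton compiler can parallelize them without the programmer writing any CUDA.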

