Stochastic Optimization of Impedance Parameters for a Powered Prosthesis Using a 3D Simulation Environment

Author(s):  
Jonathan Camargo ◽  
Krishan Bhakta ◽  
Aaron Young

Developing controllers for powered prostheses is a daunting task that requires involvement from clinicians, patients, and robotics experts. Difficulties arise in tuning prosthetic devices that must perform in multiple conditions and provide more functionality to the user. For this reason, we propose a simulation framework based on the open-source 3D simulation environment Gazebo to reduce the burden of experimentation and aid in the early stages of development. In this study, we present a minimalist plugin for the simulator that interfaces a virtual model with the native prosthesis controller and casts the tuning of impedance parameters as a pattern search problem. To demonstrate this approach, we used the framework to obtain the parameters that yield the joint trajectory most similar to the respective biological counterpart during the swing phase of ground-level walking. The optimization results are compared against the response of a physical 2-DOF knee-ankle prosthesis operating under the optimized parameters, showing congruence with our model-based results. We found that a simulation-based solution is capable of finding parameters that create an emergent behavior approximating the kinematic trajectory goals within a tolerance (mean absolute error ∼10%). This provides an appropriate method for developing and evaluating impedance-based controllers before deployment to the physical device.

2020 ◽  
pp. 000370282097751
Author(s):  
Xin Wang ◽  
Xia Chen

Many spectra have a polynomial-like baseline. Iterative polynomial fitting (IPF) is one of the most popular methods for baseline correction of these spectra. However, the baseline estimated by IPF may have substantial error when the spectrum contains significantly strong peaks or has strong peaks located at its endpoints. First, IPF uses a temporary baseline estimated from the current spectrum to identify peak data points. If the current spectrum contains strong peaks, then the temporary baseline deviates substantially from the true baseline. Some good baseline data points of the spectrum might be mistakenly identified as peak data points and artificially re-assigned a low value. Second, if a strong peak is located at an endpoint of the spectrum, then the endpoint region of the estimated baseline might have significant error due to overfitting. This study proposes a search algorithm-based baseline correction method (SA) that compresses the raw spectrum into a dataset with a small number of data points and then converts the peak removal process into a search problem in artificial intelligence (AI): minimizing an objective function by deleting peak data points. First, the raw spectrum is smoothed by the moving-average method to reduce noise and then divided into dozens of unequally spaced sections on the basis of Chebyshev nodes. Finally, the minimal points of each section are collected to form a dataset for peak removal through the search algorithm. SA selects the mean absolute error (MAE) as the objective function because of its sensitivity to overfitting and its rapid calculation. The baseline correction performance of SA is compared with those of three baseline correction methods: the Lieber and Mahadevan-Jansen method, the adaptive iteratively reweighted penalized least squares method, and the improved asymmetric least squares method. Simulated and real FTIR and Raman spectra with polynomial-like baselines are employed in the experiments.
Results show that for these spectra, the baseline estimated by SA has smaller error than those estimated by the three other methods.
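The pipeline described above (smooth, section at Chebyshev nodes, keep section minima, then delete peak points to minimize MAE) can be illustrated on a synthetic spectrum. The sketch below is a simplified stand-in: it substitutes a greedy point-deletion loop for the paper's search algorithm, and the signal parameters (cubic baseline, two Gaussian peaks, noise level) are made-up test values.

```python
import numpy as np

def chebyshev_edges(n_points, n_sections):
    """Section boundaries clustered toward both ends (Chebyshev nodes)."""
    k = np.arange(n_sections + 1)
    nodes = np.cos(np.pi * k / n_sections)            # 1 ... -1
    idx = np.round((1.0 - nodes) / 2.0 * (n_points - 1)).astype(int)
    return np.unique(idx)

def compress(y, n_sections):
    """Keep the index and value of the minimum of each section."""
    edges = chebyshev_edges(len(y), n_sections)
    idx = np.array([a + int(np.argmin(y[a:b + 1]))
                    for a, b in zip(edges[:-1], edges[1:])])
    return idx, y[idx]

def fit_mae(x, y, degree):
    coef = np.polyfit(x, y, degree)
    return float(np.mean(np.abs(np.polyval(coef, x) - y))), coef

def remove_peaks(x, y, degree=3):
    """Greedy stand-in for the search: delete the point whose removal most
    reduces the polynomial-fit MAE, until the gain becomes marginal."""
    keep = np.ones(len(x), dtype=bool)
    best, _ = fit_mae(x, y, degree)
    while keep.sum() > degree + 2:
        candidates = []
        for i in np.where(keep)[0]:
            trial = keep.copy()
            trial[i] = False
            e, _ = fit_mae(x[trial], y[trial], degree)
            candidates.append((e, i))
        e, i = min(candidates)
        if e >= 0.9 * best:          # no substantial improvement left
            break
        best = e
        keep[i] = False
    return fit_mae(x[keep], y[keep], degree)[1]

# Synthetic spectrum: cubic baseline + two Gaussian peaks + noise (made up).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
baseline = 2.0 + t - 3.0 * t**2 + t**3
peaks = (5.0 * np.exp(-((t - 0.3) / 0.01) ** 2)
         + 3.0 * np.exp(-((t - 0.7) / 0.02) ** 2))
y = baseline + peaks + rng.normal(0.0, 0.01, t.size)

# moving-average smoothing with edge padding, then compress and fit
y_smooth = np.convolve(np.pad(y, 2, mode="edge"), np.ones(5) / 5, mode="valid")
idx, y_min = compress(y_smooth, n_sections=40)
coef = remove_peaks(t[idx], y_min)
estimated = np.polyval(coef, t)
err = float(np.mean(np.abs(estimated - baseline)))
```

Taking section minima already biases the compressed dataset toward the baseline; the deletion loop then strips the few points still contaminated by peaks before the final polynomial fit.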


Author(s):  
Samir Kumar Hati ◽  
Nimai Pada Mandal ◽  
Dipankar Sanyal

Losses in control valves drag down the average overall efficiency of electrohydraulic systems to only about 22% from nearly 75% for standard pump-motor sets. For achieving higher energy efficiency in slower systems, direct pump control replacing fast-response valve control is being put in place through variable-speed motors. Despite the promise of a quicker response, displacement control of pumps has seen slower progress for exhibiting undesired oscillation with respect to the demand in some situations. Hence, a mechatronic simulation-based design is taken up here for a variable-displacement pump-controlled system directly feeding a double-acting single-rod cylinder. The most significant innovation centers on designing an axial-piston pump with an electrohydraulic compensator for bi-directional swashing. An accumulator is conceived to handle the flow difference between the two sides across the load piston. A solenoid-driven sequence valve with P control is proposed for charging the accumulator, along with setting its initial gas pressure by a feedforward design. Simple proportional–integral–derivative control of the compensator valve is considered in this exploratory study. Appropriate setting of the gains and critical sizing of the compensator have been obtained through a detailed parametric study aiming at a low integral absolute error. A notable finding of the simulation is the achievement of the concurrent minimum integral absolute error of 3.8 mm s and the maximum energy saving of 516 kJ with respect to a fixed-displacement pump. This is predicted for the combination of the circumferential port width of 2 mm for the compensator valve and the radial clearance of 40 µm between each compensator cylinder and the paired piston.
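The parametric-study idea (sweep the controller settings, score each combination by integral absolute error) can be sketched with a toy plant. The first-order lag below is a hypothetical stand-in for the compensator-valve/swash-plate dynamics, and the gains, time constant, and setpoint are illustrative values, not the paper's.

```python
def simulate_iae(kp, ki, kd, setpoint=0.05, dt=0.001, t_end=2.0, tau=0.08):
    """PID loop on a first-order lag x' = (u - x)/tau, a hypothetical
    stand-in for the compensator dynamics; returns the integral absolute
    error (IAE) of the step response, in unit*seconds."""
    x = 0.0
    integral = 0.0
    prev_err = setpoint
    iae = 0.0
    for _ in range(int(t_end / dt)):
        err = setpoint - x
        integral += err * dt
        derivative = (err - prev_err) / dt
        prev_err = err
        u = kp * err + ki * integral + kd * derivative
        x += (u - x) / tau * dt          # explicit Euler step of the plant
        iae += abs(err) * dt
    return iae

# Coarse parametric study over the three gains (all grid values illustrative).
grid = [(simulate_iae(kp, ki, kd), (kp, ki, kd))
        for kp in (1.0, 5.0, 10.0)
        for ki in (0.0, 1.0, 5.0)
        for kd in (0.0, 0.01)]
iae_min, best_gains = min(grid)
```

IAE is a convenient scalar for such sweeps because it penalizes both sluggish transients and residual steady-state offset in a single number.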


Algorithms ◽  
2021 ◽  
Vol 14 (10) ◽  
pp. 296
Author(s):  
Lucy Blondell ◽  
Mark Z. Kos ◽  
John Blangero ◽  
Harald H. H. Göring

Statistical analysis of multinomial data in complex datasets often requires estimation of the multivariate normal (MVN) distribution for models in which the dimensionality can easily reach 10–1000 and higher. Few algorithms for estimating the MVN distribution can offer robust and efficient performance over such a range of dimensions. We report a simulation-based comparison of two algorithms for the MVN that are widely used in statistical genetic applications. The venerable Mendell-Elston approximation is fast, but execution time increases rapidly with the number of dimensions, estimates are generally biased, and an error bound is lacking. The correlation between variables significantly affects its absolute error but not its overall execution time. The Monte Carlo-based approach described by Genz returns unbiased and error-bounded estimates, but its execution time is more sensitive to the correlation between variables. For ultra-high-dimensional problems, however, the Genz algorithm exhibits better scaling characteristics and greater time-weighted efficiency of estimation.
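The estimate-plus-error-bound interface that distinguishes the Genz approach can be made concrete with a plain Monte Carlo estimator of a multivariate normal (MVN) orthant probability. The sketch below is not Genz's algorithm, which uses a separation-of-variables transform and converges far faster; it is a crude sampler that only illustrates returning an unbiased estimate together with its error bound. The dimension and correlation are arbitrary.

```python
import numpy as np

def mvn_prob_mc(upper, corr, n=200_000, seed=1):
    """Plain Monte Carlo estimate of P(X <= upper) for X ~ N(0, corr),
    returned with a standard-error bound. (Crude stand-in for Genz's
    method; illustrates the estimate-plus-error-bound interface only.)"""
    rng = np.random.default_rng(seed)
    d = len(upper)
    samples = rng.multivariate_normal(np.zeros(d), corr, size=n)
    hits = np.all(samples <= upper, axis=1)
    p = float(hits.mean())
    se = float(hits.std(ddof=1) / np.sqrt(n))   # Monte Carlo error bound
    return p, se

# Equicorrelated 5-dimensional example (dimension and rho are arbitrary).
d = 5
rho = 0.3
corr = np.full((d, d), rho) + (1.0 - rho) * np.eye(d)
p, se = mvn_prob_mc(np.zeros(d), corr)
```

Even this naive sampler shows why error-bounded Monte Carlo methods are attractive: the bound shrinks as 1/sqrt(n), whereas an approximation like Mendell-Elston gives no handle on its bias.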


2019 ◽  
pp. 298-313
Author(s):  
Jose Maria Cela-Ranilla ◽  
Luis Marqués Molías ◽  
Mercè Gisbert Cervera

This study analyzes the use of learning patterns as a grouping criterion for developing learning activities in a 3D simulation environment at the university level. Participants included 72 Spanish students from the Education and Marketing disciplines. Descriptive statistics and non-parametric tests were conducted. The process was analyzed by means of teamwork measurements, and the product was analyzed by assessing the final group performance. Results showed that learning patterns can be an effective criterion for forming work groups, especially when the students do not know each other.

