Influence of Attachment Pressure and Kinematic Configuration on pHRI with Wearable Robots

2009 ◽  
Vol 6 (2) ◽  
pp. 157-173 ◽  
Author(s):  
André Schiele ◽  
Frans C. T. van der Helm

The goal of this paper is to show the influence of exoskeleton attachment, such as the pressure on the fixation cuffs and alignment of the robot joint to the human joint, on subjective and objective performance metrics (i.e. comfort, mental load, interface forces, tracking error and available workspace) during a typical physical human-robot interaction (pHRI) experiment. A mathematical model of a single degree of freedom interaction between humans and a wearable robot is presented and used to explain the causes and characteristics of interface forces between the two. The pHRI model parameters (real joint offsets, attachment stiffness) are estimated from experimental interface force measurements acquired during tests with 14 subjects. Insights gained by the model allow optimisation of the exoskeleton kinematics. This paper shows that offsets of more than ±10 cm exist between human and robot axes of rotation, even if a well-designed exoskeleton is aligned properly before motion. Such offsets can create interface loads of up to 200 N and 1.5 Nm in the absence of actuation. The optimal attachment pressure is determined to be 20 mmHg and the attachment stiffness is about 300 N/m. Inclusion of passive compensation joints in the exoskeleton is shown to lower the interaction forces significantly, which enables a more ergonomic pHRI.
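The force-generation mechanism can be made concrete with a small numerical sketch. The following planar, single-degree-of-freedom illustration is a toy model under assumed geometry, not the authors' pHRI model: a cuff carried rigidly by the human limb attaches through a linear spring to an exoskeleton link rotating about an offset axis, so any radial mismatch loads the interface. Only the ~300 N/m cuff stiffness is taken from the abstract; the offsets and link lengths are placeholders.

```python
import numpy as np

# Toy planar model: human joint at the origin, robot joint displaced by axis_offset.
# The cuff sits a fixed distance along the human limb; the robot link expects it at a
# fixed distance along itself, so any radial mismatch stretches the cuff spring and
# produces an interface force. All values except the ~300 N/m cuff stiffness
# (from the abstract's estimate) are illustrative placeholders.

def interface_force(theta, axis_offset=(0.02, 0.05), cuff_pos=0.25,
                    robot_attach=0.25, cuff_stiffness=300.0):
    """Approximate cuff interface force [N] at human joint angle theta [rad]."""
    p_cuff = cuff_pos * np.array([np.cos(theta), np.sin(theta)])   # cuff on the human limb
    r = np.linalg.norm(p_cuff - np.asarray(axis_offset))           # distance from robot axis
    return cuff_stiffness * abs(r - robot_attach)                  # spring force from mismatch

for theta in np.linspace(0.0, np.pi / 2, 5):
    print(f"theta = {theta:4.2f} rad -> interface force ~ {interface_force(theta):4.1f} N")
```

Even this crude model shows the interface load growing with joint excursion when the axes are misaligned, which is exactly the effect the passive compensation joints are added to relieve.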

Robotica ◽  
1995 ◽  
Vol 13 (1) ◽  
pp. 11-18 ◽  
Author(s):  
M. M. Bridges ◽  
J. Cai ◽  
D. M. Dawson ◽  
M. T. Grabbe

Summary: In this paper we present the results obtained from the implementation of a robust position/force controller on a two-degree-of-freedom direct-drive robot. The controller is based on the theoretical work presented in references 1 and 2, which guarantees Globally Uniformly Ultimately Bounded (GUUB) position tracking error and bounded force tracking error. The controller achieves this stability result in spite of robot model uncertainty and requires only joint position and velocity measurements, end-effector force measurements, and bounds on the model parameters. Experimental results described in this paper serve to verify the theoretical claims.
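For readers unfamiliar with position/force control, the skeleton below illustrates the kind of signals such a controller consumes (joint positions and velocities, end-effector forces, and a split between force- and position-controlled axes). It is a generic, textbook hybrid control structure with placeholder gains and a user-supplied Jacobian stub, not the robust GUUB controller of references 1 and 2.

```python
import numpy as np

# Generic hybrid position/force control skeleton (illustrative only, not the robust
# GUUB controller of refs. 1 and 2). The jacobian() callable and all gains are
# placeholders supplied by the user.

def hybrid_control(q, dq, x, x_des, f_meas, f_des, jacobian,
                   Kp=50.0, Kd=5.0, Kf=0.5):
    S = np.diag([0.0, 1.0])                          # axis 1 force-controlled, axis 0 position-controlled
    I = np.eye(2)
    J = jacobian(q)                                  # 2x2 task-space Jacobian (stub)
    dx = J @ dq                                      # task-space velocity
    w_pos = (I - S) @ (Kp * (x_des - x) - Kd * dx)   # position loop
    w_frc = S @ (f_des + Kf * (f_des - f_meas))      # force loop with feedforward
    return J.T @ (w_pos + w_frc)                     # commanded joint torques
```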


Mathematics ◽  
2021 ◽  
Vol 9 (5) ◽  
pp. 543
Author(s):  
Alejandra Ríos ◽  
Eusebio E. Hernández ◽  
S. Ivvan Valdez

This paper introduces a two-stage method based on bio-inspired algorithms for the design optimization of a class of general Stewart platforms. The first stage performs a mono-objective optimization in order to reach, with sufficient dexterity, a regular target workspace while minimizing the elements' lengths. For this optimization problem, we compare three bio-inspired algorithms: the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and the Boltzmann Univariate Marginal Distribution Algorithm (BUMDA). The second stage looks for the most suitable gains of a Proportional-Integral-Derivative (PID) controller via the minimization of two conflicting objectives: one based on energy consumption and the other on the tracking error of a target trajectory. To this effect, we compare two multi-objective algorithms: the Multiobjective Evolutionary Algorithm based on Decomposition (MOEA/D) and the Non-dominated Sorting Genetic Algorithm-III (NSGA-III). The main contributions lie in the optimization model, the proposed two-stage optimization method, and the findings on the performance of different bio-inspired algorithms for each stage. Furthermore, we show optimized designs delivered by the proposed method and provide guidance on the best-performing algorithms through performance metrics and statistical hypothesis tests.
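To make the stage-1 objective more tangible, here is a structural sketch that rewards dexterity (measured as the inverse condition number of the Jacobian) over a sampled target workspace while penalising total leg length. The `platform_jacobian` callable, the `design` dictionary, and all weights are assumptions made for illustration; the actual Stewart-platform kinematics and the GA/PSO/BUMDA search loops are not reproduced here.

```python
import numpy as np

# Sketch of a mono-objective stage-1 fitness (to be minimised). The design
# representation and platform_jacobian() are hypothetical stand-ins.

def local_dexterity(J):
    """Inverse condition number of the Jacobian (1 = isotropic, 0 = singular)."""
    return 1.0 / np.linalg.cond(J)

def stage1_fitness(design, workspace_poses, platform_jacobian,
                   min_dexterity=0.2, length_weight=1e-3):
    dex = np.array([local_dexterity(platform_jacobian(design, pose))
                    for pose in workspace_poses])
    # Heavy penalty for workspace poses that fall below the dexterity threshold
    penalty = 1e3 * np.sum(np.maximum(0.0, min_dexterity - dex))
    total_length = np.sum(design["leg_lengths"])      # element lengths to be minimised
    return penalty + length_weight * total_length - dex.mean()
```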


Author(s):  
Vincent Aloi ◽  
Caroline Black ◽  
Caleb Rucker

Parallel continuum robots can provide compact, compliant manipulation of tools in robotic surgery and larger-scale human-robot interaction. In this paper we address stiffness control of parallel continuum robots using a general nonlinear kinetostatic modeling framework based on Cosserat rods. We use a model formulation that estimates the applied end-effector force and pose from actuator force measurements. An integral control approach then modifies the commanded target position based on the desired stiffness behavior and the estimated force and position. We then use low-level position control of the actuators to achieve the modified target position. Experimental results show that, after calibration of a single model parameter, the proposed approach achieves accurate stiffness control in various directions and poses.
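One plausible discrete-time reading of this integral approach, sketched under assumed gains and stiffness values rather than the authors' exact formulation: the commanded target is nudged each cycle until the force estimated from the actuator measurements matches the force a virtual spring of the desired stiffness would apply at the estimated pose.

```python
import numpy as np

# Illustrative integral stiffness-control update (not necessarily the authors'
# exact law). K_des, Ki and dt are placeholder values.

def stiffness_control_step(x_cmd, x_des, x_est, f_est,
                           K_des=np.diag([200.0, 200.0, 200.0]),  # desired stiffness [N/m]
                           Ki=1e-4, dt=0.01):
    f_target = K_des @ (x_des - x_est)              # force the desired stiffness implies
    x_cmd = x_cmd + Ki * (f_target - f_est) * dt    # integral correction of the command
    return x_cmd                                    # passed to the low-level position loop
```

The low-level actuator position controller then tracks the modified command, so the integral action only has to shape the quasi-static force-deflection behaviour.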


2021 ◽  
Vol 21 (8) ◽  
pp. 2447-2460
Author(s):  
Stuart R. Mead ◽  
Jonathan Procter ◽  
Gabor Kereszturi

Abstract. The use of mass flow simulations in volcanic hazard zonation and mapping is often limited by model complexity (i.e. uncertainty in correct values of model parameters), a lack of model uncertainty quantification, and limited approaches to incorporate this uncertainty into hazard maps. When quantified, mass flow simulation errors are typically evaluated on a pixel-pair basis, using the difference between simulated and observed (“actual”) map-cell values to evaluate the performance of a model. However, these comparisons conflate location and quantification errors, neglecting possible spatial autocorrelation of evaluated errors. As a result, model performance assessments typically yield moderate accuracy values. In this paper, similarly moderate accuracy values were found in a performance assessment of three depth-averaged numerical models using the 2012 debris avalanche from the Upper Te Maari crater, Tongariro Volcano, as a benchmark. To provide a fairer assessment of performance and evaluate spatial covariance of errors, we use a fuzzy set approach to indicate the proximity of similarly valued map cells. This “fuzzification” of simulated results yields improvements in targeted performance metrics relative to a length scale parameter at the expense of decreases in opposing metrics (e.g. fewer false negatives result in more false positives) and a reduction in resolution. The use of this approach to generate hazard zones incorporating the identified uncertainty and associated trade-offs is demonstrated and indicates a potential use for informed stakeholders by reducing the complexity of uncertainty estimation and supporting decision-making from simulated data.
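The neighbourhood idea behind the fuzzy comparison can be illustrated with a small raster sketch. This is an illustrative proxy using binary footprints and a square neighbourhood, not the paper's exact fuzzy-set formulation: a simulated cell counts as a hit if an observed cell of the same class lies within a chosen length scale.

```python
import numpy as np
from scipy.ndimage import binary_dilation

# Neighbourhood ("fuzzy") agreement between binary hazard footprints: a simulated
# cell is a hit if an observed cell lies within `length_scale` cells of it.
# Illustrative proxy only, not the paper's fuzzy-set implementation.

def fuzzy_hit_rate(simulated, observed, length_scale=2):
    """Fraction of simulated footprint cells within length_scale of the observed footprint."""
    size = 2 * length_scale + 1
    observed_grown = binary_dilation(observed.astype(bool),
                                     structure=np.ones((size, size), bool))
    sim = simulated.astype(bool)
    return np.logical_and(sim, observed_grown).sum() / max(sim.sum(), 1)
```

Increasing `length_scale` removes misses caused purely by location error but, as noted above, it also coarsens the effective resolution and inflates agreement elsewhere.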


Robotics ◽  
2020 ◽  
Vol 9 (2) ◽  
pp. 21 ◽  
Author(s):  
Zhanat Makhataeva ◽  
Huseyin Varol

Augmented reality (AR) is used to enhance perception of the real world by integrating virtual objects into image sequences acquired from various camera technologies. Numerous AR applications in robotics have been developed in recent years. The aim of this paper is to provide an overview of AR research in robotics during the five-year period from 2015 to 2019. We classify these works by application area into four categories: (1) Medical robotics: robot-assisted surgery (RAS), prosthetics, rehabilitation, and training systems; (2) Motion planning and control: trajectory generation, robot programming, simulation, and manipulation; (3) Human-robot interaction (HRI): teleoperation, collaborative interfaces, wearable robots, haptic interfaces, brain-computer interfaces (BCIs), and gaming; (4) Multi-agent systems: use of visual feedback to remotely control drones, robot swarms, and robots sharing a workspace. Recent developments in AR technology are discussed, followed by the challenges AR faces in camera localization, environment mapping, and registration. We examine how AR was integrated in each application and which improvements it introduced to the corresponding fields of robotics. In addition, we summarize the major limitations of the presented applications in each category. Finally, we conclude our review with future directions for AR research in robotics. The survey covers over 100 research works published over the last five years.


Author(s):  
Tong Wei ◽  
Yu-Feng Li

Large-scale multi-label learning (LMLL) aims to annotate relevant labels from a large number of candidates for unseen data. Because of the high dimensionality of both the feature and label spaces in LMLL, the storage overhead of LMLL models is often costly. This paper proposes POP (joint label and feature Parameter OPtimization), a method that filters out redundant model parameters to produce compact models. Our key insights are as follows. First, we identify labels that have little impact on the commonly used LMLL performance metrics and preserve only a small number of dominant parameters for these labels. Second, for the remaining influential labels, we remove spurious feature parameters that contribute little to the generalization capability of the model, preserving parameters only for discriminative features. The overall problem is formulated as a constrained optimization problem pursuing minimal model size. To solve the resulting difficult optimization, we show that a relaxation can be solved efficiently using binary search and greedy strategies. Experiments verify that the proposed method clearly reduces model size compared to state-of-the-art LMLL approaches while achieving highly competitive performance.
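As rough intuition for the binary-search component, the sketch below prunes a label-by-feature weight matrix to a fixed parameter budget by searching for a magnitude threshold. This is a generic magnitude-pruning proxy for illustration, not the POP formulation or its constrained optimization.

```python
import numpy as np

# Binary search for the smallest magnitude threshold that keeps at most `budget`
# non-zero parameters; everything below the threshold is dropped.
# Generic illustration only, not the POP method itself.

def prune_to_budget(W, budget, iters=50):
    lo, hi = 0.0, np.abs(W).max()
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if (np.abs(W) > mid).sum() > budget:
            lo = mid            # threshold too small: still too many parameters
        else:
            hi = mid            # budget met: try a smaller threshold
    return np.where(np.abs(W) > hi, W, 0.0)

W = np.random.randn(1000, 500)                          # hypothetical label-by-feature weights
print((prune_to_budget(W, budget=50_000) != 0).sum())   # at most 50,000 parameters remain
```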


2012 ◽  
Vol 25 (0) ◽  
pp. 91 ◽  
Author(s):  
Li Wong ◽  
Georg Meyer ◽  
Emma Timson ◽  
Philip Perfect ◽  
Mark White

There is interest in how pilots perceive simulator fidelity and rate self-performance in virtual reality flight training. Ten participants were trained to perform a target tracking task in a helicopter flight simulation. After training, objective performance (the median tracking error) was compared to subjective self-evaluations in a number of flying conditions in which the cues available to our pilots were manipulated in a factorial design: the simulator motion platform could be active or static, and audio cues signalling the state of the turbine could be those used during training, non-informative, or an obviously different but informative 'novel' sound. We tested participants under hard and easy flying conditions. Upon completion of each test condition, participants completed a 12-statement Likert scale with items concerning their performance and helicopter simulator fidelity. Objective performance measures show that flight performance improved during training and was affected by audio and motion cues. The subjective data show that participants reliably self-evaluated their own performance and simulator fidelity. However, there were instances where subjective and objective measures of performance or fidelity did not correlate. For example, although participants rated the 'novel' turbine sound as having low fidelity, it produced no behavioural difference relative to the turbine sound used in training. Participants were also unable to self-evaluate the outcome of learning. We conclude that whilst subjective measures are a good indicator of self-performance, objective data offer a valuable task-oriented perspective on simulator fidelity.


2020 ◽  
Author(s):  
Jonas Sukys ◽  
Marco Bacci

SPUX (Scalable Package for Uncertainty Quantification in "X") is a modular framework for Bayesian inference and uncertainty quantification. The SPUX framework aims at harnessing high-performance scientific computing to tackle complex aquatic dynamical systems rich in intrinsic uncertainties, such as ecological ecosystems, hydrological catchments, lake dynamics, subsurface flows, urban floods, etc. The challenging task of quantifying input, output and/or parameter uncertainties in such stochastic models is tackled using Bayesian inference techniques, where numerical sampling and filtering algorithms assimilate prior expert knowledge and available experimental data. The SPUX framework greatly simplifies uncertainty quantification for realistic, computationally costly models and provides an accessible, modular, portable, scalable, interpretable and reproducible scientific workflow. To achieve this, SPUX can be coupled to any serial or parallel model written in any programming language (e.g. Python, R, C/C++, Fortran, Java), can be installed either on a laptop or on a parallel cluster, and has built-in support for automatic reports, including algorithmic and computational performance metrics. I will present key SPUX concepts using a simple random walk example, and showcase recent realistic applications for catchment and lake models. In particular, uncertainties in model parameters, meteorological inputs, and data observation processes are inferred by assimilating available in-situ and remotely sensed datasets.
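To give a flavour of the "simple random walk example" mentioned above, here is a standalone sketch that infers the step size of a latent random walk from noisy observations with a basic Metropolis sampler. It uses plain NumPy, a flat prior, and an approximate increment likelihood that ignores correlation between consecutive increments; it is an assumption-laden illustration and does not use the SPUX API.

```python
import numpy as np

# Synthetic data: latent random walk x, observed with Gaussian noise as y.
rng = np.random.default_rng(0)
true_sigma, obs_std, T = 1.0, 0.5, 200
x = np.cumsum(rng.normal(0.0, true_sigma, T))
y = x + rng.normal(0.0, obs_std, T)

def log_likelihood(sigma):
    """Approximate (pseudo-)likelihood: treat observed increments as independent."""
    if sigma <= 0.0:
        return -np.inf
    var = sigma**2 + 2.0 * obs_std**2            # variance of one observed increment
    d = np.diff(y)
    return -0.5 * np.sum(d**2 / var + np.log(2.0 * np.pi * var))

# Basic Metropolis sampler with a flat prior on sigma > 0.
sigma, samples = 0.5, []
for _ in range(5000):
    proposal = sigma + rng.normal(0.0, 0.1)
    if np.log(rng.uniform()) < log_likelihood(proposal) - log_likelihood(sigma):
        sigma = proposal
    samples.append(sigma)

print("posterior mean step size:", np.mean(samples[1000:]))
```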


Author(s):  
Tomislav Bacek ◽  
Marta Moltedo ◽  
Kevin Langlois ◽  
Guillermo Asin Prieto ◽  
Maria Carmen Sanchez-Villamanan ◽  
...  
