Haptic feedback through lower limbs exoskeleton for rehabilitation robotics with virtual reality

Author(s): Ícaro Ostan

Author(s): Benjamin Williams ◽ Alexandra E. Garton ◽ Christopher J. Headleand

2006 ◽ Vol 10 (1) ◽ pp. 24-30
Author(s): David Swapp ◽ Vijay Pawar ◽ Céline Loscos

Stroke ◽ 2016 ◽ Vol 47 (suppl_1)
Author(s): Yunju Lee ◽ Kai Chen ◽ Richard L Harvey ◽ Elliot J Roth ◽ Li-Qun Zhang

Introduction: Stroke survivors develop substantial disability, such as weakness, spasticity, increased stiffness, and reduced range of motion in the lower limbs, contributing to reduced quality of life. It is important to stretch the impaired ankle and knee to increase range of motion and reduce spasticity, and to conduct active movement training to improve balance and locomotion. Hypothesis: We tested the hypotheses that robot-aided ankle and knee training would reduce motor impairments and improve balance and gait function, and that the improvements would be maintained at the 6-week follow-up. Methods: Seven male stroke survivors underwent robot-guided ankle and knee rehabilitation training using a pair of ankle and knee rehabilitation robots over 18 sessions (3 sessions/week for 6 weeks). Evaluations were conducted before training, after training, and at the 6-week follow-up. Each session involved passive stretching under intelligent control and active movement training under real-time audiovisual and haptic feedback, with roughly equal time spent on ankle and knee training. Results: We found significant improvement in the 6-Minute Walk Test (6MWT: 294.8 m pre-training to 386.4 m post-training; p<0.01), Berg Balance Scale (BBS: 45 pre to 52 post; p<0.05), ankle active range of motion (AROM: -11.7° pre to 1.7° post; p<0.05; a negative value indicates inability to reach 0° dorsiflexion), passive ROM in dorsiflexion (12.7° pre to 23.3° post; p<0.01), and dorsiflexion muscle strength (-0.3 Nm pre to 5.7 Nm post; p<0.05; negative means lower than the passive torque at 0° ankle dorsiflexion). The knee showed significant improvement in AROM in extension against the load of the robot (34.8° pre to 15.9° knee flexion post; p<0.05) and maximal flexion strength at 90° knee flexion (19.3 Nm pre to 31.7 Nm post; p<0.01). At the follow-up, outcomes remained similar to post-training values, e.g., 379 m (p<0.05) in 6MWT, 51 (p<0.05) in BBS, and 5.2 Nm (p=0.05) in dorsiflexion strength. Conclusions: Robot-guided stretching and active movement training reduced impairments at the knee and ankle of stroke survivors, resulting in improved mobility, and the effect of training was maintained at the 6-week follow-up.
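The pre/post comparisons reported above (e.g., 6MWT distance) amount to paired tests on the same seven subjects. A minimal sketch of such a paired t-statistic, using hypothetical per-subject values rather than the study's raw data:

```python
import math

def paired_t(pre, post):
    """Paired t-statistic for pre/post measurements on the same subjects."""
    assert len(pre) == len(post)
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    se = math.sqrt(var / n)                              # standard error of the mean difference
    return mean / se, n - 1                              # t-statistic, degrees of freedom

# Hypothetical 6MWT distances (m) for 7 subjects; NOT the study's raw data
pre  = [250, 310, 280, 330, 270, 305, 318]
post = [340, 395, 370, 430, 360, 400, 410]
t, df = paired_t(pre, post)  # a large positive t indicates improvement
```

The t-statistic with df = n-1 degrees of freedom is then compared against the t-distribution to obtain the p-values quoted in the abstract.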


2018 ◽ Vol 35 (2) ◽ pp. 149-160
Author(s): Mustufa H. Abidi ◽ Abdulrahman M. Al-Ahmari ◽ Ali Ahmad ◽ Saber Darmoul ◽ Wadea Ameen

Abstract: The design and verification of assembly operations is essential for planning product production operations. Recently, virtual prototyping has witnessed tremendous progress and has reached a stage where current environments enable rich, multi-modal interaction between designers and models through stereoscopic visuals, surround sound, and haptic feedback. This paper discusses the benefits of building and using Virtual Reality (VR) models in assembly process verification and presents the virtual assembly (VA) of an aircraft turbine engine. The assembly parts and sequences are explained using a virtual reality design system that supports stereoscopic visuals, surround sound, and rich, intuitive interaction with the developed models. A dedicated software architecture is proposed to describe the assembly parts and assembly sequence in VR, and a collision detection mechanism provides visual feedback to check for interference between components. The system is tested on virtual prototyping and assembly sequencing of a turbine engine. We show that the developed system is comprehensive in terms of VR feedback mechanisms, including visual, auditory, tactile, and force feedback, and that it is effective and efficient for validating assembly design, part design, and operations planning.
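The interference check described above can be illustrated with the simplest common collision primitive, an axis-aligned bounding-box (AABB) overlap test; the part names and coordinates below are hypothetical, and the paper's actual mechanism is not specified at this level of detail:

```python
from dataclasses import dataclass

@dataclass
class AABB:
    """Axis-aligned bounding box: min/max corners in world coordinates."""
    min_pt: tuple
    max_pt: tuple

def intersects(a: AABB, b: AABB) -> bool:
    """Boxes overlap only if their extents overlap on every axis."""
    return all(a.min_pt[i] <= b.max_pt[i] and b.min_pt[i] <= a.max_pt[i]
               for i in range(3))

# Two hypothetical turbine parts during an assembly step
blade   = AABB((0.0, 0.0, 0.0), (2.0, 1.0, 1.0))
housing = AABB((1.5, 0.5, 0.5), (4.0, 2.0, 2.0))
clear   = AABB((5.0, 5.0, 5.0), (6.0, 6.0, 6.0))

interference = intersects(blade, housing)  # overlap detected: flag it visually
```

In practice such broad-phase tests are followed by exact mesh-level checks, with the colliding parts highlighted to give the visual feedback the abstract mentions.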


Author(s): Yuwei Li ◽ David Donghyun Kim ◽ Brian Anthony

Abstract: We present HapticWall, an encountered-type, motor-actuated vertical two-dimensional system that enables both small- and large-scale physical interactions in virtual reality. HapticWall consists of a motor-actuated vertical two-dimensional gantry that positions a physical proxy for its virtual counterpart. The proxy, combined with the gantry, provides both small- and large-scale haptic feedback for virtual reality in the vertical space, including wall-like haptic feedback and interactions. We created two virtual reality applications to demonstrate the HapticWall system and collected preliminary user feedback to evaluate its performance and limitations; the results of this study are presented in the paper. The outcome of this research provides a better understanding of multi-scale haptic interfaces in the vertical space for virtual reality and will guide the future development of the HapticWall system.
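The core of an encountered-type display like HapticWall is mapping a virtual contact point to gantry motor coordinates so the physical proxy is already in place when the user's hand arrives. A minimal sketch assuming a simple linear calibration; the function, workspace, and travel ranges below are illustrative, not the authors' implementation:

```python
def gantry_target(contact_xy, workspace, gantry_range):
    """Map a virtual-wall contact point (virtual-space metres) to gantry
    coordinates, clamping to the physical range of travel."""
    target = []
    for c, (w_lo, w_hi), (g_lo, g_hi) in zip(contact_xy, workspace, gantry_range):
        frac = (c - w_lo) / (w_hi - w_lo)       # normalised position on the virtual wall
        frac = min(max(frac, 0.0), 1.0)         # clamp to the reachable workspace
        target.append(g_lo + frac * (g_hi - g_lo))  # scale to motor travel
    return tuple(target)

# Hypothetical calibration: a 2 m x 2 m virtual wall onto 1.2 m x 1.0 m travel
target = gantry_target((1.0, 0.5), ((0, 2), (0, 2)), ((0, 1.2), (0, 1.0)))
```

A real controller would also need to track the headset/hand pose and move the proxy predictively, so the gantry reaches the target before contact occurs.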


2019 ◽ Vol 121 (4) ◽ pp. 1398-1409
Author(s): Vonne van Polanen ◽ Robert Tibold ◽ Atsuo Nuruki ◽ Marco Davare

Lifting an object requires precise scaling of fingertip forces based on a prediction of object weight. At object contact, a series of tactile and visual events arise that need to be rapidly processed online to fine-tune the planned motor commands for lifting the object. The brain mechanisms underlying multisensory integration serially at transient sensorimotor events, a general feature of actions requiring hand-object interactions, are not yet understood. In this study, we tested the relative weighting between haptic and visual signals when they are integrated online into the motor command. We used a new virtual reality setup to desynchronize visual feedback from haptics, which allowed us to probe the relative contribution of haptics and vision in driving participants' movements when they grasped virtual objects simulated by two force-feedback robots. We found that visual delay changed the profile of fingertip force generation and led participants to perceive objects as heavier than when lifts were performed without visual delay. We further modeled the effect of vision on motor output by manipulating the extent to which delayed visual events could bias the force profile, which allowed us to determine the specific weighting the brain assigns to haptics and vision. Our results show for the first time how visuo-haptic integration is processed at discrete sensorimotor events for controlling object-lifting dynamics and further highlight the organization of multisensory signals online for controlling action and perception. NEW & NOTEWORTHY Dexterous hand movements require rapid integration of information from different senses, in particular touch and vision, at different key time points as movement unfolds. The relative weighting between vision and haptics for object manipulation is unknown. We used object lifting in virtual reality to desynchronize visual and haptic feedback and determine their relative weightings. Our findings shed light on how rapid multisensory integration is processed over a series of discrete sensorimotor control points.
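The kind of visuo-haptic weighting probed in this study is commonly modeled as reliability-weighted (maximum-likelihood) cue combination, where each cue is weighted by its inverse variance. A minimal sketch of that standard model; the numerical estimates and variances below are hypothetical, not the paper's fitted weights:

```python
def ml_combine(est_v, var_v, est_h, var_h):
    """Minimum-variance linear combination of two noisy cues:
    each cue's weight is its inverse variance (its reliability)."""
    w_v = 1.0 / var_v
    w_h = 1.0 / var_h
    est = (w_v * est_v + w_h * est_h) / (w_v + w_h)  # reliability-weighted mean
    var = 1.0 / (w_v + w_h)                          # combined estimate is less noisy
    return est, var

# Hypothetical weight percepts (kg): vision says 0.8, haptics says 1.0;
# haptics is assumed four times more reliable here
est, var = ml_combine(0.8, 0.04, 1.0, 0.01)
```

Under this model, the more reliable cue dominates the combined percept, which is one way a delayed visual event could still bias the force profile without overriding haptics.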

