Visual delay affects force scaling and weight perception when lifting objects in virtual reality: Supplemental model code

2018. Author(s): Vonne van Polanen, Robert Tibold, Atsuo Nuruki, Marco Davare



2019, Vol 121 (4), pp. 1398-1409. Author(s): Vonne van Polanen, Robert Tibold, Atsuo Nuruki, Marco Davare

Lifting an object requires precise scaling of fingertip forces based on a prediction of object weight. At object contact, a series of tactile and visual events arise that need to be rapidly processed online to fine-tune the planned motor commands for lifting the object. The brain mechanisms underlying serial multisensory integration at transient sensorimotor events, a general feature of actions requiring hand-object interactions, are not yet understood. In this study we tested the relative weighting between haptic and visual signals when they are integrated online into the motor command. We used a new virtual reality setup to desynchronize visual feedback from haptics, which allowed us to probe the relative contribution of haptics and vision in driving participants' movements when they grasped virtual objects simulated by two force-feedback robots. We found that visual delay changed the profile of fingertip force generation and led participants to perceive objects as heavier than when lifts were performed without visual delay. We further modeled the effect of vision on motor output by manipulating the extent to which delayed visual events could bias the force profile, which allowed us to determine the specific weighting the brain assigns to haptics and vision. Our results show for the first time how visuo-haptic integration is processed at discrete sensorimotor events for controlling object-lifting dynamics and further highlight the organization of multisensory signals online for controlling action and perception.

NEW & NOTEWORTHY Dexterous hand movements require rapid integration of information from different senses, in particular touch and vision, at different key time points as movement unfolds. The relative weighting between vision and haptics for object manipulation is unknown. We used object lifting in virtual reality to desynchronize visual and haptic feedback and determine their relative weightings. Our findings shed light on how rapid multisensory integration is processed over a series of discrete sensorimotor control points.
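The supplemental model code itself is not reproduced in this listing; the Python sketch below only illustrates the kind of weighted visuo-haptic event integration the abstract describes: a single weight parameter determines how far a delayed visual contact event can shift the planned fingertip force-rate profile. The function names, the bell-shaped force-rate curve, and the value w_haptic = 0.7 are illustrative assumptions, not the authors' published model.

```python
import numpy as np

def force_rate_profile(t, t_event, peak_rate=8.0, width=0.08):
    """Bell-shaped grip-force-rate curve (N/s) centred on the estimated
    contact event. Illustrative shape only; the published model may differ."""
    return peak_rate * np.exp(-((t - t_event) ** 2) / (2 * width ** 2))

def integrated_event_time(t_haptic, t_visual, w_haptic=0.7):
    """Weighted visuo-haptic estimate of the contact-event time.
    w_haptic is a hypothetical weight assigned to the haptic signal."""
    return w_haptic * t_haptic + (1.0 - w_haptic) * t_visual

# Example: a 100-ms visual delay shifts the force profile only partially,
# in proportion to the visual weight (1 - w_haptic).
t = np.linspace(0.0, 1.0, 1000)           # time after reach onset (s)
t_contact_haptic = 0.30                   # haptically sensed contact (s)
t_contact_visual = 0.30 + 0.10            # visually sensed contact, delayed (s)

t_hat = integrated_event_time(t_contact_haptic, t_contact_visual)
profile = force_rate_profile(t, t_hat)
print(f"integrated event time: {t_hat:.3f} s; "
      f"peak force rate at {t[np.argmax(profile)]:.3f} s")
```

With w_haptic = 1 the delayed visual event would have no effect on the force profile; with w_haptic = 0 the profile would shift by the full 100 ms. Fitting such a weight to measured force profiles is one way to express the relative weighting the abstract refers to.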



2006, Vol 5-6, pp. 55-62. Author(s): I.A. Jones, A.A. Becker, A.T. Glover, P. Wang, S.D. Benford, ...

Boundary element (BE) analysis is well known as a tool for assessing the stiffness and strength of engineering components, but, along with finite element (FE) techniques, it is also finding new applications as a means of simulating the behaviour of deformable objects within virtual reality simulations, since it exploits precisely the same kind of surface-only definition used for visual rendering of three-dimensional solid objects. This paper briefly reviews existing applications of BE and FE within virtual reality and describes recent work on the BE-based simulation of aspects of surgical operations on the brain, making use of commercial hand-held force-feedback interfaces (haptic devices) to measure the positions of the virtual surgical tools and provide tactile feedback to the user. The paper presents an overview of the project and then concentrates on recent developments, including the incorporation of simulated tumours in the virtual brain.
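As a rough illustration of how a BE model can drive a force-feedback device, the sketch below shows a single step of a haptic loop under strong simplifying assumptions: one surface node, a hypothetical precomputed compliance matrix C_node, and penalty-style contact against a flat patch at z = 0. None of the names or values come from the paper.

```python
import numpy as np

# Hypothetical precomputed BE compliance at one surface node (u = C f);
# inverting it gives the local stiffness the haptic tool pushes against.
C_node = np.diag([2e-4, 2e-4, 2e-4])     # m/N, illustrative values only
K_node = np.linalg.inv(C_node)           # N/m

def reaction_force(tool_pos, surface_normal=np.array([0.0, 0.0, 1.0])):
    """Penalty-style contact with a surface patch at z = 0.
    If the tool tip penetrates (z < 0), return the BE-derived restoring
    force to send to the force-feedback device; otherwise return zero."""
    depth = -tool_pos[2]
    if depth <= 0.0:
        return np.zeros(3)
    displacement = depth * surface_normal  # push the contact node back to the surface
    return K_node @ displacement

# One iteration of the haptic loop (real devices run this at roughly 1 kHz).
tool_pos = np.array([0.0, 0.0, -0.002])   # tool tip 2 mm inside the tissue (m)
print("force returned to device (N):", reaction_force(tool_pos))
```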



2019, Vol 27 (3), pp. 9-21. Author(s): A.E. Voiskounsky

The paper relates to the branch of cyberpsychology concerned with risk factors during immersion in a virtual environment. Specialists in the development and operation of virtual reality systems know that immersion in such an environment may be accompanied by symptoms similar to the motion sickness experienced by passengers of transport vehicles (ships, aircraft, cars). In the paper, these conditions are referred to as cybersickness (or cyberdisease). Three leading theories proposed to explain the causes of cybersickness are discussed: the theory of sensory conflict, the theory of postural instability (the inability to maintain equilibrium), and the evolutionary (toxin) theory. A frequent cause of cybersickness symptoms is a conflict between visual signals and signals from the vestibular system. It is shown that such conflicts can be induced in specially designed experiments (e.g., the out-of-body illusion) using virtual reality systems. When competing signals (visual, auditory, kinesthetic, tactile, etc.) reach the brain, data gathered with virtual reality systems make it possible to hypothesize which specific brain area ensures the integration of multisensory stimuli.



Author(s): Yuntao Wang, Zichao (Tyson) Chen, Hanchuan Li, Zhengyi Cao, Huiyi Luo, ...


2021, Vol 11 (7), pp. 2987. Author(s): Takumi Okumura, Yuichi Kurita

Image therapy, which creates illusions with a mirror or a head-mounted display, assists movement relearning in stroke patients. Mirror therapy presents the movement of the unaffected limb in a mirror, creating the illusion of movement of the affected limb. As visual information alone cannot create a fully immersive experience, we propose a cross-modal strategy that supplements the image with additional sensory information. When stimuli received from multiple sensory organs interact, the brain fills in the missing senses and the patient experiences a different sense of motion. Our system generates the sense of stair-climbing in a subject walking on a level floor. The force sensation is presented by a pneumatic gel muscle (PGM). Based on motion analysis of a human lower-limb model and the characteristics of the force exerted by the PGM, we set the appropriate air pressure of the PGM. The effectiveness of the proposed system was evaluated by surface electromyography and a questionnaire. The results showed that synchronizing the force sensation with visual information matched the motor and perceived sensations at the muscle-activity level and that the visual condition significantly increased the intensity of the stair-climbing illusion.
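The paper derives the PGM air pressure from a lower-limb model; the sketch below only illustrates the general idea of synchronising an assist-pressure command with gait phase and with the visual stair scene. The pressure limit, the assist bump centred at 20% of the gait cycle, and the function names are assumptions for illustration, not the published calibration.

```python
import numpy as np

MAX_PRESSURE_KPA = 150.0   # hypothetical upper limit for the pneumatic gel muscle

def pgm_pressure(gait_phase, stair_visuals_on=True):
    """Map normalized gait phase (0-1, heel strike to heel strike) to an
    assist pressure. Peak assist is placed in early stance, roughly where
    stair ascent demands extra extension torque. Values are illustrative,
    not the published calibration."""
    if not stair_visuals_on:
        return 0.0                       # only cue force together with the stair scene
    bump = np.exp(-((gait_phase - 0.2) ** 2) / (2 * 0.08 ** 2))
    return MAX_PRESSURE_KPA * bump

for phase in np.linspace(0.0, 1.0, 6):
    print(f"phase {phase:.1f}: {pgm_pressure(phase):6.1f} kPa")
```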



2021. Author(s): Shachar Sherman, Koichi Kawakami, Herwig Baier

The brain is assembled during development by both innate and experience-dependent mechanisms [1-7], but the relative contribution of these factors is poorly understood. Axons of retinal ganglion cells (RGCs) connect the eye to the brain, forming a bottleneck for the transmission of visual information to central visual areas. RGCs secrete molecules from their axons that control proliferation, differentiation and migration of downstream components [7-9]. Spontaneously generated waves of retinal activity, but also intense visual stimulation, can entrain responses of RGCs [10] and central neurons [11-16]. Here we asked how the cellular composition of central targets is altered in a vertebrate brain that is depleted of retinal input throughout development. For this, we first established a molecular catalog [17] and gene expression atlas [18] of neuronal subpopulations in the retinorecipient areas of larval zebrafish. We then searched for changes in lakritz (atoh7-) mutants, in which RGCs do not form [19]. Although individual forebrain-expressed genes are dysregulated in lakritz mutants, the complete set of 77 putative neuronal cell types in thalamus, pretectum and tectum is present. While neurogenesis and differentiation trajectories are overall unaltered, a greater proportion of cells remain in an uncommitted progenitor stage in the mutant. Optogenetic stimulation of a pretectal area [20,21] evokes a visual behavior in blind mutants that is indistinguishable from wild type. Our analysis shows that, in this vertebrate visual system, neurons are produced more slowly, but are specified and wired up in a proper configuration, in the absence of any retinal signals.
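The comparison between wild-type and mutant composition boils down to testing whether each putative cell type's share of cells differs between genotypes. The sketch below shows one common way to do that, a per-cluster Fisher's exact test; the cluster names and counts are hypothetical and do not come from the paper's atlas.

```python
from scipy.stats import fisher_exact

# Hypothetical cell counts per putative cell type (cluster) for wild-type
# and lakritz (atoh7-) mutant larvae; real counts would come from the atlas.
counts_wt = {"progenitor": 420, "type_01": 130, "type_02": 95}
counts_mut = {"progenitor": 610, "type_01": 118, "type_02": 90}

total_wt = sum(counts_wt.values())
total_mut = sum(counts_mut.values())

# Test whether each cluster's share of cells differs between genotypes.
for cluster in counts_wt:
    table = [[counts_wt[cluster], total_wt - counts_wt[cluster]],
             [counts_mut[cluster], total_mut - counts_mut[cluster]]]
    odds, p = fisher_exact(table)
    frac_wt = counts_wt[cluster] / total_wt
    frac_mut = counts_mut[cluster] / total_mut
    print(f"{cluster}: {frac_wt:.2%} (wt) vs {frac_mut:.2%} (mutant), p = {p:.3g}")
```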



2018, Vol 35 (2), pp. 149-160. Author(s): Mustufa H. Abidi, Abdulrahman M. Al-Ahmari, Ali Ahmad, Saber Darmoul, Wadea Ameen

The design and verification of assembly operations is essential for planning product production operations. Recently, virtual prototyping has witnessed tremendous progress and has reached a stage where current environments enable rich and multi-modal interaction between designers and models through stereoscopic visuals, surround sound, and haptic feedback. This paper discusses the benefits of building and using Virtual Reality (VR) models in assembly process verification and presents the virtual assembly (VA) of an aircraft turbine engine. The assembly parts and sequences are explained using a virtual reality design system. The system enables stereoscopic visuals, surround sound, and ample and intuitive interaction with the developed models. A dedicated software architecture is proposed to describe the assembly parts and assembly sequence in VR. A collision detection mechanism is employed that provides visual feedback to check for interference between components. The system is tested for virtual prototyping and assembly sequencing of a turbine engine. We show that the developed system is comprehensive in terms of VR feedback mechanisms, which include visual, auditory, tactile, and force feedback. The system is shown to be effective and efficient for validating assembly design, part design, and operations planning.
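The collision-detection mechanism is described only at a high level; the sketch below shows a minimal, generic interference check of the kind such a system needs as a first pass: axis-aligned bounding-box overlap between two parts, with a positive result used to trigger visual feedback. Part names and dimensions are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class AABB:
    """Axis-aligned bounding box of an assembly part, in world coordinates."""
    min_corner: tuple  # (x, y, z)
    max_corner: tuple  # (x, y, z)

def intersects(a: AABB, b: AABB) -> bool:
    """True if the two boxes overlap on all three axes; a coarse interference
    test that a real system would refine with a mesh-level check."""
    return all(a.min_corner[i] <= b.max_corner[i] and
               b.min_corner[i] <= a.max_corner[i] for i in range(3))

# Example: a turbine blade being inserted next to a casing part.
blade = AABB((0.00, 0.0, 0.0), (0.10, 0.02, 0.30))
casing = AABB((0.09, 0.0, 0.0), (0.25, 0.20, 0.30))

if intersects(blade, casing):
    print("interference detected: highlight both parts for visual feedback")
```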



2011, Vol 106 (4), pp. 1862-1874. Author(s): Jan Churan, Daniel Guitton, Christopher C. Pack

Our perception of the positions of objects in our surroundings is surprisingly unaffected by movements of the eyes, head, and body. This suggests that the brain has a mechanism for maintaining perceptual stability, based either on the spatial relationships among visible objects or on internal copies of its own motor commands. Strong evidence for the latter mechanism comes from the remapping of visual receptive fields that occurs around the time of a saccade. Remapping occurs when a single neuron responds to visual stimuli placed presaccadically in the spatial location that will be occupied by its receptive field after the completion of a saccade. Although evidence for remapping has been found in many brain areas, relatively little is known about how it interacts with sensory context. This interaction is important for understanding perceptual stability more generally, as the brain may rely on extraretinal signals or visual signals to different degrees in different contexts. Here, we have studied the interaction between visual stimulation and remapping by recording from single neurons in the superior colliculus of the macaque monkey, using several different visual stimulus conditions. We find that remapping responses are highly sensitive to low-level visual signals, with the overall luminance of the visual background exerting a particularly powerful influence. Specifically, although remapping was fairly common in complete darkness, such responses were usually decreased or abolished in the presence of modest background illumination. Thus the brain might make use of a strategy that emphasizes visual landmarks over extraretinal signals whenever the former are available.



2004, Vol 43 (1), pp. 85-98. Author(s): Jonathan D. French, James H. Mutti, Satish S. Nair, Michael Prewitt


Author(s): L Kotek, Z Tuma, P Blecha, Z Nemcova, P Habada

