A brief glimpse at a haptic target is sufficient for multisensory integration in reaching movements

2020 ◽  
Author(s):  
Ivan Camponogara ◽  
Robert Volcic

Goal-directed aiming movements toward visuo-haptic targets (i.e., seen and handheld targets) are generally more precise than those toward visual-only or haptic-only targets. This multisensory advantage stems from a continuous inflow of haptic and visual target information during the movement planning and execution phases. However, in everyday life, multisensory movements often occur without the support of continuous visual information. Here we investigated whether and to what extent limiting visual information to the initial stage of the action still leads to a multisensory advantage. Participants were asked to reach a handheld target while vision was briefly provided during the movement planning phase (50 ms, 100 ms, or 200 ms of vision before movement onset), during the planning and early execution phases (400 ms of vision), or during the entire movement. Additional conditions were performed in which only haptic target information was provided, or only vision was provided, either briefly (50 ms, 100 ms, 200 ms, 400 ms) or throughout the entire movement. Results showed that 50 ms of vision before movement onset was sufficient to trigger a direction-specific visuo-haptic integration process that increased movement precision. We conclude that, when continuous visual support is not available, movement precision is determined by the less recent, but most reliable, multisensory information rather than by the latest unisensory (haptic) inputs.
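The precision advantage described here is commonly formalized as reliability-weighted (maximum-likelihood) cue combination, in which each sensory estimate is weighted by its inverse variance. The sketch below illustrates that standard model only; it is not the authors' analysis, and the variance values are hypothetical placeholders.

```python
# Minimal sketch of reliability-weighted (MLE) cue combination.
# Variance values are hypothetical placeholders, not data from the study.
var_vision = 4.0   # variance of the visual estimate (arbitrary units^2)
var_haptic = 9.0   # variance of the haptic estimate

# Inverse-variance weights: the more reliable cue gets the larger weight.
w_vision = (1 / var_vision) / (1 / var_vision + 1 / var_haptic)
w_haptic = 1.0 - w_vision

def combined_estimate(vision_sample, haptic_sample):
    """Reliability-weighted average of one visual and one haptic sample."""
    return w_vision * vision_sample + w_haptic * haptic_sample

# Predicted variance of the combined estimate: lower than either cue alone,
# which is the multisensory precision advantage the abstract reports.
var_combined = (var_vision * var_haptic) / (var_vision + var_haptic)
print(f"weights: vision={w_vision:.2f}, haptic={w_haptic:.2f}")
print(f"combined variance: {var_combined:.2f} (vision {var_vision}, haptic {var_haptic})")
```

Under this rule even a brief visual sample lowers the variance of the combined estimate, which is consistent with the finding that 50 ms of vision was enough to improve precision.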

2021 ◽  
Author(s):  
Mayu Yamada ◽  
Hirono Ohashi ◽  
Koh Hosoda ◽  
Daisuke Kurabayashi ◽  
Shunsuke Shigaki

Most animals survive and thrive thanks to navigation behavior that brings them to their destinations. To navigate, it is important for animals to integrate information obtained from multiple sensory inputs and to use that information to modulate their behavior. In this study, using a virtual reality (VR) system for an insect, we investigated how an adult silkmoth integrates visual and wind direction information during female search behavior (olfactory behavior). In the behavioral experiments using the VR system, the silkmoth had the highest navigation success rate when odor, vision, and wind information were provided correctly. However, the success rate of the search was significantly reduced when the wind direction information provided was inconsistent with the direction actually detected. This indicates that it is important to acquire not only odor information but also wind direction information correctly. Specifically, behavior was modulated by the degree of coincidence between the direction of arrival of the odor and the direction of arrival of the wind, and posture control (angular velocity control) was modulated by visual information. We mathematically modeled this modulation of behavior by multisensory information and evaluated the model by simulation. The mathematical model not only reproduced the actual female search behavior of the silkmoth but also improved search success relative to a conventional odor-source search algorithm.
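As a rough illustration of this kind of model, the sketch below implements a moth-like agent whose steering is gated by odor-wind coincidence and whose angular velocity is damped by a visual term. The update rule, gains, and time step are all invented for illustration; the abstract does not give the authors' equations.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(heading, omega, odor_hit, odor_dir, wind_dir,
         surge_gain=2.0, cast_gain=1.0, visual_damping=0.8, dt=0.1):
    """One hypothetical update of a moth-like searcher (angles in radians).

    odor_dir, wind_dir: directions of arrival sensed by the agent.
    The steering drive is scaled by how well the two directions coincide;
    the visual term damps angular velocity (posture control).
    """
    if odor_hit:
        # Coincidence in [0, 1]: 1 when odor and wind arrive from the same side.
        coincidence = 0.5 * (1.0 + np.cos(odor_dir - wind_dir))
        # Steer toward the wind's direction of arrival (upwind surge),
        # more decisively when the two cues agree.
        drive = surge_gain * coincidence * np.sin(wind_dir - heading)
    else:
        # No odor: cast with random exploratory turns.
        drive = cast_gain * rng.normal()
    # Visual feedback damps angular velocity (optomotor-like stabilization).
    omega += (drive - visual_damping * omega) * dt
    return heading + omega * dt, omega

# Toy run: odor and wind both arriving from 90 degrees; heading converges upwind.
heading, omega = 0.0, 0.0
for _ in range(100):
    heading, omega = step(heading, omega, odor_hit=True,
                          odor_dir=np.pi / 2, wind_dir=np.pi / 2)
print(f"final heading: {np.degrees(heading):.1f} deg")
```

Providing a wind direction that disagrees with the odor direction shrinks the coincidence term, weakening the upwind surge; this mirrors the reduced success rate the abstract reports for incorrect wind information.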


2019 ◽  
Vol 16 (5) ◽  
pp. 558-571
Author(s):  
A. V. Belyakova ◽  
B. V. Saveliev

Introduction. High-quality driver training is possible only with the proper formation of professional skills. The skills a driver needs to operate a vehicle safely can be formed with simulators at the initial stage of training. Simulators allow the actions a driver performs to be automated without exposing the student to risk. The purpose of the paper is therefore to analyze the use of simulators in driver training. Materials and methods. The paper presents the basic psychophysiological principles of the learning process that should be taken into account when simulators are used for driver training. The authors classify the car simulators used for driver training by their information models. Existing information models are divided into two groups: those reproducing only visual information, without imitation of vestibular information, and those simulating both visual and vestibular information. The analysis reflects the advantages and disadvantages of each type of information model. Results. The authors propose two systematizing features: the viewing angle of the visual information and the simulation of vestibular information. Discussion and conclusions. The research is useful not only for further scientific development but also for the selection of simulators and the organization of the educational process in driving schools.


2006 ◽  
Vol 96 (1) ◽  
pp. 352-362 ◽  
Author(s):  
Sabine M. Beurze ◽  
Stan Van Pelt ◽  
W. Pieter Medendorp

At some stage in the process of a sensorimotor transformation for a reaching movement, information about the current position of the hand and information about the location of the target must be encoded in the same frame of reference to compute the hand-to-target difference vector. Two main hypotheses have been proposed regarding this reference frame: an eye-centered and a body-centered frame. Here we evaluated these hypotheses using the pointing errors that subjects made when planning and executing arm movements to memorized targets starting from various initial hand positions while keeping gaze fixed in various directions. One group of subjects (n = 10) was tested without visual information about hand position during movement planning (unseen-hand condition); another group (n = 8) was tested with hand and target position simultaneously visible before movement onset (seen-hand condition). We found that both initial hand position and gaze fixation direction had a significant effect on the magnitude and direction of the pointing error. Errors were significantly smaller in the seen-hand condition. For both conditions, though, a reference frame analysis showed that the errors arose at an eye- or hand-centered stage or both, but not at a body-centered stage. As a common reference frame is required to specify a movement vector, these results suggest that an eye-centered mechanism is involved in integrating target and hand position in programming reaching movements. We discuss how simple gain elements modulating the eye-centered target and hand-position signals can account for these results.
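The final sentence suggests a simple computational reading: target and hand positions are both re-expressed in eye-centered coordinates, scaled by gain elements, and subtracted to form the movement vector. The sketch below is a hypothetical illustration of that scheme; the gain values, the linear form, and the treatment of fixation as a 2D point are assumptions, not the authors' fitted model.

```python
import numpy as np

def movement_vector(target_body, hand_body, gaze_point,
                    gain_target=1.0, gain_hand=0.95):
    """Hypothetical gain-element scheme for reach planning.

    target_body, hand_body, gaze_point: 2D positions in body-centered
    coordinates; treating fixation as a 2D point is a simplification.
    Target and hand are re-expressed relative to gaze (eye-centered),
    scaled by gain elements, and differenced in that common frame.
    """
    target_eye = np.asarray(target_body, float) - np.asarray(gaze_point, float)
    hand_eye = np.asarray(hand_body, float) - np.asarray(gaze_point, float)
    # Imperfect gains yield systematic pointing errors that vary with both
    # gaze direction and initial hand position, as the error analysis found.
    return gain_target * target_eye - gain_hand * hand_eye

vec = movement_vector(target_body=[0.30, 0.40], hand_body=[0.10, 0.10],
                      gaze_point=[0.05, 0.35])
print(vec)  # planned hand-to-target difference vector
```

With equal, unit gains the gaze terms cancel and the vector is frame-independent; it is precisely the imperfect gains that make errors depend on gaze and initial hand position, which is what the reference frame analysis exploits.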


Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 127-127
Author(s):  
M Desmurget ◽  
Y Rossetti ◽  
C Prablanc

The question of whether movement accuracy is better in the full open-loop condition (FOL, hand never visible) than in the static closed-loop condition (SCL, hand visible only prior to movement onset) remains widely debated. To investigate this controversial question, we studied conditions in which the visual information available to the subject prior to movement onset was strictly controlled. The results of our investigation showed that the accuracy improvement observed when human subjects were allowed to see their hand in the peripheral visual field prior to movement: (1) concerned only the variable errors; (2) did not depend on simultaneous vision of the hand and target (hand and target viewed simultaneously vs sequentially); (3) remained significant when pointing to proprioceptive targets; and (4) was not suppressed when the visual information was temporally (visual presentation for less than 300 ms) or spatially (vision of only the index fingertip) restricted. In addition, dissociating vision and proprioception with wedge prisms showed that a weighted hand position was used to program the hand trajectory. Taken together, these results suggest that: (i) knowledge of the initial upper limb configuration or position is necessary to plan goal-directed movements accurately; (ii) static proprioceptive receptors are partially ineffective in providing an accurate estimate of limb posture and/or hand location relative to the body; and (iii) visual and proprioceptive information is not used in an exclusive way, but is combined to furnish an accurate representation of the state of the effector prior to movement.


Author(s):  
Kei Omata ◽  
Ken Mogi

Language is essentially multimodal in its sensory origin, with daily conversation depending heavily on audio-visual (AV) information. Although the perception of spoken language is primarily dominated by audition, the perception of facial expression, particularly that of the mouth, helps us comprehend speech. The McGurk effect is a striking phenomenon in which the perceived phoneme is affected by the simultaneous observation of lip movement, and it probably reflects the underlying AV integration process. The elucidation of the principles involved in this unique perceptual anomaly poses an interesting problem. Here we study the nature of the McGurk effect by means of neural networks (self-organizing maps, SOMs) designed to extract patterns inherent in audio and visual stimuli. It is shown that a McGurk effect-like classification of incoming information occurs without any additional constraint or procedure added to the network, suggesting that the anomaly is a consequence of the AV integration process. Within this framework, an explanation is given for the asymmetric effect of AV pairs in causing the McGurk effect (fusion or combination), based on the 'distance' relationship between audio and visual information within the SOM. Our results reveal some generic features of the cognitive process of phoneme perception, and of AV sensory integration in general.
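For readers unfamiliar with the method, the sketch below is a generic self-organizing map in the spirit of the approach described here: audio and visual features are concatenated into one input vector and the map clusters them without any McGurk-specific constraint. The feature encoding, map size, and training schedule are invented for illustration and are not the authors' stimuli or parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 3-dim feature prototypes per phoneme (purely illustrative;
# the paper's actual audio and visual encodings are not given here).
audio = {"ba": np.array([1.0, 0.0, 0.0]),
         "da": np.array([0.0, 1.0, 0.0]),
         "ga": np.array([0.0, 0.0, 1.0])}
visual = {k: v.copy() for k, v in audio.items()}  # toy lip-feature code

def make_input(a, v):
    # Audio and visual features are simply concatenated into one AV vector.
    return np.concatenate([a, v])

class SOM:
    """Minimal 2-D self-organizing map (Kohonen-style)."""
    def __init__(self, grid=(8, 8), dim=6, lr=0.5, sigma=2.0):
        self.w = rng.random((grid[0], grid[1], dim))
        self.lr, self.sigma = lr, sigma
        ii, jj = np.meshgrid(np.arange(grid[0]), np.arange(grid[1]),
                             indexing="ij")
        self.coords = np.stack([ii, jj], axis=-1).astype(float)

    def bmu(self, x):
        # Best-matching unit: grid cell whose weight vector is closest to x.
        d = np.linalg.norm(self.w - x, axis=-1)
        return np.unravel_index(np.argmin(d), d.shape)

    def train_step(self, x, t, t_max):
        # Learning rate and neighborhood width decay over training.
        frac = t / t_max
        lr = self.lr * (1.0 - frac)
        sigma = self.sigma * (1.0 - frac) + 0.5
        b = np.array(self.bmu(x), dtype=float)
        dist2 = ((self.coords - b) ** 2).sum(axis=-1)
        h = np.exp(-dist2 / (2.0 * sigma ** 2))[..., None]
        self.w += lr * h * (x - self.w)

som = SOM()
congruent = [make_input(audio[p], visual[p]) for p in audio]
for t in range(2000):
    x = congruent[rng.integers(len(congruent))] + 0.05 * rng.normal(size=6)
    som.train_step(x, t, 2000)

# Incongruent probe: audio /ba/ paired with visual /ga/ (classic McGurk pair).
probe = make_input(audio["ba"], visual["ga"])
print("BMU of congruent /ba/:", som.bmu(congruent[0]))
print("BMU of incongruent AV pair:", som.bmu(probe))
```

An incongruent probe's best-matching unit can fall between the learned audio and visual categories rather than on either one, a fusion-like response whose direction depends on the 'distance' relations on the map, mirroring the asymmetry argument in the abstract.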


2000 ◽  
Vol 44 (21) ◽  
pp. 3-501-3-501
Author(s):  
Jae-Min Park ◽  
Sang-Do Lee ◽  
Young-Sook Kim

Humans perceive and react to their surroundings through the five senses, which also give rise to human sensibility and sustain emotion. This study does not limit the working environment to VDT environments, but considers any working environment in which information is acquired through visual stimulation. For this purpose, experimental equipment was designed and built: an external light source producing veiling reflection, a visual target presentation system, and visual targets with controlled luminance contrast levels. Reading the visual targets was selected as the task; the targets were excerpted from daily newspaper editorials in Korean and Chinese characters and prepared for the experimental conditions. Abnormal veiling reflections were formed in vertical (25%, 50%, 75%) and horizontal (25%, 50%, 75%) configurations. The results of the subjective evaluation were analyzed with the SD (Semantic Differential) method on a 5-point scale for visibility and nuisance when an abnormal veiling reflection formed on the target. In addition, the results of the objective evaluation were obtained by measuring and analyzing the EEG (electroencephalogram) bio-signal for visual sensitivity. The results of this study can serve as basic data for a guideline on visual operations. In particular, they can inform the design of illumination environments that account for ergonomic factors in visually demanding, mentally stressful operations such as visual inspection and visual information search. As a result, visual nuisance can be reduced, and performance and competitiveness improved.


Author(s):  
Benjamin Wolfe ◽  
Ben D. Sawyer ◽  
Ruth Rosenholtz

Objective The aim of this study is to describe information acquisition theory, explaining how drivers acquire and represent the information they need. Background While what drivers are aware of underlies many questions in driver behavior, existing theories do not directly address how drivers in particular, and observers in general, acquire visual information. Understanding the mechanisms of information acquisition is necessary to build predictive models of drivers' representation of the world and can be applied beyond driving to a wide variety of visual tasks. Method We describe our theory of information acquisition, looking to questions in driver behavior and results from vision science research that speak to its constituent elements. We focus on the intersection of peripheral vision, visual attention, and eye movement planning and identify how an understanding of these visual mechanisms and processes in the context of information acquisition can inform more complete models of driver knowledge and state. Results We set forth our theory of information acquisition, describing the gap in understanding that it fills and how existing questions in this space can be better understood using it. Conclusion Information acquisition theory provides a new and powerful way to study, model, and predict what drivers know about the world, reflecting our current understanding of visual mechanisms and enabling new theories, models, and applications. Application Using information acquisition theory to understand how drivers acquire, lose, and update their representation of the environment will aid development of driver assistance systems, semiautonomous vehicles, and road safety overall.


Author(s):  
Welber Marinovic ◽  
Annaliese M. Plooy ◽  
James R. Tresilian

When intercepting a moving target, accurate timing depends, in part, upon starting to move at the right moment. It is generally believed that this is achieved by triggering motor command generation when a visually perceived quantity, such as the target's time-to-arrival, reaches a specific criterion value. An experimental method that can be used to determine the moment when this visual event happens was introduced by Whiting and coworkers in the 1970s: it involves occluding vision of the target at different times prior to the time of movement onset (MO). This method is limited because the experimenter has no control over the MO time. We propose a method that provides the needed control by having people make interceptive movements of a specific duration. We tested the efficacy of this method in two experiments in which the accuracy of interception was examined under different occlusion conditions. In the first experiment, we examined the effect of changing the timing of an occlusion period (OP) of fixed duration (200 ms). In the second experiment, we varied the duration of the OP (180–430 ms) as well as its timing. The results demonstrated the utility of the proposed method and showed that performance deteriorated only when the participants had their vision occluded from 200 ms prior to MO. The results of Experiment 2 narrowed the critical interval for triggering the interceptive action down to the period from 200 to 150 ms prior to MO, probably closer to 150 ms. In addition, the results showed that the execution of brief interceptive movements (180 ms) was not affected by the range of OPs used in the experiments. This indicates that the whole movement was prepared in advance and triggered by a visual stimulus event that occurred at about 150 ms before onset.
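To make the timing logic concrete, the sketch below shows why fixing the movement duration pins down MO, and checks whether a given occlusion period hides the hypothesized trigger window (about 200 to 150 ms before MO). All names and numbers are illustrative; the abstract does not describe the apparatus code.

```python
# Times in ms, measured from target appearance; all values illustrative.
MOVEMENT_DURATION = 180      # fixed, instructed movement duration
INTERCEPT_TIME = 1000        # moment the target must be intercepted
MO_TIME = INTERCEPT_TIME - MOVEMENT_DURATION  # movement onset is pinned down

# Hypothesized trigger interval from Experiment 2: 200-150 ms before MO.
TRIGGER_WINDOW = (MO_TIME - 200, MO_TIME - 150)

def occlusion_covers_trigger(op_start, op_duration):
    """True if the occlusion period hides the whole trigger window."""
    op_end = op_start + op_duration
    return op_start <= TRIGGER_WINDOW[0] and op_end >= TRIGGER_WINDOW[1]

# A 200 ms occlusion starting 200 ms before MO hides the trigger window,
# so performance should deteriorate; starting it 100 ms before MO leaves
# the trigger information visible.
print(occlusion_covers_trigger(MO_TIME - 200, 200))  # True
print(occlusion_covers_trigger(MO_TIME - 100, 200))  # False
```

The design choice the abstract emphasizes is the first line: because movement duration is instructed, MO_TIME follows from the interception deadline, giving the experimenter the control over occlusion timing relative to MO that the original Whiting-style method lacked.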

