Abstract spatial, but not body-related, visual information guides bimanual coordination

2016
Author(s): Janina Brandes, Farhad Rezvani, Tobias Heed

Visual spatial information is paramount in guiding bimanual coordination, but anatomical factors, too, modulate performance in bimanual tasks. Vision conveys not only abstract spatial information but also informs about body-related aspects such as posture. Here, we asked whether, accordingly, visual information induces body-related, or merely abstract, perceptual-spatial constraints in bimanual movement guidance. Human participants made rhythmic, symmetrical and parallel, bimanual index finger movements with the hands held in the same or different orientations. Performance was more accurate for symmetrical than for parallel movements in all postures, and additionally whenever homologous muscles were concurrently active, such as when parallel movements were performed with differently rather than identically oriented hands. Thus, both perceptual and anatomical constraints were evident. We manipulated visual feedback with a mirror between the hands, replacing the image of the left hand with that of the right and creating the visual impression of bimanual symmetry independent of the right hand's true movement. Symmetrical mirror feedback impaired parallel but improved symmetrical bimanual performance compared with a regular view of the hands. Critically, these modulations were independent of hand posture and muscle homology. Thus, vision appears to contribute exclusively to spatial, but not to body-related, anatomical movement coding in the guidance of bimanual coordination.
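A common way to quantify the symmetrical/parallel distinction in tasks like this is the relative phase between the two finger trajectories. The sketch below illustrates one standard estimate (instantaneous phase via the Hilbert transform) on fabricated sinusoidal traces; the sampling rate, movement frequency, and variable names are illustrative assumptions, not the study's. Note that whether 0 deg counts as "symmetrical" or "parallel" depends on whether phase is defined in spatial or muscle coordinates, which is exactly the distinction the study manipulates.

```python
# Sketch: quantifying bimanual coordination as relative phase.
# Illustrative only -- traces and parameters are made up, not the study's data.
import numpy as np
from scipy.signal import hilbert

fs = 200.0                          # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)        # 10 s of rhythmic movement
left = np.sin(2 * np.pi * 1.5 * t)                 # left index finger position
right = np.sin(2 * np.pi * 1.5 * t + np.pi / 12)   # right finger, slight lag

# Instantaneous phase of each trace via the analytic signal.
phi_left = np.angle(hilbert(left))
phi_right = np.angle(hilbert(right))

# Circular statistics of the phase difference: the mean gives the coordination
# mode (near 0 or 180 deg); the resultant length indexes its stability.
dphi = np.angle(np.exp(1j * (phi_left - phi_right)))
mean_rel_phase = np.degrees(np.angle(np.mean(np.exp(1j * dphi))))
stability = np.abs(np.mean(np.exp(1j * dphi)))      # 1 = perfectly stable

print(f"relative phase: {mean_rel_phase:.1f} deg, stability: {stability:.2f}")
```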

Author(s): Andrew J. Kolarik, Brian C. J. Moore, Silvia Cirstea, Rajiv Raman, Sarika Gopalakrishnan, ...

Visual spatial information plays an important role in calibrating auditory space. Blindness results in deficits in a number of auditory abilities, which have been explained in terms of the hypothesis that visual information is needed to calibrate audition. When judging the size of a novel room when only auditory cues are available, normally sighted participants may use the location of the farthest sound source to infer the nearest possible distance of the far wall. However, for people with partial visual loss (distinct from blindness in that some vision is present), such a strategy may not be reliable if vision is needed to calibrate auditory cues for distance. In the current study, participants were presented with sounds at different distances (ranging from 1.2 to 13.8 m) in a simulated reverberant (T60 = 700 ms) or anechoic room. Farthest distance judgments and room size judgments (volume and area) were obtained from blindfolded participants (18 normally sighted, 38 partially sighted) for speech, music, and noise stimuli. For normally sighted participants, the judged room volume and farthest sound source distance estimates were positively correlated (p < 0.05) for all conditions. Participants with visual losses showed no significant correlations for any of the conditions tested. A similar pattern of results was observed for the correlations between farthest distance and room floor area estimates. These results demonstrate that partial visual loss disrupts the relationship between judged room size and sound source distance that is shown by sighted participants.
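The key analysis here is a per-condition Pearson correlation between each participant's farthest-distance estimate and room-size estimate. A minimal sketch with fabricated numbers (not the study's data):

```python
# Sketch: correlation between judged farthest source distance and judged room
# volume, as in the sighted-group analysis. Arrays below are fabricated
# placeholders, not the study's data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
farthest_m = rng.uniform(1.2, 13.8, size=18)          # judged farthest distance (m)
volume_m3 = 20 * farthest_m + rng.normal(0, 30, 18)   # judged room volume (m^3)

r, p = pearsonr(farthest_m, volume_m3)
print(f"r = {r:.2f}, p = {p:.3f}")
# The study reports p < 0.05 per condition for sighted listeners, and no
# significant correlation for listeners with partial visual loss.
```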


2004, Vol. 16 (5), pp. 828-838
Author(s): Jörg Lewald, Ingo G. Meister, Jürgen Weidemann, Rudolf Töpper

The processing of auditory spatial information in cortical areas of the human brain outside of the primary auditory cortex remains poorly understood. Here we investigated the role of the superior temporal gyrus (STG) and the occipital cortex (OC) in spatial hearing using repetitive transcranial magnetic stimulation (rTMS). The right STG is known to be of crucial importance for visual spatial awareness, and has been suggested to be involved in auditory spatial perception. We found that rTMS of the right STG induced a systematic error in the perception of interaural time differences (a primary cue for sound localization in the azimuthal plane). This is in accordance with the recent view, based on both neurophysiological data obtained in monkeys and human neuroimaging studies, that information on sound location is processed within a dorsolateral “where” stream including the caudal STG. A similar, but opposite, auditory shift was obtained after rTMS of secondary visual areas of the right OC. Processing of auditory information in the OC has previously been shown to exist only in blind persons. Thus, the latter finding provides the first evidence of an involvement of the visual cortex in spatial hearing in sighted human subjects, and suggests a close interconnection of the neural representation of auditory and visual space. Because rTMS induced systematic shifts in auditory lateralization, but not a general deterioration, we propose that rTMS of STG or OC specifically affected neuronal circuits transforming auditory spatial coordinates in order to maintain alignment with vision.
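For context, the interaural time difference (ITD) cue whose perception rTMS shifted can be computed from a standard spherical-head model. The sketch below uses Woodworth's textbook approximation; the head radius is an assumed population average, and none of this is taken from the paper itself.

```python
# Sketch: the ITD cue for azimuthal sound localization, via Woodworth's
# spherical-head approximation (a standard textbook model, not the paper's).
import numpy as np

HEAD_RADIUS = 0.0875   # m, assumed average adult head radius
SPEED_SOUND = 343.0    # m/s in air

def itd_seconds(azimuth_deg: float) -> float:
    """ITD for a far-field source at a frontal azimuth (0 = straight ahead)."""
    theta = np.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_SOUND) * (theta + np.sin(theta))

for az in (0, 15, 45, 90):
    print(f"azimuth {az:>2} deg -> ITD {itd_seconds(az) * 1e6:.0f} us")
# ITD saturates around ~650 us at 90 deg, so a systematic post-rTMS shift of
# even tens of microseconds corresponds to a clear lateralization error.
```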


Author(s): Marc Ouellet, Antonio Román, Julio Santiago

Recent studies on the conceptualization of abstract concepts suggest that the concept of time is represented along a left-right horizontal axis, such that left-to-right readers represent past on the left and future on the right. Although it has been demonstrated with strong consistency that the localization (left or right) of visual stimuli can modulate temporal judgments, results obtained with auditory stimuli are more puzzling, with both failures and successes at finding the effect in the literature. The present study supports an account based on the relative relevance of visual versus auditory spatial information in the creation of a frame of reference to map time: the auditory location of words interacted with their temporal meaning only when auditory information was made more relevant than visual spatial information by blindfolding participants.


2019, Vol. 94 (Suppl. 1-4), pp. 61-70
Author(s): Susanne Hoffmann, Alexandra Bley, Mariana Matthes, Uwe Firzlaff, Harald Luksch

Echolocating bats evolved a sophisticated biosonar imaging system that allows for life in dim-light habitats. However, especially for far-range operations such as homing, bats can supplement biosonar with vision. Large eyes and a retina that mainly consists of rods are assumed to be the optical adjustments that enable bats to use visual information at low light levels. In addition to optical mechanisms, many nocturnal animals evolved neural adaptations, such as elongated integration times or enlarged spatial sampling areas, to further increase the sensitivity of their visual system by temporal or spatial summation of visual information. The neural mechanisms that underlie the visual capabilities of echolocating bats have, however, not been investigated so far. To shed light on the spatial and temporal response characteristics of visual neurons in an echolocating bat, Phyllostomus discolor, we recorded extracellular multiunit activity in the retino-recipient superficial layers of the superior colliculus (SC). We discovered that response latencies of these neurons were generally in the mammalian range, whereas neural spatial sampling areas were unusually large compared with those measured in the SC of other mammals. From this, we suggest that echolocating bats likely use spatial but not temporal summation of visual input to improve visual performance under dim-light conditions. Furthermore, we hypothesize that bats compensate for the loss of visual spatial precision, a byproduct of spatial summation, by integrating spatial information provided by both the visual and the biosonar systems. Given that knowledge about neural adaptations to dim-light vision is mainly based on studies of non-mammalian species, our novel data provide a valuable contribution to the field and demonstrate the suitability of echolocating bats as a nocturnal animal model for studying the neurophysiological aspects of dim-light vision.
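The sensitivity/precision trade-off invoked here can be made concrete with a toy Poisson model: pooling N receptor inputs raises the signal-to-noise ratio roughly with sqrt(N), while coarsening the spatial grain by the same factor in linear extent. The numbers below are my illustration, not the paper's model.

```python
# Sketch: why spatial summation boosts dim-light sensitivity at the cost of
# spatial precision. Toy Poisson model -- an illustration, not the paper's.
import numpy as np

rng = np.random.default_rng(1)
mean_photons = 2.0   # photons per receptor per integration window (dim light)

for n_pooled in (1, 4, 16, 64):
    # Summing n Poisson inputs gives mean n*lambda and SD sqrt(n*lambda),
    # so SNR = mean/SD = sqrt(n*lambda): it grows with sqrt(n).
    counts = rng.poisson(mean_photons * n_pooled, size=10000)
    snr = counts.mean() / counts.std()
    # Pooling n receptors on a square grid widens the sampling area by
    # ~sqrt(n) in linear extent, coarsening spatial resolution accordingly.
    grain = np.sqrt(n_pooled)
    print(f"pool {n_pooled:>2} receptors: SNR ~ {snr:.1f}, "
          f"spatial grain ~ {grain:.0f}x coarser")
```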


2017, Vol. 117 (2), pp. 624-636
Author(s): Ada Le, Michael Vesia, Xiaogang Yan, J. Douglas Crawford, Matthias Niemeier

Skillful interaction with the world requires that the brain use a multitude of sensorimotor programs and subroutines, such as for reaching, grasping, and the coordination of the two body halves. However, it is unclear how these programs operate together. Networks for reaching, grasping, and bimanual coordination might converge in common brain areas. For example, Brodmann area 7 (BA7) is known to activate in disparate tasks involving the three types of movements separately. Here, we asked whether BA7 plays a key role in integrating coordinated reach-to-grasp movements for both arms together. To test this, we applied transcranial magnetic stimulation (TMS) to disrupt BA7 activity in the left and right hemispheres while human participants performed a bimanual size-perturbation grasping task, using the index and middle fingers of both hands to grasp a rectangular object whose orientation (and thus grasp-relevant width dimension) might or might not change. We found that TMS of the right BA7 during object perturbation disrupted the bimanual grasp and transport/coordination components, and TMS over the left BA7 disrupted unimanual grasps. These results show that right BA7 is causally involved in the integration of reach-to-grasp movements of the two arms.

NEW & NOTEWORTHY Our manuscript describes a role of human Brodmann area 7 (BA7) in the integration of multiple visuomotor programs for reaching, grasping, and bimanual coordination. Our results are the first to suggest that right BA7 is critically involved in the coordination of reach-to-grasp movements of the two arms. The results complement previous reports of right-hemisphere lateralization for bimanual grasps.


2021
Author(s): Margaret M. Henderson, Rosanne L. Rademaker, John T. Serences

Working memory (WM) provides flexible storage of information in service of upcoming behavioral goals. Some models propose specific fixed loci and mechanisms for the storage of visual information in WM, such as sustained spiking in parietal and prefrontal cortex during the maintenance of features. An alternative view is that information can be remembered in a flexible format that best suits current behavioral goals. For example, remembered visual information might be stored in sensory areas for easier comparison to future sensory inputs (i.e. a retrospective code) or might be remapped into a more abstract, output-oriented format and stored in motor areas (i.e. a prospective code). Here, we tested this hypothesis using a visual-spatial working memory task where the required behavioral response was either known or unknown during the memory delay period. Using fMRI and multivariate decoding, we found that there was less information about remembered spatial positions in early visual and parietal regions when the required response was known versus unknown. Further, a representation of the planned motor action emerged in primary somatosensory, primary motor, and premotor cortex on the same trials where spatial information was reduced in early visual cortex. These results suggest that the neural networks supporting WM can be strategically reconfigured depending on the specific behavioral requirements of canonical visual WM paradigms.
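A minimal version of the decoding analysis described above looks roughly like the following: cross-validated classification of a remembered feature from voxel patterns. The simulated data and simple logistic-regression classifier are illustrative stand-ins, not the authors' pipeline.

```python
# Sketch: cross-validated multivariate decoding of a remembered spatial
# feature from voxel patterns. Simulated data -- not the authors' pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_voxels = 200, 100
labels = rng.integers(0, 2, n_trials)   # e.g., remembered left vs. right

# Voxel patterns: a weak label-dependent signal buried in noise.
direction = rng.normal(0, 1, n_voxels)              # fixed signal direction
patterns = 0.4 * np.outer(labels - 0.5, direction)  # +/- signal by label
patterns += rng.normal(0, 1, (n_trials, n_voxels))  # trial noise

clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, patterns, labels, cv=5).mean()
print(f"decoding accuracy: {acc:.2f} (chance = 0.50)")
# Comparing this accuracy between response-known and response-unknown trials,
# region by region, is the kind of contrast the study reports.
```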


Author(s): Kun Wang, Zhongpeng Wang, Peng Zhou, Hongzhi Qi, ...

Stroke is one of the leading causes of motor disability in adults worldwide. Motor imagery is a rehabilitation technique with the potential to treat the motor consequences of stroke. Based on bimanual movement coordination, we designed hand motor imagery experiments. Transcranial magnetic stimulation (TMS) was applied to the left motor cortex to elicit motor-evoked potentials (MEPs) in the first dorsal interosseous (FDI) muscle of the right hand. Ten subjects performed three different motor imagery tasks involving the twisting of a bottle cap. The results showed that contralateral hand imagery evoked the largest MEPs, indicating that the brain's motor area was activated the most. This work may serve as a reference for designing motor imagery therapy protocols for stroke patients.
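MEP size is conventionally measured as the peak-to-peak EMG amplitude in a short window after the TMS pulse. The sketch below shows this on a synthetic FDI trace; the sampling rate, window, and waveform are assumptions for illustration, not taken from the study.

```python
# Sketch: extracting MEP amplitude from FDI surface EMG after a TMS pulse.
# Synthetic trace and window are illustrative assumptions, not the study's.
import numpy as np

fs = 5000.0                                  # EMG sampling rate (Hz), assumed
t = np.arange(-0.05, 0.1, 1 / fs)            # 50 ms pre to 100 ms post pulse
rng = np.random.default_rng(3)
emg = rng.normal(0, 0.02, t.size)            # baseline EMG noise (mV)
emg += np.exp(-((t - 0.025) ** 2) / (2 * 0.004 ** 2)) * np.sin(
    2 * np.pi * 80 * (t - 0.025))            # synthetic MEP ~25 ms post pulse

window = (t >= 0.015) & (t <= 0.05)          # typical FDI MEP latency window
p2p_mv = emg[window].max() - emg[window].min()
print(f"MEP peak-to-peak amplitude: {p2p_mv:.2f} mV")
# Averaging the peak-to-peak amplitude over trials per imagery condition
# yields the comparison reported above (contralateral imagery -> largest MEP).
```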


2021, Vol. 11 (13), pp. 6047
Author(s): Soheil Rezaee, Abolghasem Sadeghi-Niaraki, Maryam Shakeri, Soo-Mi Choi

A lack of required data resources is one of the challenges in adopting Augmented Reality (AR) to provide the right services to users, whereas the amount of spatial information produced by people is increasing daily. This research aims to design a personalized AR-based tourist system that retrieves big data according to users' demographic contexts in order to enrich the AR data source in tourism. The research proceeds in two main steps. First, the type of tourist attraction that interests the user is predicted from the user's demographic contexts, which include age, gender, and education level, using a machine learning method. Second, the appropriate data for the user are extracted from the big data by considering time, distance, popularity, and the neighborhood of the tourist places, using the VIKOR and SWARA decision-making methods. The results show that the decision tree outperforms the SVM method by about 6% in predicting the type of tourist attraction. In addition, a user study of the system shows overall participant satisfaction of about 55% for ease of use and about 56% for the system's usefulness.
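VIKOR itself is a compact, well-defined ranking procedure. The sketch below implements the standard formulation on fabricated place scores; the criteria, weights, and values are illustrative assumptions (the paper derives its weights with SWARA).

```python
# Sketch: ranking candidate tourist places with VIKOR, the multi-criteria
# method named above. Scores and weights are fabricated for illustration.
import numpy as np

# Rows: candidate places; columns: time, distance, popularity, neighborhood.
# All criteria scaled so that larger = better.
X = np.array([
    [0.8, 0.6, 0.9, 0.7],
    [0.5, 0.9, 0.4, 0.8],
    [0.9, 0.4, 0.7, 0.6],
])
w = np.array([0.3, 0.3, 0.25, 0.15])   # criterion weights (assumed)
v = 0.5                                # weight of the "group utility" strategy

f_best, f_worst = X.max(axis=0), X.min(axis=0)
regret = w * (f_best - X) / (f_best - f_worst)   # weighted regret per criterion
S, R = regret.sum(axis=1), regret.max(axis=1)    # group utility / max regret

# Q blends normalized S and R; lower Q = better compromise solution.
Q = (v * (S - S.min()) / (S.max() - S.min())
     + (1 - v) * (R - R.min()) / (R.max() - R.min()))
print("VIKOR ranking (best first):", np.argsort(Q))
```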

