Partial visual loss disrupts the relationship between judged room size and sound source distance

Author(s):  
Andrew J. Kolarik ◽  
Brian C. J. Moore ◽  
Silvia Cirstea ◽  
Rajiv Raman ◽  
Sarika Gopalakrishnan ◽  
...  

Visual spatial information plays an important role in calibrating auditory space. Blindness results in deficits in a number of auditory abilities, which have been explained in terms of the hypothesis that visual information is needed to calibrate audition. When judging the size of a novel room when only auditory cues are available, normally sighted participants may use the location of the farthest sound source to infer the nearest possible distance of the far wall. However, for people with partial visual loss (distinct from blindness in that some vision is present), such a strategy may not be reliable if vision is needed to calibrate auditory cues for distance. In the current study, participants were presented with sounds at different distances (ranging from 1.2 to 13.8 m) in a simulated reverberant (T60 = 700 ms) or anechoic room. Farthest distance judgments and room size judgments (volume and area) were obtained from blindfolded participants (18 normally sighted, 38 partially sighted) for speech, music, and noise stimuli. With sighted participants, the judged room volume and farthest sound source distance estimates were positively correlated (p < 0.05) for all conditions. Participants with visual losses showed no significant correlations for any of the conditions tested. A similar pattern of results was observed for the correlations between farthest distance and room floor area estimates. Results demonstrate that partial visual loss disrupts the relationship between judged room size and sound source distance that is shown by sighted participants.
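As a rough illustration of the group-level analysis described here, the correlation between two per-participant judgments can be computed with a standard Pearson test. The sketch below uses made-up numbers and variable names, not the study's data.

```python
# Illustrative sketch only: hypothetical numbers, not the study's data.
import numpy as np
from scipy.stats import pearsonr

# One value per participant for a single stimulus condition:
# judged farthest source distance (m) and judged room volume (m^3).
farthest_distance = np.array([4.1, 6.3, 9.8, 12.0, 7.5, 10.2])
judged_volume = np.array([55.0, 90.0, 160.0, 210.0, 120.0, 175.0])

r, p = pearsonr(farthest_distance, judged_volume)
print(f"r = {r:.2f}, p = {p:.3f}")  # a positive r with p < 0.05 mirrors the sighted group
```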

2019 ◽  
Vol 94 (Suppl. 1-4) ◽  
pp. 61-70 ◽  
Author(s):  
Susanne Hoffmann ◽  
Alexandra Bley ◽  
Mariana Matthes ◽  
Uwe Firzlaff ◽  
Harald Luksch

Echolocating bats evolved a sophisticated biosonar imaging system that allows them to live in dim-light habitats. However, especially for far-range operations such as homing, bats can supplement biosonar with vision. Large eyes and a retina consisting mainly of rods are assumed to be the optical adaptations that enable bats to use visual information at low light levels. In addition to optical mechanisms, many nocturnal animals have evolved neural adaptations, such as elongated integration times or enlarged spatial sampling areas, that further increase the sensitivity of the visual system through temporal or spatial summation of visual information. The neural mechanisms underlying the visual capabilities of echolocating bats have, however, not yet been investigated. To shed light on the spatial and temporal response characteristics of visual neurons in an echolocating bat, Phyllostomus discolor, we recorded extracellular multiunit activity in the retino-recipient superficial layers of the superior colliculus (SC). We found that the response latencies of these neurons were generally within the mammalian range, whereas their spatial sampling areas were unusually large compared with those measured in the SC of other mammals. From this, we suggest that echolocating bats likely use spatial, but not temporal, summation of visual input to improve visual performance under dim-light conditions. Furthermore, we hypothesize that bats compensate for the loss of visual spatial precision, a byproduct of spatial summation, by integrating spatial information provided by both the visual and biosonar systems. Given that knowledge about neural adaptations to dim-light vision is based mainly on studies of non-mammalian species, our data provide a valuable contribution to the field and demonstrate the suitability of echolocating bats as a nocturnal animal model for studying the neurophysiology of dim-light vision.


2016 ◽  
Author(s):  
Janina Brandes ◽  
Farhad Rezvani ◽  
Tobias Heed

Visual spatial information is paramount in guiding bimanual coordination, but anatomical factors, too, modulate performance in bimanual tasks. Vision conveys not only abstract spatial information, but also informs about body-related aspects such as posture. Here, we asked whether visual information accordingly induces body-related, or merely abstract, perceptual-spatial constraints in bimanual movement guidance. Human participants made rhythmic, symmetrical and parallel, bimanual index finger movements with the hands held in the same or different orientations. Performance was more accurate for symmetrical than for parallel movements in all postures, and additionally when homologous muscles were concurrently active, such as when parallel movements were performed with differently rather than identically oriented hands. Thus, both perceptual and anatomical constraints were evident. We manipulated visual feedback with a mirror between the hands, replacing the image of the left hand with that of the right hand and creating the visual impression of bimanual symmetry independent of the right hand's true movement. Symmetrical mirror feedback impaired parallel, but improved symmetrical, bimanual performance compared with a regular view of the hands. Critically, these modulations were independent of hand posture and muscle homology. Thus, vision appears to contribute exclusively to spatial, but not to body-related, anatomical movement coding in the guidance of bimanual coordination.


2021 ◽  
Author(s):  
Margaret M. Henderson ◽  
Rosanne L. Rademaker ◽  
John T. Serences

Working memory (WM) provides flexible storage of information in service of upcoming behavioral goals. Some models propose specific fixed loci and mechanisms for the storage of visual information in WM, such as sustained spiking in parietal and prefrontal cortex during the maintenance of features. An alternative view is that information can be remembered in a flexible format that best suits current behavioral goals. For example, remembered visual information might be stored in sensory areas for easier comparison to future sensory inputs (i.e. a retrospective code) or might be remapped into a more abstract, output-oriented format and stored in motor areas (i.e. a prospective code). Here, we tested this hypothesis using a visual-spatial working memory task where the required behavioral response was either known or unknown during the memory delay period. Using fMRI and multivariate decoding, we found that there was less information about remembered spatial positions in early visual and parietal regions when the required response was known versus unknown. Further, a representation of the planned motor action emerged in primary somatosensory, primary motor, and premotor cortex on the same trials where spatial information was reduced in early visual cortex. These results suggest that the neural networks supporting WM can be strategically reconfigured depending on the specific behavioral requirements of canonical visual WM paradigms.
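As a rough illustration of the multivariate decoding approach described here, the following sketch trains a cross-validated linear classifier on trial-by-voxel activity patterns. The data shapes and labels are hypothetical stand-ins, not the authors' pipeline.

```python
# Illustrative sketch of cross-validated multivariate decoding;
# the data here are random stand-ins, not the study's fMRI patterns.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 500
X = rng.normal(size=(n_trials, n_voxels))  # one row of voxel activity per trial
y = rng.integers(0, 8, size=n_trials)      # remembered spatial position (8 bins)

# Mean accuracy above chance (1/8 here) would indicate that the region's
# activity patterns carry information about the remembered position.
scores = cross_val_score(LinearSVC(dual=False), X, y, cv=5)
print(f"mean decoding accuracy: {scores.mean():.3f}")
```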


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Loes Ottink ◽  
Marit Hoogendonk ◽  
Christian F. Doeller ◽  
Thea M. Van der Geest ◽  
Richard J. A. Van Wezel

In this study, we compared cognitive map formation for small-scale models of city-like environments presented in the visual or tactile/haptic modality. Previous research often addresses only a limited number of aspects of cognitive maps. We wanted to combine several of these aspects to obtain a more complete view. Therefore, we assessed different types of spatial information and considered egocentric as well as allocentric perspectives. Furthermore, we compared haptic map learning with visual map learning. In total, 18 sighted participants (9 in a haptic condition, 9 in a visuo-haptic condition) learned three tactile maps of city-like environments. The maps differed in complexity and had five marked locations associated with unique items. After learning each map, participants estimated distances between item pairs, rebuilt the map, recalled locations, and navigated two routes. All participants performed well overall on the spatial tasks. Interestingly, only on the complex maps did participants perform worse in the haptic condition than in the visuo-haptic condition, suggesting no distinct advantage of vision on the simple map. These results support the idea of modality-independent representations of space. Although the picture is less clear for the more complex maps, our findings indicate that participants using haptic information alone, or a combination of haptic and visual information, both form a quite accurate cognitive map of a simple tactile city-like map.


2021 ◽  
Vol 11 (6) ◽  
pp. 796
Author(s):  
Micaela Maria Zucchelli ◽  
Laura Piccardi ◽  
Raffaella Nori

Individuals with agoraphobia exhibit impaired exploratory activity when navigating unfamiliar environments. However, no studies have investigated the contribution of visuospatial working memory (VSWM) to these individuals' ability to acquire and process spatial information, considering the use of egocentric and allocentric coordinates or environments with and without people. A total of 106 individuals (53 with agoraphobia and 53 controls) navigated a virtual square to acquire spatial information, including the recognition of landmarks, the relationship of landmarks to themselves (egocentric coordinates), and the relationships among landmarks independent of themselves (allocentric coordinates). Half of the participants in each group navigated a square without people, and half navigated a crowded square. They completed a VSWM test in addition to tasks measuring landmark recognition and egocentric and allocentric judgements concerning the explored square. The results showed that individuals with agoraphobia had reduced working memory only when active processing of spatial elements was required, suggesting that they exhibit spatial difficulties particularly in complex spatial tasks that require processing information simultaneously. Specifically, VSWM deficits mediated the relationship between agoraphobia and performance on the allocentric judgements. The results are discussed in light of the theoretical background of agoraphobia, in order to provide useful elements for the early diagnosis of this disorder.
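For readers unfamiliar with the mediation analysis invoked here, the sketch below shows a common bootstrap test of the indirect effect (group → VSWM → allocentric performance). All numbers and variable names are hypothetical; this is not the authors' analysis code.

```python
# Bootstrap mediation sketch with simulated, hypothetical data:
# does VSWM mediate the effect of group (agoraphobia vs. control)
# on allocentric judgement performance?
import numpy as np

rng = np.random.default_rng(1)
n = 106
group = np.repeat([0.0, 1.0], n // 2)        # 0 = control, 1 = agoraphobia
vswm = -0.8 * group + rng.normal(size=n)     # mediator: VSWM score
alloc = 0.6 * vswm - 0.2 * group + rng.normal(size=n)  # outcome

def indirect_effect(g, m, y):
    a = np.polyfit(g, m, 1)[0]                   # path a: group -> mediator
    X = np.column_stack([np.ones_like(g), m, g])
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]  # path b: mediator -> outcome, given group
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)              # resample participants with replacement
    boot.append(indirect_effect(group[idx], vswm[idx], alloc[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.2f}, {hi:.2f}]")  # CI excluding 0 -> mediation
```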


2015 ◽  
Vol 16 (2) ◽  
pp. 255-262 ◽  
Author(s):  
Shigeyuki Kuwada ◽  
Duck O. Kim ◽  
Kelly-Jo Koch ◽  
Kristina S. Abrams ◽  
Fabio Idrobo ◽  
...  

2001 ◽  
Vol 31 (5) ◽  
pp. 915-922 ◽  
Author(s):  
S. KÉRI ◽  
O. KELEMEN ◽  
G. BENEDEK ◽  
Z. JANKA

Background. The aim of this study was to assess visual information processing and cognitive functions in unaffected siblings of patients with schizophrenia or bipolar disorder, and in control subjects with a negative family history. Methods. The siblings of patients with schizophrenia (N = 25) and bipolar disorder (N = 20) and the control subjects (N = 20) were matched for age, education, IQ, and psychosocial functioning, as indexed by the Global Assessment of Functioning scale. Visual information processing was measured using two visual backward masking (VBM) tests (target location and target identification). The evaluation of higher cognitive functions included spatial and verbal working memory, the Wisconsin Card Sorting Test, letter fluency, and short/long delay verbal recall and recognition. Results. The relatives of schizophrenia patients were impaired in the VBM procedure, most markedly at short interstimulus intervals (14, 28, 42 ms) and in the target location task. Marked dysfunctions were also found in the spatial working memory task and in the long delay verbal recall test. In contrast, the siblings of patients with bipolar disorder exhibited spared performance, with the exception of a deficit in the long delay recall task. Conclusions. Dysfunctions of sensory-perceptual analysis (VBM) and of working memory for spatial information distinguished the siblings of schizophrenia patients from the siblings of individuals with bipolar disorder. A verbal recall deficit was present in both groups, suggesting a common impairment of the fronto-hippocampal system.


2021 ◽  
Vol 33 (3) ◽  
pp. 506-511
Author(s):  
Sheikh Mohd Saleem ◽  
Chaitnya Aggarwal ◽  
Om Prakash Bera ◽  
Radhika Rana ◽  
Gurmandeep Singh ◽  
...  

"Geographic information system (GIS) collects various kinds of data based on the geographic relationship across space." Data in GIS is stored to visualize, analyze, and interpret geographic data to learn about an area, an ongoing project, site planning, business, health economics and health-related surveys and information. GIS has evolved from ancient disease maps to 3D digital maps and continues to grow even today. The visual-spatial mapping of the data has given us an insight into different diseases ranging from diarrhea, pneumonia to non-communicable diseases like diabetes mellitus, hypertension, cardiovascular diseases, or risk factors like obesity, being overweight, etc. All in a while, this information has highlighted health-related issues and knowledge about these in a contemporary manner worldwide. Researchers, scientists, and administrators use GIS for research project planning, execution, and disease management. Cases of diseases in a specific area or region, the number of hospitals, roads, waterways, and health catchment areas are examples of spatially referenced data that can be captured and easily presented using GIS. Currently, we are facing an epidemic of non-communicable diseases, and a powerful tool like GIS can be used efficiently in such a situation. GIS can provide a powerful and robust framework for effectively monitoring and identifying the leading cause behind such diseases.  GIS, which provides a spatial viewpoint regarding the disease spectrum, pattern, and distribution, is of particular importance in this area and helps better understand disease transmission dynamics and spatial determinants. The use of GIS in public health will be a practical approach for surveillance, monitoring, planning, optimization, and service delivery of health resources to the people at large. The GIS platform can link environmental and spatial information with the disease itself, which makes it an asset in disease control progression all over the globe.


2021 ◽  
Vol 12 ◽  
Author(s):  
Anne Giersch ◽  
Thomas Huard ◽  
Sohee Park ◽  
Cherise Rosen

The experience of oneself in the world is based on sensory afferences, enabling us to reach a first-person perspective on our environment and to differentiate ourselves from the world. Visual hallucinations may arise from a difficulty in differentiating one's own mental imagery from externally-induced perceptions. To specify the relationship between hallucinations and disorders of the self, we need to understand the mechanisms of hallucinations. However, visual hallucinations are often underreported by individuals with psychosis, who sometimes appear to have difficulty describing them. We developed the “Strasbourg Visual Scale (SVS),” a novel computerized tool that allows us to explore and capture the subjective experience of visual hallucinations while circumventing the difficulties associated with verbal descriptions. The scale reconstructs the hallucinated image of the participant by presenting distinct physical properties of visual information step by step, to help them communicate their internal experience. The strategy underlying the SVS is to present a sequence of images to participants, whose choice at each step provides feedback toward re-creating the internal image they hold. The SVS displays simple images on a computer screen that provide choices for the participants. Each step focuses on one physical property of an image, and the successive choices made by the participants help them progressively build an image close to their hallucination, similar to the tools commonly used to generate facial composites. The SVS was constructed based on our knowledge of the visual pathways leading to an integrated perception of our environment. We discuss the rationale for the successive steps of the scale, and the extent to which it could complement existing scales.
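To make the step-by-step logic concrete, here is a deliberately simplified sketch of an SVS-style reconstruction loop; the properties and answer options are invented placeholders, not the actual items of the scale.

```python
# Simplified sketch of a step-by-step, forced-choice image reconstruction;
# the properties and options below are invented, not the scale's real items.
PROPERTIES = {
    "shape":   ["dot", "blob", "human figure", "geometric pattern"],
    "color":   ["black/white", "single color", "multicolored"],
    "contour": ["sharp", "blurry"],
    "motion":  ["static", "drifting", "flickering"],
}

def run_scale(choose):
    """Step through one physical property at a time; `choose` returns the picked option."""
    image = {}
    for prop, options in PROPERTIES.items():
        image[prop] = choose(prop, options)  # participant's forced choice at this step
    return image                             # accumulated description of the hallucination

# Example run with a scripted "participant":
answers = {"shape": "blob", "color": "single color",
           "contour": "blurry", "motion": "flickering"}
print(run_scale(lambda prop, options: answers[prop]))
```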

