Auditory Spatial Information and Head-Coupled Display Systems

1988 ◽  
Vol 32 (2) ◽  
pp. 75-75
Author(s):  
Thomas Z. Strybel

Development of head-coupled control/display systems has focused primarily on the display of three-dimensional visual information, as the visual system is the optimal sensory channel for the acquisition of spatial information in humans. The auditory system improves the efficiency of vision, however, by obtaining spatial information about relevant objects outside of the visual field of view. This auditory information can be used to direct head and eye movements. Head-coupled display systems can also benefit from the addition of auditory spatial information, as it provides a natural method of signaling the location of important events outside of the visual field of view. This symposium will report on current efforts in the development of head-coupled display systems, with an emphasis on the auditory spatial component. The first paper, “Virtual Interface Environment Workstations” by Scott S. Fisher, will report on the development of a prototype virtual environment. This environment consists of a head-mounted, wide-angle, stereoscopic display system which is controlled by operator position, voice, and gesture. With this interface, an operator can virtually explore a 360-degree synthesized environment and viscerally interact with its components. The second paper, “A Virtual Display System For Conveying Three-Dimensional Acoustic Information” by Elizabeth M. Wenzel, Frederic L. Wightman, and Scott H. Foster, will report on the development of a method of synthetically generating three-dimensional sound cues for the above-mentioned interface. The development of simulated auditory spatial cues is limited, to some extent, by our knowledge of auditory spatial processing. The remaining papers will report on two areas of auditory space perception that have received little attention until recently. “Perception of Real and Simulated Motion in the Auditory Modality,” by Thomas Z. Strybel, will review recent research on auditory motion perception, because a natural acoustic environment must contain moving sounds. This review will consider applications of this knowledge to head-coupled display systems. The last paper, “Auditory Psychomotor Coordination,” will examine the interplay between the auditory, visual, and motor systems. The specific emphasis of this paper is the use of auditory spatial information in the regulation of motor responses so as to make efficient use of the visual channel.
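The second paper concerns synthetic generation of three-dimensional sound cues. As a rough illustration of the two simplest binaural cues such a system must reproduce (interaural time and level differences), here is a minimal Python sketch. The spherical-head constants and the 6 dB maximum level difference are illustrative assumptions; systems of the kind described in that line of work use full head-related transfer functions (HRTFs) rather than this two-cue approximation.

```python
import numpy as np

def spatialize_itd_ild(mono, fs, azimuth_deg, head_radius=0.0875, c=343.0):
    """Crude binaural rendering of a mono signal using only interaural
    time and level differences (ITD/ILD). Illustrative only; full HRTF
    synthesis captures spectral elevation and front/back cues as well."""
    az = np.radians(azimuth_deg)
    # Woodworth's approximation of ITD for a spherical head.
    itd = (head_radius / c) * (az + np.sin(az))      # seconds
    delay = int(round(abs(itd) * fs))                # samples
    # Simple broadband ILD: attenuate the far ear as the source moves lateral.
    far_gain = 10 ** (-6.0 * abs(np.sin(az)) / 20)   # up to ~6 dB, assumed
    near = mono
    far = np.concatenate([np.zeros(delay), mono])[: len(mono)] * far_gain
    left, right = (near, far) if azimuth_deg < 0 else (far, near)
    return np.stack([left, right], axis=1)           # (samples, 2)

fs = 44100
t = np.arange(fs) / fs
tone = 0.5 * np.sin(2 * np.pi * 440 * t)               # 1 s, 440 Hz
stereo = spatialize_itd_ild(tone, fs, azimuth_deg=60)  # source 60° to the right
```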

2019 ◽  
Vol 32 (2) ◽  
pp. 87-109 ◽  
Author(s):  
Galit Buchs ◽  
Benedetta Heimler ◽  
Amir Amedi

Visual-to-auditory Sensory Substitution Devices (SSDs) are a family of non-invasive devices for visual rehabilitation that aim to convey whole-scene visual information through the intact auditory modality. Although proven effective in lab environments, the use of SSDs has yet to be systematically tested in real-life situations. To start filling this gap, in the present work we tested the ability of expert SSD users to filter out irrelevant background noise while focusing on the relevant audio information. Specifically, nine blind expert users of the EyeMusic visual-to-auditory SSD performed a series of identification tasks (shape, color, and the conjunction of the two features). Their performance was compared across two conditions: a silent baseline, and irrelevant background sounds drawn from real-life situations, using the same stimuli in a pseudo-random balanced design. Although the participants described the background noise as disturbing, no significant performance differences emerged between the two conditions (noisy vs. silent) for any of the tasks. In the conjunction task (shape and color) we found a non-significant trend toward a disruptive effect of the background noise on performance. These findings suggest that visual-to-auditory SSDs can indeed be used successfully in noisy environments and that users can still focus on relevant auditory information while inhibiting irrelevant sounds. Our findings take a step towards the actual use of SSDs in real-life situations while potentially impacting rehabilitation of sensory-deprived individuals.
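The EyeMusic itself maps image rows to musical pitches and colors to instrument timbres; as a hedged, grayscale-only illustration of the general visual-to-auditory principle (a left-to-right column scan in which pixel height maps to pitch and brightness to loudness), consider the Python sketch below. All parameter values are illustrative assumptions, not the device's actual mapping.

```python
import numpy as np

def image_to_audio_sweep(img, fs=22050, col_dur=0.05, f_lo=200.0, f_hi=3200.0):
    """Minimal visual-to-auditory mapping in the spirit of SSDs like EyeMusic:
    scan columns left to right; each bright pixel contributes a tone whose
    frequency rises with pixel height and whose amplitude follows brightness.
    (The real EyeMusic uses musical notes and per-color instrument timbres;
    this sketch is grayscale-only and purely illustrative.)"""
    n_rows, n_cols = img.shape
    # Geometric pitch spacing; row 0 (top of image) gets the highest frequency.
    freqs = f_lo * (f_hi / f_lo) ** (np.arange(n_rows)[::-1] / (n_rows - 1))
    t = np.arange(int(fs * col_dur)) / fs
    out = []
    for col in img.T:                       # one short tone mixture per column
        tones = col[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)
        out.append(tones.sum(axis=0))
    audio = np.concatenate(out)
    peak = np.max(np.abs(audio))
    return audio / peak if peak > 0 else audio

# A 16x16 anti-diagonal line renders as a rising pitch sweep.
img = np.fliplr(np.eye(16))
audio = image_to_audio_sweep(img)
```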


1992 ◽  
Vol 9 (4) ◽  
pp. 343-352
Author(s):  
Geert J.P. Savelsbergh ◽  
J. Bernard Netelenbos

Spatial information for the execution of motor behavior is acquired by orienting eye and head movements. This information can be found in our direct field of view as well as outside this field. Auditory information is especially helpful in directing our attention to information outside our initial visual field of view. Two topics concerning the effects of auditory loss are discussed. First, experimental evidence is provided which shows that deaf children have problems in orienting to visual stimuli situated outside their field of view; an overview is given of several studies in which the eye and head movements of deaf children were analyzed. Second, it is suggested that specific visual localization problems are partly responsible for deaf children’s characteristic lag in motor development. The latter is illustrated in two studies involving the gross motor task of ball catching.


2021 ◽  
Author(s):  
Edward H Silson ◽  
Iris Isabelle Anna Groen ◽  
Chris I Baker

Human visual cortex is organised broadly according to two major principles: retinotopy (the spatial mapping of the retina in cortex) and category-selectivity (preferential responses to specific categories of stimuli). Historically, these principles were considered anatomically separate, with retinotopy restricted to the occipital cortex and category-selectivity emerging in lateral-occipital and ventral-temporal cortex. Contrary to this assumption, recent studies show that category-selective regions exhibit systematic retinotopic biases. It is unclear, however, whether responses within these regions are more strongly driven by retinotopic location or by category preference, and if there are systematic differences between category-selective regions in the relative strengths of these preferences. Here, we directly compare spatial and category preferences by measuring fMRI responses to scene and face stimuli presented in the left or right visual field and computing two bias indices: a spatial bias (response to the contralateral minus ipsilateral visual field) and a category bias (response to the preferred minus non-preferred category). We compare these biases within and between scene- and face-selective regions across the lateral and ventral surfaces of visual cortex. We find an interaction between surface and bias: lateral regions show a stronger spatial than category bias, whilst ventral regions show the opposite. These effects are robust across and within subjects, and reflect large-scale, smoothly varying gradients across both surfaces. Together, these findings support distinct functional roles for lateral and ventral category-selective regions in visual information processing in terms of the relative importance of spatial information.
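Both bias indices are simple contrasts of mean responses, so they can be stated in a few lines. Below is a minimal Python sketch for one hypothetical region of interest; the dictionary keys and the toy numbers are illustrative assumptions, not the paper's data.

```python
import numpy as np

def bias_indices(resp):
    """Compute the two bias indices described in the abstract from mean
    responses (e.g., beta estimates) of one region of interest. Assumes a
    scene-selective ROI, so the preferred category is 'scene'; 'contra' and
    'ipsi' denote the visual field relative to the ROI's hemisphere."""
    spatial_bias = (
        np.mean([resp[("contra", "scene")], resp[("contra", "face")]])
        - np.mean([resp[("ipsi", "scene")], resp[("ipsi", "face")]])
    )
    category_bias = (
        np.mean([resp[("contra", "scene")], resp[("ipsi", "scene")]])
        - np.mean([resp[("contra", "face")], resp[("ipsi", "face")]])
    )
    return spatial_bias, category_bias

# Toy numbers: a lateral scene region with a stronger spatial than category bias.
resp = {("contra", "scene"): 1.8, ("contra", "face"): 1.2,
        ("ipsi", "scene"): 0.9, ("ipsi", "face"): 0.5}
print(bias_indices(resp))  # -> (0.8, 0.5): spatial bias exceeds category bias
```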


2014 ◽  
Vol 26 (12) ◽  
pp. 2827-2839 ◽  
Author(s):  
Maria J. S. Guerreiro ◽  
Joaquin A. Anguera ◽  
Jyoti Mishra ◽  
Pascal W. M. Van Gerven ◽  
Adam Gazzaley

Selective attention involves top–down modulation of sensory cortical areas, such that responses to relevant information are enhanced whereas responses to irrelevant information are suppressed. Suppression of irrelevant information, unlike enhancement of relevant information, has been shown to be deficient in aging. Although these attentional mechanisms have been well characterized within the visual modality, little is known about these mechanisms when attention is selectively allocated across sensory modalities. The present EEG study addressed this issue by testing younger and older participants in three different tasks: Participants attended to the visual modality and ignored the auditory modality, attended to the auditory modality and ignored the visual modality, or passively perceived information presented through either modality. We found overall modulation of visual and auditory processing during cross-modal selective attention in both age groups. Top–down modulation of visual processing was observed as a trend toward enhancement of visual information in the setting of auditory distraction, but no significant suppression of visual distraction when auditory information was relevant. Top–down modulation of auditory processing, on the other hand, was observed as suppression of auditory distraction when visual stimuli were relevant, but no significant enhancement of auditory information in the setting of visual distraction. In addition, greater visual enhancement was associated with better recognition of relevant visual information, and greater auditory distractor suppression was associated with a better ability to ignore auditory distraction. There were no age differences in these effects, suggesting that when relevant and irrelevant information are presented through different sensory modalities, selective attention remains intact in older age.
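A minimal sketch of how enhancement and suppression can be indexed against a passive-perception baseline, in the spirit of the design described above; the function and the toy amplitudes are illustrative assumptions, not the paper's actual measures.

```python
def modulation_indices(attended, passive, ignored):
    """Index top-down modulation of a sensory response (e.g., an ERP
    amplitude) against a passive-perception baseline: enhancement is the
    gain for relevant input, suppression the reduction for irrelevant
    input. Illustrative only; the study's exact measures may differ."""
    enhancement = attended - passive   # > 0: relevant input is boosted
    suppression = passive - ignored    # > 0: irrelevant input is dampened
    return enhancement, suppression

# Toy auditory amplitudes (microvolts): suppressed when vision is relevant,
# but not significantly enhanced when audition is relevant.
print(modulation_indices(attended=2.1, passive=2.0, ignored=1.4))  # (0.1, 0.6)
```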


eLife ◽  
2020 ◽  
Vol 9 ◽  
Author(s):  
Mai M Morimoto ◽  
Aljoscha Nern ◽  
Arthur Zhao ◽  
Edward M Rogers ◽  
Allan M Wong ◽  
...  

Visual systems can exploit spatial correlations in the visual scene by using retinotopy, the organizing principle by which neighboring cells encode neighboring spatial locations. However, retinotopy is often lost, such as when visual pathways are integrated with other sensory modalities. How is spatial information processed outside of strictly visual brain areas? Here, we focused on visual looming-responsive LC6 cells in Drosophila, a population whose dendrites collectively cover the visual field, but whose axons form a single glomerulus—a structure without obvious retinotopic organization—in the central brain. We identified multiple cell types downstream of LC6 in the glomerulus and found that they more strongly respond to looming in different portions of the visual field, unexpectedly preserving spatial information. Through EM reconstruction of all LC6 synaptic inputs to the glomerulus, we found that LC6 and downstream cell types form circuits within the glomerulus that enable spatial readout of visual features and contralateral suppression—mechanisms that transform visual information for behavioral control.


1975 ◽  
Vol 27 (2) ◽  
pp. 161-164 ◽  
Author(s):  
Graham Hitch ◽  
John Morton

The superiority of auditory over visual presentation in short-term serial recall may be due to the fact that typically only temporal cues to order have been provided in the two modalities. Auditory information is usually ordered along a temporal continuum, whereas visual information is ordered spatially as well. It is therefore possible that recall following visual presentation may benefit from spatial cues to order. Subjects were tested for serial recall of letter sequences presented visually either with or without explicit spatial cues to order. No effect of any kind was found, a result which suggests (a) that spatial information is not utilized when it is redundant with temporal information and (b) that the auditory-visual difference would not be modified by the presence of explicit spatial cues to order.


MRS Advances ◽  
2018 ◽  
Vol 3 (39) ◽  
pp. 2341-2346 ◽  
Author(s):  
Scott Annett ◽  
Sergio Morelhao ◽  
Darren Dale ◽  
Stefan Kycia

Three-dimensional X-ray diffraction (3DXRD) microscopy is a powerful technique that simultaneously provides crystallographic and spatial information on a large number of crystalline grains (of the order of thousands) in a sample. A key component of every 3DXRD microscopy experiment is the near field detector, which provides high-resolution spatial information about the grains. In this work we present a novel design for a semi-transparent, 16-megapixel near field detector. As opposed to a typical single-scintillator phosphor detector, this design, which we call the Quad Near Field Detector, uses four quadrants. It has a total field of view of 5.3 mm × 5.3 mm with an effective pixel size of 1.3 µm × 1.3 µm. The detector’s relatively large field of view can be used to obtain higher-order diffraction spots, which we anticipate will lead to improved spatial resolution in grain reconstructions. The large field of view also enables the detector to be positioned further from the sample, increasing the working distance and permitting larger environmental cells for in-situ studies. Many alignment parameters can be resolved by careful mechanical design; for this reason a novel translation stage for focusing the microscopes was developed, tested, and implemented. The near field detector was calibrated and characterized at the Cornell High Energy Synchrotron Source. The operational feasibility of such a multi-plate detector, demonstrated in this work, paves the way for new technologies in the instrumentation of 3DXRD microscopy.
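A quick arithmetic check shows the quoted geometry is internally consistent (the per-quadrant split below is our inference, not stated in the abstract):

```latex
% Consistency check of the stated detector geometry.
\[
\frac{5.3\ \mathrm{mm}}{1.3\ \mu\mathrm{m}} \approx 4077 \ \text{pixels per side},
\qquad
4077^{2} \approx 1.66 \times 10^{7} \ \text{pixels} \approx 16.6\ \text{MP}.
\]
```

This matches the stated 16-megapixel total, presumably as four quadrants of roughly 2000 × 2000 pixels each.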


Sensors ◽  
2020 ◽  
Vol 20 (9) ◽  
pp. 2567
Author(s):  
Dong-hoon Kwak ◽  
Seung-ho Lee

Modern image processing techniques use three-dimensional (3D) images, which contain spatial information such as depth and scale in addition to visual information. These images are indispensable in virtual reality, augmented reality (AR), and autonomous driving applications. In this paper, we propose a novel method for estimating monocular depth that combines a cycle generative adversarial network (GAN) with segmentation. The method uses three processes: segmentation and depth estimation, adversarial loss calculation, and cycle consistency loss calculation. The cycle consistency loss measures how closely each image, after being translated by the two adversarially trained generators, can be restored to its original form. To evaluate the objective reliability of the proposed method, we compared it with other monocular depth estimation (MDE) methods on the NYU Depth Dataset V2. Our results show that the proposed method achieves better benchmark scores than the other methods, demonstrating that it is more effective at depth estimation.
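As a hedged sketch of the cycle-consistency term in CycleGAN-style training, the Python snippet below uses two toy generators (an RGB-to-depth mapping G and a depth-to-RGB mapping F); these stand-ins and the loss weight are illustrative assumptions, not the authors' networks.

```python
import torch
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_consistency_loss(G, F, rgb, depth, lam=10.0):
    """rgb -> G -> F should reproduce rgb; depth -> F -> G should reproduce
    depth. The sum of both reconstruction errors, scaled by lam, is the
    cycle-consistency loss; adversarial losses would be added during training."""
    rgb_cycle = F(G(rgb))       # RGB -> depth -> RGB
    depth_cycle = G(F(depth))   # depth -> RGB -> depth
    return lam * (l1(rgb_cycle, rgb) + l1(depth_cycle, depth))

# Toy stand-ins so the sketch runs end to end: 1x1 convs with matching channels.
G = nn.Conv2d(3, 1, kernel_size=1)   # "RGB -> depth" generator (hypothetical)
F = nn.Conv2d(1, 3, kernel_size=1)   # "depth -> RGB" generator (hypothetical)
rgb = torch.randn(2, 3, 64, 64)
depth = torch.randn(2, 1, 64, 64)
loss = cycle_consistency_loss(G, F, rgb, depth)
```

The weight lam = 10 follows common CycleGAN practice rather than anything stated in the abstract.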

