Sharing spatial information in a virtual environment: How do visual cues and configuration influence spatial coding and mental workload?

2020 ◽  
Vol 24 (4) ◽  
pp. 695-712
Author(s):  
Isabelle Milleville-Pennel ◽  
Franck Mars ◽  
Lauriane Pouliquen-Lardy
2017 ◽  
Author(s):  
M. Murugan ◽  
M. Park ◽  
J. Taliaferro ◽  
H.J. Jang ◽  
J. Cox ◽  
...  

Social interactions are crucial to the survival and well-being of all mammals, including humans. Although the prelimbic cortex (PL, part of medial prefrontal cortex) has been implicated in social behavior, it is not clear which neurons are relevant, nor how they contribute. We found that the PL contains anatomically and molecularly distinct subpopulations of neurons that target three downstream regions that have been implicated in social behavior: the nucleus accumbens (NAc), the amygdala, and the ventral tegmental area. Activation of NAc-projecting PL neurons (PL-NAc), but not the other subpopulations, decreased preference for a social target, suggesting a unique contribution of this population to social behavior. To determine what information PL-NAc neurons convey, we recorded selectively from them and found that individual neurons were active during social investigation, but only in specific spatial locations. Spatially specific inhibition of these neurons prevented the formation of a social-spatial association at the inhibited location. In contrast, spatially nonspecific inhibition did not affect social behavior. Thus, the unexpected combination of social and spatial information within the PL-NAc population appears to support socially motivated behavior by enabling the formation of social-spatial associations.


Author(s):  
Elizabeth Thorpe Davis ◽  
Larry F. Hodges

Two fundamental purposes of human spatial perception, in either a real or virtual 3D environment, are to determine where objects are located in the environment and to distinguish one object from another. Although various sensory inputs, such as haptic and auditory inputs, can provide this spatial information, vision usually provides the most accurate, salient, and useful information (Welch and Warren, 1986). Moreover, of the visual cues available to humans, stereopsis provides an enhanced perception of depth and of three-dimensionality for a visual scene (Yeh and Silverstein, 1992). (Stereopsis or stereoscopic vision results from the fusion of the two slightly different views of the external world that our laterally displaced eyes receive (Schor, 1987; Tyler, 1983).) In fact, users often prefer using 3D stereoscopic displays (Spain and Holzhausen, 1991) and find that such displays provide more fun and excitement than do simpler monoscopic displays (Wichanski, 1991). Thus, in creating 3D virtual environments or 3D simulated displays, much attention has recently been devoted to visual 3D stereoscopic displays. Yet, given the costs and technical requirements of such displays, we should consider several issues. First, we should consider in what conditions and situations these stereoscopic displays enhance perception and performance. Second, we should consider how binocular geometry and various spatial factors can affect human stereoscopic vision and, thus, constrain the design and use of stereoscopic displays. Finally, we should consider the modeling geometry of the software, the display geometry of the hardware, and some technological limitations that constrain the design and use of stereoscopic displays by humans. In the following section we consider when 3D stereoscopic displays are useful and why they are useful in some conditions but not others. In the section after that we review some basic concepts about human stereopsis and fusion that are of interest to those who design or use 3D stereoscopic displays. Also in that section we point out some spatial factors that limit stereopsis and fusion in human vision, as well as some potential problems that should be considered in designing and using 3D stereoscopic displays. Following that we discuss some software and hardware issues, such as modeling geometry and display geometry, as well as geometric distortions and other artifacts that can affect human perception.
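To make the binocular-geometry constraints concrete, the sketch below works through the standard screen-parallax and vergence-disparity relations for a desktop stereoscopic display. The eye separation, viewing distance, and the roughly ±1° disparity comfort threshold are illustrative assumptions, not values taken from this chapter.

```python
import math

def screen_parallax(object_dist_m, screen_dist_m, eye_sep_m=0.065):
    """Horizontal screen parallax (m) for a point rendered at object_dist_m
    from the viewer on a display at screen_dist_m.
    Positive = uncrossed (behind the screen), negative = crossed (in front)."""
    return eye_sep_m * (object_dist_m - screen_dist_m) / object_dist_m

def angular_disparity_deg(object_dist_m, screen_dist_m, eye_sep_m=0.065):
    """Vergence difference between the object and the screen plane, in degrees.
    Small-angle approximation: eta ~ e * (1/D_screen - 1/D_object)."""
    eta_rad = eye_sep_m * (1.0 / screen_dist_m - 1.0 / object_dist_m)
    return math.degrees(eta_rad)

if __name__ == "__main__":
    D = 0.6  # assumed desktop viewing distance in metres (illustrative)
    for Z in (0.3, 0.6, 1.2, 10.0):
        p = screen_parallax(Z, D)
        eta = angular_disparity_deg(Z, D)
        # Disparities beyond roughly +/-1 degree are often hard to fuse
        # comfortably; this threshold is a rule of thumb, not from the text.
        flag = "likely fusible" if abs(eta) <= 1.0 else "may exceed comfortable fusion"
        print(f"object at {Z:4.1f} m: parallax {p*100:+.2f} cm, "
              f"disparity {eta:+.2f} deg -> {flag}")
```

Running the sketch shows how quickly rendered depths far from the screen plane drive parallax toward the full interocular separation, which is one reason display geometry and viewing distance constrain stereoscopic design.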


2019 ◽  
Vol 25 (Suppl. 1-2) ◽  
pp. 60-71 ◽  
Author(s):  
Nikolaus E. Wolter ◽  
Karen A. Gordon ◽  
Jennifer L. Campos ◽  
Luis D. Vilchez Madrigal ◽  
David D. Pothier ◽  
...  

Introduction: To determine the impact of a head-referenced cochlear implant (CI) stimulation system, BalanCI, on balance and postural control in children with bilateral cochleovestibular loss (BCVL) who use bilateral CI. Methods: Prospective, blinded case-control study. Balance and postural control testing occurred in two settings: (1) a quiet clinical setting and (2) an immersive, realistic virtual environment (Challenging Environment Assessment Laboratory [CEAL], Toronto Rehabilitation Institute). Postural control was assessed in 16 children and balance in 10 children with BCVL who use bilateral CI, along with 10 typically developing children. Children with neuromotor, cognitive, or visual deficits that would prevent them from performing the tests were excluded. Children wore the BalanCI, a head-mounted device that couples with their CIs through the audio port and provides head-referenced spatial information delivered via the intracochlear electrode array. Postural control was measured by center of pressure (COP) and time to fall, using the Wii™ (Nintendo, WA, USA) Balance Board for the feet and the BalanCI for the head, during the administration of the Modified Clinical Test of Sensory Interaction in Balance (CTSIB-M). The COP of the head and feet was assessed for change by deviation, measured as root mean square around the COP (COP-RMS), rate of deviation (COP-RMS/duration), and rate of path-length change from center (COP-velocity). Balance was assessed by the Bruininks-Oseretsky Test of Motor Proficiency 2, balance subtest (BOT-2), specifically the BOT-2 score as well as time to fall/fault. Results: In the virtual environment, children demonstrated more stable balance when using BalanCI, as measured by an improvement in BOT-2 scores. In the quiet clinical setting, the use of BalanCI led to improved postural control, as demonstrated by significant reductions in COP-RMS and COP-velocity. With the use of BalanCI, the number of falls/faults was significantly reduced and time to fall increased. Conclusions: BalanCI is a simple and effective means of improving postural control and balance in children with BCVL who use bilateral CI. BalanCI could potentially improve the safety of these children, reduce the effort they expend maintaining balance, and allow them to take part in more complex balance tasks where sensory information may be limited and/or noisy.
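For readers unfamiliar with the sway measures named above, the sketch below computes COP-RMS, its rate (COP-RMS/duration), and COP velocity from a generic sampled center-of-pressure trace. It is a minimal illustration assuming a 2D COP signal at a known sampling rate; the study's own filtering and preprocessing steps are not specified here.

```python
import numpy as np

def cop_measures(cop_xy, fs_hz):
    """Generic sway measures from a centre-of-pressure trace.

    cop_xy : (N, 2) array of COP coordinates (e.g., metres), one row per sample.
    fs_hz  : sampling rate of the force platform or head sensor.
    """
    cop_xy = np.asarray(cop_xy, dtype=float)
    duration_s = len(cop_xy) / fs_hz
    centred = cop_xy - cop_xy.mean(axis=0)            # deviation about the mean COP
    radial = np.linalg.norm(centred, axis=1)          # radial distance per sample
    cop_rms = np.sqrt(np.mean(radial ** 2))           # COP-RMS
    path_len = np.sum(np.linalg.norm(np.diff(cop_xy, axis=0), axis=1))
    return {
        "cop_rms": cop_rms,
        "cop_rms_rate": cop_rms / duration_s,         # rate of deviation
        "cop_velocity": path_len / duration_s,        # rate of path-length change
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic 30 s trace at 100 Hz, purely for illustration.
    fake_trace = np.cumsum(rng.normal(scale=1e-3, size=(3000, 2)), axis=0)
    print(cop_measures(fake_trace, fs_hz=100))
```

Lower values of COP-RMS and COP velocity on such a trace correspond to the "improved postural control" reported in the abstract.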


2018 ◽  
Vol 84 (3) ◽  
pp. 330-343 ◽  
Author(s):  
Konstantinos Papadopoulos ◽  
Marialena Barouti ◽  
Eleni Koustriava

To examine how individuals with visual impairments understand space and the way they develop cognitive maps, we studied the differences in cognitive maps resulting from different methods and tools for spatial coding in large geographical spaces. We examined the ability of 21 blind individuals to create cognitive maps of routes in unfamiliar areas using (a) audiotactile maps, (b) tactile maps, and (c) direct experience of movement along the routes. We also compared participants’ cognitive maps created with the use of audiotactile maps, tactile maps, and independent movement along the routes with regard to their precision (i.e., the correctness or incorrectness of spatial information location) and inclusiveness (i.e., the amount of spatial information included correctly in the cognitive map). The results of the experimental trials demonstrated that becoming familiar with an area is easier for blind individuals when they use a tactile aid, such as an audiotactile map, as compared with walking along the route.
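The two map measures could, as a purely hypothetical illustration, be operationalized on landmark placements roughly as sketched below; the coordinate inputs, the distance tolerance, and the scoring choices are assumptions, not the authors' actual procedure.

```python
import math

def score_cognitive_map(reproduced, actual, tolerance_m=10.0):
    """Illustrative scoring of a reproduced cognitive map against ground truth.

    reproduced : dict of landmark name -> (x, y) as placed by the participant.
    actual     : dict of landmark name -> (x, y) true coordinates on the route.
    tolerance_m: assumed distance within which a placement counts as correct.
    """
    placed = {name: xy for name, xy in reproduced.items() if name in actual}
    errors = {name: math.dist(xy, actual[name]) for name, xy in placed.items()}
    correct = [name for name, err in errors.items() if err <= tolerance_m]
    inclusiveness = len(correct) / len(actual)        # share of items correctly included
    mean_error = (sum(errors[name] for name in correct) / len(correct)
                  if correct else float("nan"))       # positional error as a precision proxy
    return {"inclusiveness": inclusiveness, "mean_error_m": mean_error}
```

Higher inclusiveness and lower placement error would correspond, under these assumptions, to the more complete and more precise cognitive maps reported for the audiotactile condition.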


1999 ◽  
Vol 8 (6) ◽  
pp. 671-685 ◽  
Author(s):  
Jui Lin Chen ◽  
Kay M. Stanney

This paper proposes a theoretical model of wayfinding that can be used to guide the design of navigational aiding in virtual environments. Based on an evaluation of wayfinding studies in natural environments, this model divides the wayfinding process into three main subprocesses: cognitive mapping, wayfinding plan development, and physical movement or navigation through an environment. While this general subdivision has been proposed before, the current model further delineates the wayfinding process, including the distinct influences of spatial information, spatial orientation, and spatial knowledge. The influences of experience, abilities, search strategies, motivation, and environmental layout on the wayfinding process are also considered. With this specification of the wayfinding process, a taxonomy of navigational tools is then proposed that can be used to systematically aid the specified wayfinding subprocesses. If such tools are effectively applied to the design of a virtual environment, they should lead to reduced disorientation and enhanced wayfinding in large-scale virtual spaces. It is also suggested that, in some cases, this enhanced wayfinding performance may come at the expense of acquiring an accurate cognitive map of the virtual environment being traversed.


2006 ◽  
Vol 84 (6) ◽  
pp. 871-876 ◽  
Author(s):  
Annalisa Paglianti ◽  
Giuseppe Messana ◽  
Alessandro Cianfanelli ◽  
Roberto Berti

Spatial knowledge of the surrounding environment is extremely important for animals to locate and efficiently exploit available resources (e.g., food, shelters, mates). Fishes usually acquire spatial information about their home range through vision, but vision fails in the dark and other sensory pathways have to be exploited. Fishes possess a remarkable olfactory system and have evolved a refined ability of chemical detection and recognition. Nevertheless, while the role of chemical cues in spatial orientation is well known in long-distance salmonid migrations, it has never been investigated in orientation within local, familiar areas. Here we report the first evidence that fish swimming can be topographically polarized by self-odour perception. When an unfamiliar area was experimentally scented with fish self-odour, the cave cyprinid Phreatichthys andruzzii Vinciguerra, 1924 behaved as if the area had previously been explored. The fish preferred an odour-free area to a self-odour-scented one, and when offered the choice between a familiar and an unfamiliar area, they preferred the unexplored environment. Avoidance of self-odour-scented areas would allow effective exploration of the subterranean environment, minimizing the risk of repeatedly exploring the same water volumes. Our results are the first clear evidence that fish can use their own odour to orient their locomotor activity when visual cues are not available. This highlights the possible role of chemical information in fish orientation.


1996 ◽  
Vol 5 (3) ◽  
pp. 330-345 ◽  
Author(s):  
Edward J. Rinalducci

This paper provides an overview of the literature on the visual system, placing special emphasis on those visual characteristics regarded as necessary to produce adequate visual fidelity in virtual environments. These visual cues apply to the creation of various virtual environments including those involving flying, driving, sailing, or walking. A variety of cues are examined, in particular, motion, color, stereopsis, pictorial and secondary cues, physiological cues, texture, vertical development, luminance, field-of-view, and spatial resolution. Conclusions and recommendations for research are also presented.


2020 ◽  
Author(s):  
Mary Ann Go ◽  
Jake Rogers ◽  
Giuseppe P. Gava ◽  
Catherine Davey ◽  
Seigfred Prado ◽  
...  

The hippocampal place cell system in rodents has provided a major paradigm for the scientific investigation of memory function and dysfunction. Place cells have been observed in area CA1 of the hippocampus both in freely moving animals and in head-fixed animals navigating in virtual reality environments. However, spatial coding in virtual reality preparations has been observed to be impaired. Here we show that the use of a real-world environment system for head-fixed mice, consisting of a track floating on air, provides some advantages over virtual reality systems for the study of spatial memory. We imaged the hippocampus of head-fixed mice injected with the genetically encoded calcium indicator GCaMP6s while they navigated circularly constrained or open environments on the floating platform. We observed consistent place tuning in a substantial fraction of cells, with place fields remapping when animals entered a different environment. When animals re-entered the same environment, place fields typically remapped over a time period of multiple days, faster than in freely moving preparations but comparable with virtual reality. Spatial information rates were within the range observed in freely moving mice. Manifold analysis indicated that spatial information could be extracted from a low-dimensional subspace of the neural population dynamics. This is the first demonstration of place cells in head-fixed mice navigating on an air-lifted real-world platform, validating its use for the study of brain circuits involved in memory and affected by neurodegenerative disorders.
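The spatial information rate mentioned above is commonly quantified with a Skaggs-style formula, summing p_i * r_i * log2(r_i / mean rate) over spatial bins. The sketch below implements that common definition from an occupancy map and a rate map; it is an assumption about the metric, not a reproduction of the paper's exact analysis pipeline.

```python
import numpy as np

def spatial_information(rate_map, occupancy):
    """Spatial information of a place cell under the commonly used
    Skaggs-style definition (the paper's exact pipeline is not given here).

    rate_map  : mean firing (or calcium event) rate per spatial bin.
    occupancy : time spent per spatial bin (any units; normalised below).

    Returns (bits per second, bits per spike/event).
    """
    rate_map = np.asarray(rate_map, dtype=float).ravel()
    occupancy = np.asarray(occupancy, dtype=float).ravel()
    p = occupancy / occupancy.sum()            # occupancy probability per bin
    mean_rate = np.sum(p * rate_map)           # overall mean rate
    valid = (rate_map > 0) & (p > 0)           # treat 0 * log(0) as 0
    bits_per_sec = np.sum(p[valid] * rate_map[valid]
                          * np.log2(rate_map[valid] / mean_rate))
    return bits_per_sec, bits_per_sec / mean_rate

if __name__ == "__main__":
    # Toy 1D example: a cell firing mostly in a narrow region of the track.
    rates = np.array([0.1, 0.2, 4.0, 6.0, 3.0, 0.2, 0.1, 0.1])
    occ = np.ones_like(rates)                  # uniform occupancy, for illustration
    print(spatial_information(rates, occ))
```

A spatially tuned cell concentrates its rate in a few bins and therefore yields high bits-per-event values, which is the sense in which the abstract compares information rates across preparations.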


Author(s):  
Lorin Timaeus ◽  
Laura Geid ◽  
Gizem Sancer ◽  
Mathias F. Wernet ◽  
Thomas Hummel

One hallmark of the visual system is the strict retinotopic organization from the periphery towards the central brain, spanning multiple layers of synaptic integration. Recent Drosophila studies on the computation of distinct visual features have shown that retinotopic representation is often lost beyond the optic lobes, due to convergence of columnar neuron types onto optic glomeruli. Nevertheless, functional imaging revealed a spatially accurate representation of visual cues in the central complex (CX), raising the question of how this is implemented at the circuit level. By characterizing the afferents to a specific visual glomerulus, the anterior optic tubercle (AOTU), we discovered a spatial segregation of topographic versus non-topographic projections from molecularly distinct classes of medulla projection neurons (medullo-tubercular, or MeTu neurons). Distinct classes of topographic versus non-topographic MeTus form parallel channels, terminating in separate AOTU domains. Both types then synapse onto separate, matching topographic fields of tubercular-bulbar (TuBu) neurons, which relay visual information towards the dendritic fields of central complex ring neurons in the bulb neuropil, where distinct bulb sectors correspond to distinct ring domains in the ellipsoid body. Hence, peripheral topography is maintained through stereotypic circuitry within each TuBu class, providing the structural basis for the spatial representation of visual information in the central complex. Together with previous data showing rough topography of lobula projections to a different AOTU subunit, our results further highlight the AOTU's role as a prominent relay station for spatial information from the retina to the central brain.

