Facilitation of learning spatial relations among locations in the absence of coincident visual cues

2009 ◽  
Vol 13 (2) ◽  
pp. 341-349 ◽  
Author(s):  
Bradley R. Sturz ◽  
Debbie M. Kelly ◽  
Michael F. Brown

2020 ◽  
pp. 174702182097897
Author(s):  
Yu Karen Du ◽  
Weimin Mou ◽  
Xuehui Lei

This study investigated the extent to which humans can encode spatial relations between different surfaces (i.e., floor, walls, and ceiling) in a three-dimensional (3D) space and extend their headings on the floor to other surfaces when locomoting to walls (pitch 90°) and the ceiling (pitch 180°). In immersive virtual reality environments, participants first learned a layout of objects on the ground. They then navigated to testing planes: the south (or north) wall facing up, or the ceiling via the walls facing north (or south). Participants locomoted to the walls with pitch rotations indicated by both visual and idiothetic cues (Experiment 1) or by visual cues only (Experiment 2), and to the ceiling with visual pitch rotations only (Experiment 3). Using their memory of the objects’ locations, they either reproduced the object layout on the testing plane or performed a Judgements of Relative Direction (JRD) task (“imagine standing at object A, facing B, point to C”) with imagined headings of south and north on the ground. Participants who locomoted onto the wall with idiothetic cues performed better in the JRD task for an imagined heading extended from their physical heading (e.g., an imagined heading of north at the north wall). In addition, participants who reproduced the layout of objects on the ceiling from a perspective extended from the ground also showed the sensorimotor alignment effect predicted by an extended heading. These results indicate that humans encode spatial relations between different surfaces and extend headings via pitch rotations three-dimensionally, especially with idiothetic cues.
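For readers unfamiliar with the JRD task, the correct response reduces to a small geometry computation. The following is a minimal Python sketch, not from the study: the coordinates, object names, and the 2D simplification are illustrative assumptions only.

```python
import math

def jrd_pointing_angle(a, b, c):
    """Judgement of Relative Direction: imagine standing at A facing B;
    return the egocentric pointing angle to C in degrees
    (0 = straight ahead, positive = to the right)."""
    # Allocentric bearing from A to B defines the imagined heading.
    heading = math.atan2(b[0] - a[0], b[1] - a[1])
    # Allocentric bearing from A to the target C.
    target = math.atan2(c[0] - a[0], c[1] - a[1])
    # The pointing response is the signed difference, wrapped to [-180, 180).
    diff = math.degrees(target - heading)
    return (diff + 180.0) % 360.0 - 180.0

# Hypothetical layout: stand at the lamp, face the chair, point to the ball.
lamp, chair, ball = (0.0, 0.0), (0.0, 2.0), (1.5, 1.5)
print(jrd_pointing_angle(lamp, chair, ball))  # 45.0 (ahead and to the right)
```

The sensorimotor alignment effect described above would then appear as smaller pointing errors when the imagined heading in this computation matches (or extends) the participant's physical heading.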


Author(s):  
G. M. Cohen ◽  
J. S. Grasso ◽  
M. L. Domeier ◽  
P. T. Mangonon

Any explanation of vestibular micromechanics must include the roles of the otolithic and cupular membranes. However, micromechanical models of vestibular function have been hampered by unresolved questions about the microarchitectures of these membranes and their connections to stereocilia and supporting cells. Otolithic membranes are notoriously difficult to preserve because of severe shrinkage and loss of soluble components. We have empirically developed fixation procedures that reduce shrinkage artifacts and more accurately depict the spatial relations between the otolithic membranes and the ciliary bundles and supporting cells.

We used White Leghorn chicks, ranging in age from newly hatched to one week. The inner ears were fixed for 3-24 h in 1.5-1.75% glutaraldehyde in 150 mM KCl, buffered with potassium phosphate, pH 7.3; postfixation, when performed, was for 30 min in 1% OsO4, either alone or mixed with 1% K4Fe(CN)6. The otolithic organs (saccule, utricle, lagenar macula) were embedded in Araldite 502. Semithin sections (1 μm) were stained with toluidine blue.


2014 ◽  
Vol 23 (3) ◽  
pp. 132-139 ◽  
Author(s):  
Lauren Zubow ◽  
Richard Hurtig

Children with Rett Syndrome (RS) are reported to use multiple modalities to communicate, although their intentionality is often questioned (Bartolotta, Zipp, Simpkins, & Glazewski, 2011; Hetzroni & Rubin, 2006; Sigafoos et al., 2000; Sigafoos, Woodyatt, Tucker, Roberts-Pennell, & Pittendreigh, 2000). This paper presents the results of a study analyzing the unconventional vocalizations of a child with RS. The primary research question addresses the ability of familiar and unfamiliar listeners to interpret unconventional vocalizations as “yes” or “no” responses. This paper also addresses the acoustic analysis and perceptual judgments of these vocalizations. Pre-recorded isolated vocalizations of “yes” and “no” were presented to 5 listeners (mother, father, 1 unfamiliar clinician, and 2 familiar clinicians), and the listeners were asked to rate each vocalization as either “yes” or “no.” The ratings were compared to the original identifications made by the child's mother during the face-to-face interaction from which the samples were drawn. Findings suggest that, in this case, the child's vocalizations were intentional and could be interpreted by familiar and unfamiliar listeners as either “yes” or “no” without contextual or visual cues. The results suggest that communication partners should be trained to attend to eye gaze and vocalizations to ensure the child's intended choice is accurately understood.
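Comparing listener ratings against the mother's original identifications is, computationally, a per-listener agreement measure. Below is a minimal Python sketch of two standard agreement statistics for binary yes/no labels; the data values are invented placeholders, not the study's data, and the study's actual analysis may differ.

```python
def percent_agreement(reference, ratings):
    """Proportion of trials where the listener matched the reference labels."""
    return sum(r == s for r, s in zip(reference, ratings)) / len(reference)

def cohens_kappa(reference, ratings):
    """Chance-corrected agreement (Cohen's kappa) for two binary raters."""
    n = len(reference)
    po = percent_agreement(reference, ratings)   # observed agreement
    p_yes_ref = reference.count("yes") / n       # marginal "yes" rates
    p_yes_rat = ratings.count("yes") / n
    pe = p_yes_ref * p_yes_rat + (1 - p_yes_ref) * (1 - p_yes_rat)
    return (po - pe) / (1 - pe) if pe < 1 else 1.0

# Invented example: mother's original labels vs. one listener's ratings.
mother   = ["yes", "no", "yes", "yes", "no", "no", "yes", "no"]
listener = ["yes", "no", "yes", "no",  "no", "no", "yes", "yes"]
print(percent_agreement(mother, listener))  # 0.75
print(cohens_kappa(mother, listener))       # 0.5
```

Kappa corrects raw agreement for the matches two raters would produce by chance alone, which matters here because a listener guessing “yes” on every trial could still score well on raw agreement.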

