Four-gimbal systems for simulation display

SIMULATION ◽  
1969 ◽  
Vol 12 (3) ◽  
pp. 115-120 ◽  
Author(s):  
John W. Wilson

The utility of a four-gimbal system for positioning two reference frames with respect to each other without gimbal lock has long been recognized. As early as 1954 such a system to isolate an inertial platform was proposed by Arnold and Schlesinger [1]. However, the exact law for driving the fourth angle has been a question. The situation had not improved through 1962 where, in Reference 2, "gimbal flip" (an instantaneous rotation of 180 degrees in two axes, similar to the three-axis gimbal behavior at gimbal lock) appears as an inherent part of the fourth angle's driving law. Clearly this behavior is not desirable for platform isolation or for visual display systems used for vehicle motion cues in simulation. This paper describes developments made at Langley (NASA) in the last several years. A driving law for the fourth angle is developed for two different four-gimbal systems. The first is similar to the gimbal systems described in References 1 and 2 and has advantages in design and implementation for platform isolation. The second is specifically designed to minimize occlusion in visual display systems [3]. In each system, the fourth-angle driving law is a direct consequence of maximizing the angle between the two axes causing the singularity. Thus, a necessary differential constraint is found to maintain a nonsingular solution. A sufficiency condition for a nonsingular solution is also found. The second four-gimbal system, which minimizes occlusion, has the interesting result that the maximum angle between the singularity-forming axes is not always 90 degrees.
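The gimbal-lock geometry that both driving laws are built around can be illustrated numerically. The sketch below is not the paper's driving law, only the underlying singularity condition for a plain three-gimbal set (all function names are hypothetical): the angle between the outer yaw axis and the inner roll axis collapses to zero as pitch approaches 90 degrees, and a fourth-angle driving law of the kind described above works by keeping this separation as large as possible.

```python
import numpy as np

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def Ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def axis_separation_deg(yaw, pitch):
    """Angle between the outer (yaw, z) axis and the inner (roll, x)
    axis of a yaw-pitch-roll gimbal set, folded into [0, 90] degrees.
    Zero separation means the axes are parallel: gimbal lock."""
    inner_x = Rz(yaw) @ Ry(pitch) @ np.array([1.0, 0.0, 0.0])
    outer_z = np.array([0.0, 0.0, 1.0])
    cos_ang = np.clip(abs(inner_x @ outer_z), 0.0, 1.0)
    return np.degrees(np.arccos(cos_ang))

print(axis_separation_deg(0.3, 0.0))        # 90 deg: axes orthogonal
print(axis_separation_deg(0.3, np.pi / 2))  # 0 deg: gimbal lock
```

A redundant fourth gimbal gives the extra degree of freedom needed to hold this separation away from zero for any commanded attitude.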

1991 ◽  
Vol 12 (2) ◽  
pp. 123-128
Author(s):  
Harley R Myler ◽  
Richard D Gilson

1998 ◽  
Vol 41 (1) ◽  
pp. 73-82 ◽  
Author(s):  
Dik J. Hermes

It has been shown that visual display systems of intonation can be employed beneficially in teaching intonation to persons with deafness and in teaching the intonation of a foreign language. In current training situations the correctness of a reproduced pitch contour is rated either by the teacher or automatically. In the latter case, an algorithm typically estimates the maximum deviation from an example contour. In game-like exercises, for instance, the pupil has to produce a pitch contour within the displayed floor and ceiling of a "tunnel" with a preadjusted height. In an experiment described in the companion paper, phoneticians had rated the dissimilarity of two pitch contours both auditorily, by listening to two resynthesized utterances, and visually, by looking at two pitch contours displayed on a computer screen. A test is reported in which these dissimilarity ratings were compared with automatic ratings obtained with this tunnel measure and with three other measures: the mean distance, the root-mean-square (RMS) distance, and the correlation coefficient. The most frequently used tunnel measure appeared to have the weakest correlation with the ratings by the phoneticians. In general, the automatic ratings obtained with the correlation coefficient showed the strongest correlation with the perceptual ratings. A disadvantage of this measure, however, may be that it normalizes for the range of the pitch contours. If range is important, as in intonation teaching to persons with deafness, the mean distance or the RMS distance is the best physical measure for automatic training of intonation.
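The four automatic measures compared in the test can be sketched directly. This assumes two pitch contours sampled at the same time points and expressed in semitones; the function names and the tunnel height are illustrative, not taken from the paper:

```python
import numpy as np

def tunnel_pass(test, example, height=3.0):
    """Tunnel measure: the contour passes if it never deviates more
    than half the tunnel height (semitones) from the example."""
    return np.max(np.abs(test - example)) <= height / 2

def mean_distance(test, example):
    return np.mean(np.abs(test - example))

def rms_distance(test, example):
    return np.sqrt(np.mean((test - example) ** 2))

def contour_correlation(test, example):
    """Correlation coefficient; note that it normalizes away overall
    pitch range, which the distance measures do not."""
    return np.corrcoef(test, example)[0, 1]

example = np.array([0.0, 2.0, 4.0, 2.0, 0.0])  # a simple rise-fall
flattened = 0.5 * example                       # same shape, half the range
print(contour_correlation(flattened, example))  # 1.0: shape identical
print(rms_distance(flattened, example))         # nonzero: range differs
```

The last two lines illustrate the abstract's caveat: a contour with the right shape but a compressed range scores perfectly under the correlation coefficient while the distance measures still penalize it.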


2021 ◽  
Vol 79 (1) ◽  
pp. 95-116
Author(s):  
Cosimo Tuena ◽  
Valentina Mancuso ◽  
Chiara Stramba-Badiale ◽  
Elisa Pedroli ◽  
Marco Stramba-Badiale ◽  
...  

Background: Spatial navigation is the ability to estimate one’s position on the basis of environmental and self-motion cues. Spatial memory is the cognitive substrate underlying navigation and relies on two different reference frames: egocentric and allocentric. These spatial frames are prone to decline with aging, and impairment is even more pronounced in Alzheimer’s disease (AD) or in mild cognitive impairment (MCI). Objective: To conduct a systematic review of experimental studies investigating which MCI populations and tasks are used to evaluate spatial memory, and how allocentric and egocentric spatial memory are impaired in MCI during navigation. Methods: PRISMA and PICO guidelines were applied to carry out the systematic search. The Downs and Black checklist was used to assess methodological quality. Results: Our results showed that amnestic MCI and AD pathology are the most investigated typologies; both egocentric and allocentric memory are impaired in MCI individuals, and MCI due to AD biomarkers shows specific encoding and retrieval impairments; secondly, spatial navigation is principally investigated with the hidden goal task (virtual and real-world versions), and among studies involving virtual reality, the privileged setting consists of non-immersive technology; thirdly, despite subtle differences, real-world and virtual versions showed good overlap for the assessment of MCI spatial memory. Conclusion: Considering that MCI is a subclinical entity with potential risk of conversion to dementia, investigating spatial memory deficits with navigation tasks might be crucial for accurate diagnosis and rehabilitation.


2014 ◽  
Vol 369 (1635) ◽  
pp. 20130369 ◽  
Author(s):  
James J. Knierim ◽  
Joshua P. Neunuebel ◽  
Sachin S. Deshmukh

The hippocampus receives its major cortical input from the medial entorhinal cortex (MEC) and the lateral entorhinal cortex (LEC). It is commonly believed that the MEC provides spatial input to the hippocampus, whereas the LEC provides non-spatial input. We review new data which suggest that this simple dichotomy between ‘where’ versus ‘what’ needs revision. We propose a refinement of this model, which is more complex than the simple spatial–non-spatial dichotomy. MEC is proposed to be involved in path integration computations based on a global frame of reference, primarily using internally generated, self-motion cues and external input about environmental boundaries and scenes; it provides the hippocampus with a coordinate system that underlies the spatial context of an experience. LEC is proposed to process information about individual items and locations based on a local frame of reference, primarily using external sensory input; it provides the hippocampus with information about the content of an experience.
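The path-integration computation attributed to MEC can be illustrated with a toy dead-reckoning update. This is a sketch of the general idea only (the 2D simplification and all names are assumptions, not the authors' model): position is updated from internally generated self-motion cues alone, with no external landmark input.

```python
import numpy as np

def path_integrate(start, heading, steps):
    """Dead reckoning from self-motion cues alone: each step is a
    (turn, distance) pair read from idiothetic input (e.g. vestibular
    and proprioceptive signals). Angles are in radians."""
    pos = np.asarray(start, dtype=float).copy()
    for turn, dist in steps:
        heading += turn
        pos += dist * np.array([np.cos(heading), np.sin(heading)])
    return pos

# Walking a unit square returns the estimate to the start point, up to
# accumulated numerical (or, in an animal, sensory) error.
square = [(0.0, 1.0), (np.pi / 2, 1.0), (np.pi / 2, 1.0), (np.pi / 2, 1.0)]
print(path_integrate([0.0, 0.0], 0.0, square))
```

Because each update adds to the last estimate, small per-step errors accumulate, which is why the review emphasizes a complementary role for external input about boundaries and scenes in anchoring the coordinate system.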


1986 ◽  
Vol 30 (3) ◽  
pp. 292-296
Author(s):  
Loy A. Anderson

Results from two experiments employing a location-cueing paradigm demonstrated that the features of a visual stimulus do not appear to be used for stimulus identification at a time prior to the localization of the stimulus by an attentional system. However, the experiments also revealed that a stimulus is processed (at least to some extent) prior to the arrival of attention at the stimulus. The results support the hypothesis that a visual stimulus must be located by an attentional system before results of initial processing of the stimulus can be used in identification. Implications for the design of visual display systems in which it is important for the user to identify stimuli both quickly and accurately are discussed.


Author(s):  
Eliab Z. Opiyo

Flat screen displays such as CRT displays, liquid crystal displays and plasma displays are predominantly used for visualization of product models in computer aided design (CAD) processes. However, future platforms for product model visualization are expected to include 3D displays as well. It can be expected that different types of display systems, each offering different visualization capabilities, will complement the traditional flat-screen visual display units. Among the 3D display systems with the biggest potential for product model visualization are holographic volumetric displays. One of the most appealing characteristic features of these displays is that they generate images with spatial representation that appear to pop out of the flat screen. This allows multiple viewers to see 3D images or scenes from different perspectives. One of the main shortcomings of these displays, however, is that they lack suitable interfaces for interactive visualization. The work reported in this paper focused on this problem and is part of a larger research effort whose aim is to develop suitable interfaces for interactive viewing of holographic virtual models. Emphasis in this work was specifically on the exploration of possible interaction styles and the creation of a suitable interaction framework. The proposed framework consists of three interface methods: an intermediary graphical user interface (IGUI), designed to be used via a flat screen display with standard input devices; a gestural/hand-motion interface; and a haptic interface. Preliminary tests have shown that the IGUI helps viewers to rotate, scale and navigate virtual models in 3D scenes quickly and conveniently. On the other hand, these tests have shown that tasks such as selecting or moving virtual models in 3D scenes are not sufficiently supported by the IGUI, and that complementary interfaces may enable viewers to interact with models more effectively and intuitively.


1989 ◽  
Vol 33 (2) ◽  
pp. 86-90 ◽  
Author(s):  
Loran A. Haworth ◽  
Nancy Bucher ◽  
David Runnings

Simulation scientists continually pursue improved flight simulation technology with the goal of closely replicating the “real world” physical environment. The presentation/display of visual information for flight simulation is one such area enjoying recent technical improvements that are fundamental for conducting simulated operations close to the terrain. Detailed and appropriate visual information is especially critical for Nap-Of-the-Earth (NOE) helicopter flight simulation, where the pilot maintains an “eyes-out” orientation to avoid obstructions and terrain. This paper elaborates on the visually-coupled Wide Field Of View Helmet Mounted Display (WFOVHMD) system technology as a viable visual display system for helicopter simulation. In addition, the paper discusses research conducted on the NASA-Ames Vertical Motion Simulator that examined one critical research issue for helmet-mounted displays.


2013 ◽  
Vol 109 (10) ◽  
pp. 2632-2644 ◽  
Author(s):  
Ian S. Howard ◽  
Daniel M. Wolpert ◽  
David W. Franklin

Several studies have shown that sensory contextual cues can reduce the interference observed during learning of opposing force fields. However, because each study examined a small set of cues, often in a unique paradigm, the relative efficacy of different sensory contextual cues is unclear. In the present study we quantify how seven contextual cues, some investigated previously and some novel, affect the formation and recall of motor memories. Subjects made movements in a velocity-dependent curl field, with direction varying randomly from trial to trial but always associated with a unique contextual cue. Linking field direction to the cursor or background color, or to peripheral visual motion cues, did not reduce interference. In contrast, the orientation of a visual object attached to the hand cursor significantly reduced interference, albeit by a small amount. When the fields were associated with movement in different locations in the workspace, a substantial reduction in interference was observed. We tested whether this reduction in interference was due to the different locations of the visual feedback (targets and cursor) or the movements (proprioceptive). When the fields were associated only with changes in visual display location (movements always made centrally) or only with changes in the movement location (visual feedback always displayed centrally), a substantial reduction in interference was observed. These results show that although some visual cues can lead to the formation and recall of distinct representations in motor memory, changes in spatial visual and proprioceptive states of the movement are far more effective than changes in simple visual contextual cues.
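The velocity-dependent curl field used in these experiments has a standard form: the force is proportional to hand speed and rotated 90 degrees from the hand's velocity, with the sign of the rotation giving the two opposing fields. A minimal sketch follows; the gain value is illustrative, and the paper's actual parameters are not reproduced here.

```python
import numpy as np

def curl_field_force(velocity, gain=13.0, direction=+1):
    """Force (N) of a velocity-dependent curl field: the 2D hand
    velocity (m/s) is rotated 90 degrees and scaled, so the force is
    always orthogonal to the movement. direction = +1 or -1 selects
    one of the two opposing fields."""
    rot = direction * np.array([[0.0, 1.0], [-1.0, 0.0]])
    return gain * rot @ np.asarray(velocity, dtype=float)

v = np.array([0.1, 0.2])        # hand velocity in m/s
f = curl_field_force(v)
print(f, np.dot(f, v))          # force is orthogonal to velocity
```

Because the two field directions produce exactly opposite forces for the same movement, learning them without a disambiguating contextual cue causes the interference that the study measures.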


1975 ◽  
Author(s):  
A. A. Gordon ◽  
D. L. Patton ◽  
N. F. Richards
