Divers Augmented Vision Display (DAVD)

Author(s): Richard J. Manley, Dennis G. Gallagher, William W. Hughes, Allie M. Pilcher

Military diving operations are routinely conducted in what can be one of the most inhospitable environments on the planet, frequently characterized by zero visibility. The inability to clearly see the immediate operational environment has historically been a serious limitation to manned diving operations, whether the mission is ship husbandry, underwater construction, salvage, or scientific research. U.S. Navy diving is an integral part of the nation’s defense strategy, with a continuing requirement to conduct manned intervention in the water column. To ensure technical superiority across the entire spectrum of diving operations, we must identify, exploit, and develop technology to advance the state of the art in diving equipment. This can only be achieved by investing in, and supporting, focused research and development with specific goals to further diving capabilities. Under a project sponsored by the Office of Naval Research (ONR) and Naval Sea Systems Command (NAVSEA), the Naval Surface Warfare Center-Panama City Division (NSWC PCD) has developed a prototype see-through head-up display system for a U.S. Navy diving helmet: the Divers Augmented Vision Display (DAVD). The DAVD system uses waveguide optical display modules that couple images from a microdisplay into a waveguide optic and translate them through a series of internal reflections; the light finally exits toward the diver’s eye, providing a magnified, see-through virtual image at a specific distance in front of the diver. The virtual images can present critical information and sensor data, including sonar images, ship husbandry and underwater construction schematics, enhanced navigation displays, augmented reality, and text messages. NSWC PCD is the U.S. Navy’s leading laboratory for research, development, testing, evaluation, and technology transition of diver visual display systems, with unique facilities for rapid prototyping and manufacturing, human systems integration, and extreme-environment testing. Along with NSWC PCD, the Navy Experimental Diving Unit (NEDU) and the Naval Diving and Salvage Training Center (NDSTC) are co-located tenant commands at Naval Support Activity Panama City (NSA PC). This paper provides a brief background on the development of diver head-up display systems and waveguide optical display technology, describes the development of the DAVD prototype, and presents results of diver evaluations and recommendations for accelerated development of this game-changing capability.
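The waveguide behavior described above can be made concrete with a little ray optics. The following is a minimal sketch, with purely illustrative refractive indices and dimensions (nothing here comes from the DAVD design), of the total-internal-reflection condition that lets a planar guide carry an image through repeated bounces toward an exit point in front of the eye:

```python
import math

def critical_angle_deg(n_guide: float, n_outer: float = 1.0) -> float:
    """Angle from the surface normal above which light undergoes total
    internal reflection at the guide/outer-medium boundary."""
    return math.degrees(math.asin(n_outer / n_guide))

def bounce_count(length_mm: float, thickness_mm: float,
                 ray_angle_deg: float) -> int:
    """Number of internal reflections a ray makes while traveling the
    length of a planar guide, bouncing between its two faces."""
    # Lateral distance advanced between successive reflections.
    step_mm = thickness_mm * math.tan(math.radians(ray_angle_deg))
    return int(length_mm // step_mm)

# Illustrative values only: a glass guide (n ~= 1.5) surrounded by air.
theta_c = critical_angle_deg(1.5)
print(f"TIR holds for rays steeper than {theta_c:.1f} deg off the normal")
print(f"~{bounce_count(40.0, 2.0, 55.0)} bounces along a 40 mm guide")
```

In a real display module, diffractive or reflective couplers set the in-guide ray angle, and the exit optics collimate the light so the eye sees a magnified virtual image at a fixed apparent distance.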

1991, Vol. 12 (2), pp. 123-128
Author(s): Harley R. Myler, Richard D. Gilson

1998, Vol. 41 (1), pp. 73-82
Author(s): Dik J. Hermes

It has been shown that visual display systems of intonation can be employed beneficially in teaching intonation to persons with deafness and in teaching the intonation of a foreign language. In current training situations, the correctness of a reproduced pitch contour is rated either by the teacher or automatically. In the latter case, an algorithm typically estimates the maximum deviation from an example contour. In game-like exercises, for instance, the pupil has to produce a pitch contour within the displayed floor and ceiling of a "tunnel" with a preadjusted height. In an experiment described in the companion paper, phoneticians rated the dissimilarity of two pitch contours both auditorily, by listening to two resynthesized utterances, and visually, by looking at two pitch contours displayed on a computer screen. A test is reported in which these dissimilarity ratings were compared with automatic ratings obtained with this tunnel measure and with three other measures: the mean distance, the root-mean-square (RMS) distance, and the correlation coefficient. The most frequently used tunnel measure appeared to have the weakest correlation with the ratings by the phoneticians. In general, the automatic ratings obtained with the correlation coefficient showed the strongest correlation with the perceptual ratings. A disadvantage of this measure, however, may be that it normalizes for the range of the pitch contours. If range is important, as in intonation teaching to persons with deafness, the mean distance and the RMS distance are the best physical measures for automatic training of intonation.
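The four automatic measures compared in the test are simple to state. The sketch below computes each for two pitch contours sampled on a common time grid; the contours and units are hypothetical, and the exact tunnel scoring used in training software may differ (the version here simply reports the maximum deviation):

```python
import numpy as np

def tunnel_measure(produced: np.ndarray, target: np.ndarray) -> float:
    """Maximum absolute deviation: the produced contour passes only if
    it stays inside a band of preadjusted height around the target."""
    return float(np.max(np.abs(produced - target)))

def mean_distance(produced: np.ndarray, target: np.ndarray) -> float:
    return float(np.mean(np.abs(produced - target)))

def rms_distance(produced: np.ndarray, target: np.ndarray) -> float:
    return float(np.sqrt(np.mean((produced - target) ** 2)))

def correlation(produced: np.ndarray, target: np.ndarray) -> float:
    """Pearson correlation; note it normalizes away the pitch range."""
    return float(np.corrcoef(produced, target)[0, 1])

# Hypothetical contours on a common time grid (values in semitones).
t = np.linspace(0.0, 1.0, 100)
target = 3.0 * np.sin(2 * np.pi * t)
produced = 2.0 * np.sin(2 * np.pi * t) + 0.5  # compressed range, offset

for measure in (tunnel_measure, mean_distance, rms_distance, correlation):
    print(f"{measure.__name__}: {measure(produced, target):.3f}")
```

The example contour deliberately compresses the pitch range: the correlation coefficient still scores it as a near-perfect match, while the distance measures penalize it, which is exactly the trade-off the abstract describes.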


2003, Vol. 12 (1), pp. 1-18
Author(s): Winyu Chinthammit, Eric J. Seibel, Thomas A. Furness

The operation and performance of a six degree-of-freedom (DOF) shared-aperture tracking system with image overlay is described. This unique tracking technology shares the same aperture, or scanned optical beam, with the visual display, a virtual retinal display (VRD). This display technology provides high brightness in an AR helmet-mounted display, especially in the extreme environment of a military cockpit. The VRD generates an image by optically scanning visible light directly to the viewer's eye. By scanning both visible and infrared light, the head-worn display can be directly coupled to a head-tracking system. As a result, the proposed tracking system requires minimal calibration between the user's viewpoint and the tracker's viewpoint. This paper demonstrates that the proposed shared-aperture tracking system produces high accuracy and computational efficiency. The current proof-of-concept system has a precision of ±0.05 and ±0.01 deg in the horizontal and vertical axes, respectively. The static registration error was measured to be 0.08 ± 0.04 and 0.03 ± 0.02 deg for the horizontal and vertical axes, respectively. The dynamic registration error, or system latency, was measured to be within 16.67 ms, equivalent to our display refresh rate of 60 Hz. In all testing, the VRD was fixed and the calibrated motion of a robot arm was tracked. By moving the robot arm within a restricted volume, this real-time shared-aperture method of tracking was extended to six-DOF measurements. Future AR applications of our shared-aperture tracking and display system will include highly accurate head tracking when the VRD is helmet mounted and worn within an enclosed space, such as an aircraft cockpit.
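The reported figures follow the usual mean-plus-spread convention, and the latency bound is simply one frame period at the display refresh rate. A minimal sketch, with hypothetical sample values chosen only to mirror the horizontal-axis numbers above:

```python
import numpy as np

def registration_error_deg(measured, truth):
    """Static registration error on one axis, reported as
    (mean, standard deviation) of the angular differences in degrees."""
    err = np.asarray(measured) - np.asarray(truth)
    return float(err.mean()), float(err.std())

def frame_latency_ms(refresh_hz: float) -> float:
    """One display frame period: the resolution floor when latency is
    measured against the display refresh."""
    return 1000.0 / refresh_hz

# Hypothetical horizontal-axis samples from a tracked robot-arm path.
truth = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
measured = np.array([0.06, 5.11, 10.04, 15.13, 20.08])
mean_err, std_err = registration_error_deg(measured, truth)
print(f"static error: {mean_err:.2f} +/- {std_err:.2f} deg")
print(f"latency floor at 60 Hz: {frame_latency_ms(60.0):.2f} ms")  # 16.67
```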


1986, Vol. 30 (3), pp. 292-296
Author(s): Loy A. Anderson

Results from two experiments employing a location-cueing paradigm demonstrated that the features of a visual stimulus do not appear to be used for stimulus identification before the stimulus has been localized by an attentional system. However, the experiments also revealed that a stimulus is processed, at least to some extent, prior to the arrival of attention at the stimulus. The results support the hypothesis that a visual stimulus must be located by an attentional system before the results of its initial processing can be used in identification. Implications are discussed for the design of visual display systems in which it is important for the user to identify stimuli both quickly and accurately.


Author(s): Eliab Z. Opiyo

Flat-screen displays such as CRT displays, liquid crystal displays, and plasma displays are predominantly used for visualization of product models in computer-aided design (CAD) processes. However, future platforms for product model visualization are expected to include 3D displays as well, and different types of display systems, each offering different visualization capabilities, can be expected to complement the traditional flat-screen visual display units. Among the 3D display systems with the greatest potential for product model visualization are holographic volumetric displays. One of their most appealing characteristics is that they generate images with spatial representation that appear to pop out of the flat screen, allowing multiple viewers to see 3D images or scenes from different perspectives. One of their main shortcomings, however, is that they lack suitable interfaces for interactive visualization. The work reported in this paper focused on this problem and is part of a larger research effort whose aim is to develop suitable interfaces for interactive viewing of holographic virtual models. The emphasis in this work was specifically on exploring possible interaction styles and creating a suitable interaction framework. The proposed framework consists of three interface methods: an intermediary graphical user interface (IGUI), designed to be used via a flat-screen display with standard input devices; a gestural/hand-motion interface; and a haptic interface. Preliminary tests have shown that the IGUI helps viewers rotate, scale, and navigate virtual models in 3D scenes quickly and conveniently. On the other hand, these tests have shown that tasks such as selecting or moving virtual models in 3D scenes are not sufficiently supported by the IGUI, and that complementary interfaces may enable viewers to interact with models more effectively and intuitively.
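The three-part framework reads naturally as one contract with complementary implementations. The following is a minimal sketch under that assumption; the class and method names are invented for illustration and are not an API defined by the paper:

```python
from abc import ABC, abstractmethod

class InteractionMethod(ABC):
    """Common contract for the three complementary interface methods.
    All names here are illustrative; the paper does not define an API."""

    @abstractmethod
    def rotate(self, model_id: str, dx_deg: float, dy_deg: float) -> None: ...

    @abstractmethod
    def scale(self, model_id: str, factor: float) -> None: ...

    @abstractmethod
    def select(self, model_id: str) -> bool: ...

class IGUI(InteractionMethod):
    """Intermediary GUI on a flat screen with standard input devices:
    strong at rotate/scale/navigate, weak at select/move per the tests."""

    def rotate(self, model_id, dx_deg, dy_deg):
        print(f"rotate {model_id} by ({dx_deg}, {dy_deg}) deg")

    def scale(self, model_id, factor):
        print(f"scale {model_id} by x{factor}")

    def select(self, model_id):
        # Selection in a 3D scene is poorly served by 2D widgets; a
        # gestural or haptic InteractionMethod would override this.
        return False

IGUI().rotate("impeller_cad_model", 15.0, 0.0)
```

A gestural or haptic interface would implement the same contract, which is one way to let the stronger modality take over the selection and manipulation tasks the IGUI handles poorly.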


1989, Vol. 33 (2), pp. 86-90
Author(s): Loran A. Haworth, Nancy Bucher, David Runnings

Simulation scientists continually pursue improved flight simulation technology with the goal of closely replicating the "real world" physical environment. The presentation/display of visual information for flight simulation is one such area enjoying recent technical improvements that are fundamental for conducting simulated operations close to the terrain. Detailed and appropriate visual information is especially critical for Nap-Of-the-Earth (NOE) helicopter flight simulation, where the pilot maintains an "eyes-out" orientation to avoid obstructions and terrain. This paper elaborates on visually coupled Wide Field Of View Helmet Mounted Display (WFOVHMD) system technology as a viable visual display system for helicopter simulation. In addition, the paper discusses research conducted on the NASA-Ames Vertical Motion Simulator that examined one critical research issue for helmet-mounted displays.


Author(s): David L. Hall, Robert J. Hansen, Derek C. Lang

Condition-based maintenance (CBM) is an emerging technology that seeks to develop sensors and processing systems for monitoring the operation of complex machinery such as turbine engines, rotorcraft drive trains, or industrial equipment. The goal of CBM systems is to determine the state of the equipment (i.e., its mechanical health and status) and to predict the remaining useful life of the system being monitored. The success of such systems depends upon a number of factors, including: (1) the ability to design or use robust sensors for measuring relevant phenomena such as vibration, acoustic spectra, infrared emissions, oil debris, etc.; (2) real-time processing of the sensor data to extract useful information (such as features or data characteristics) in a noisy environment and to detect parametric changes that might be indicative of impending failure conditions; (3) fusion of multi-sensor data to obtain improved information beyond that available to a single sensor; (4) micro- and macro-level models that predict the temporal evolution of failure phenomena; and finally, (5) the capability to perform automated approximate reasoning to interpret the results of the sensor measurements, processed data, and model predictions in the context of an operational environment. The latter capability is the focus of this paper. Although numerous techniques have emerged from the discipline of artificial intelligence for automated reasoning (e.g., rule-based expert systems, blackboard systems, case-based reasoning, neural networks, etc.), none of these techniques satisfies all of the requirements for reasoning about condition-based maintenance. This paper provides an assessment of automated reasoning techniques for CBM and identifies a particular problem for CBM: the ability to reason with negative information (viz., data whose absence is indicative of mechanical status and health). A general architecture is introduced for CBM automated reasoning that hierarchically combines implicit and explicit reasoning techniques. Initial experiments with fuzzy logic are also described.
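The closing points about fuzzy logic and negative information can be illustrated with a toy inference step. The following is a minimal sketch with invented membership thresholds (it does not reproduce the paper's experiments), showing how a silent sensor channel can still contribute evidence:

```python
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def failure_risk(vib_rms: float, oil_debris_ppm) -> float:
    """Toy fuzzy assessment of 'impending failure' in [0, 1].
    Thresholds are invented for illustration, not from the paper."""
    vib_high = tri(vib_rms, 2.0, 5.0, 8.0)
    if oil_debris_ppm is None:
        # Negative information: the *absence* of an expected debris
        # reading is itself weak evidence of a fault (e.g., a clogged
        # or failed sensing line), so it contributes a nonzero grade.
        debris_high = 0.3
    else:
        debris_high = tri(oil_debris_ppm, 50.0, 150.0, 250.0)
    return max(vib_high, debris_high)  # fuzzy OR

print(failure_risk(4.0, 120.0))  # both channels reporting
print(failure_risk(4.0, None))   # debris channel silent
```

In a hierarchical architecture of the kind the paper proposes, graded evidence like this from an implicit (fuzzy) layer would feed an explicit reasoning layer for interpretation in the operational context.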


Author(s): Karen N. Stone, Jay J. Cho, Kristi J. McKinney

Abstract No.: 1141265. In the decade following the Deepwater Horizon catastrophe, considerable research and development has been accomplished to address known gaps in offshore oil spill response; however, opportunities to enhance spill response capabilities remain. The Bureau of Safety and Environmental Enforcement (BSEE) is the lead U.S. agency regulating energy production on the U.S. Outer Continental Shelf. BSEE's Oil Spill Response Research (OSRR) program is the principal federal source of oil spill response research to improve the detection, containment, and treatment/cleanup of oil spills, and it strives to provide the best available information, science, research, and technology development to key decision makers, industry, and the oil spill response community. The paper will highlight several key collaborative projects with federal and industry stakeholders, including System and Algorithm Development to Estimate Oil Thickness and Emulsification through a UAS Platform and Methods to Enhance Mechanical Recovery in Arctic Environments. Additionally, the paper will provide an update on the Development of a Low-emission Spray Combustion Burner to Cleanly Burn Emulsions, in which we partnered with the Naval Research Laboratory and met with industry representatives to incorporate their needs into the final phases of the development effort.

