Ocular Dominance Effects on the Application of Monocular, Occluding Head-Mounted Displays

Author(s):  
David E. Kancler ◽  
Laurie L. Quill

This study investigates the effects of ocular dominance when maintenance procedures are presented on a monocular, occluding head-mounted display (HMD). While previous research has not revealed significant effects associated with ocular dominance and the use of a monocular, occluding HMD, most of this research has occurred in the cockpit environment. By nature, this setting involves continually changing (or dynamic) environmental information, such as target location or altitude. By contrast, the aircraft maintenance environment is static; the technician is not required to process dynamic environmental information. As the Air Force studies the feasibility of presenting maintenance procedures on HMDs, research efforts must thoroughly address questions pertaining to the use of these devices, such as potential effects of ocular dominance. The current study addresses the effect of ocular dominance on performance times, subjective workload ratings, self-reports, and preference rankings. Consistent with previous research, ocular dominance did not have a significant effect on any of the dependent measures. However, order of presentation (dominant eye before non-dominant eye vs. dominant eye after non-dominant eye) did produce some differences in performance times and workload scores. Explanations for these differences are discussed.

2006 ◽  
Vol 5 (3) ◽  
pp. 33-39 ◽  
Author(s):  
Seokhee Jeon ◽  
Hyeongseop Shim ◽  
Gerard J. Kim

In this paper, we investigated the comparative usability of three different viewing configurations of an augmented reality (AR) system that uses a desktop monitor instead of a head-mounted display. In many cases, due to operational or cost reasons, the use of head-mounted displays may not be viable. Such a configuration is bound to cause usability problems because of the mismatch in the user's proprioception, scale, and hand-eye coordination, and the reduced 3D depth perception. We asked a pool of subjects to carry out an object manipulation task in three different desktop AR setups. We measured the subjects' task performance and surveyed the perceived usability and preference. Our results indicated that placing a fixed camera behind the user was the best option for convenience, and attaching a camera to the user's head was best for task performance. The results should provide a valuable guide for designing desktop augmented reality systems without head-mounted displays.


2014 ◽  
Vol 23 (3) ◽  
pp. 253-266 ◽  
Author(s):  
Daniele Leonardis ◽  
Antonio Frisoli ◽  
Michele Barsotti ◽  
Marcello Carrozzino ◽  
Massimo Bergamasco

This study investigates how the sense of embodiment in virtual environments can be enhanced by multisensory feedback related to body movements. In particular, we analyze the effect of combined vestibular and proprioceptive afferent signals on the perceived embodiment within an immersive walking scenario. These feedback signals were applied by means of a motion platform and by tendon vibration of lower limbs, evoking illusory leg movements. Vestibular and proprioceptive feedback were provided congruently with a rich virtual scenario reconstructing a real city, rendered on a head-mounted display (HMD). The sense of embodiment was evaluated through both self-reported questionnaires and physiological measurements in two experimental conditions: with all active sensory feedback (highly embodied condition), and with visual feedback only. Participants' self-reports show that the addition of both vestibular and proprioceptive feedback increases the sense of embodiment and the individual's feeling of presence associated with the walking experience. Furthermore, the embodiment condition significantly increased the measured galvanic skin response and respiration rate. The obtained results suggest that vestibular and proprioceptive feedback can improve the participant's sense of embodiment in the virtual experience.


Author(s):  
Thiago D'Angelo ◽  
Saul Emanuel Delabrida Silva ◽  
Ricardo A. R. Oliveira ◽  
Antonio A. F. Loureiro

Virtual reality (VR) and augmented reality (AR) head-mounted displays (HMDs) have emerged rapidly in recent years and are poised to remain a major topic in the years ahead. Head-mounted displays have been developed for many different purposes, and users can enjoy these technologies for entertainment, work tasks, and many other daily activities. Despite the recent release of many AR and VR HMDs, two major problems hinder AR HMDs from reaching the mainstream market: extremely high cost and user experience issues. To mitigate these problems, we developed an AR HMD prototype based on a smartphone and other low-cost materials. The prototype is capable of running eye-tracking algorithms, which can be used to improve user interaction and user experience. To assess our AR HMD prototype, we chose a state-of-the-art method for eye center location from the literature and evaluated its real-time performance on different development boards.
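The real-time evaluation described above can be approximated with a simple per-frame latency harness. This is a generic sketch, not the authors' benchmark; `process_frame` and `benchmark` are illustrative names standing in for any eye-center-location routine run on a development board:

```python
import time

def benchmark(process_frame, frames, warmup=5):
    """Run process_frame over all frames; return (mean latency in ms, FPS)."""
    # Warm-up passes so caches and JIT effects do not skew the timing.
    for f in frames[:warmup]:
        process_frame(f)
    start = time.perf_counter()
    for f in frames:
        process_frame(f)
    elapsed = time.perf_counter() - start
    mean_ms = elapsed / len(frames) * 1000.0
    return mean_ms, 1000.0 / mean_ms
```

A routine sustaining roughly 30 FPS or better on the target board would qualify as real-time for interactive HMD use.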


2013 ◽  
Vol 22 (2) ◽  
pp. 171-190 ◽  
Author(s):  
Michele Fiorentino ◽  
Saverio Debernardis ◽  
Antonio E. Uva ◽  
Giuseppe Monno

The application of augmented reality in industrial environments requires an effective visualization of text on a see-through head-mounted display (HMD). The main contribution of this work is an empirical study of text styles as viewed through a monocular optical see-through display on three real workshop backgrounds, examining four colors and four different text styles. We ran 2,520 test trials with 14 participants using a mixed design and evaluated completion time and error rates. We found that both presentation mode and background influence the readability of text, but there is no interaction effect between these two variables. Another interesting aspect is that the presentation mode differentially influences completion time and error rate. The present study allows us to draw some guidelines for an effective use of AR text visualization in industrial environments. We suggest maximum contrast when reading time is important, and the use of colors to reduce errors. We also recommend a colored billboard with transparent text where colors have a specific meaning.


Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6623
Author(s):  
Luisa Lauer ◽  
Kristin Altmeyer ◽  
Sarah Malone ◽  
Michael Barz ◽  
Roland Brünken ◽  
...  

Augmenting reality via head-mounted displays (HMD-AR) is an emerging technology in education. The interactivity provided by HMD-AR devices is particularly promising for learning, but presents a challenge to human activity recognition, especially with children. Recent technological advances regarding speech and gesture recognition concerning Microsoft’s HoloLens 2 may address this prevailing issue. In a within-subjects study with 47 elementary school children (2nd to 6th grade), we examined the usability of the HoloLens 2 using a standardized tutorial on multimodal interaction in AR. The overall system usability was rated “good”. However, several behavioral metrics indicated that specific interaction modes differed in their efficiency. The results are of major importance for the development of learning applications in HMD-AR as they partially deviate from previous findings. In particular, the well-functioning recognition of children’s voice commands that we observed represents a novelty. Furthermore, we found different interaction preferences in HMD-AR among the children. We also found the use of HMD-AR to have a positive effect on children’s activity-related achievement emotions. Overall, our findings can serve as a basis for determining general requirements, possibilities, and limitations of the implementation of educational HMD-AR environments in elementary school classrooms.


Author(s):  
Eric G. Hintz ◽  
Michael D. Jones ◽  
M. Jeannette Lawler ◽  
Nathan Bench ◽  
Fred Mangrubang

Accommodating the planetarium experience to members of the deaf or hard-of-hearing community has often created situations that are either disruptive to the rest of the audience or provide an insufficient accommodation. To address this issue, we examined the use of head-mounted displays to deliver an American Sign Language (ASL) sound track to learners in the planetarium. Here we present results from a feasibility study to see if an ASL sound track delivered through a head-mounted display can be understood by deaf junior-high to senior-high aged students who are fluent in ASL. We examined the adoption of ASL classifiers that were used as part of the sound track for a full-dome planetarium show. We found that about 90% of all students in our sample adopted at least one classifier from the show. In addition, those who viewed the sound track in a head-mounted display did at least as well as those who saw the sound track projected directly on the dome. These results suggest that ASL transmitted through head-mounted displays is a promising method to help improve learning for those whose primary language is ASL and merits further investigation.


2020 ◽  
Vol 10 (7) ◽  
pp. 2248
Author(s):  
Syed Hammad Hussain Shah ◽  
Kyungjin Han ◽  
Jong Weon Lee

We propose a novel authoring and viewing system for generating multiple experiences with a single 360° video and efficiently transferring these experiences to the user. An immersive video contains much more interesting information within the 360° environment than normal videos, and there can be multiple interesting areas within a 360° frame at the same time. Due to the narrow field of view in virtual reality head-mounted displays, a user can only view a limited area of a 360° video. Hence, our system is aimed at generating multiple experiences based on interesting information in different regions of a 360° video and efficiently transferring these experiences to prospective users. The proposed system generates experiences by using two approaches: (1) recording the user's experience when the user watches a panoramic video using a virtual reality head-mounted display, and (2) tracking an arbitrary interesting object in a 360° video selected by the user. For tracking of an arbitrary interesting object, we developed a pipeline around an existing simple object tracker to adapt it for 360° videos. This tracking algorithm runs in real time on a CPU with high precision. Moreover, to the best of our knowledge, no existing system can generate a variety of different experiences from a single 360° video and enable the viewer to watch one piece of 360° visual content from various interesting perspectives in immersive virtual reality. Furthermore, we provide an adaptive focus assistance technique for efficiently transferring the generated experiences to other users in virtual reality. In this study, a technical evaluation of the system along with a detailed user study was performed to assess the system's applicability. Findings from the evaluation showed that a single piece of 360° multimedia content can generate multiple experiences that can be transferred among users. Moreover, sharing of the 360° experiences enabled viewers to watch multiple interesting contents with less effort.
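Tracking or focus assistance in a 360° video ultimately relies on a mapping between a viewing direction and a pixel position in the equirectangular frame. The following is a minimal sketch of that standard projection, not the paper's pipeline; the function name and argument conventions are illustrative:

```python
def equirect_to_pixel(yaw_deg, pitch_deg, width, height):
    """Map a viewing direction to equirectangular pixel coordinates.

    yaw_deg in [-180, 180): horizontal angle, -180 at the left edge.
    pitch_deg in [-90, 90]: vertical angle, +90 at the top edge.
    """
    u = (yaw_deg + 180.0) / 360.0      # normalized horizontal position, 0..1
    v = (90.0 - pitch_deg) / 180.0     # normalized vertical position, 0 at top
    return int(u * (width - 1)), int(v * (height - 1))
```

For example, the forward direction (yaw 0°, pitch 0°) lands at the center of a 3840 x 1920 frame, which is where a focus-assistance cue would guide the viewer when the tracked object lies straight ahead.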


Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 4956
Author(s):  
Jose Llanes-Jurado ◽  
Javier Marín-Morales ◽  
Jaime Guixeres ◽  
Mariano Alcañiz

Fixation identification is an essential task in the extraction of relevant information from gaze patterns; various algorithms are used in the identification process. However, the thresholds used in these algorithms greatly affect their sensitivity. Moreover, the application of these algorithms to eye-tracking technologies integrated into head-mounted displays, where the subject's head position is unrestricted, is still an open issue. Therefore, the adaptation of eye-tracking algorithms and their thresholds to immersive virtual reality frameworks needs to be validated. This study presents the development of a dispersion-threshold identification algorithm applied to data obtained from an eye-tracking system integrated into a head-mounted display. Rule-based criteria are proposed to calibrate the thresholds of the algorithm through different features, such as the number of fixations and the percentage of points that belong to a fixation. The results show that distance-dispersion thresholds between 1° and 1.6° and time windows between 0.25 and 0.4 s are the acceptable parameter ranges, with 1° and 0.25 s being the optimum. The work presents a calibrated algorithm to be applied in future experiments with eye tracking integrated into head-mounted displays, along with guidelines for calibrating fixation identification algorithms.
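As a rough illustration of the class of algorithm studied here (not the authors' calibrated implementation), a minimal dispersion-threshold (I-DT) sketch in Python, using the optimal parameters reported above (1° dispersion, 0.25 s window); the function and sample format are assumptions:

```python
def identify_fixations(samples, max_dispersion=1.0, min_duration=0.25):
    """Identify fixations in gaze data via the dispersion-threshold (I-DT) method.

    samples: list of (timestamp_s, x_deg, y_deg) gaze points in degrees.
    Returns a list of (start_idx, end_idx) index pairs, one per fixation.
    """
    def dispersion(window):
        # Classic I-DT dispersion: horizontal range plus vertical range.
        xs = [s[1] for s in window]
        ys = [s[2] for s in window]
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    fixations = []
    i, n = 0, len(samples)
    while i < n:
        # Grow an initial window spanning at least min_duration seconds.
        j = i
        while j < n and samples[j][0] - samples[i][0] < min_duration:
            j += 1
        if j >= n:
            break
        if dispersion(samples[i:j + 1]) <= max_dispersion:
            # Extend the window while the dispersion stays under threshold.
            while j + 1 < n and dispersion(samples[i:j + 2]) <= max_dispersion:
                j += 1
            fixations.append((i, j))
            i = j + 1
        else:
            i += 1
    return fixations
```

With head-mounted eye tracking, the gaze coordinates would first need to be expressed in a head-independent reference frame before applying the dispersion test, which is part of what makes the threshold calibration in this study nontrivial.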


2008 ◽  
Vol 17 (1) ◽  
pp. 91-101 ◽  
Author(s):  
Peter Willemsen ◽  
Amy A. Gooch ◽  
William B. Thompson ◽  
Sarah H. Creem-Regehr

Several studies from different research groups investigating perception of absolute, egocentric distances in virtual environments have reported a compression of the intended size of the virtual space. One potential explanation for the compression is that inaccuracies and cue conflicts involving stereo viewing conditions in head-mounted displays result in an inaccurate absolute scaling of the virtual world. We manipulate stereo viewing conditions in a head-mounted display and show the effects of using both measured and fixed inter-pupillary distances, as well as bi-ocular and monocular viewing of graphics, on absolute distance judgments. Our results indicate that the amount of compression of distance judgments is unaffected by these manipulations. The equivalent performance with stereo, bi-ocular, and monocular viewing suggests that the limitations on the presentation of stereo imagery that are inherent in head-mounted displays are likely not the source of distance compression reported in previous virtual environment studies.


i-Perception ◽  
2019 ◽  
Vol 10 (2) ◽  
pp. 204166951984139 ◽  
Author(s):  
Miguel A. García-Pérez ◽  
Eli Peli

Classical sighting or sensory tests are used in clinical practice to identify the dominant eye. Several psychophysical tests were recently proposed to quantify the magnitude of dominance but whether their results agree was never investigated. We addressed this question for the two most common psychophysical tests: The perceived-phase test, which measures the cyclopean appearance of dichoptically presented sinusoids of different phase, and the coherence-threshold test, which measures interocular differences in motion perception when signal and noise stimuli are presented dichoptically. We also checked for agreement with three classical tests (Worth 4-dot, Randot suppression, and Bagolini lenses). Psychophysical tests were administered in their conventional form and also using more dependable psychophysical methods. The results showed weak correlations between psychophysical measures of strength of dominance with inconsistent identification of the dominant eye across tests: Agreement on left-eye dominance, right-eye dominance, or nondominance by both tests occurred only for 11 of 40 observers (27.5%); the remaining 29 observers were classified differently by each test, including 14 cases (35%) of opposite classification (left-eye dominance by one test and right-eye dominance by the other). Classical tests also yielded conflicting results that did not agree well with classification based on psychophysical tests. The results are discussed in the context of determination of ocular dominance for clinical decisions.

