A comparative study of 2D and 3D mobile keypad user interaction preferences in virtual reality graphic user interfaces

Author(s):  
Akriti Kaur ◽  
Pradeep G. Yammiyavar


Author(s):  
Karri Palovuori ◽  
Ismo Rakkolainen

Mid-air, walk-through fogscreens are frequently used in trade shows, theme parks, museums, concerts, etc. They enhance many kinds of entertainment experiences and are captivating for audiences. Currently, they are usually employed only as non-responsive, passive screens for “immaterial” walk-through special effects. Suitable sensors can turn fogscreens into interactive touch screens or walk-through virtual reality screens. Several interactive systems have been implemented; however, the cost and other limitations of 2D and 3D tracking sensors have been prohibitive for wider commercial adoption. This chapter presents Microsoft Kinect-based 2D and 3D tracking for mid-air projection screens. Kinect cannot track through the fogscreen due to disturbances caused by fog. In addition to robust tracking and lower cost, the custom Kinect tracking brings other advantages, such as projector hotspot removal, ballistic tracking, multi-user, multi-touch, and virtual reality setups, and novel user interfaces.
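The core idea of depth-sensor touch tracking on a planar mid-air screen can be sketched as follows. This is an illustrative reconstruction, not the chapter's implementation: the screen distance, tolerance, and blob-size parameters are assumptions, and a real system would read frames from the Kinect rather than a synthetic array.

```python
import numpy as np

def detect_touches(depth_mm, screen_depth_mm, tolerance_mm=15, min_blob=4):
    """Find pixels whose depth lies within tolerance_mm of the screen plane
    and return the (row, col) centroid of each 4-connected touch blob."""
    mask = np.abs(depth_mm - screen_depth_mm) <= tolerance_mm
    labels = np.zeros(mask.shape, dtype=int)   # 0 = unvisited
    current = 0
    centroids = []
    for r in range(mask.shape[0]):
        for c in range(mask.shape[1]):
            if mask[r, c] and labels[r, c] == 0:
                # Flood-fill one connected region of near-plane pixels.
                current += 1
                labels[r, c] = current
                stack, pixels = [(r, c)], []
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            stack.append((ny, nx))
                if len(pixels) >= min_blob:    # reject single-pixel noise
                    ys, xs = zip(*pixels)
                    centroids.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return centroids

# Synthetic frame: screen plane at 1500 mm, one "finger" blob near it.
frame = np.full((48, 64), 3000.0)   # background far behind the screen
frame[20:24, 30:34] = 1505.0        # fingertip within tolerance of the plane
touches = detect_touches(frame, screen_depth_mm=1500.0)
print(touches)  # one centroid around (21.5, 31.5)
```

The same mask could feed multi-touch or multi-user logic by tracking blob centroids across frames.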


2011 ◽  
Vol 2 (1) ◽  
pp. 1
Author(s):  
Felipe Carvalho ◽  
Daniela G. Trevisan ◽  
Alberto Raposo ◽  
Carla M.D.S. Freitas ◽  
Luciana Nedel

The idea of hybrid user interfaces (HUIs) relies not only on the use of different devices but also on different interactive environments, with the goal of combining the advantages of each. The main challenge in developing such systems is knowing which design aspects should be taken into account to promote smooth and continuous interaction. Our work therefore reinforces the importance of interaction continuity and dimensional task congruence as design principles to guide the development and interaction analysis of HUIs. An example scenario was conceived by splitting a previously single-desktop application for 3D volume sculpting into three different interactive environments (WIMP, augmented reality, and head-mounted immersive virtual reality). To achieve this, we employed the OpenInterface platform to manage several modalities of user interaction within and across the three environments. Finally, we discuss the outcomes of the analysis of interactions within our HUI according to the proposed design principles.
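One way to picture the continuity the abstract describes is a shared command abstraction that every environment's input modality translates into, so the sculpting core is indifferent to where the input came from. This is a hypothetical sketch, not OpenInterface's actual component model; the event fields and translator names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class SculptCommand:
    """Environment-neutral command consumed by the sculpting core."""
    action: str    # e.g. "carve"
    params: dict

# Hypothetical per-environment translators from raw input to shared commands.
def wimp_input(event):    # mouse drag in the desktop (WIMP) UI
    return SculptCommand("carve", {"dx": event["dx"], "dy": event["dy"]})

def ar_input(event):      # tracked marker pose in augmented reality
    return SculptCommand("carve", {"dx": event["pose"][0], "dy": event["pose"][1]})

def vr_input(event):      # 6-DOF controller in immersive virtual reality
    return SculptCommand("carve", {"dx": event["ray"][0], "dy": event["ray"][1]})

ROUTER = {"wimp": wimp_input, "ar": ar_input, "vr": vr_input}

def handle(environment, event):
    """Route a raw event to its environment's translator; the core only
    ever sees SculptCommand, whichever environment produced it."""
    return ROUTER[environment](event)

cmd = handle("wimp", {"dx": 3, "dy": -1})
print(cmd.action, cmd.params)  # carve {'dx': 3, 'dy': -1}
```

Keeping the command schema identical across environments is one concrete way to preserve interaction continuity when a task migrates between WIMP, AR, and VR.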


2018 ◽  
pp. 1742-1761
Author(s):  
Karri Palovuori ◽  
Ismo Rakkolainen


Author(s):  
Matthias Kraus ◽  
Hanna Schafer ◽  
Philipp Meschenmoser ◽  
Daniel Schweitzer ◽  
Daniel A. Keim ◽  
...  

2021 ◽  
Vol 5 (EICS) ◽  
pp. 1-26
Author(s):  
Carlos Bermejo ◽  
Lik Hang Lee ◽  
Paul Chojecki ◽  
David Przewozny ◽  
Pan Hui

The continued advancement of user interfaces has reached the era of virtual reality, which requires a better understanding of how users will interact with 3D buttons in mid-air. Although virtual reality offers high levels of expressiveness and can simulate everyday objects from the physical environment, the most fundamental issue of designing virtual buttons is surprisingly ignored. To this end, this paper presents four variants of virtual buttons, considering two design dimensions: key representation and multi-modal cues (audio, visual, haptic). We conduct two multi-metric assessments to evaluate the four virtual variants against baseline physical variants. Our results indicate that 3D-lookalike buttons enable more refined and subtle mid-air interactions (i.e., smaller press depth) when haptic cues are available, while users of 2D-lookalike buttons counterintuitively achieve better keystroke performance than with the 3D counterparts. We summarize the findings and, accordingly, suggest design choices for virtual reality buttons along the two proposed design dimensions.
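The press-depth metric at the heart of these results can be sketched as a simple hysteresis detector over fingertip penetration samples. This is an assumed formulation, not the paper's measurement pipeline: the activation and release thresholds are illustrative, and a haptic cue would in practice let users stop pressing sooner, yielding a smaller recorded maximum depth.

```python
def register_press(depth_samples_mm, activation_mm=10, release_mm=4):
    """Hysteresis keystroke detector for a mid-air button.

    A keystroke is counted when the fingertip travels past activation_mm
    into the button and later retracts above release_mm; the hysteresis
    gap prevents jitter from double-counting. Returns (keystrokes,
    max_press_depth_mm)."""
    pressed = False
    keystrokes = 0
    max_depth = 0.0
    for d in depth_samples_mm:
        max_depth = max(max_depth, d)
        if not pressed and d >= activation_mm:
            pressed = True
            keystrokes += 1
        elif pressed and d <= release_mm:
            pressed = False
    return keystrokes, max_depth

# One full press (overshooting deep, as without haptic feedback) followed
# by a shallow movement that never reaches the activation threshold.
samples = [0, 3, 8, 12, 15, 9, 3, 0, 2, 5, 3, 0]
print(register_press(samples))  # (1, 15)
```

Comparing `max_press_depth` across button variants is one plausible way to quantify the "more refined and subtle" interaction the haptic condition produced.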


Author(s):  
Randall Spain ◽  
Jason Saville ◽  
Barry Lui ◽  
Donia Slack ◽  
Edward Hill ◽  
...  

Because advances in broadband capabilities will soon allow first responders to access and use many forms of data when responding to emergencies, it is becoming critically important to design heads-up displays to present first responders with information in a manner that does not induce extraneous mental workload or cause undue interaction errors. Virtual reality offers a unique medium for envisioning and testing user interface concepts in a realistic and controlled environment. In this paper, we describe a virtual reality-based emergency response scenario that was designed to support user experience research for evaluating the efficacy of intelligent user interfaces for firefighters. We describe the results of a usability test that captured firefighters’ feedback and reactions to the VR scenario and the prototype intelligent user interface that presented them with task critical information through the VR headset. The paper concludes with lessons learned from our development process and a discussion of plans for future research.


1985 ◽  
Vol 29 (5) ◽  
pp. 470-474 ◽  
Author(s):  
Paul Green ◽  
Lisa Wei-Haas

The Wizard of Oz technique is an efficient way to examine user interaction with computers and facilitate rapid iterative development of dialog wording and logic. The technique requires two machines linked together, one for the subject and one for the experimenter. In this implementation, the experimenter (the “Wizard”), pretending to be a computer, types complete replies to user queries or presses function keys to which common messages have been assigned (e.g., F1 = “Help is not available”). The software automatically records the dialog and its timing. This paper provides a detailed description of the first implementation of the Oz paradigm for the IBM Personal Computer. It also includes application guidelines, information that is currently missing from the literature.
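The Wizard's side of such a setup, canned replies bound to function keys, free-typed replies, and a timestamped dialog log, can be sketched in a few lines. This is a modern illustrative reconstruction, not the 1985 IBM PC software; the key bindings and messages are examples (F1's canned text is taken from the abstract).

```python
import time

# Function-key bindings for common Wizard replies (F1 text from the paper;
# F2 is an invented example).
CANNED = {
    "F1": "Help is not available",
    "F2": "Please rephrase your request",
}

class WizardConsole:
    def __init__(self):
        self.log = []                     # (elapsed_s, speaker, text)
        self.start = time.monotonic()

    def _record(self, speaker, text):
        elapsed = round(time.monotonic() - self.start, 3)
        self.log.append((elapsed, speaker, text))

    def user_says(self, text):
        self._record("user", text)

    def wizard_key(self, key):
        """Send the canned message bound to a function key."""
        self._record("wizard", CANNED[key])

    def wizard_types(self, text):
        """Send a reply the experimenter typed in full."""
        self._record("wizard", text)

session = WizardConsole()
session.user_says("How do I save my file?")
session.wizard_key("F1")
session.wizard_types("Type SAVE and press Enter.")
for t, who, text in session.log:
    print(f"{t:7.3f}  {who:6}  {text}")
```

The automatic timing in the log is what makes the paradigm useful for dialog analysis: response latencies fall out of the recorded timestamps for free.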

