A Technique Based on Muscular Activation for Interacting With Virtual Environments

Author(s):  
Paolo Belluco ◽  
Monica Bordegoni ◽  
Umberto Cugini

Interacting with computers by using bodily motion is one of the challenging topics in the Virtual Reality field, especially as regards interaction with large-scale virtual environments. This paper presents a device for interacting with a Virtual Reality environment that is based on detecting the user's muscular activity and movements through the fusion of two different signals. The idea is that through muscular activity a user can move a cursor in the virtual space and perform actions through gestures. The device is based on an accelerometer and on electromyography, a technique that derives from the medical field and is able to recognize the electrical activity produced by skeletal muscles during contraction. The device consists of cheap, easy-to-replicate components: seven electrode pads and a small wearable board for acquiring the sEMG signals from the user's forearm, a 3-DOF accelerometer positioned on the user's wrist (used for moving the cursor in space), and a glove worn on the forearm in which these components are inserted. The device can be used without tedious setup and training. In order to test the functionality, performance, and usability of the device, we implemented an application that was tested by a group of users. Specifically, the device was used as a natural interaction technique in an application for drawing in a large-scale virtual environment. The muscular activity is acquired by the device and used by the application to control the dimension and color of the brush.
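The abstract does not spell out the signal processing, but a minimal sketch of such a two-signal fusion might look as follows: a smoothed sEMG envelope drives brush size, while wrist tilt from the accelerometer moves the cursor. The sampling rate, smoothing window, thresholds, and all function names are illustrative assumptions, not the paper's actual processing chain.

```python
# Sketch: fuse an sEMG envelope (muscular effort -> brush size) with
# accelerometer tilt (wrist pose -> 2D cursor velocity).
import numpy as np

SEMG_RATE_HZ = 1000          # assumed sEMG sampling rate
WINDOW_S = 0.1               # assumed envelope smoothing window

def semg_envelope(samples: np.ndarray) -> float:
    """Rectify the raw sEMG window and return its mean amplitude."""
    return float(np.mean(np.abs(samples)))

def brush_size(envelope: float, rest: float, max_effort: float) -> float:
    """Map muscular effort linearly onto a 1..10 brush size."""
    level = np.clip((envelope - rest) / (max_effort - rest), 0.0, 1.0)
    return 1.0 + 9.0 * level

def cursor_step(accel_xyz: np.ndarray, gain: float = 5.0) -> np.ndarray:
    """Treat the two gravity-orthogonal axes of a 3-DOF accelerometer
    (wrist tilt) as a 2D cursor velocity."""
    return gain * accel_xyz[:2]

# Example frame: 100 ms of (synthetic) sEMG plus one accelerometer reading.
emg_window = np.random.randn(int(SEMG_RATE_HZ * WINDOW_S)) * 0.2
accel = np.array([0.1, -0.05, 0.98])   # g units
print(brush_size(semg_envelope(emg_window), rest=0.05, max_effort=0.8))
print(cursor_step(accel))
```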

2010 ◽  
pp. 180-193 ◽  
Author(s):  
F. Steinicke ◽  
G. Bruder ◽  
J. Jerald ◽  
H. Frenz

In recent years virtual environments (VEs) have become more and more popular and widespread due to the requirements of numerous application areas, particularly the 3D city visualization domain. Virtual reality (VR) systems, which make use of tracking technologies and stereoscopic projections of three-dimensional synthetic worlds, support better exploration of complex datasets. However, due to the limited interaction space usually provided by the range of the tracking sensors, users can explore only a portion of the virtual environment (VE). Redirected walking allows users to walk through large-scale immersive virtual environments (IVEs) such as virtual city models, while physically remaining in a reasonably small workspace, by intentionally injecting scene motion into the IVE. With redirected walking, users are guided on physical paths that may differ from the paths they perceive in the virtual world. The authors have conducted experiments to quantify how much humans can unknowingly be redirected. In this chapter they present the results of this study and the implications for virtual locomotion user interfaces that allow users to view arbitrary real-world locations before actually traveling there in a natural environment.
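One common way to inject scene motion is a rotation gain, where the virtual camera turns slightly more or less than the user's physical head rotation, steering the physical path. The sketch below illustrates that idea with assumed gain bounds; the actual detection thresholds are what the authors' experiments measured and are not reproduced here.

```python
# Sketch of a rotation gain for redirected walking: scale physical yaw
# changes before applying them to the virtual camera. Gain bounds are
# illustrative assumptions, not the measured "unnoticeable" range.
import math

MIN_GAIN, MAX_GAIN = 0.8, 1.2

def redirected_yaw(physical_yaw_delta: float, gain: float) -> float:
    """Scale a physical yaw change (radians) for the virtual camera."""
    gain = max(MIN_GAIN, min(MAX_GAIN, gain))
    return physical_yaw_delta * gain

# Turning the head 90 degrees physically yields ~108 virtual degrees,
# so the user walks a tighter physical arc than the one they perceive.
print(math.degrees(redirected_yaw(math.radians(90), gain=1.2)))
```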


2020 ◽  
Vol 4 (4) ◽  
pp. 79
Author(s):  
Julian Kreimeier ◽  
Timo Götzelmann

Although most readers associate the term virtual reality (VR) with visually appealing entertainment content, this technology also promises to help disadvantaged people such as blind and visually impaired users. Virtual objects and environments that can be spatially explored offer a particular benefit, since they overcome the limitations of physical objects and spaces. To give readers a complete, clear and concise overview of current and past publications on touchable and walkable audio-supplemented VR applications for blind and visually impaired users, this survey paper presents a high-level taxonomy to cluster the work done up to now from the perspective of technology, interaction and application. In this respect, we introduce a classification into small-, medium- and large-scale virtual environments to cluster and characterize related work. Our comprehensive table shows that grounded force-feedback devices for haptic feedback ('small scale') in particular have been strongly researched in different application scenarios, mainly from an exocentric perspective, but there are also increasingly physically walkable ('medium scale') or avatar-walkable ('large scale') egocentric audio-haptic virtual environments. In this respect, novel and widespread interfaces such as smartphones and today's consumer-grade VR components represent a promising potential for further improvements. Our survey paper provides a database of related work to foster the creation of new ideas and approaches for both technical and methodological aspects.


1993 ◽  
Vol 2 (4) ◽  
pp. 297-313 ◽  
Author(s):  
Martin R. Stytz ◽  
Elizabeth Block ◽  
Brian Soltz

As virtual environments grow in complexity, size, and scope, users will be increasingly challenged in assessing the situation within them. This will occur because of the difficulty of determining where to focus attention and of assimilating and assessing the information as it floods in. One technique for providing this type of assistance is to give the user a first-person, immersive, synthetic-environment observation post, an observatory, that permits unobtrusive observation of the environment without interfering with the activity in it. However, for large, complex synthetic environments this type of support is not sufficient, because the mere portrayal of raw, unanalyzed data about the objects in the virtual space can overwhelm the user with information. To address this problem, which exists in both real and virtual environments, we are investigating the forms of situation awareness assistance needed by users of large-scale virtual environments and the ways in which a virtual environment can be used to improve situation awareness of real-world environments. A technique that we have developed is to allow a user to place analysis modules throughout the virtual environment. Each module provides summary information concerning the importance, to the user, of the activity in its portion of the virtual environment. Our prototype system, called the Sentinel, is embedded within a virtual environment observatory and provides situation awareness assistance for users within a large virtual environment.
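A hypothetical sketch of such an analysis module follows, assuming a simple "total activity" importance measure over a spherical region; the Sentinel's actual metric is not given in this abstract, so every name and formula here is a stand-in.

```python
# Sketch: each analysis module owns a region of the virtual environment
# and reports one summary importance score, not raw per-object data.
from dataclasses import dataclass

@dataclass
class Entity:
    x: float
    y: float
    speed: float

@dataclass
class AnalysisModule:
    x: float        # module position
    y: float
    radius: float   # region it watches

    def importance(self, entities: list[Entity]) -> float:
        """Summarize activity as the total speed of entities in the region."""
        inside = [e for e in entities
                  if (e.x - self.x) ** 2 + (e.y - self.y) ** 2 <= self.radius ** 2]
        return sum(e.speed for e in inside)

modules = [AnalysisModule(0, 0, 10), AnalysisModule(50, 50, 10)]
entities = [Entity(2, 3, 1.5), Entity(48, 51, 4.0)]
# The observatory user sees one number per module instead of a data flood.
print([m.importance(entities) for m in modules])
```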


Author(s):  
Shujie Deng ◽  
Julie A. Kirkby ◽  
Jian Chang ◽  
Jian Jun Zhang

The goal of this review is to illustrate the emerging use of multimodal virtual reality that can benefit learning-based games. The review begins with an introduction to multimodal virtual reality in serious games, and we provide a brief discussion of why cognitive processes involved in learning and training are enhanced in immersive virtual environments. We first outline studies that have used eye tracking and haptic feedback independently in serious games, and then review some innovative applications that have combined eye tracking and haptic devices in order to provide applicable multimodal frameworks for learning-based games. Finally, some general conclusions are identified and clarified in order to advance current understanding of multimodal serious game production and to explore possible areas for new applications.
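As a rough illustration of how the two modalities might be combined in such a framework, the sketch below triggers a haptic cue once the learner's gaze dwells on an object. The dwell threshold and device hooks are stand-ins, not the API of any specific system reviewed here.

```python
# Sketch: gaze-contingent haptics. Gaze selects the object of interest;
# a haptic cue fires after a dwell, a common gaze-selection heuristic.
import time

DWELL_S = 0.6   # assumed dwell time for gaze selection

def gaze_target(gaze_xy, objects):
    """Return the first object whose screen bounds contain the gaze point."""
    for obj in objects:
        x0, y0, x1, y1 = obj["bounds"]
        if x0 <= gaze_xy[0] <= x1 and y0 <= gaze_xy[1] <= y1:
            return obj
    return None

def update(gaze_xy, objects, state, haptics):
    """Call once per frame; trigger a haptic pulse after a steady dwell."""
    obj = gaze_target(gaze_xy, objects)
    now = time.monotonic()
    if obj is not state.get("obj"):
        state["obj"], state["since"] = obj, now   # gaze moved: restart timer
    elif obj and now - state["since"] >= DWELL_S:
        haptics(obj)              # e.g., vibrate or render a force cue
        state["since"] = now      # re-arm for the next pulse
```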


Author(s):  
João Martinho Moura ◽  
Né Barros ◽  
Paulo Ferreira-Lopes

Virtual reality (VR) has long been a prominent idea for exploring new worlds beyond the physical, and in recent decades it has evolved in many respects. The notions of immersion and the sense of presence in VR have gained new definitions as technological advances took place. However, even today we can question whether the degrees of immersion achieved through this technology are profound and truly felt. A fundamental aspect is the sense of embodiment in the virtual space. To what extent do we feel embodied in virtual environments? In this publication, the authors present works that challenge and question the sensation of embodiment in VR, specifically in the artistic context. They present initial reflections on embodiment in virtuality and analyze the technologies adopted in creating interactive artworks prepared for galleries and the theater stage, questioning the sensations caused by visual embodiment in virtual reality from the perspective of both the audience and the performer.


Perception ◽  
2020 ◽  
Vol 49 (9) ◽  
pp. 940-967
Author(s):  
Ilja T. Feldstein ◽  
Felix M. Kölsch ◽  
Robert Konrad

Virtual reality systems are a popular tool in the behavioral sciences. The participants' behavior is, however, a response to cognitively processed stimuli. Consequently, researchers must ensure that virtually perceived stimuli resemble those present in the real world, so that collected findings are ecologically valid. Our article provides a literature review relating to distance perception in virtual reality. Furthermore, we present a new study that compares verbal distance estimates within real and virtual environments. The virtual space, a replica of a real outdoor area, was displayed using a state-of-the-art head-mounted display. Investigated distances ranged from 8 to 13 m. Overall, the results show no significant difference between egocentric distance estimates in real and virtual environments. However, a more in-depth analysis suggests that the order in which participants were exposed to the two environments may affect the outcome. Furthermore, the study suggests that a rising experience of immersion leads to an alignment of the estimated virtual distances with the real ones. The results also show that the discrepancy between estimates of real and virtual distances increases with the incongruity between virtual and actual eye heights, demonstrating the importance of an accurately set virtual eye height.
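One common account of such eye-height effects (an assumption for illustration, not the paper's own analysis) is the angular-declination model of ground-plane distance perception: the display renders a target at distance d with virtual eye height h_v, but the observer interprets the resulting declination angle using their actual eye height h_a, so the inferred distance scales by h_a / h_v. A worked example:

```python
# Worked example under the angular-declination model (an assumed account):
# a too-high virtual eye height makes ground targets look closer.
import math

def perceived_distance(d: float, h_virtual: float, h_actual: float) -> float:
    alpha = math.atan2(h_virtual, d)    # declination angle the display renders
    return h_actual / math.tan(alpha)   # distance the observer infers from it

# A 10 m target rendered with a 10 cm-too-high virtual eye height:
print(perceived_distance(10.0, h_virtual=1.70, h_actual=1.60))  # ~9.41 m
```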


2021 ◽  
Vol 11 (16) ◽  
pp. 7546
Author(s):  
Katashi Nagao ◽  
Kaho Kumon ◽  
Kodai Hattori

In building-scale VR, where the entire interior of a large building is a virtual space that users can walk around in, it is very important to handle movable objects that exist in the real world but not in the virtual space. We propose a mechanism that detects such objects (objects not embedded in the virtual space) in advance and then generates a sound when one is hit with a virtual stick. Moreover, in a large indoor virtual environment there may be multiple users at the same time, and their presence may be perceived by hearing as well as by sight, e.g., through sounds such as footsteps. We therefore use a GAN-based deep learning system to generate the impact sound of any object. First, in order to display a real-world object visually in virtual space, its 3D data is captured using an RGB-D camera and saved along with its position information. At the same time, we take an image of the object, break it down into parts, estimate the material of each part, generate a corresponding sound, and associate the sound with that part. When a VR user hits the object virtually (e.g., with a virtual stick), the associated sound is played. We demonstrate that users can judge the material from the sound, confirming the effectiveness of the proposed method.
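The runtime half of this pipeline could be sketched as follows, assuming the material estimation and GAN sound synthesis have already run offline; the data structures and names are hypothetical, not the authors' implementation.

```python
# Sketch: each captured object part stores an estimated material and a
# pre-generated impact sound; a virtual-stick hit plays the sound for
# the struck part. The estimator and GAN synthesizer are offline stand-ins.
from dataclasses import dataclass, field

@dataclass
class ObjectPart:
    name: str
    material: str            # e.g., "wood", "metal" (estimated from the image)
    impact_wav: bytes        # sound pre-generated by the GAN for this material

@dataclass
class RealWorldObject:
    position: tuple          # saved with the RGB-D scan
    parts: list = field(default_factory=list)

def on_stick_hit(obj: RealWorldObject, part_index: int, play) -> None:
    """Play the pre-generated impact sound of the struck part, so the
    listener can judge its material by ear."""
    part = obj.parts[part_index]
    play(part.impact_wav)
```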


2016 ◽  
Vol 15 (2) ◽  
pp. 18-29
Author(s):  
Andrew Ray

Virtual environments (VEs) demonstrate the immense potential that computer technology can provide to society. VEs have been created for almost two decades, but standardized tools and procedures for their creation still do not exist. Numerous efforts to build VE creation tools have come and gone, and there is little consensus among tool creators on a common subset of standard features that developers can expect. Currently, developers use one of many Virtual Reality (VR) toolkits to create a VE. However, VR toolkits are problematic when it comes to interoperability between applications and other VR toolkits. This paper investigates why the development tools are in this state. A discussion of the history of VR toolkits and of developer experiences shows what developers face when they create a VE. Next, Three-Dimensional Interaction Technique (3DIT) toolkits are introduced as a new way of developing some parts of VEs. Lastly, a vision for the future of VE development that may help improve the next generation of toolkits is presented.


2019 ◽  
Vol 9 (2) ◽  
Author(s):  
Muhammad Nur Affendy Nor'a ◽  
Ajune Wanis Ismail

An application that adopts a collaborative system allows multiple users to interact with one another in the same virtual space, whether in Virtual Reality (VR) or Augmented Reality (AR). This paper aims to integrate VR and AR spaces in a Collaborative User Interface that enables users to cooperate across different types of interfaces in a single shared space. Gesture interaction is proposed as the interaction tool in both virtual spaces, as it provides a more natural way of interacting with virtual objects. The integration of the VR and AR spaces provides a cross-discipline shared data interchange through the network protocol of a client-server architecture.
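A minimal sketch of the shared-space exchange such a client-server architecture implies: each VR or AR client publishes gesture-driven transforms of a shared object, and the server rebroadcasts the authoritative state. The JSON message schema and UDP transport are assumptions for illustration, not the paper's protocol.

```python
# Sketch: a client publishes a shared-object transform to the
# collaboration server over UDP.
import json
import socket

def encode_update(object_id: str, position, rotation) -> bytes:
    return json.dumps({
        "type": "transform",
        "object": object_id,
        "position": position,     # [x, y, z] in the shared space
        "rotation": rotation,     # quaternion [x, y, z, w]
    }).encode()

def send_update(server: tuple, payload: bytes) -> None:
    """Fire-and-forget update; the server rebroadcasts to other clients."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(payload, server)

send_update(("127.0.0.1", 9000),
            encode_update("cube-1", [0.2, 1.1, -0.4], [0, 0, 0, 1]))
```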


2001 ◽  
Vol 10 (1) ◽  
pp. 22-34 ◽  
Author(s):  
Roger Hubbold ◽  
Jon Cook ◽  
Martin Keates ◽  
Simon Gibson ◽  
Toby Howard ◽  
...  

This paper describes a publicly available virtual reality (VR) system, GNU/MAVERIK, which forms one component of a complete VR operating system. We give an overview of the architecture of MAVERIK and show how it is designed to use application data in an intelligent way, via a simple yet powerful callback mechanism that supports an object-oriented framework of classes, objects, and methods. Examples are given to illustrate different uses of the system and typical performance levels.
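MAVERIK itself is a C library, but the callback idea can be sketched in Python for illustration: the kernel keeps no duplicate scene representation, and each application class registers methods (draw, bounds, and so on) that the kernel invokes on the application's own data. Everything below is a hypothetical analogue, not MAVERIK's actual API.

```python
# Sketch of a callback-driven kernel: per-class callbacks operate
# directly on application data structures.
class Kernel:
    def __init__(self):
        self.callbacks = {}   # class -> {method name: function}

    def register(self, cls, method, fn):
        self.callbacks.setdefault(cls, {})[method] = fn

    def call(self, obj, method, *args):
        # Dispatch to whatever the application registered for this class.
        return self.callbacks[type(obj)][method](obj, *args)

class City:                   # the application's own data structure
    def __init__(self, buildings):
        self.buildings = buildings

kernel = Kernel()
kernel.register(City, "draw",
                lambda c: print(f"drawing {len(c.buildings)} buildings"))
kernel.call(City(["town hall", "station"]), "draw")
```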

