Immersive Gesture Interfaces for Navigation of 3D Maps in HMD-Based Mobile Virtual Environments

2018 ◽  
Vol 2018 ◽  
pp. 1-11 ◽  
Author(s):  
Yea Som Lee ◽  
Bong-Soo Sohn

3D maps such as Google Earth and Apple Maps (3D mode), in which users can view and navigate 3D models of the real world, are widely available in current mobile and desktop environments. Users typically rely on a monitor for display and a keyboard/mouse for interaction. Head-mounted displays (HMDs) are currently attracting great attention from industry and consumers because they can provide an immersive virtual reality (VR) experience at an affordable cost. However, conventional keyboard and mouse interfaces reduce the level of immersion because their manipulation methods do not resemble actual actions in reality, which often makes them inappropriate for navigating 3D maps in virtual environments. Motivated by this, we design immersive gesture interfaces for the navigation of 3D maps that are suitable for HMD-based virtual environments. We also describe a simple algorithm to capture and recognize the gestures in real time using a Kinect depth camera. We evaluated the usability of the proposed gesture interfaces and compared them with conventional keyboard and mouse-based interfaces. Results of the user study indicate that our gesture interfaces are preferable for achieving a high level of immersion and fun in HMD-based virtual environments.
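The abstract mentions a simple real-time gesture recognition algorithm driven by a Kinect depth camera. As a minimal illustrative sketch (not the authors' algorithm), the mapping from tracked skeleton joints to navigation commands can be as simple as thresholding the dominant hand's displacement relative to the shoulder; the joint layout and threshold value here are assumptions.

```python
# Illustrative sketch (not the paper's algorithm): map Kinect skeleton
# joint positions to 3D-map navigation commands by thresholding the
# dominant hand's displacement from the shoulder.

def classify_gesture(hand, shoulder, threshold=0.25):
    """Return a navigation command from 3D joint positions (metres).

    hand, shoulder: (x, y, z) tuples in the Kinect camera frame.
    threshold: minimum displacement in metres (hypothetical value).
    """
    dx = hand[0] - shoulder[0]   # lateral offset   -> turn left/right
    dy = hand[1] - shoulder[1]   # vertical offset  -> ascend/descend
    dz = shoulder[2] - hand[2]   # push toward camera -> move forward

    # Pick the axis with the largest displacement; ignore small jitter.
    axis, value = max((("turn", dx), ("climb", dy), ("move", dz)),
                      key=lambda p: abs(p[1]))
    if abs(value) < threshold:
        return "idle"
    if axis == "turn":
        return "turn_right" if value > 0 else "turn_left"
    if axis == "climb":
        return "ascend" if value > 0 else "descend"
    return "forward" if value > 0 else "backward"
```

A real recognizer would smooth joint positions over several depth frames before classifying, to suppress sensor noise.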

Author(s):  
Florian Hruby ◽  
Irma Castellanos ◽  
Rainer Ressl

Abstract Scale has been a defining criterion of mapmaking for centuries. However, this criterion is fundamentally questioned by highly immersive virtual reality (VR) systems able to represent geographic environments at a high level of detail and, thus, to provide the user with a feeling of being present in VR space. In this paper, we use the concept of scale as a vehicle for discussing some of the main differences between immersive VR and non-immersive geovisualization products. Based on a short review of the diverging meanings of scale, we propose possible approaches to the issue of both spatial and temporal scale in immersive VR. Our considerations shall encourage a more detailed treatment of the specific characteristics of immersive geovisualization, to facilitate deeper conceptual integration of immersive and non-immersive visualization in the realm of cartography.


2010 ◽  
Vol 2 (5) ◽  
Author(s):  
Nuno Rodrigues ◽  
Luís Magalhães ◽  
João Paulo Moura ◽  
Alan Chalmers ◽  
Filipe Santos ◽  
...  

The manual creation of virtual environments is a demanding and costly task. With the increasing demand for more complex models in different areas, such as the design of virtual worlds, video games and computer-animated movies, the need to generate them automatically is greater than ever. This paper presents a framework for the automatic generation of houses based on architectural rules. This approach has some innovative features, including the implementation of architectural rules, and produces 2D floor plans as well as complete 3D models, with a high level of detail, in just a few seconds. To evaluate the framework, two different applications were developed and the output models were tested for different fields of application (e.g. virtual worlds). The results obtained provide evidence that the proposed framework may lead to the development of several specific applications to produce accurate 3D models of houses representing different realities (e.g. civilizations, epochs, etc.).
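Rule-driven floor-plan generation of the kind described above is often sketched as recursive space subdivision. The following is a minimal sketch under assumed rules (a minimum room width and an always-split-the-longer-side heuristic); the paper's actual architectural rule set is richer than this.

```python
# Minimal sketch of rule-driven floor-plan generation by recursive
# subdivision. The rules here (minimum room size, split the longer side)
# are hypothetical stand-ins for the paper's architectural rules.
import random

def subdivide(rect, min_size=3.0, rng=None):
    """Split rect=(x, y, w, h) into rooms; returns a list of rects."""
    rng = rng or random.Random(0)   # fixed seed for a reproducible plan
    x, y, w, h = rect
    # Architectural rule: stop when a further split would make a room
    # narrower than min_size along the split axis.
    if max(w, h) < 2 * min_size:
        return [rect]
    if w >= h:  # split the longer side
        cut = rng.uniform(min_size, w - min_size)
        return (subdivide((x, y, cut, h), min_size, rng)
                + subdivide((x + cut, y, w - cut, h), min_size, rng))
    cut = rng.uniform(min_size, h - min_size)
    return (subdivide((x, y, w, cut), min_size, rng)
            + subdivide((x, y + cut, w, h - cut), min_size, rng))

rooms = subdivide((0.0, 0.0, 12.0, 9.0))  # one storey, 12 m x 9 m
```

Each resulting rectangle would then be labelled (kitchen, bedroom, ...) and extruded into 3D walls, which is where most of the real rule complexity lives.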


2019 ◽  
Vol 11 (14) ◽  
pp. 3894
Author(s):  
Fabrice Monna ◽  
Nicolas Navarro ◽  
Jérôme Magail ◽  
Rodrigue Guillon ◽  
Tanguy Rolland ◽  
...  

Photospheres, or 360° photos, offer valuable opportunities for perceiving space, especially when viewed through head-mounted displays designed for virtual reality. Here, we propose to take advantage of this potential for archaeology and cultural heritage, and to extend it by augmenting the images with existing documentation, such as 2D maps or 3D models, resulting from research studies. Photospheres are generally produced in the form of distorted equirectangular projections, neither georeferenced nor oriented, so that any registration of external documentation is far from straightforward. The present paper seeks to fill this gap by providing simple practical solutions, based on rigid and non-rigid transformations. Immersive virtual environments augmented by research materials can be very useful to contextualize archaeological discoveries, and to test research hypotheses, especially when the team is back at the laboratory. Colleagues and the general public can also be transported to the site, almost physically, generating an authentic sense of presence, which greatly facilitates the contextualization of the archaeological information gathered. This is especially true with head-mounted displays, but the resulting images can also be inspected using applications designed for the web, or viewers for smartphones, tablets and computers.
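The rigid part of the registration problem described above starts from the equirectangular mapping itself: each pixel corresponds to a viewing direction on the unit sphere, and orienting an unoriented photosphere amounts to rotating those directions. A minimal sketch (the yaw angle is an assumed calibration input, not a value from the paper):

```python
# Sketch of the rigid step in registering an unoriented equirectangular
# photosphere: pixel -> unit direction vector, then a yaw rotation to
# align the image with map north (angle is an assumed calibration input).
import math

def pixel_to_direction(u, v, width, height):
    """Map equirectangular pixel (u, v) to a unit vector (x, y, z)."""
    lon = (u / width) * 2.0 * math.pi - math.pi      # [-pi, pi]
    lat = math.pi / 2.0 - (v / height) * math.pi     # [pi/2, -pi/2]
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

def rotate_yaw(vec, angle):
    """Rotate a direction about the vertical axis (rigid orientation)."""
    x, y, z = vec
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y, z)
```

Non-rigid corrections (for stitching distortion) would then warp pixel coordinates before this mapping, which is the harder half of the registration.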


2017 ◽  
Vol 10 (5) ◽  
Author(s):  
Thorsten Roth ◽  
Martin Weier ◽  
André Hinkenjann ◽  
Yongmin Li ◽  
Philipp Slusallek

This work presents the analysis of data recorded by an eye tracking device in the course of evaluating a foveated rendering approach for head-mounted displays (HMDs). Foveated rendering methods adapt the image synthesis process to the user's gaze and exploit the human visual system's limitations to increase rendering performance. Foveated rendering has particularly great potential when certain requirements have to be fulfilled, such as low-latency rendering to cope with high display refresh rates. This is crucial for virtual reality (VR), as a high level of immersion, which can only be achieved with high rendering performance and also helps to reduce nausea, is an important factor in this field. We put things in context by first providing basic information about our rendering system, followed by a description of the user study and the collected data. This data stems from fixation tasks that subjects had to perform while being shown fly-through sequences of virtual scenes on an HMD. These fixation tasks consisted of a combination of various scenes and fixation modes. Besides static fixation targets, moving targets on randomized paths as well as a free focus mode were tested. Using this data, we estimate the precision of the utilized eye tracker and analyze the participants' accuracy in focusing on the displayed fixation targets. Here, we also take a look at eccentricity-dependent quality ratings. Comparing this information with the users' quality ratings given for the displayed sequences then reveals an interesting connection between fixation modes, fixation accuracy and quality ratings.
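The core idea of gaze-adaptive rendering can be sketched in a few lines: compute each pixel's angular eccentricity from the tracked gaze point and shade coarser with distance. The band thresholds and rates below are hypothetical, not values from this study.

```python
# Illustrative sketch: angular eccentricity of a pixel from the tracked
# gaze point, mapped to a coarser shading rate in the periphery.
# Band thresholds and rates are hypothetical, not the paper's settings.
import math

def eccentricity_deg(pixel, gaze, pixels_per_degree):
    """Angular distance in degrees between a pixel and the gaze point."""
    dx, dy = pixel[0] - gaze[0], pixel[1] - gaze[1]
    return math.hypot(dx, dy) / pixels_per_degree

def shading_rate(ecc_deg):
    """Samples per pixel: full at the fovea, reduced in the periphery."""
    if ecc_deg < 5.0:      # foveal region: full quality
        return 1.0
    if ecc_deg < 15.0:     # mid-periphery: quarter rate
        return 0.25
    return 0.0625          # far periphery: 1 sample per 4x4 block
```

Eye-tracker precision matters precisely because an error of a few degrees shifts pixels between these bands, which is what the study's eccentricity-dependent quality analysis probes.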


2020 ◽  
Vol 22 (Supplement_3) ◽  
pp. iii461-iii461
Author(s):  
Andrea Carai ◽  
Angela Mastronuzzi ◽  
Giovanna Stefania Colafati ◽  
Paul Voicu ◽  
Nicola Onorini ◽  
...  

Abstract Tridimensional (3D) rendering of volumetric neuroimaging is increasingly being used to assist surgical management of brain tumors. New technologies allowing immersive virtual reality (VR) visualization of the obtained models offer the opportunity to appreciate neuroanatomical details and the spatial relationship between the tumor and normal neuroanatomical structures to a level never seen before. We present our preliminary experience with the Surgical Theatre, a commercially available 3D VR system, in 60 consecutive neurosurgical oncology cases. 3D models were developed from volumetric CT scans and standard and advanced MR sequences. The system allows the loading of 6 different layers at the same time, with the possibility to modulate opacity and threshold in real time. The 3D VR system was used during preoperative planning, allowing a better definition of the surgical strategy. A tailored craniotomy and brain dissection can be simulated in advance and precisely performed in the OR by connecting the system to intraoperative neuronavigation. Smaller blood vessels are generally not included in the 3D rendering; however, real-time intraoperative threshold modulation of the 3D model assisted in their identification, improving surgical confidence and safety during the procedure. VR was also used offline, both before and after surgery, in the setting of case discussion within the neurosurgical team and during MDT discussion. Finally, 3D VR was used during informed consent, improving communication with families and young patients. 3D VR allows surgical strategies to be tailored to the single patient, contributing to procedural safety and efficacy and to the global improvement of neurosurgical oncology care.
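The threshold modulation described above amounts to selecting which voxels of the scan are included in the rendered model via an adjustable intensity window. A minimal sketch (pure Python, illustrative values rather than clinical presets):

```python
# Sketch of intensity-window thresholding: selecting which voxels of a
# volumetric scan are included in the rendered model. Values are
# illustrative, not clinical presets.

def voxel_mask(volume, lower, upper):
    """Mark voxels whose intensity falls inside [lower, upper].

    volume: nested lists of scalar intensities (a slice-by-slice scan).
    Returns a structure of the same shape, True for included voxels.
    """
    if isinstance(volume, list):
        return [voxel_mask(v, lower, upper) for v in volume]
    return lower <= volume <= upper
```

Lowering `lower` brings faint structures (such as small vessels) into the rendered model; raising it hides them again, which is the intraoperative adjustment the abstract describes.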


Author(s):  
Daniela Janssen ◽  
Christian Tummel ◽  
Anja Richert ◽  
Ingrid Isenhardt

<p class="Abstract"><span lang="EN-US">In light of the increasing technological developments, working life and education is changing and becoming more complex, interconnected and digital. These changed circumstances require new and modified competences of future employees. Education has to respond to the changing requirements in working life. To prepare for this, a technological-oriented teaching and learning process as well as gaining practical experience is crucial for students. In this context, Virtual Reality (VR) technologies provide new opportunities for practical experience in higher education, where they can further intensify the students learning experiences to a more immersive and engaging involvement in the learning process. To evaluate the potential of immersive virtual learning environments (VLE) for higher education and to understand more deeply which kind of experiences students gain while learning in immersive virtual environments (VE) an experimental research study is carried out. The paper describes education in light of industry 4.0 first and gives an overall view of immersive learning and the role of VR Technologies. Then the user study to measure user experience (UX) in immersive VLE is presented. Preliminary results are outlined and discussed with a view of further research.</span></p>


Author(s):  
Hyunmin Cheong ◽  
Wei Li ◽  
Francesco Iorio

This paper presents a novel application of gamification for collecting high-level design descriptions of objects. High-level design descriptions entail not only superficial characteristics of an object, but also function, behavior, and requirement information about the object. Such information is difficult to obtain with traditional data mining techniques. For the acquisition of high-level design information, we investigated a multiplayer game, “Who is the Pretender?”, in an offline context. Through a user study, we demonstrate that the game offers a more fun, enjoyable, and engaging experience for providing descriptions of objects than simply asking people to list them. We also show that the game elicits more high-level, problem-oriented requirement descriptions and fewer low-level, solution-oriented structure descriptions, due to the unique game mechanics that encourage players to describe objects at an abstract level. Finally, we present how crowdsourcing can be used to generate game content that facilitates the gameplay. Our work contributes towards acquiring high-level design knowledge that is essential for developing knowledge-based CAD systems.


2018 ◽  
Author(s):  
D. Kuhner ◽  
L.D.J. Fiederer ◽  
J. Aldinger ◽  
F. Burget ◽  
M. Völker ◽  
...  

Abstract As autonomous service robots become more affordable and thus available to the general public, there is a growing need for user-friendly interfaces to control these systems. Control interfaces typically become more complicated with increasing complexity of the robotic tasks and the environment. Traditional control modalities such as touch, speech or gesture commands are not necessarily suited for all users. While non-expert users can make the effort to familiarize themselves with a robotic system, paralyzed users may not be capable of controlling such systems even though they need robotic assistance most. In this paper, we present a novel framework that allows these users to interact with a robotic service assistant in a closed-loop fashion, using only thoughts. The system is composed of several interacting components: non-invasive neuronal signal recording and co-adaptive deep learning, which form the brain-computer interface (BCI); high-level task planning based on referring expressions; navigation and manipulation planning; and environmental perception. We extensively evaluate the BCI in various tasks, determine the performance of the goal formulation user interface and investigate its intuitiveness in a user study. Furthermore, we demonstrate the applicability and robustness of the system in real-world scenarios, considering fetch-and-carry tasks and tasks involving human-robot interaction. As our results show, the system is capable of adapting to frequent changes in the environment and reliably accomplishes given tasks within a reasonable amount of time. Combined with high-level planning using referring expressions and autonomous robotic systems, interesting new perspectives open up for non-invasive BCI-based human-robot interactions.


Author(s):  
M. Doležal ◽  
M. Vlachos ◽  
M. Secci ◽  
S. Demesticha ◽  
D. Skarlatos ◽  
...  

<p><strong>Abstract.</strong> Underwater archaeological discoveries bring new challenges to the field: such sites are more difficult to reach and, due to natural influences, they tend to deteriorate quickly. Photogrammetry is one of the most powerful tools used for archaeological fieldwork. Photogrammetric techniques are used to document the state of a site in digital form for later analysis, without the risk of damaging any of the artefacts or the site itself. Archaeologists use this technology to record discovered artefacts or even whole archaeological sites. Gathering data underwater brings several problems and limitations, so specific steps should be taken to achieve the best possible results, and divers should be well prepared, with knowledge of measurement and photo capture methods, before starting work at an underwater site. Using immersive virtual reality, we have developed educational software to introduce maritime archaeology students to photogrammetric techniques. To test the feasibility of the software, a user study was performed and evaluated by experts. In the software, the user is tasked with placing markers on the site, measuring distances between them, and then taking photos of the site, from which a 3D mesh is generated offline. Initial results show that the system is useful for understanding the basics of underwater photogrammetry.</p>
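The in-VR measurement task described above reduces, at its simplest, to computing distances between placed control-point markers. A minimal sketch (marker coordinates in metres are assumed inputs from the virtual site):

```python
# Minimal sketch of the measurement step in the training software:
# Euclidean distance between two control-point markers placed on the
# virtual site (coordinates in metres are assumed inputs).
import math

def marker_distance(a, b):
    """Euclidean distance between two 3D marker positions."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
```

In real photogrammetric workflows these inter-marker distances serve as scale constraints when the offline reconstruction turns the photos into a 3D mesh.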

