Evaluation on User Perception Effect based on Interaction Techniques in the Stereoscopic Environment

2021 ◽ Vol 15 (2) ◽ pp. 49-55
Author(s): Dino Caesaron ◽ Rio Prasetyo Lukodono ◽ Yunita Nugrahaini Safrudin

User interaction with three-dimensional (3D) objects has become an important issue in the recent development of virtual reality (VR) applications, and current VR fundamentally relies on a viable interface between humans and machines. This research focuses on users' interaction techniques, considering two approaches (direct and indirect interaction) while users interact with three-dimensional objects. Many potential applications can benefit from virtual reality, yet some fundamental visual and cognitive activities in the Virtual Environment (VE), such as how users of direct and indirect techniques perceive space, are not well established. The experiment was performed in a stereoscopic environment using a reciprocal tapping task. Participants used both a direct pointing technique and an indirect cursor technique to select a stereoscopic spherical target. The results show that, with the direct interaction technique, users' perception of an object tends to converge toward the center of the simulated area, whereas this convergence does not appear in the indirect cursor condition. Users' pointing estimation is more accurate with the indirect interaction approach. The findings provide an understanding of how users interact in the stereoscopic environment; importantly, developers of virtual environments may use these results when designing effective user interfaces for specific interaction techniques.
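
As an illustration of how a reciprocal tapping task of this kind is commonly scored, the minimal sketch below computes a Fitts-style index of difficulty and throughput from logged target selections. This assumes the study follows the standard Fitts paradigm (fixed target distance and width per condition); the abstract does not state the exact metrics used, and the data fields and function names here are hypothetical.

```python
import math
from dataclasses import dataclass
from statistics import mean

@dataclass
class Trial:
    """One target selection in the reciprocal tapping task (hypothetical log format)."""
    distance: float       # centre-to-centre distance between targets (mm)
    width: float          # target (sphere) diameter (mm)
    movement_time: float  # time from leaving one target to selecting the next (s)

def index_of_difficulty(distance: float, width: float) -> float:
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1.0)

def throughput(trials: list[Trial]) -> float:
    """Mean index of difficulty divided by mean movement time (bits/s)."""
    ids = [index_of_difficulty(t.distance, t.width) for t in trials]
    times = [t.movement_time for t in trials]
    return mean(ids) / mean(times)

# Example: compare a direct-pointing condition against an indirect-cursor condition
# (numbers are made up for illustration).
direct = [Trial(200.0, 40.0, 0.62), Trial(200.0, 40.0, 0.58), Trial(300.0, 40.0, 0.75)]
indirect = [Trial(200.0, 40.0, 0.70), Trial(200.0, 40.0, 0.66), Trial(300.0, 40.0, 0.81)]
print(f"direct throughput:   {throughput(direct):.2f} bits/s")
print(f"indirect throughput: {throughput(indirect):.2f} bits/s")
```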

2010 ◽ Vol 129-131 ◽ pp. 1296-1300
Author(s): Xue Jun Yue ◽ Tian Sheng Hong ◽ Xing Xu ◽ Wei Bin Wu

Based on virtual reality, a computer simulation technology, we build a vivid and realistic virtual environment so that users can interact with it directly through a human-machine interface. This paper studies the development and integration of a 3D virtual model, including the system design needed to realize its functions and integrate its subsystems. In the implementation, the 3D RC modeling tool is used to construct the three-dimensional models, and the Virtools authoring tool is then used to build the three-dimensional virtual scene from these models. Virtual reality (VR) is an advanced computer-based human-machine interface whose basic features are immersion, interactivity and constructiveness [1,2]. Specifically, virtual reality uses the computer to create a stereoscopic space in which users can interact with the objects in that space, observe how parts of those objects behave, and move freely at will, producing a sense of integration and participation [3,4]. It is a modern high technology with computer technology at its core, which builds a realistic virtual environment integrating sight, hearing and touch; through the necessary equipment, the user and the virtual environment interact with and influence each other, producing an "immersion" that feels like a real environment [5-7]. VR technology integrates computer technology, computer graphics, computer simulation, visual physiology and psychology, microelectronics, visual display and stereoscopic display technology, sensing and measurement technology, information technology, voice recognition, software engineering, human-machine interfacing, network technology, artificial intelligence and other high technologies. Since the birth of virtual reality technology, it has found wide application in the economy, the military and many other areas, and together with the Internet and multimedia it is regarded as one of the three major technologies of the 21st century.


Author(s): Keonhee Park ◽ Seongah Chin

Virtual reality head-mounted displays (VR HMDs) can present three-dimensional content that increases user immersion to a relatively acceptable level. However, user interactions with VR HMDs are often unsatisfactory within a virtual environment. In this article, the authors propose a smart human–computer interaction technique that runs on VR HMDs with Leap Motion. The study focuses on improving interaction methods for VR content and on providing, in a user-friendly manner, the interfaces and information necessary to operate such content, which should reduce fatigue and dizziness. To this end, the authors propose and validate a smart head-up display suited to VR content. Specifically, a virtual head-up display (HUD) was designed and optimized for use with VR HMD and Leap Motion technologies. Comparative performance tests are also presented to validate the proposed system.
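
The abstract does not describe the exact gesture mapping used to operate the virtual HUD, so the following is only a minimal sketch of one plausible scheme: a pinch gesture from a hand tracker (such as Leap Motion) toggles HUD visibility, with hysteresis to avoid flicker. The pinch-strength input and threshold values are assumptions, not the paper's interface.

```python
# Gesture-driven virtual HUD toggle. Assumes a hand-tracking source exposes a
# per-frame pinch strength in [0, 1]; the input values below are synthetic.

PINCH_ON = 0.8    # strength above which the hand counts as pinching
PINCH_OFF = 0.3   # hysteresis threshold to re-arm after release

class HudController:
    def __init__(self):
        self.visible = False
        self._pinching = False

    def update(self, pinch_strength: float) -> bool:
        """Toggle HUD visibility once per completed pinch; return current visibility."""
        if not self._pinching and pinch_strength >= PINCH_ON:
            self._pinching = True
            self.visible = not self.visible   # toggle exactly once per pinch
        elif self._pinching and pinch_strength <= PINCH_OFF:
            self._pinching = False            # pinch released, re-arm
        return self.visible

# Example frame loop with synthetic pinch-strength readings.
hud = HudController()
for strength in [0.1, 0.5, 0.9, 0.9, 0.2, 0.1, 0.85, 0.1]:
    print(strength, hud.update(strength))
```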


Author(s): Tushar H. Dani ◽ Rajit Gadh

Abstract Despite advances in Computer-Aided Design (CAD) and the evolution of graphical user interfaces, rapid creation, editing, and visualization of three-dimensional (3D) shapes remains a tedious task. Though the availability of Virtual Reality (VR)-based systems allows enhanced three-dimensional interaction and visualization, the use of VR for ab initio shape design, as opposed to 'importing' models from existing CAD systems, is a relatively new area of research. Of interest are computer-human interaction issues and the design and geometric tools for shape modeling in a Virtual Environment (VE). The focus of this paper is on the latter, i.e., on defining the geometric tools required for a VR-CAD system and describing a framework that meets those requirements. This framework, the Virtual Design Software Framework (VDSF), consists of the interaction and design tools and an underlying geometric engine that provides the representation and algorithms these tools require. The geometric engine, called the Virtual Modeler, uses a graph-based representation (Shape-Graph) to model the shapes created by the user. The Shape-Graph facilitates interactive editing by localizing the effect of editing operations and, in addition, provides constraint-based design and editing mechanisms that are useful in a 3D interactive virtual environment. The paper concludes with a description of the prototype system, called the Virtual Design Studio (VDS), that is currently being implemented.
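
To make the idea of localized editing in a graph-based shape representation concrete, the toy sketch below keeps shape features as graph nodes and re-evaluates only the subgraph downstream of an edited node. This is an illustrative data structure under assumed names, not the actual Shape-Graph or VDSF API.

```python
# Toy graph-based shape representation: nodes hold feature parameters, edges
# record which downstream features depend on them. An edit re-evaluates only the
# affected subgraph rather than the whole model, illustrating localized editing.
from collections import defaultdict, deque

class ShapeGraph:
    def __init__(self):
        self.params = {}                    # node -> parameter dictionary
        self.dependents = defaultdict(set)  # node -> nodes that must update after it

    def add_node(self, name, value, depends_on=()):
        self.params[name] = value
        for parent in depends_on:
            self.dependents[parent].add(name)

    def edit(self, name, value):
        """Change one feature's parameters and return the nodes needing re-evaluation."""
        self.params[name] = value
        affected, queue = [], deque([name])
        while queue:
            node = queue.popleft()
            for child in self.dependents[node]:
                if child not in affected:
                    affected.append(child)
                    queue.append(child)
        return affected

g = ShapeGraph()
g.add_node("base_block", {"w": 100, "h": 40})
g.add_node("hole", {"d": 10}, depends_on=["base_block"])
g.add_node("fillet", {"r": 5}, depends_on=["hole"])
g.add_node("label", {"text": "A"})   # independent feature, untouched by the edit below
print(g.edit("hole", {"d": 12}))     # -> ['fillet']
```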


2010 ◽ pp. 180-193
Author(s): F. Steinicke ◽ G. Bruder ◽ J. Jerald ◽ H. Frenz

In recent years virtual environments (VEs) have become more and more popular and widespread, driven by the requirements of numerous application areas, in particular the 3D city visualization domain. Virtual reality (VR) systems, which make use of tracking technologies and stereoscopic projections of three-dimensional synthetic worlds, support better exploration of complex datasets. However, due to the limited interaction space usually provided by the range of the tracking sensors, users can explore only a portion of the virtual environment (VE). Redirected walking allows users to walk through large-scale immersive virtual environments (IVEs), such as virtual city models, while physically remaining in a reasonably small workspace, by intentionally injecting scene motion into the IVE. With redirected walking, users are guided along physical paths that may differ from the paths they perceive in the virtual world. The authors have conducted experiments to quantify how much humans can be redirected without noticing. In this chapter they present the results of this study and the implications for virtual locomotion user interfaces that allow users to view arbitrary real-world locations before actually traveling there in a natural environment.
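
The core mechanism of redirected walking can be sketched as applying gains to the tracked physical motion before updating the virtual camera, so the virtual path slowly diverges from the physical one. The gain values below are illustrative placeholders, not the imperceptibility thresholds measured in the study.

```python
# Minimal sketch of redirected walking via rotation and translation gains: the
# virtual camera is updated with slightly scaled versions of the user's physical
# head rotation and forward movement, steering them onto a different physical path.

ROTATION_GAIN = 1.15     # virtual turn is 15% larger than the physical turn (illustrative)
TRANSLATION_GAIN = 1.05  # virtual step is 5% longer than the physical step (illustrative)

class RedirectedCamera:
    def __init__(self):
        self.virtual_yaw = 0.0       # degrees
        self.virtual_position = 0.0  # metres along the current walking direction

    def update(self, physical_yaw_delta: float, physical_step: float):
        """Apply gains to the tracked per-frame deltas."""
        self.virtual_yaw += ROTATION_GAIN * physical_yaw_delta
        self.virtual_position += TRANSLATION_GAIN * physical_step
        return self.virtual_yaw, self.virtual_position

cam = RedirectedCamera()
# User physically turns 90 degrees in 1-degree increments and walks 2 m in 1 cm steps.
for _ in range(90):
    cam.update(1.0, 0.0)
for _ in range(200):
    cam.update(0.0, 0.01)
print(cam.virtual_yaw, cam.virtual_position)   # ~103.5 degrees, ~2.1 m in the virtual world
```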


2005 ◽ Vol 32 (5) ◽ pp. 777-785
Author(s): Ebru Cubukcu ◽ Jack L Nasar

Discrepancies between perceived and actual distance may affect people's spatial behavior. In a previous study, Nasar, using self-reports of behavior, found that segmentation (measured by the number of buildings) along a route affected the choice of parking garage and the path from the parking garage to a destination. We recreated that same environment in a three-dimensional virtual environment and tested whether the same factors emerged under these more controlled conditions and whether spatial behavior in the virtual environment accurately reflected behavior in the real environment. The results confirmed similar patterns of response in the virtual and real environments. This supports the use of virtual reality as a tool for predicting behavior in the real world and confirms that increases in segmentation are related to increases in perceived distance.


Author(s): Mikhail Mikhaylyuk ◽ Andrey Maltsev ◽ Evgeny Strashnov

This paper presents original solutions for creating a training complex that teaches cosmonauts to control a space jet pack for self-rescue in an emergency. An approach is proposed in which training is carried out in a virtual environment using virtual reality gloves and a headset. The idea is that the virtual space jet pack model is controlled through the interaction of virtual hands, which copy the movements of the cosmonaut's hands, with a three-dimensional model of the jet pack's control panel. To implement the training complex, methods and approaches were developed for simulating the synchronized movement of the virtual and real hands, as well as for simulating the jet pack's control panel and thrusters. The proposed methods and approaches were tested as part of our virtual environment system VirSim, developed at SRISA RAS. The results obtained in the paper can be used to create a training complex for teaching cosmonauts to rescue themselves if they accidentally separate from the International Space Station.
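
The paper mentions simulating the jet pack's thrusters but does not give the dynamics model; the sketch below is only a simplified point-mass integration of a single thruster burn, with assumed mass and thrust values, to illustrate the kind of calculation such a simulation performs. It is not the VirSim implementation.

```python
# Point-mass sketch of jet-pack thruster simulation: a fixed-thrust burn is
# integrated with explicit Euler steps to update velocity and position along one
# axis. Mass, thrust and timings are illustrative assumptions.

MASS = 180.0      # cosmonaut + suit + jet pack (kg), assumed
THRUST = 15.0     # thrust of one translation thruster (N), assumed
DT = 0.01         # integration step (s)

def simulate_burn(burn_time: float, total_time: float):
    velocity, position, t = 0.0, 0.0, 0.0
    while t < total_time:
        force = THRUST if t < burn_time else 0.0  # thruster fires, then coasts
        velocity += (force / MASS) * DT           # a = F / m
        position += velocity * DT
        t += DT
    return velocity, position

v, x = simulate_burn(burn_time=5.0, total_time=30.0)
print(f"after 30 s: velocity = {v:.2f} m/s, distance covered = {x:.1f} m")
```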


2008 ◽ Vol 575-578 ◽ pp. 709-715
Author(s): Zhen Xin Liang ◽ Jian Xun Zhang ◽ Yi Pei

Gas Tungsten Arc Welding (GTAW) is an important process in materials processing, and welding quality is vital to product quality. Raising welders' personal skill and handling technique is an important way to improve product quality. In this investigation, to address the deficiencies of classic welding training schemes, new technologies were introduced into the traditional welding training field to improve training efficiency and reduce training cost. A computer simulation system for training novice welders was developed, with virtual reality as its core technology, combining three-dimensional stereoscopic vision with the OpenGL API. In this system, welders are trained not in a real operating environment but in a computer-simulated virtual environment that gives them a first-hand operating experience. No welding material or welding energy is consumed in the simulated environment. The system is highly automated and intelligent and places lower demands on welding instructors. It is also healthier for the welder, since the intense arc and harmful fumes are absent from the virtual environment. Using the virtual training system to train new welders is therefore a lower-cost, higher-efficiency method.
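
The abstract combines stereoscopic vision with the OpenGL API but does not say how the stereo pair is generated; a common approach is an off-axis (asymmetric-frustum) projection per eye. The sketch below only computes the frustum parameters that would be passed to glFrustum for each eye, under assumed screen, eye-separation and clip-plane values; it is not the welding simulator's code.

```python
# Off-axis (asymmetric-frustum) stereo projection sketch: compute per-eye
# glFrustum parameters so the left and right views converge at a focal plane.
import math

def stereo_frustum(eye: str, fov_y_deg=60.0, aspect=16/9,
                   near=0.1, far=100.0, eye_sep=0.065, convergence=2.0):
    """Return (left, right, bottom, top, near, far) for one eye ('left' or 'right')."""
    top = near * math.tan(math.radians(fov_y_deg) / 2.0)
    bottom = -top
    half_width = top * aspect
    # Horizontal shift of the frustum so both eyes converge at the focal plane.
    shift = (eye_sep / 2.0) * near / convergence
    if eye == "left":
        left, right = -half_width + shift, half_width + shift
    else:
        left, right = -half_width - shift, half_width - shift
    return left, right, bottom, top, near, far

print("left eye :", stereo_frustum("left"))
print("right eye:", stereo_frustum("right"))
# Each tuple would be passed to glFrustum before rendering that eye's view,
# with the camera translated by +/- eye_sep/2 along its horizontal axis.
```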


Author(s): Hugo I. Medellín-Castillo ◽ Germánico González-Badillo ◽ Eder Govea ◽ Raquel Espinosa-Castañeda ◽ Enrique Gallegos

Technological growth in recent years has led to the development of virtual reality (VR) systems able to immerse the user in a three-dimensional (3D) virtual environment where the user can interact in real time with virtual objects. This interaction has mainly been based on visualizing the virtual environment and its objects. However, with the recent advent of haptic systems, interaction with the virtual world has been extended so that virtual objects can also be felt, touched, and manipulated. Virtual reality has been successfully used to develop applications in scientific areas ranging from the basic sciences and social sciences to education and entertainment. Likewise, the use of haptics has increased in the last decade in domains from science and engineering to art and entertainment. Despite many developments, relatively little is still known about the confluence of software, enabling hardware, and visual and haptic representations needed to create an immersive sensory environment that best conveys information about a particular subject domain. In this paper, the state of the art of the research on virtual reality and haptic technologies carried out by the authors in recent years is presented. The aim is to demonstrate the potential of these technologies for developing usable analysis and simulation systems in different areas of knowledge. The development of three systems, in engineering, medicine, and art, is presented. In engineering, a system for the planning, evaluation, and training of assembly and manufacturing tasks has been developed. This system, named HAMS (Haptic Assembly and Manufacturing System), can simulate assembly tasks involving complex components, with force feedback provided by the haptic device. In medicine, a surgical simulator for planning and training orthognathic surgeries has been developed. This system, named VOSS (Virtual Osteotomy Simulator System), allows virtual osteotomies to be performed with force feedback. Finally, in art, an interactive cinema system for blind people has been developed. The system plays a 3D virtual movie that the blind user can listen to and touch by means of the haptic device. The development of these applications and the results obtained from them are presented and discussed in this paper.
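
The abstract reports force feedback from the haptic device but not the force model; a widely used option is penalty-based rendering, where a spring force proportional to the penetration depth pushes the probe back out of a virtual surface. The sketch below shows that model only as an illustration; stiffness and geometry are assumed, and this is not necessarily how HAMS or VOSS compute their forces.

```python
# Penalty-based haptic force rendering sketch: when the haptic probe penetrates a
# virtual surface, return a spring force along the surface normal proportional to
# the penetration depth; otherwise return zero force.

STIFFNESS = 800.0  # N/m, illustrative surface stiffness

def contact_force(probe_pos, surface_point, surface_normal):
    """Return the 3D force to send to the haptic device (zero if not in contact)."""
    # Signed distance of the probe from the surface along its outward normal.
    offset = [p - s for p, s in zip(probe_pos, surface_point)]
    distance = sum(o * n for o, n in zip(offset, surface_normal))
    if distance >= 0.0:
        return (0.0, 0.0, 0.0)                  # probe is outside the object
    penetration = -distance
    return tuple(STIFFNESS * penetration * n for n in surface_normal)

# Example: a horizontal surface at z = 0 with normal pointing up (+z).
print(contact_force((0.0, 0.0, 0.004), (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))   # no contact
print(contact_force((0.0, 0.0, -0.003), (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))  # ~(0, 0, 2.4) N
```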


2016 ◽ Vol 78 (12-3)
Author(s): Arief Hydayat ◽ Haslina Arshad ◽ Nazlena Mohamad Ali ◽ Lam Meng Chun

In a 3D user interface, interaction plays an important role in helping users manipulate 3D objects in virtual environments. 3D devices such as data gloves and motion tracking can give users the opportunity to manipulate 3D objects in virtual reality environments, for example checking, zooming, translating, rotating, merging, and splitting 3D objects, in a more natural and easier manner through hand gestures. Hand gestures are often applied in 3D interaction techniques to switch the manipulation mode. This paper discusses an interaction technique for virtual environments that combines Push and Pull navigation with a rotation technique. The unimanual use of these 3D interaction techniques can improve users' effectiveness in interacting with and manipulating 3D objects. This study enhances the capability of the unimanual 3D interaction technique in terms of 3D interaction feedback in virtual environments.
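
Since the abstract describes hand gestures being used to switch the manipulation mode, the minimal sketch below shows one way such mode switching can be structured: a recognized gesture selects the current mode, and subsequent hand motion is interpreted in that mode. The gesture names and handlers are hypothetical placeholders, not the technique's actual gesture vocabulary.

```python
# Gesture-driven mode switching for unimanual 3D manipulation: a recognized hand
# gesture selects the current manipulation mode, and hand motion is then
# interpreted according to that mode.
from enum import Enum, auto

class Mode(Enum):
    TRANSLATE = auto()
    ROTATE = auto()
    ZOOM = auto()

GESTURE_TO_MODE = {
    "open_palm_push": Mode.TRANSLATE,   # push/pull-style movement (assumed name)
    "wrist_twist": Mode.ROTATE,
    "pinch_spread": Mode.ZOOM,
}

class Manipulator:
    def __init__(self):
        self.mode = Mode.TRANSLATE

    def on_gesture(self, gesture: str):
        """Switch manipulation mode when a mode-changing gesture is recognized."""
        if gesture in GESTURE_TO_MODE:
            self.mode = GESTURE_TO_MODE[gesture]

    def on_hand_motion(self, delta):
        """Interpret hand motion according to the current mode."""
        return f"apply {self.mode.name.lower()} by {delta}"

m = Manipulator()
m.on_gesture("wrist_twist")
print(m.on_hand_motion((0.0, 15.0, 0.0)))   # -> apply rotate by (0.0, 15.0, 0.0)
```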


Author(s): Michael Glueck ◽ Azam Khan

Abstract Virtual three-dimensional (3-D) environments have become pervasive tools in a number of professional and recreational tasks. However, interacting with these environments can be challenging for users, especially as these environments increase in complexity and scale. In this paper, we argue that the design of 3-D interaction techniques is an ill-defined problem. This claim is elucidated through the context of data-rich and geometrically complex multiscale virtual 3-D environments, where unexpected factors can encumber intellection and navigation. We develop an abstract model to guide our discussion, which illustrates the cyclic relationship of understanding and navigating, a relationship that supports the iterative refinement of a consistent mental representation of the virtual environment. Finally, we highlight strategies to support the design of interactions in multiscale virtual environments, and propose general categories of research focus.

