A Comparative Analysis of 3D User Interaction: How to Move Virtual Objects in Mixed Reality

Author(s):  
Hyo Jeong Kang ◽  
Jung-hye Shin ◽  
Kevin Ponto


2008 ◽
Vol 02 (02) ◽  
pp. 207-233
Author(s):  
SATORU MEGA ◽  
YOUNES FADIL ◽  
ARATA HORIE ◽  
KUNIAKI UEHARA

Human-computer interaction systems developed in recent years use multimedia techniques to create Mixed-Reality environments in which users can train themselves. Although most of these systems rely strongly on interactivity and take users' states into account, they still cannot consider users' preferences when assisting them. In this paper, we introduce an Action Support System for Interactive Self-Training (ASSIST) in cooking. ASSIST recognizes users' cooking actions as well as the real objects involved in those actions in order to provide accurate and useful assistance. Before the recognition and instruction processes, it takes the user's cooking preferences and, through collaborative filtering, suggests one or more recipes likely to satisfy those preferences. Once cooking starts, ASSIST recognizes the user's hand movements using a similarity measure algorithm called AMSS. When a recognized cooking action is correct, ASSIST presents the next cooking step through virtual objects. When a cooking action is incorrect, ASSIST analyzes the cause of the failure and provides support information tailored to that cause so the user can correct the action. Furthermore, we construct parallel transition models from cooking recipes for more flexible instruction, enabling users to perform the necessary cooking actions in any order and allowing more flexible learning.
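
The recipe-suggestion step above relies on collaborative filtering over user preferences. The following is a minimal sketch of user-based collaborative filtering over a user-recipe rating matrix; the function names and example data are hypothetical, and the paper's actual filtering and AMSS implementations are not reproduced here.

```python
# Hypothetical sketch: user-based collaborative filtering for recipe suggestion.
import math

def cosine(u, v):
    """Cosine similarity between two rating vectors (0 = unrated)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def suggest_recipes(ratings, target, top_n=2):
    """Score recipes the target user has not rated, weighting other users'
    ratings by their similarity to the target user."""
    target_vec = ratings[target]
    scores = {}
    for recipe_idx, rated in enumerate(target_vec):
        if rated:                       # skip recipes the user already rated
            continue
        num = den = 0.0
        for user, vec in ratings.items():
            if user == target or not vec[recipe_idx]:
                continue
            sim = cosine(target_vec, vec)
            num += sim * vec[recipe_idx]
            den += abs(sim)
        if den:
            scores[recipe_idx] = num / den
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Rows: users, columns: recipes (0 = unrated). Values are made up.
ratings = {"alice": [5, 0, 3, 0], "bob": [4, 2, 3, 5], "carol": [5, 1, 0, 4]}
print(suggest_recipes(ratings, "alice"))   # indices of suggested recipes
```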


2020 ◽  
Vol 10 (16) ◽  
pp. 5436 ◽  
Author(s):  
Dong-Hyun Kim ◽  
Yong-Guk Go ◽  
Soo-Mi Choi

A drone must be able to fly without colliding with its surroundings, both to protect the environment and for its own safety. In addition, it must incorporate numerous features of interest to drone users. In this paper, an aerial mixed-reality environment for first-person-view drone flying is proposed that provides an immersive experience and a safe environment for drone users by creating additional virtual obstacles when a drone is flown in an open area. The proposed system is effective in perceiving the depth of obstacles and enables bidirectional interaction between the real and virtual worlds using a drone equipped with a stereo camera based on human binocular vision. In addition, it synchronizes the parameters of the real and virtual cameras to create virtual objects in real space effectively and naturally. User studies with both general and expert users confirm that the proposed system successfully creates a mixed-reality environment with a flying drone by quickly recognizing real objects and stably combining them with virtual objects.
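
Two of the steps implied above, matching the virtual camera to the real one and recovering obstacle depth from the stereo pair, can be sketched as follows. This is a generic pinhole/stereo formulation with made-up parameter values, not the authors' implementation.

```python
# Hypothetical sketch: (1) set the virtual camera's field of view from the real
# camera's intrinsics, (2) recover depth from rectified stereo disparity.
import math

def vertical_fov_deg(focal_px, image_height_px):
    """Vertical FOV (degrees) of a pinhole camera from focal length and image
    height in pixels; used to configure the matching virtual camera."""
    return math.degrees(2.0 * math.atan(image_height_px / (2.0 * focal_px)))

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth (metres) of a point seen by a rectified stereo pair: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px if disparity_px > 0 else float("inf")

fov = vertical_fov_deg(focal_px=700.0, image_height_px=720.0)                 # ~54.5 deg
z = depth_from_disparity(focal_px=700.0, baseline_m=0.12, disparity_px=21.0)  # 4.0 m
print(f"virtual camera vertical FOV: {fov:.1f} deg, obstacle depth: {z:.2f} m")
```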


2019 ◽  
Vol 9 (2) ◽  
Author(s):  
Mohamad Yahya Fekri Aladin ◽  
Ajune Wanis Ismail

Mixed Reality (MR) is the next evolution of human-computer interaction, as MR can combine the physical and digital environments and make them coexist with each other [1]. Interaction is still an active research area in MR, and this paper focuses on interaction rather than on other research areas such as tracking, calibration, and display [2], because current interaction techniques are still not intuitive enough for users to interact with the computer. This paper explores user interaction using gesture and speech for 3D object manipulation in an MR environment. It describes the design stage, which combines gesture and speech inputs to enhance the user experience in the MR workspace. After acquiring gesture input and speech commands, an MR prototype is proposed that integrates the gesture-and-speech interaction technique. The paper concludes with results and discussion.
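
A minimal sketch of the kind of gesture-and-speech fusion described above is shown below: the speech command selects the manipulation verb and the gesture supplies its parameters. The command names, gesture fields, and fusion rule are assumptions for illustration, not the authors' published technique.

```python
# Hypothetical sketch: fusing a speech command with hand-tracking data to
# manipulate a 3D object in an MR workspace.
from dataclasses import dataclass, field

@dataclass
class Object3D:
    position: list = field(default_factory=lambda: [0.0, 0.0, 0.0])
    scale: float = 1.0
    rotation_y_deg: float = 0.0

def apply_command(obj, speech, gesture):
    """speech: recognized keyword; gesture: dict of hand-tracking values."""
    if speech == "move":
        obj.position = list(gesture["hand_position"])       # follow the hand
    elif speech == "rotate":
        obj.rotation_y_deg += gesture["hand_delta_x"] * 90   # drag to rotate
    elif speech == "scale":
        obj.scale *= 1.0 + gesture["pinch_delta"]             # pinch to resize
    return obj

cube = Object3D()
apply_command(cube, "move", {"hand_position": (0.2, 1.1, -0.4)})
apply_command(cube, "scale", {"pinch_delta": 0.25})
print(cube)
```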


2012 ◽  
Vol 22 ◽  
pp. 90-97 ◽  
Author(s):  
Josefina García-Guerrero ◽  
Juan González-Calleros ◽  
Jean Vanderdonckt

Task models describe how activities are performed to reach users' goals and represent the intersection between user interface design and more systematic approaches. They can be expressed at various abstraction levels: when designers only want to specify requirements regarding how activities should be performed, they consider just the main high-level tasks; when they aim to provide precise design indications, the activities are represented at a finer granularity, including aspects related to the dialogue model of a user interface (which defines how system and user actions can be sequenced). This paper provides a comparative analysis of selected models that involve multiple users in an interaction, in order to identify concepts that are underexplored in today's multi-user interaction task modeling. The analysis is based on three families of criteria: information criteria, conceptual coverage, and expressiveness. Merging the meta-models of the selected models makes it possible to derive a broader meta-model that could be instantiated in most situations involving multi-user interaction, such as workflow information systems and CSCW.
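
As a purely illustrative aid, the toy data structure below shows the kind of concepts (tasks, temporal operators, user roles) that such multi-user task meta-models cover. The field names are hypothetical and do not reproduce any of the surveyed notations.

```python
# Hypothetical sketch: a toy multi-user task model with roles and operators.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Task:
    name: str
    role: Optional[str] = None        # which user/role performs the task
    operator: Optional[str] = None    # e.g. "enabling", "concurrent", "choice"
    subtasks: List["Task"] = field(default_factory=list)

# A high-level cooperative task refined into role-specific subtasks.
review = Task(
    "Review document", operator="enabling",
    subtasks=[
        Task("Annotate draft", role="reviewer"),
        Task("Resolve comments", role="author"),
    ],
)

def roles(task):
    """Collect every role referenced anywhere in a task tree."""
    found = {task.role} if task.role else set()
    for sub in task.subtasks:
        found |= roles(sub)
    return found

print(roles(review))   # {'reviewer', 'author'}
```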


2020 ◽  
Vol 10 (3) ◽  
pp. 1135 ◽  
Author(s):  
Mulun Wu ◽  
Shi-Lu Dai ◽  
Chenguang Yang

This paper proposes a novel control system for the path planning of an omnidirectional mobile robot based on mixed reality. Most research on mobile robots is carried out in either a completely real or a completely virtual environment; however, a real environment containing virtual objects has important practical applications. The proposed system can control the movement of the mobile robot in the real environment, as well as the interaction between the robot's motion and virtual objects added to that environment. First, an interactive interface is presented in the mixed-reality device HoloLens. The interface can display the map, path, control commands, and other information related to the mobile robot, and it can add virtual objects to the real map to realize real-time interaction between the mobile robot and the virtual objects. Then, the original path-planning algorithm, vector field histogram* (VFH*), is modified with respect to its threshold, candidate direction selection, and cost function to make it more suitable for scenes with virtual objects, reduce the number of calculations required, and improve safety. Experimental results demonstrate that the proposed method can generate the motion path of the mobile robot according to the operator's specific requirements and achieve good obstacle-avoidance performance.
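
The three pieces the paper modifies (threshold, candidate direction selection, cost function) can be illustrated with a generic, textbook-style VFH planning step such as the sketch below. The weights and histogram values are made up, and virtual obstacles are assumed to contribute density exactly like real ones; this is not the authors' modified VFH*.

```python
# Hypothetical sketch: one VFH-style direction-selection step.
def candidate_directions(histogram, threshold):
    """Sectors whose obstacle density is below the threshold are candidates."""
    return [i for i, density in enumerate(histogram) if density < threshold]

def cost(sector, target_sector, current_sector, n_sectors, w_target=5.0, w_turn=2.0):
    """Cost combines deviation from the target direction and the turn needed
    from the current heading (both in sectors, wrap-around aware)."""
    def diff(a, b):
        d = abs(a - b) % n_sectors
        return min(d, n_sectors - d)
    return w_target * diff(sector, target_sector) + w_turn * diff(sector, current_sector)

def select_direction(histogram, threshold, target_sector, current_sector):
    cands = candidate_directions(histogram, threshold)
    if not cands:
        return None                               # blocked: no safe direction
    return min(cands, key=lambda s: cost(s, target_sector, current_sector, len(histogram)))

# 12 sectors of 30 degrees; real and virtual obstacles both add density.
hist = [0.1, 0.2, 0.9, 0.8, 0.1, 0.0, 0.0, 0.3, 0.7, 0.2, 0.1, 0.1]
best = select_direction(hist, threshold=0.5, target_sector=3, current_sector=0)
print(best)   # index of the chosen motion sector
```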

