A realtime mixed reality system for seamless interaction between real and virtual objects

Author(s):  
Achilleas Anagnostopoulos ◽  
Aristodemos Pnevmatikakis
2008 ◽  
Vol 02 (02) ◽  
pp. 207-233
Author(s):  
SATORU MEGA ◽  
YOUNES FADIL ◽  
ARATA HORIE ◽  
KUNIAKI UEHARA

Human-computer interaction systems have advanced considerably in recent years. These systems use multimedia techniques to create mixed-reality environments in which users can train themselves. Although most of these systems rely strongly on interactivity with users and take users' states into account, they still do not consider users' preferences when assisting them. In this paper, we introduce an Action Support System for Interactive Self-Training (ASSIST) in cooking. ASSIST recognizes users' cooking actions as well as the real objects involved in those actions, so that it can provide accurate and useful assistance. Before the recognition and instruction processes, it takes the user's cooking preferences and, using collaborative filtering, suggests one or more recipes likely to satisfy those preferences. Once cooking starts, ASSIST recognizes the user's hand movements with a similarity measure algorithm called AMSS. When the recognized cooking action is correct, ASSIST instructs the user on the next cooking procedure through virtual objects. When a cooking action is incorrect, the cause of the failure is analyzed and ASSIST provides support information tailored to that cause so the user can correct the action. Furthermore, we construct parallel transition models from cooking recipes for more flexible instruction. This enables users to perform the necessary cooking actions in any order they want, allowing more flexible learning.
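
The parallel transition model can be thought of as a dependency graph over recipe steps: a step becomes available once all of its prerequisites are done, so independent steps can be taken in any order. A minimal Python sketch of this idea follows; the recipe, step names, and available_steps function are illustrative assumptions, not code from the paper.

```python
# Minimal sketch of a parallel transition model for recipe steps.
# A recipe is modeled as a dependency graph: a step becomes available
# as soon as all of its prerequisites are done, so independent steps
# may be performed in any order. (Illustrative only, not ASSIST code.)

recipe = {
    "wash vegetables": [],
    "chop vegetables": ["wash vegetables"],
    "boil water": [],
    "cook pasta": ["boil water"],
    "mix sauce": ["chop vegetables"],
    "combine": ["cook pasta", "mix sauce"],
}

def available_steps(recipe, done):
    """Return the steps the user may perform next, in any order."""
    return [step for step, prereqs in recipe.items()
            if step not in done and all(p in done for p in prereqs)]

done = {"wash vegetables", "boil water"}
print(available_steps(recipe, done))
# -> ['chop vegetables', 'cook pasta']: either action is acceptable next
```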


2020 ◽  
Vol 10 (16) ◽  
pp. 5436 ◽  
Author(s):  
Dong-Hyun Kim ◽  
Yong-Guk Go ◽  
Soo-Mi Choi

A drone must be able to fly without colliding, both to protect its surroundings and to ensure its own safety. In addition, it must incorporate numerous features of interest to drone users. In this paper, an aerial mixed-reality environment for first-person-view drone flying is proposed to provide an immersive experience and a safe environment for drone users by creating additional virtual obstacles when a drone is flown in an open area. The proposed system is effective in helping users perceive the depth of obstacles, and it enables bidirectional interaction between the real and virtual worlds using a drone equipped with a stereo camera modeled on human binocular vision. In addition, it synchronizes the parameters of the real and virtual cameras to render virtual objects in real space effectively and naturally. Based on user studies that included both general and expert users, we confirm that the proposed system successfully creates a mixed-reality environment with a flying drone by quickly recognizing real objects and stably combining them with virtual objects.
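
One way to read the camera-synchronization step is that the virtual camera's field of view and eye separation are derived from the real stereo camera's intrinsics, so rendered objects keep a consistent perspective and depth scale. The sketch below illustrates that mapping; the intrinsics values and function names are assumptions for illustration, not details from the paper.

```python
import math
from dataclasses import dataclass

# Illustrative sketch: derive virtual-camera parameters from the real
# stereo camera so virtual objects are rendered with a matching
# perspective and depth scale. Values are made up for the example.

@dataclass
class StereoCameraIntrinsics:
    focal_px: float      # focal length in pixels
    image_width: float   # image width in pixels
    image_height: float  # image height in pixels
    baseline_m: float    # distance between the two lenses in meters

def virtual_camera_from_real(cam: StereoCameraIntrinsics):
    """Compute the FOV and eye separation for the virtual (render) camera."""
    h_fov = 2.0 * math.degrees(math.atan(cam.image_width / (2.0 * cam.focal_px)))
    v_fov = 2.0 * math.degrees(math.atan(cam.image_height / (2.0 * cam.focal_px)))
    return {"horizontal_fov_deg": h_fov,
            "vertical_fov_deg": v_fov,
            "eye_separation_m": cam.baseline_m}

real_cam = StereoCameraIntrinsics(focal_px=700.0, image_width=1280,
                                  image_height=720, baseline_m=0.06)
print(virtual_camera_from_real(real_cam))
```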


2020 ◽  
Vol 10 (3) ◽  
pp. 1135 ◽  
Author(s):  
Mulun Wu ◽  
Shi-Lu Dai ◽  
Chenguang Yang

This paper proposes a novel control system for path planning of an omnidirectional mobile robot based on mixed reality. Most research on mobile robots is carried out in a completely real or a completely virtual environment; however, a real environment containing virtual objects has important practical applications. The proposed system can control the movement of the mobile robot in the real environment, as well as the interaction between the robot's motion and virtual objects added to that environment. First, an interactive interface is presented on the mixed-reality device HoloLens. The interface can display the map, path, control commands, and other information related to the mobile robot, and it can add virtual objects to the real map to realize real-time interaction between the mobile robot and the virtual objects. Then, the original path-planning algorithm, vector field histogram* (VFH*), is modified in terms of its threshold, candidate direction selection, and cost function to make it more suitable for scenes with virtual objects, reduce the number of calculations required, and improve safety. Experimental results demonstrate that the proposed method can generate motion paths for the mobile robot according to the specific requirements of the operator and achieves good obstacle-avoidance performance.
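
The modified VFH* itself is not reproduced here, but the general shape of a polar-histogram planner that merges real and virtual obstacles and scores candidate directions with a weighted cost can be sketched as follows; the thresholds, weights, and obstacle format are illustrative assumptions rather than the paper's parameters.

```python
import math

# Illustrative VFH-style direction selection. Real and virtual obstacles
# are merged into one polar histogram; candidate directions below a
# density threshold are scored by a weighted cost. Not the paper's code.

SECTORS = 72                     # 5-degree sectors
DENSITY_THRESHOLD = 0.3          # sectors denser than this are blocked
W_GOAL, W_HEADING, W_PREVIOUS = 5.0, 2.0, 2.0

def build_histogram(obstacles):
    """obstacles: list of (bearing_deg, weight) pairs, weight in [0, 1]."""
    hist = [0.0] * SECTORS
    for bearing, weight in obstacles:
        hist[int(bearing % 360) // (360 // SECTORS)] += weight
    return hist

def angle_diff(a, b):
    """Smallest absolute angular difference in degrees."""
    return abs((a - b + 180) % 360 - 180)

def pick_direction(hist, goal_deg, heading_deg, prev_deg):
    """Choose the unblocked sector with the lowest weighted cost."""
    best, best_cost = None, math.inf
    for sector, density in enumerate(hist):
        if density > DENSITY_THRESHOLD:
            continue                              # blocked sector
        direction = sector * (360 // SECTORS)
        cost = (W_GOAL * angle_diff(direction, goal_deg)
                + W_HEADING * angle_diff(direction, heading_deg)
                + W_PREVIOUS * angle_diff(direction, prev_deg))
        if cost < best_cost:
            best, best_cost = direction, cost
    return best

real = [(90, 0.8), (95, 0.6)]          # sensed obstacle cluster
virtual = [(10, 1.0)]                  # obstacle added through HoloLens
hist = build_histogram(real + virtual)
print(pick_direction(hist, goal_deg=45, heading_deg=0, prev_deg=20))
```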


Author(s):  
Jaakko Konttinen ◽  
Charles E. Hughes ◽  
Sumanta N. Pattanaik

Military training, concept design, and pre-acquisition studies are often carried out in virtual settings, where one can experience that which is, in the real world, too dangerous, too costly, or even beyond current technology. Purely virtual environments, however, have limitations in that they remove the participant from the physical world with its visual, auditory, and tactile complexities. In contrast, mixed reality (MR) seeks to blend the real and the synthetic. How well that blending works is critical to the effectiveness of a user's experience within an MR scenario. The focus of this paper is on the visual aspects of this blending, more specifically on the interactions between the real and the virtual in the contexts of proper inter-occlusion, illumination, and inter-shadowing. This means that virtual objects must react properly to changes in real lighting, and that the real must react properly to the insertion of virtual lights (e.g., a virtual flashlight or a simulated change in the time of day). Even more challenging, virtual objects must cast shadows on real objects and vice versa. The proper casting of shadows is critical to military training, in that shadows often provide clues of others' movements, and of our own to others, long before visual contact is made. Realistic shadows can improve training greatly; their omission, or the insertion of physically incorrect shadowing, can lead to negative training. To be effective, visual realism requires that all such interactions occur at interactive rates (30+ frames per second). Our research focuses on algorithmic development and implementation of these procedures on the programmable graphics processing units (GPUs) commonly found on today's commodity graphics cards. The algorithms we develop are tailored to take advantage of the parallel pipeline architecture of GPUs. Our primary application is training dismounted infantry for the complexities of military operations in urban terrain (MOUT).
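
Mutual shadowing of this kind is commonly achieved with shadow mapping over the combined scene: proxy geometry for real objects is rendered into the light's depth map together with the virtual geometry, so either kind of object can occlude the other from the light. The sketch below shows the per-fragment depth comparison on the CPU with NumPy purely for illustration; the paper implements such steps in GPU shader programs, and all identifiers here are assumptions.

```python
import numpy as np

# Illustrative shadow-map lookup. depth_map holds, for each texel of the
# light's view, the distance to the nearest occluder -- rendered from BOTH
# virtual objects and proxy models of real objects, so shadows are cast
# in both directions. On the GPU this comparison runs per fragment.

def in_shadow(depth_map, light_uv, frag_depth_from_light, bias=1e-3):
    """Return True if the fragment lies behind the nearest occluder."""
    h, w = depth_map.shape
    u = int(np.clip(light_uv[0] * (w - 1), 0, w - 1))
    v = int(np.clip(light_uv[1] * (h - 1), 0, h - 1))
    return frag_depth_from_light - bias > depth_map[v, u]

depth_map = np.full((512, 512), np.inf)
depth_map[200:300, 200:300] = 2.0              # an occluder 2 m from the light
print(in_shadow(depth_map, (0.5, 0.5), 3.5))   # True: fragment is shadowed
print(in_shadow(depth_map, (0.1, 0.1), 3.5))   # False: nothing in between
```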


Electronics ◽  
2020 ◽  
Vol 9 (10) ◽  
pp. 1623
Author(s):  
Kwang-seong Shin ◽  
Howon Kim ◽  
Jeong gon Lee ◽  
Dongsik Jo

With continued technological innovation in the field of mixed reality (MR), wearable MR devices such as head-mounted displays (HMDs) have been released and are frequently used in various fields, such as entertainment, training, education, and shopping. However, because each product differs in its parts and specifications in terms of design and manufacturing, the virtual objects overlaid on real environments in MR appear different to users depending on the scale and color rendered by the MR device. In this paper, we compare the effect of scale and color parameters on users' perception across different types of MR devices, with the aim of improving MR experiences in real life. We conducted two experiments (scale and color); the results showed that participants clearly tended to underestimate the scale of virtual objects compared with real objects, and to overestimate color in MR environments.
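
A common way to quantify the reported underestimation is the ratio of the judged size of a virtual object to the size of its real counterpart, averaged over trials; ratios below 1.0 indicate underestimation. The sketch below is a minimal analysis example with invented data, not results from the paper.

```python
# Illustrative analysis of a scale-perception trial: judged size of the
# virtual object divided by the size of the matching real object. Ratios
# below 1.0 indicate underestimation. Data values are invented.

trials = [
    {"real_cm": 20.0, "judged_cm": 17.5},
    {"real_cm": 30.0, "judged_cm": 26.0},
    {"real_cm": 15.0, "judged_cm": 14.0},
]

ratios = [t["judged_cm"] / t["real_cm"] for t in trials]
mean_ratio = sum(ratios) / len(ratios)
print(f"mean judged/actual ratio: {mean_ratio:.2f}")
print("underestimation" if mean_ratio < 1.0 else "overestimation")
```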

