An Aerial Mixed-Reality Environment for First-Person-View Drone Flying

2020 ◽  
Vol 10 (16) ◽  
pp. 5436 ◽  
Author(s):  
Dong-Hyun Kim ◽  
Yong-Guk Go ◽  
Soo-Mi Choi

A drone must be able to fly without colliding with obstacles, both to protect its surroundings and for its own safety. In addition, it should incorporate numerous features of interest to drone users. In this paper, an aerial mixed-reality environment for first-person-view drone flying is proposed to provide an immersive experience and a safe environment for drone users by creating additional virtual obstacles when flying a drone in an open area. The proposed system is effective in perceiving the depth of obstacles, and enables bidirectional interaction between the real and virtual worlds using a drone equipped with a stereo camera based on human binocular vision. In addition, it synchronizes the parameters of the real and virtual cameras to effectively and naturally create virtual objects in a real space. Based on user studies that included both general and expert users, we confirm that the proposed system successfully creates a mixed-reality environment using a flying drone by quickly recognizing real objects and stably combining them with virtual objects.
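The camera synchronization step can be illustrated with a short sketch. The following Python code is not from the paper; the intrinsic values, near/far planes, and the OpenGL-style convention are assumptions made for illustration. It shows one way a virtual camera's projection and view matrices could be derived from the real stereo camera's intrinsics and the drone's pose so that virtual obstacles are rendered with a matching perspective.

```python
# Illustrative sketch only: matching a virtual camera to the drone's real
# stereo camera (left eye). All numeric values are placeholders.
import numpy as np

def projection_from_intrinsics(fx, fy, cx, cy, width, height, near=0.1, far=100.0):
    """OpenGL-style projection matrix built from pinhole camera intrinsics."""
    return np.array([
        [2 * fx / width, 0.0,              (width - 2 * cx) / width,      0.0],
        [0.0,            2 * fy / height, -(height - 2 * cy) / height,    0.0],
        [0.0,            0.0,             -(far + near) / (far - near),  -2 * far * near / (far - near)],
        [0.0,            0.0,             -1.0,                           0.0],
    ])

def view_from_pose(rotation, translation):
    """World-to-camera (view) matrix from the drone camera's world pose."""
    view = np.eye(4)
    view[:3, :3] = rotation.T
    view[:3, 3] = -rotation.T @ translation
    return view

# Per-frame update: feed the tracked pose and calibrated intrinsics of the
# real camera into the virtual camera used to render virtual obstacles.
proj = projection_from_intrinsics(fx=1050.0, fy=1050.0, cx=640.0, cy=360.0,
                                  width=1280, height=720)
view = view_from_pose(rotation=np.eye(3), translation=np.array([0.0, 0.0, 1.5]))
mvp_for_virtual_objects = proj @ view  # applied to virtual obstacle vertices
```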

2008 ◽  
Vol 02 (02) ◽  
pp. 207-233
Author(s):  
SATORU MEGA ◽  
YOUNES FADIL ◽  
ARATA HORIE ◽  
KUNIAKI UEHARA

Human-computer interaction systems have been developed in recent years. These systems use multimedia techniques to create Mixed-Reality environments where users can train themselves. Although most of these systems rely strongly on interactivity with users and take users' states into account, they still do not consider users' preferences when assisting them. In this paper, we introduce an Action Support System for Interactive Self-Training (ASSIST) in cooking. ASSIST focuses on recognizing users' cooking actions as well as the real objects related to these actions so that it can provide accurate and useful assistance. Before the recognition and instruction processes, it takes users' cooking preferences and, through collaborative filtering, suggests one or more recipes that are likely to satisfy those preferences. When the cooking process starts, ASSIST recognizes the user's hand movements using a similarity measure algorithm called AMSS. When the recognized cooking action is correct, ASSIST instructs the user on the next cooking procedure through virtual objects. When a cooking action is incorrect, ASSIST analyzes the cause of the failure and provides the user with support information according to that cause so that the incorrect action can be improved. Furthermore, we construct parallel transition models from cooking recipes for more flexible instruction. This enables users to perform the necessary cooking actions in any order they want, allowing more flexible learning.
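As a rough illustration of the preference-based recipe suggestion step, the sketch below implements plain user-based collaborative filtering over a small user-recipe rating matrix. It is not the ASSIST implementation, and the rating values, user indices, and function names are hypothetical.

```python
# Hypothetical sketch: suggest recipes a user has not tried by weighting other
# users' ratings with cosine similarity between preference vectors.
import numpy as np

ratings = np.array([   # rows: users, columns: recipes, 0 = unrated
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
], dtype=float)

def cosine_sim(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def recommend(ratings, user, top_n=2):
    """Return indices of unrated recipes ranked by similarity-weighted scores."""
    sims = np.array([cosine_sim(ratings[user], ratings[v]) if v != user else 0.0
                     for v in range(len(ratings))])
    scores = sims @ ratings / (np.abs(sims).sum() + 1e-9)
    unrated = np.where(ratings[user] == 0)[0]
    return [int(r) for r in sorted(unrated, key=lambda r: -scores[r])[:top_n]]

print(recommend(ratings, user=1))  # recipe indices suggested to user 1
```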


2003 ◽  
Vol 12 (6) ◽  
pp. 615-628 ◽  
Author(s):  
Benjamin Lok ◽  
Samir Naik ◽  
Mary Whitton ◽  
Frederick P. Brooks

Immersive virtual environments (VEs) provide participants with computer-generated environments filled with virtual objects to assist in learning, training, and practicing dangerous and/or expensive tasks. But does having every object be virtual inhibit interactivity and the level of immersion? If participants spend most of their time and cognitive load on learning and adapting to interacting with virtual objects, does this reduce the effectiveness of the VE? We conducted a study that investigated how handling real objects and self-avatar visual fidelity affect performance and sense of presence in a spatial cognitive manual task. We compared participants' performance of a block arrangement task in a real-space environment and in several virtual and hybrid environments. The results showed that manipulating real objects in a VE brings task performance closer to that of real space, compared to manipulating virtual objects. There was no significant difference in reported sense of presence, regardless of the self-avatar's visual fidelity or the presence of real objects.


Electronics ◽  
2020 ◽  
Vol 9 (10) ◽  
pp. 1623
Author(s):  
Kwang-seong Shin ◽  
Howon Kim ◽  
Jeong gon Lee ◽  
Dongsik Jo

With continued technological innovation in the field of mixed reality (MR), wearable MR devices, such as head-mounted displays (HMDs), have been released and are frequently used in various fields, such as entertainment, training, education, and shopping. However, because each product has different parts and specifications in terms of design and manufacturing process, users perceive the virtual objects overlaying real environments in MR differently, depending on the scale and color used by the MR device. In this paper, we compare the effect of scale and color parameters on users' perception when using different types of MR devices, with the aim of improving their MR experiences in real life. We conducted two experiments (scale and color), and our experimental study showed that the subjects who participated in the scale perception experiment clearly tended to underestimate virtual objects in comparison with real objects, and to overestimate color in MR environments.



2021 ◽  
Vol 21 (1) ◽  
pp. 15-29
Author(s):  
Lidiane Pereira ◽  
Wellingston C. Roberti Junior ◽  
Rodrigo L. S. Silva

In Augmented Reality systems, virtual objects are combined with real objects, both three-dimensional, interactively and at run-time. In an ideal scenario, the user has the feeling that real and virtual objects coexist in the same space and is unable to tell the two types of objects apart. To achieve this goal, research on rendering techniques has been conducted in recent years. In this paper, we present a Systematic Literature Review aiming to identify the main characteristics concerning photorealism in Mixed and Augmented Reality systems and to find research opportunities that can be further exploited or optimized. The objective is to verify whether a definition of photorealism in Mixed and Augmented Reality exists. We present a theoretical foundation covering the most widely used methods concerning realism in Computer Graphics. We also aim to identify the most widely used methods and tools for enabling photorealism in Mixed and Augmented Reality systems.


Author(s):  
Nicoletta Sala

Virtual reality (VR), augmented reality (AR), and mixed reality (MR) are three different technologies developed in the last decades of the 20th century. They combine hardware and software solutions. They permit the creation of three-dimensional (3D) virtual worlds and virtual objects. This chapter describes how VR, MR, and AR technologies find positive application fields in educational environments. They support different learning styles, offering potential help in teaching and in learning paths.


2021 ◽  
Vol 1 (1) ◽  
pp. 10-22
Author(s):  
N.V. Zenyutkin ◽  
D.I. Kovalev ◽  
E.V. Tuev ◽  
E.V. Tueva

The paper discusses the well-known concepts of matter, objects, environments, information, and the information field in relation to the proposed information representation of the interaction of objects and environments, based on the description of their information structures. An overview of methods for forming information structures to model objects, environments, and processes is given. It is shown that without using the concept of the environment as an entity in which objects exist and are reflected in time and in a certain space, it is not always possible to accurately describe the change in the state of an object. This is because the current state of an object depends both on the actions of other objects on it and on its ability to reflect the specific physical or other environment in which it and the other objects exist. From the presented review of object-oriented approaches to modeling objects, environments, and processes, it can be concluded that, when modeling and programming complex objects and systems, it is fundamentally important to take into account the physical and spatial environment in which they interact. Using programming languages, we can create and even manage the information structures of objects and environments; such structures are called abstract because they are not informational representations of corresponding physical objects and environments. An example of such capabilities is special programming languages for creating virtual objects and virtual worlds. Virtual simulators created on the basis of such languages help specialists of various professions acquire skills in managing real objects, technological processes, and so on.
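As a loose illustration of the point that an object's state depends both on the actions of other objects and on the environment in which they all exist, the sketch below models an environment and two objects as simple Python classes. The class names, attributes, and update rules are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch: an object's information structure changes both by
# reflecting the shared environment and by interacting with other objects.
from dataclasses import dataclass

@dataclass
class Environment:
    """The entity in which objects exist and are reflected in time and space."""
    temperature: float = 20.0
    time: float = 0.0

@dataclass
class ModeledObject:
    name: str
    energy: float = 0.0

    def reflect(self, env: Environment) -> None:
        # State update driven by the environment the object reflects.
        self.energy += 0.1 * env.temperature

    def act_on(self, other: "ModeledObject") -> None:
        # State update driven by the action of one object on another.
        transfer = 0.5 * self.energy
        self.energy -= transfer
        other.energy += transfer

env = Environment(temperature=25.0)
a, b = ModeledObject("a", energy=2.0), ModeledObject("b")
a.reflect(env)   # the environment changes a's state...
a.act_on(b)      # ...and a's action changes b's state
print(a.energy, b.energy)
```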


2021 ◽  
pp. 1-18
Author(s):  
Maxim Igorevich Sorokin ◽  
Dmitry Dmitrievich Zhdanov ◽  
Ildar Vagizovich Valiev

The paper examines the causes of visual discomfort in mixed reality systems and algorithmic solutions that eliminate one of the main causes of discomfort, namely the mismatch between the lighting conditions of objects in the real and virtual worlds. To eliminate this cause of discomfort, an algorithm is proposed that constructs groups of shadow rays from points on the boundaries of shadows to points on the boundaries of objects. The portion of the rays that correspond to the real lighting conditions forms caustics in the area of the real light source, which makes it possible to determine the source of illumination of virtual objects for their correct embedding into the mixed reality system. Convolutional neural networks and computer vision algorithms were used to classify shadows in the image. Examples of reconstructing the coordinates of a light source from RGBD data are presented.
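The geometric idea behind the shadow rays can be sketched briefly: each shadow-boundary point paired with the object-boundary point that casts it defines a ray toward the light, and the region where these rays nearly intersect localizes the light source. The Python code below is a hedged illustration rather than the authors' algorithm; it skips the CNN-based shadow classification and uses synthetic, noise-free points, recovering the light position as the least-squares point closest to all shadow rays.

```python
# Illustrative sketch: estimate a point light position from shadow rays.
import numpy as np

def light_from_shadow_rays(shadow_pts, object_pts):
    """Least-squares point closest to the rays from shadow points through object points."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for s, o in zip(shadow_pts, object_pts):
        d = (o - s) / np.linalg.norm(o - s)   # ray direction toward the light
        P = np.eye(3) - np.outer(d, d)        # projector orthogonal to the ray
        A += P
        b += P @ s
    return np.linalg.solve(A, b)

# Synthetic check: a light at (2, 5, 3) casts shadows of object points onto z = 0.
light = np.array([2.0, 5.0, 3.0])
object_pts = np.random.rand(20, 3) + np.array([0.0, 0.0, 1.0])
t = light[2] / (light[2] - object_pts[:, 2])            # ray parameter where z = 0
shadow_pts = light + (object_pts - light) * t[:, None]
print(light_from_shadow_rays(shadow_pts, object_pts))   # approximately [2, 5, 3]
```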


Disputatio ◽  
2019 ◽  
Vol 11 (55) ◽  
pp. 345-369
Author(s):  
Peter Ludlow

David Chalmers argues that virtual objects exist in the form of data structures that have causal powers. I argue that there is a large class of virtual objects that are social objects and that do not depend upon data structures for their existence. I also argue that data structures are themselves fundamentally social objects. Thus, virtual objects are fundamentally social objects.

