MonoMR: Synthesizing Pseudo-2.5D Mixed Reality Content from Monocular Videos

2021 ◽  
Vol 11 (17) ◽  
pp. 7946
Author(s):  
Dong-Hyun Hwang ◽  
Hideki Koike

MonoMR is a system that synthesizes pseudo-2.5D content from monocular videos for mixed reality (MR) head-mounted displays (HMDs). Unlike conventional systems that require multiple cameras, MonoMR enables casual end users to generate MR content with only a single camera. To synthesize the content, the system detects people in the video sequence via a deep neural network, and the detected person’s pseudo-3D position is then estimated by our proposed novel algorithm through a homography matrix. Finally, the person’s texture is extracted using a background subtraction algorithm and placed at the estimated 3D position. The synthesized content can be played back on an MR HMD, and users can freely change their viewpoint and the content’s position. To evaluate the efficiency and interactive potential of MonoMR, we conducted performance evaluations and a user study with 12 participants. Moreover, we demonstrated the feasibility and usability of the MonoMR system for generating pseudo-2.5D content through three example application scenarios.
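The homography step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the matrix `H` and the convention that the person stands on the ground plane (y = 0) are assumptions for the example.

```python
import numpy as np

def ground_position_from_foot(foot_px, H):
    """Map a detected person's foot point in image pixels to a
    pseudo-3D scene position via a homography.
    H is an assumed 3x3 matrix mapping the image plane to the
    real ground plane."""
    p = np.array([foot_px[0], foot_px[1], 1.0])  # homogeneous pixel
    q = H @ p
    q /= q[2]                                    # dehomogenize
    # Pseudo-3D: assume the person stands on the ground plane (y = 0)
    return np.array([q[0], 0.0, q[1]])           # (x, y, z) in scene units

# Illustrative homography: identity maps pixels 1:1 to scene units
H = np.eye(3)
pos = ground_position_from_foot((2.0, 3.0), H)
```

In practice the homography would be estimated from known point correspondences between the image and the ground plane, e.g. with a calibration pattern.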

2020 ◽  
Vol 4 (4) ◽  
pp. 78
Author(s):  
Andoni Rivera Pinto ◽  
Johan Kildal ◽  
Elena Lazkano

In the context of industrial production, a worker who wants to program a robot using the hand-guidance technique requires the robot to be available for programming and not in operation, which means that production with that robot stops during that time. A way around this constraint is to perform the same manual guidance steps on a holographic representation of the robot's digital twin, using augmented reality technologies. However, this approach is limited by the lack of tangibility of the visual holograms that the user tries to grab. We present an interface in which some of this tangibility is provided through ultrasound-based mid-air haptic actuation. We report a user study that evaluates the impact of such haptic feedback on a pick-and-place task involving the wrist of a holographic robot arm, and we found the feedback to be beneficial.


2018 ◽  
Vol 2018 ◽  
pp. 1-11 ◽  
Author(s):  
Yea Som Lee ◽  
Bong-Soo Sohn

3D maps such as Google Earth and Apple Maps (3D mode), in which users can see and navigate 3D models of the real world, are widely available in current mobile and desktop environments. Users usually rely on a monitor for display and a keyboard/mouse for interaction. Head-mounted displays (HMDs) are currently attracting great attention from industry and consumers because they can provide an immersive virtual reality (VR) experience at an affordable cost. However, conventional keyboard and mouse interfaces decrease the level of immersion because the manipulation method does not resemble actual actions in reality, which often makes the traditional interface method inappropriate for navigating 3D maps in virtual environments. Motivated by this, we design immersive gesture interfaces for the navigation of 3D maps that are suitable for HMD-based virtual environments. We also describe a simple algorithm to capture and recognize the gestures in real time using a Kinect depth camera. We evaluated the usability of the proposed gesture interfaces and compared them with conventional keyboard- and mouse-based interfaces. Results of the user study indicate that our gesture interfaces are preferable for obtaining a high level of immersion and fun in HMD-based virtual environments.
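A simple real-time gesture classifier in the spirit of what the abstract describes could operate on a short track of 3D hand positions from the depth camera. The sketch below is a hedged toy example, not the authors' algorithm; the displacement threshold `min_disp` and the gesture labels are assumptions.

```python
import numpy as np

def classify_swipe(hand_track, min_disp=0.3):
    """Toy gesture classifier: given a short track of 3D hand
    positions (meters) from a depth camera, label the gesture by its
    dominant net displacement axis. min_disp is an assumed threshold."""
    track = np.asarray(hand_track, dtype=float)
    disp = track[-1] - track[0]           # net hand movement
    axis = int(np.argmax(np.abs(disp)))   # dominant axis: 0=x, 1=y, 2=z
    if abs(disp[axis]) < min_disp:
        return "none"                     # too small: no gesture
    names = [("swipe_left", "swipe_right"),
             ("swipe_down", "swipe_up"),
             ("pull", "push")]
    return names[axis][int(disp[axis] > 0)]

# A hand moving 0.5 m to the right over a few depth frames
gesture = classify_swipe([(0.0, 0.0, 1.0), (0.2, 0.0, 1.0), (0.5, 0.05, 1.0)])
```

A production recognizer would additionally smooth the joint track and segment gestures over time, but the displacement-threshold idea is the same.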


Author(s):  
Stefan Bittmann

Virtual reality (VR) is the term used to describe representation and perception in a computer-generated, virtual environment. The term was coined by author Damien Broderick in his 1982 novel "The Judas Mandala". The term "mixed reality" describes the mixing of virtual reality with pure reality; the term "hyper-reality" is also used. Immersion plays a major role here: it describes the embedding of the user in the virtual world. A virtual world is considered plausible if the interaction within it is logically consistent. This interactivity creates the illusion that what seems to be happening is actually happening. A common problem with VR is motion sickness. To create a sense of immersion, special output devices are needed to display virtual worlds; head-mounted displays, CAVEs, and shutter glasses are mainly used. Input devices are needed for interaction: the 3D mouse, data glove, and flystick, as well as the omnidirectional treadmill, with which walking in virtual space is controlled by real walking movements, all play a role here.


2021 ◽  
Author(s):  
Amel Yaddaden ◽  
Guillaume Spalla ◽  
Charles Gouin-Vallerand ◽  
Patty Semeniuk ◽  
Nathalie Bier

BACKGROUND Mixed reality is an emerging technology that allows virtual objects to be blended into the user's actual environment, for example by means of head-mounted displays. Many recent studies have suggested the possibility of using this technology to support the cognition of people with neurodegenerative disorders. However, most studies explored improvements in cognition rather than in independence and safety during the accomplishment of daily living activities. It is therefore crucial to document the possibility of using mixed reality to support the independence of older adults in their daily life. OBJECTIVE This study is part of a larger user-centered design study of a cognitive orthosis using pure mixed reality to support the independence of people living with neurodegenerative disorders (NDs). The objectives were to explore: (1) the main difficulties encountered by older adults with NDs in their daily life, to ensure that pure mixed reality meets their needs; (2) the most effective interventions with this population, to determine what types of assistance the pure mixed reality technology should give; (3) how the pure mixed reality technology should provide assistance to promote safety and independence at home; and (4) the main facilitators of and barriers to the use of this technology. METHODS We conducted a descriptive qualitative study with 5 focus groups of experts on the disease and its functional impacts (n = 29) to gather information. Qualitative data from the focus groups were analyzed through an inductive thematic analysis. RESULTS The themes emerging from the analysis will provide clear guidelines to the development team prototyping a first version of a cognitive orthosis based on pure mixed reality.
CONCLUSIONS The cognitive orthosis that will be developed in light of this study will act as a proof of concept of the possibility of supporting people with neurodegenerative disorders using pure mixed reality.


2019 ◽  
Vol 9 (23) ◽  
pp. 5123 ◽  
Author(s):  
Diego Vaquero-Melchor ◽  
Ana M. Bernardos

Nowadays, augmented-reality (AR) head-mounted displays (HMDs) deliver a more immersive visualization of virtual contents, but the available means of interaction, mainly based on gesture and/or voice, are still limited and lack realism and expressivity when compared to traditional physical means. In this sense, the integration of haptics within AR may help to deliver an enriched experience, while facilitating the performance of specific actions, such as repositioning or resizing tasks, that are still dependent on the user’s skills. In this direction, this paper describes a flexible architecture designed to deploy haptically enabled AR applications for both mobile and wearable visualization devices. The haptic feedback may be generated through a variety of devices (e.g., wearable, graspable, or mid-air ones), and the architecture facilitates handling the specificity of each. For this reason, the paper discusses how to generate a haptic representation of a 3D digital object depending on the application and the target device. Additionally, the paper includes an analysis of practical, relevant issues that arise when setting up a system to work with specific devices like HMDs (e.g., HoloLens) and mid-air haptic devices (e.g., Ultrahaptics), such as the alignment between the real world and the virtual one. The architecture's applicability is demonstrated through the implementation of two applications: (a) Form Inspector and (b) Simon Game, built for HoloLens and iOS mobile phones for visualization and for UHK for mid-air haptics delivery. These applications have been used to explore with nine users the efficiency, meaningfulness, and usefulness of mid-air haptics for form perception, object resizing, and push interaction tasks.
Results show that, although mobile interaction is preferred when this option is available, haptics turn out to be more meaningful in identifying shapes than users initially expect and in contributing to the execution of resizing tasks. Moreover, this preliminary user study reveals some design issues when working with haptic AR. For example, users may expect a tailored interface metaphor, not necessarily inspired by natural interaction. This was the case for our proposal of virtual pressable buttons, built to mimic real buttons by using haptics, but interpreted differently by the study participants.
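One simple way to derive a device-specific haptic representation of a 3D object, in the spirit of what the abstract describes (this is a hedged sketch, not the paper's actual method), is to keep only the mesh vertices lying in a thin band at the user's hand height and cap them at the mid-air device's focal-point budget. The `band` and `max_points` parameters below are assumptions.

```python
import numpy as np

def haptic_slice(vertices, hand_y, band=0.01, max_points=8):
    """Illustrative haptic rendering of a 3D object for a mid-air
    device: select mesh vertices within a thin horizontal band at the
    hand's height (y-axis up, meters), capped at the device's assumed
    focal-point budget."""
    v = np.asarray(vertices, dtype=float)
    near = v[np.abs(v[:, 1] - hand_y) < band]  # vertices near the hand plane
    return near[:max_points]                   # respect focal-point budget

# Four mesh vertices; the hand hovers at y = 0.1 m
verts = [(0.0, 0.0, 0.0), (0.1, 0.1, 0.0), (0.2, 0.1, 0.05), (0.0, 0.3, 0.0)]
points = haptic_slice(verts, hand_y=0.1)
```

A wearable or graspable device would need a different mapping (e.g., vibration intensity from distance to the surface), which is why an architecture abstracting over target devices is useful.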


2014 ◽  
Vol 556-562 ◽  
pp. 3549-3552
Author(s):  
Lian Fen Huang ◽  
Qing Yue Chen ◽  
Jin Feng Lin ◽  
He Zhi Lin

The key to background subtraction, which is widely used in moving-object detection, is setting up and updating the background model. This paper presents a block background subtraction method based on ViBe, exploiting the spatial correlation and temporal continuity of the video sequence. First, the background model of the video sequence is set up. Then, the background model is updated through block processing. Finally, the difference between the current frame and the background model is used to extract moving objects.
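The three steps above can be sketched as follows. This is a minimal block-wise background subtraction example, not ViBe itself (ViBe keeps a set of past samples per pixel); the block size, threshold, and learning rate are assumed values.

```python
import numpy as np

def subtract_blocks(frame, bg, block=8, thresh=25, alpha=0.05):
    """Minimal block-wise background subtraction sketch: compare each
    block's mean intensity against the background model, flag changed
    blocks as foreground, and update the model only in unchanged
    blocks. All parameters are assumptions for illustration."""
    h, w = frame.shape
    fg = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            ys, xs = by * block, bx * block
            m = frame[ys:ys + block, xs:xs + block].mean()
            if abs(m - bg[by, bx]) > thresh:
                fg[by, bx] = True          # block contains motion
            else:                          # static: blend into model
                bg[by, bx] = (1 - alpha) * bg[by, bx] + alpha * m
    return fg, bg

# Static 16x16 scene at intensity 50, with one bright 8x8 moving block
frame = np.full((16, 16), 50.0)
frame[0:8, 8:16] = 200.0
bg = np.full((2, 2), 50.0)                 # per-block background model
fg, bg = subtract_blocks(frame, bg)
```

Updating the model only where no motion is detected keeps moving objects from being absorbed into the background, which is the core idea block processing builds on.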

