Natural Locomotion Interfaces – With a Little Bit of Magic!

2011 ◽  
Vol 2 (2) ◽  
pp. 1
Author(s):  
Frank Steinicke

The mission of the Immersive Media Group (IMG) is to develop virtual locomotion user interfaces which allow humans to experience arbitrary 3D environments by means of the natural walking metaphor. Traveling through immersive virtual environments (IVEs) by real walking is an important means of increasing the naturalness of virtual reality (VR)-based interaction. However, the size of the virtual world often differs from the size of the tracked lab space, so a straightforward implementation of omni-directional and unlimited walking is not possible. Redirected walking is one concept that addresses this issue by inconspicuously guiding the user along a physical path that may differ from the path the user perceives in the virtual world. For example, intentionally rotating the virtual camera to one side causes the user to unknowingly compensate by walking along a circular arc in the opposite direction. In the scope of the LOCUI project, which is funded by the German Research Foundation, we analyze how gains applied to locomotor speed, turns, and curvatures can gradually alter the physical trajectory with respect to the path perceived in the virtual world without the user noticing any discrepancy. Thus, users can be guided away from collisions with physical obstacles (e.g., lab walls), or they can be guided to arbitrary locations in the physical space. For example, if the user approaches a virtual object, she can be guided to a real proxy prop that is registered to and aligned with its virtual counterpart. Hence, the user can interact with a virtual object by touching the corresponding real-world proxy prop, which provides haptic feedback. Based on the results of psychophysical experiments, we plan to design such a user interface, with which it becomes possible to intuitively interact with any virtual object by touching registered real-world props.
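As a rough illustration of how such gains steer a walking user, the sketch below applies a rotation gain and a curvature gain to the virtual camera each frame; the gain values and function names are illustrative assumptions, not the thresholds established by the LOCUI experiments.

```python
ROTATION_GAIN = 1.2      # assumed: virtual turns rendered 20% larger than real turns
CURVATURE_RADIUS = 22.0  # assumed: metres, radius of the injected physical arc

def redirect(virtual_yaw, real_yaw_delta, distance_delta):
    """Return the updated virtual camera yaw (radians) for this frame.

    real_yaw_delta -- head rotation measured by the tracker since the last frame
    distance_delta -- metres walked since the last frame
    """
    # Rotation gain: render head turns slightly larger than they really are,
    # so the user physically under-rotates to compensate.
    virtual_yaw += ROTATION_GAIN * real_yaw_delta
    # Curvature gain: inject a small rotation per metre walked, steering the
    # user onto a physical arc while the virtual path remains straight.
    virtual_yaw += distance_delta / CURVATURE_RADIUS
    return virtual_yaw
```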

2009 ◽  
Vol 628-629 ◽  
pp. 155-160 ◽  
Author(s):  
F.X. Yan ◽  
Z.X. Hou ◽  
Ding Hua Zhang ◽  
Wen Ke Kang

This paper describes an innovative free-form modeling system, the Virtual Clay Modeling System (VCMS), in which users can directly manipulate the shape of a virtual object as they would a clay model in the real world. This system resolves several of the interaction drawbacks of computer-aided industrial design (CAID) systems. To enhance the feeling of immersion and improve control over VCMS's cut, paste, and compensate operations, we use a Spaceball 5000 and a PHANTOM Desktop to carry out the interaction tasks. To realize modeling control with 6 degree-of-freedom (DOF) haptic feedback, we developed the device interfaces with Open Inventor and the Qt application framework. VCMS provides good immersion and allows for effective modeling in a virtual world.
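The abstract does not detail VCMS's deformation algorithm, but the following minimal sketch shows one common way a haptic tool tip can carve a mesh: vertices near the tip are displaced along the tool direction with a smooth falloff. All names and parameters are assumptions for illustration, not the system's actual implementation.

```python
import numpy as np

def carve(vertices, tool_pos, tool_dir, radius=0.05, strength=0.01):
    """Push mesh vertices near the haptic tool tip along the tool direction.

    vertices -- (N, 3) array of mesh vertex positions
    tool_pos -- (3,) tool-tip position reported by the 6-DOF device
    tool_dir -- (3,) unit vector along which material is pressed
    """
    d = np.linalg.norm(vertices - tool_pos, axis=1)
    mask = d < radius
    # Quadratic falloff so the dent blends smoothly into the surface.
    falloff = (1.0 - d[mask] / radius) ** 2
    vertices[mask] += strength * falloff[:, None] * tool_dir
    return vertices
```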


2020 ◽  
Author(s):  
Richard Schurz ◽  
Earl Bull

MetaSpace II (MS2) is a social Virtual Reality (VR) system where multiple users can not only see and hear but also interact with each other, grasp and manipulate objects, walk around in space, and get tactile feedback. MS2 allows walking in physical space by tracking each user's skeleton in real time, and it lets users feel by employing passive haptics: when users touch or manipulate an object in the virtual world, they simultaneously touch or manipulate a corresponding object in the physical world. To enable these elements in VR, MS2 creates a correspondence in spatial layout and object placement by building the virtual world on top of a 3D scan of the real world. Through this association between the real and virtual worlds, users are able to walk freely while wearing a head-mounted device, avoid obstacles like walls and furniture, and interact with people and objects. Most current VR environments are designed for a single-user experience in which interactions with virtual objects are mediated by hand-held input devices or hand gestures, and users are shown only a representation of their hands floating in front of the camera as seen from a first-person perspective. We believe that representing each user as a full-body avatar controlled by the person's natural movements in the real world (see Figure 1d) can greatly enhance believability and a user's sense of immersion in VR.
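A minimal sketch of the real-virtual correspondence idea, assuming a rigid 4x4 registration transform obtained when the virtual world was built on top of the room scan (the transform and function names are hypothetical, not MS2's actual API):

```python
import numpy as np

def to_virtual(p_real, T_scan_to_virtual):
    """Map a tracked real-world point into virtual-world coordinates.

    p_real            -- (3,) point in the tracking system's frame
    T_scan_to_virtual -- assumed 4x4 rigid transform registering the 3D scan
                         (and hence the room) to the virtual scene
    """
    p = np.append(p_real, 1.0)            # homogeneous coordinates
    return (T_scan_to_virtual @ p)[:3]
```

With this mapping, a skeleton joint or a touched physical prop lands at the pose of its virtual counterpart, which is what makes the passive haptics line up.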


Author(s):  
Yulia Fatma ◽  
Armen Salim ◽  
Regiolina Hayami

Along with the development of technology, applications can be used as a medium for learning. Augmented Reality is a technology that combines two-dimensional and three-dimensional virtual objects with a real three-dimensional environment, projecting the virtual objects in real time. In introducing the Solar System, students are invited to get to know the planets in a way that directly encourages them to imagine conditions in the Solar System. Textbook explanations of the planets' shapes and of how the planets revolve and rotate are considered insufficient because they only display objects in 2D. In addition, students cannot directly practice arranging the layout of the planets in the Solar System. By applying Augmented Reality technology, the delivery of learning material can be clarified, because such applications combine the real world and the virtual world. Beyond displaying the material, the application also displays the planets as animated 3D objects with audio.
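To make the revolution-and-rotation idea concrete, here is a minimal sketch (not taken from the application itself) of how such an app might animate a planet: its orbital position and spin angle are simple functions of elapsed time, with placeholder periods.

```python
import math

def planet_pose(t, orbit_radius, year_s, day_s):
    """Return (x, z, spin) for a planet at time t (seconds).

    year_s -- time for one revolution around the sun (scaled for the app)
    day_s  -- time for one rotation about the planet's own axis
    """
    theta = 2.0 * math.pi * t / year_s   # revolution angle around the sun
    spin = 2.0 * math.pi * t / day_s     # rotation angle about its own axis
    return orbit_radius * math.cos(theta), orbit_radius * math.sin(theta), spin
```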


2021 ◽  
Vol 5 (ISS) ◽  
pp. 1-17
Author(s):  
Finn Welsford-Ackroyd ◽  
Andrew Chalmers ◽  
Rafael Kuffner dos Anjos ◽  
Daniel Medeiros ◽  
Hyejin Kim ◽  
...  

In this paper, we present a system that allows a user with a head-mounted display (HMD) to communicate and collaborate with spectators outside of the headset. We evaluate its impact on task performance, immersion, and collaborative interaction. Our solution targets scenarios like live presentations or multi-user collaborative systems, where it is not practical to develop a VR multiplayer experience and supply each user (and spectator) with an HMD. The spectator views the virtual world on a large-scale tiled video wall and can control the orientation of their own virtual camera. This allows spectators to stay focused on the immersed user's point of view or to look freely around the environment. To improve collaboration between users, we implemented a pointing system in which a spectator can point at objects on the screen, which maps an indicator directly onto those objects in the virtual world. We conducted a user study to investigate the influence of rotational camera decoupling and pointing gestures in the context of HMD-immersed and non-immersed users utilizing a large-scale display. Our results indicate that camera decoupling and pointing positively impact collaboration. A decoupled view is preferable in situations where both users need to indicate objects of interest in the scene, such as presentations and joint-task scenarios, as these require a shared reference space. A coupled view, on the other hand, is preferable for synchronous interactions such as remote-assistant scenarios.
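A minimal sketch of the rotational decoupling described above, with hypothetical names: the video-wall camera shares the immersed user's position, while its orientation is either mirrored from the HMD (coupled) or steered by the spectator (decoupled).

```python
def spectator_view(hmd_position, hmd_yaw, hmd_pitch,
                   spectator_yaw, spectator_pitch, coupled):
    """Return (position, yaw, pitch) for the video-wall camera."""
    if coupled:
        # Coupled mode: mirror the immersed user's point of view exactly.
        return hmd_position, hmd_yaw, hmd_pitch
    # Decoupled mode: share the HMD user's position, but let the spectator
    # steer the camera orientation independently (e.g., with a controller).
    return hmd_position, spectator_yaw, spectator_pitch
```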


Author(s):  
Kevin Lesniak ◽  
Conrad S. Tucker

The method presented in this work reduces the frequency with which virtual objects incorrectly occlude real-world objects in Augmented Reality (AR) applications. Current AR rendering methods cannot properly represent occlusion between real and virtual objects because the objects are not represented in a common coordinate system. These occlusion errors can give users an incorrect perception of the environment around them, for example not realizing that a real-world object is present because a virtual object incorrectly occludes it, or misjudging depth and distance because of incorrect occlusions. The authors of this paper present a method that brings both real-world and virtual objects into a common coordinate system so that distant virtual objects do not obscure nearby real-world objects in an AR application. The method captures and processes RGB-D data in real time, allowing it to be used in a variety of environments and scenarios. A case study shows the effectiveness and usability of the proposed method in correctly occluding real-world and virtual objects and providing a more realistic representation of the combined real and virtual environments in an AR application. The results of the case study show that the proposed method can detect at least 20 real-world objects at risk of being incorrectly occluded while processing and fixing occlusion errors at least 5 times per second.
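The paper's pipeline is not reproduced here, but a generic per-pixel depth test captures the core idea: compare the RGB-D depth map against the virtual layer's depth and discard virtual pixels that lie behind real surfaces. This is a hedged sketch with assumed array layouts, not the authors' implementation.

```python
import numpy as np

def composite(rgb_real, depth_real, rgba_virtual, depth_virtual):
    """Overlay a rendered virtual layer on a camera frame with occlusion.

    rgb_real      -- (H, W, 3) camera frame
    depth_real    -- (H, W) metric depth from the RGB-D sensor
    rgba_virtual  -- (H, W, 4) rendered virtual layer with alpha in [0, 1]
    depth_virtual -- (H, W) depth of the virtual layer (inf where empty)
    """
    # A virtual pixel is drawn only where it is nearer than the real surface.
    visible = depth_virtual < depth_real
    alpha = rgba_virtual[..., 3:] * visible[..., None]
    return (1 - alpha) * rgb_real + alpha * rgba_virtual[..., :3]
```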


2019 ◽  
Vol 1 ◽  
pp. 1-2
Author(s):  
Shinpei Ito ◽  
Akinori Takahashi ◽  
Ruochen Si ◽  
Masatoshi Arikawa

Abstract. AR (Augmented Reality) is now available as a basic, high-level function on the latest, reasonably priced smartphones. AR enables users to experience consistent three-dimensional (3D) spaces in which real and virtual 3D objects co-exist, by sensing real 3D environments through a camera and reconstructing them in the virtual world. The accuracy of sensing real 3D environments with a smartphone's AR function, that is, its visual-inertial odometer, is far higher than that of its GPS receiver and can be better than one centimeter. However, current common AR applications generally focus on small real 3D spaces rather than large ones; in other words, most are not designed for use with a geographic coordinate system.

We propose a global extension of the visual-inertial odometer with an image-recognition function for geo-referenced image markers installed in real 3D spaces. Geo-referenced image markers can be generated, for example, from analog guide boards existing in the real world. We tested this framework on the first floor of the central library of Akita University. Geo-referenced image markers such as floor map boards and book-category sign boards were registered in a database of 3D geo-referenced real-world scene images. Our prototype system, developed on a smartphone (iPhone XS, Apple Inc.), first recognized a floor map board (Fig. 1) and determined the precise 3D distance and direction of the smartphone from the center of the board, in a local 3D coordinate space whose origin is the board's center. The system then converted the smartphone camera's precise relative position and direction in this local coordinate space into a precise global location and orientation. A subject walked the first floor of the library building with the smartphone's world-tracking function running. The experimental results show that the tracking error in the global coordinate system accumulated, but remained acceptable: only about 30 centimeters after the subject walked about 30 meters (Fig. 2). We are now planning to improve the indoor-navigation accuracy of our prototype by calibrating the smartphone's location and orientation through sequential recognition of multiple geo-referenced scene image markers that already exist for the library's general user services. In conclusion, the results of testing our prototype system were encouraging, and we are preparing a more practical high-precision LBS that navigates a user to the exact location of a book of interest on a shelf using AR and floor-map interfaces.
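The local-to-global conversion step can be sketched as a simple pose composition, assuming the marker's geo-referenced pose is stored as a 4x4 rigid transform (the names below are illustrative, not the prototype's code):

```python
import numpy as np

def phone_global_pose(T_marker_to_global, T_phone_in_marker):
    """Lift the phone's pose into the global frame.

    T_marker_to_global -- assumed 4x4 pose of the recognised image marker in
                          the geo-referenced coordinate system (from the database)
    T_phone_in_marker  -- 4x4 pose of the phone's camera measured by the
                          visual-inertial odometer in the marker's local frame
    """
    # Composing the two rigid transforms yields the phone's global pose.
    return T_marker_to_global @ T_phone_in_marker
```

Re-recognising further markers along the way would replace the accumulated odometry error with a fresh, marker-anchored pose, which is the calibration idea the authors describe.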


2006 ◽  
Vol 5 (3) ◽  
pp. 53-58 ◽  
Author(s):  
Roger K. C. Tan ◽  
Adrian David Cheok ◽  
James K. S. Teh

For better or worse, technological advancement has changed the world: at a professional level, demands on working executives require more hours either in the office or on business trips, while at a social level the population (especially the younger generation) is glued to the computer, playing video games or surfing the internet. Traditional leisure activities, especially interaction with pets, have been neglected or forgotten. This paper introduces Metazoa Ludens, a new computer-mediated gaming system which allows pets to play new mixed-reality computer games with humans via custom-built technologies and applications. During game-play, the real pet chases a physical movable bait within a predefined area in the real world; an infra-red camera tracks the pet's movements and translates them into the virtual world of the system, mapping them to the movement of a virtual pet avatar running after a virtual human avatar. The human player plays the game by controlling the human avatar's movements in the virtual world; this in turn drives the physical movable bait in the real world, which moves as the human avatar does. This unique way of playing computer games gives rise to a whole new form of mixed-reality interaction between pet owners and their pets, thereby bringing technology and its influence on leisure and social activities to the next level.
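A rough sketch of the two mappings this description implies, with illustrative identifiers: the IR camera's pixel coordinates are scaled into the virtual arena, and the human avatar's virtual position is clamped to the predefined physical area before driving the bait.

```python
def pet_to_virtual(pet_xy_px, arena_size_m, image_size_px):
    """Scale the IR camera's pixel coordinates into virtual-world metres."""
    sx = arena_size_m[0] / image_size_px[0]
    sy = arena_size_m[1] / image_size_px[1]
    return pet_xy_px[0] * sx, pet_xy_px[1] * sy

def bait_target(human_avatar_xy_m, arena_size_m):
    """Clamp the human avatar's position to the predefined physical area
    so the movable bait never leaves the arena."""
    x = min(max(human_avatar_xy_m[0], 0.0), arena_size_m[0])
    y = min(max(human_avatar_xy_m[1], 0.0), arena_size_m[1])
    return x, y
```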


Author(s):  
Vivek Parashar

Augmented Reality is the technology with which we can integrate 3D virtual objects into our physical environment in real time. Augmented Reality helps bring the virtual world closer to our physical world and gives us the ability to interact with our surroundings. This paper gives an idea of how Augmented Reality can transform the education industry. In this paper we have used Augmented Reality to simplify the learning process and allow people to interact with 3D models with the help of gestures. This advancement in technology is changing the way we interact with our surroundings: rather than watching videos or looking at a static diagram in a textbook, Augmented Reality enables you to do more. So rather than putting someone into an animated world, the goal of augmented reality is to blend virtual objects into the real world.

