The MobilAR Robot, Ubiquitous, Unobtrusive, Augmented Reality Device

Author(s):  
Emmanuel Bernier ◽  
Ryad Chellali ◽  
Indira Mouttapa Thouvenin

We present a new mobile Augmented Reality (AR) device that combines mobile robotics, human-robot interaction, and 3D modeling to augment users’ perception of their environment. The developed device, the MobilAR, provides minimally intrusive AR: users do not need to wear any apparatus, and no markers are used to align real and virtual entities. The MobilAR design is as follows: a projector is mounted on the end-effector of a robotic arm, which is itself mounted on a wheeled platform. By applying the appropriate image transformation, the robotic arm makes it possible to project undistorted content onto any part of the environment, such as walls, floors, ceilings, and objects. The mobile base makes it possible to place the projector anywhere inside a building. The device uses self-localization and computer vision techniques to model the physical world and augment it. The MobilAR platform also encompasses a gesture recognition module for user interaction. As a proof of concept, we implemented a simple guided tour of our laboratory in which the MobilAR follows a user and projects content onto any surface. Results and extensions of this work are also discussed.
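For a planar surface, the image transformation that undistorts an oblique projection is a 3×3 homography. The sketch below illustrates the general technique, not the authors' implementation: it estimates a homography from four projector-to-surface correspondences with the direct linear transform (all corner coordinates are hypothetical).

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src via the DLT algorithm."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A, obtained from the SVD.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Map 2D points through H using homogeneous coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# Projector image corners and where they land on the wall (hypothetical pixels).
proj_corners = np.array([[0, 0], [1280, 0], [1280, 800], [0, 800]], dtype=float)
wall_corners = np.array([[40, 25], [1230, 60], [1200, 770], [70, 790]], dtype=float)

H = estimate_homography(proj_corners, wall_corners)
# Pre-warping the content with the inverse of H makes it appear undistorted on the wall.
```
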

2009 ◽  
pp. 937-951
Author(s):  
Wayne Piekarski

This chapter presents a series of new augmented reality user interaction techniques to support the capture and creation of 3D geometry of large outdoor structures. Named construction at a distance, these techniques are based on the action at a distance concepts employed by other virtual environments researchers. These techniques address the problem of AR systems traditionally being consumers of information, rather than being used to create new content. By using information about the user’s physical presence along with hand and head gestures, AR systems can be used to capture and create the geometry of objects that are orders of magnitude larger than the user, with no prior information or assistance. While existing scanning techniques can only be used to capture existing physical objects, construction at a distance also allows the creation of new models that exist only in the mind of the user. Using a single AR interface, users can enter geometry and verify its accuracy in real time. Construction at a distance is a collection of 3D modelling techniques based on the concepts of AR working planes, landmark alignment, constructive solid geometry operations, and iterative refinement to form complex shapes. This chapter presents a number of different construction at a distance techniques, which are demonstrated with examples of real objects that have been modelled in the physical world.


Proceedings ◽  
2019 ◽  
Vol 42 (1) ◽  
pp. 50 ◽  
Author(s):  
Óscar Blanco-Novoa ◽  
Paula Fraga-Lamas ◽  
Miguel Vilar-Montesinos ◽  
Tiago Fernández-Caramés

The latest Augmented Reality (AR) and Mixed Reality (MR) systems are able to provide innovative methods for user interaction, but their full potential can only be achieved when they are able to exchange bidirectional information with the physical world that surrounds them, including the objects that belong to the Internet of Things (IoT). The problem is that elements like AR display devices or IoT sensors/actuators often use heterogeneous technologies that make it difficult to interconnect them easily, thus requiring a high degree of specialization to carry out such a task. This paper presents an open-source framework that eases the integration of AR and IoT devices, as well as the transfer of information among them, both in real time and in a dynamic way. The proposed framework makes use of widely adopted standard protocols and open-source tools such as MQTT, HTTPS, and Node-RED. In order to illustrate the operation of the framework, this paper presents the implementation of a practical home automation example: an AR/MR application for energy consumption monitoring that uses a pair of Microsoft HoloLens smart glasses to interact with smart power outlets.
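In an MQTT-based setup like this, the AR client typically publishes commands to a topic and subscribes to telemetry from the devices. A minimal standard-library sketch of what such messages might look like; the topic layout and field names are hypothetical, not taken from the paper's framework:

```python
import json

def build_command(outlet_id, state):
    """Build an MQTT topic and JSON payload for a smart-outlet command.

    Topic layout and field names are illustrative assumptions only.
    """
    topic = f"home/outlets/{outlet_id}/set"
    payload = json.dumps({"state": "ON" if state else "OFF"})
    return topic, payload

def parse_telemetry(payload):
    """Decode a telemetry message as the AR app might before rendering it."""
    data = json.loads(payload)
    return data["power_w"]

topic, payload = build_command("kitchen-1", True)
# With a real broker, an MQTT client library would publish (topic, payload) here.
watts = parse_telemetry('{"power_w": 42.5}')
```
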


2013 ◽  
Vol 25 (3) ◽  
pp. 529-537 ◽  
Author(s):  
Sunao Hashimoto ◽  
Akihiko Ishida ◽  
Masahiko Inami ◽  
Takeo Igarashi ◽  
...  

Remote-controlled robots are generally manipulated with joysticks or gamepads. However, these are difficult for inexperienced users because the relationship between user input and the resulting robot movement may not be intuitive, e.g., tilting the joystick to the right to rotate the robot left. To solve this problem, we propose a touch-based interface called TouchMe for controlling a robot remotely from a third-person point of view. This interface allows the user to directly manipulate individual parts of a robot by touching them as seen by a camera. Our system provides intuitive operation, allowing the user to operate it with minimal training. In this paper, we describe the TouchMe interaction and a prototype implementation. We also introduce three types of movement for controlling the robot in response to user interaction and report the results of an empirical comparison of these methods.
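Selecting a robot part "as seen by a camera" can be as simple as picking the part whose projected image position lies nearest to the touch point. A minimal sketch under that assumption; the part positions and distance threshold are hypothetical, not TouchMe's actual implementation:

```python
import numpy as np

# Hypothetical 2D image positions (pixels) of robot parts as seen by the camera.
part_centers = {
    "base": np.array([320.0, 400.0]),
    "arm": np.array([340.0, 260.0]),
    "gripper": np.array([355.0, 180.0]),
}

def pick_part(touch_xy, centers, max_dist=60.0):
    """Return the robot part closest to the touch point, or None if too far away."""
    touch = np.asarray(touch_xy, dtype=float)
    name, dist = min(((n, np.linalg.norm(c - touch)) for n, c in centers.items()),
                     key=lambda item: item[1])
    return name if dist <= max_dist else None

selected = pick_part((350, 190), part_centers)  # → "gripper"
```
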


Author(s):  
Goh Eg Su ◽  
Mohd Sharizal Sunar ◽  
Rino Andias ◽  
Ajune Wanis Ismail ◽  
...  

Author(s):  
Fabian Joeres ◽  
Tonia Mielke ◽  
Christian Hansen

Abstract
Purpose: Resection site repair during laparoscopic oncological surgery (e.g. laparoscopic partial nephrectomy) poses some unique challenges and opportunities for augmented reality (AR) navigation support. This work introduces an AR registration workflow that addresses the time pressure present during resection site repair.
Methods: We propose a two-step registration process: the AR content is registered as accurately as possible prior to the tumour resection (the primary registration). This accurate registration is used to apply artificial fiducials to the physical organ and the virtual model. After the resection, these fiducials can be used for rapid re-registration (the secondary registration). We tested this pipeline in a simulated-use study with N = 18 participants, comparing the registration accuracy and speed of our method against landmark-based registration as a reference.
Results: Acquisition of, and thereby registration with, the artificial fiducials was significantly faster than the initial use of anatomical landmarks. Our method also tended to be more accurate in cases in which the primary registration was successful. The accuracy loss between the elaborate primary registration and the rapid secondary registration was quantified as a mean target registration error increase of 2.35 mm.
Conclusion: This work introduces a registration pipeline for AR navigation support during laparoscopic resection site repair and provides a successful proof-of-concept evaluation. Our results indicate that the concept is better suited than landmark-based registration during this phase, but further work is required to demonstrate clinical suitability and applicability.
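Rigid registration of matched fiducials is commonly solved in closed form with the Kabsch/SVD method, and the target registration error (TRE) is the mean distance between transformed targets and their true positions. The sketch below illustrates that general technique on synthetic data; it is not the authors' pipeline, and all coordinates are hypothetical:

```python
import numpy as np

def register_fiducials(model_pts, organ_pts):
    """Rigid (rotation + translation) registration of matched 3D fiducials, Kabsch style."""
    mc, oc = model_pts.mean(axis=0), organ_pts.mean(axis=0)
    H = (model_pts - mc).T @ (organ_pts - oc)      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = oc - R @ mc
    return R, t

def target_registration_error(R, t, model_targets, organ_targets):
    """Mean distance between transformed model targets and their true positions (TRE)."""
    mapped = model_targets @ R.T + t
    return float(np.linalg.norm(mapped - organ_targets, axis=1).mean())

# Synthetic check: four fiducials under a known rigid motion (hypothetical, in mm).
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([5.0, -2.0, 1.0])
model = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]], dtype=float)
organ = model @ R_true.T + t_true

R, t = register_fiducials(model, organ)
```
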


2021 ◽  
Vol 11 (13) ◽  
pp. 6047
Author(s):  
Soheil Rezaee ◽  
Abolghasem Sadeghi-Niaraki ◽  
Maryam Shakeri ◽  
Soo-Mi Choi

A lack of required data resources is one of the challenges in adopting Augmented Reality (AR) to provide the right services to users, whereas the amount of spatial information produced by people is increasing daily. This research aims to design a personalized AR-based tourist system that retrieves big data according to the users’ demographic contexts in order to enrich the AR data source in tourism. The research is conducted in two main steps. First, the type of tourist attraction that interests the user is predicted from the user’s demographic contexts, which include age, gender, and education level, using a machine learning method. Second, the right data for the user are extracted from the big data by considering time, distance, popularity, and the neighborhood of the tourist places, using the VIKOR and SWARA decision-making methods. The results show that the decision tree performs about 6% better than the SVM method at predicting the type of tourist attraction. In addition, the results of the user study show the overall satisfaction of the participants in terms of ease of use (about 55%) and in terms of the system’s usefulness (about 56%).
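VIKOR ranks alternatives by compromising between group utility (the weighted sum of regrets) and individual regret (the worst single criterion). A minimal sketch for benefit-type criteria; the scores and weights are hypothetical, and in practice the weights would be supplied by the companion stepwise weighting method rather than chosen by hand:

```python
import numpy as np

def vikor(matrix, weights, v=0.5):
    """Rank alternatives with VIKOR; all criteria assumed benefit-type (higher is better)."""
    f_best, f_worst = matrix.max(axis=0), matrix.min(axis=0)
    norm = (f_best - matrix) / (f_best - f_worst)   # normalized distance from the ideal
    S = (weights * norm).sum(axis=1)                # group utility
    R = (weights * norm).max(axis=1)                # individual regret
    Q = v * (S - S.min()) / (S.max() - S.min()) + \
        (1 - v) * (R - R.min()) / (R.max() - R.min())
    return np.argsort(Q)                            # best alternative first

# Hypothetical tourist places scored on popularity, proximity, and opening-hours fit.
scores = np.array([[0.9, 0.4, 0.7],
                   [0.6, 0.8, 0.9],
                   [0.3, 0.5, 0.2]])
weights = np.array([0.5, 0.3, 0.2])

ranking = vikor(scores, weights)  # index of the best compromise comes first
```
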


Nanophotonics ◽  
2020 ◽  
Vol 9 (9) ◽  
pp. 3003-3010
Author(s):  
Jiacheng Shi ◽  
Wen Qiao ◽  
Jianyu Hua ◽  
Ruibin Li ◽  
Linsen Chen

Abstract: Glasses-free augmented reality is of great interest because it fuses virtual 3D images naturally with the physical world without the aid of any wearable equipment. Here we propose a large-scale spatial-multiplexing holographic see-through combiner for full-color 3D display. The pixelated metagratings, with varied orientation and spatial frequency, discretely reconstruct the propagating light field. The irradiance pattern of each view is tailored to a super-Gaussian distribution with minimized crosstalk. Moreover, a spatial-multiplexing holographic combiner with customized aperture size is adopted for the white balance of the virtually displayed full-color 3D scene. In a 32-inch prototype, 16 views form a smooth parallax with a viewing angle of 47°. A high transmission (>75%) over the entire visible spectrum is achieved. We demonstrate that the displayed virtual 3D scene not only preserves natural motion parallax but also mixes well with natural objects. Potential applications of this study include education, communication, product design, advertisement, and head-up displays.
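A super-Gaussian profile exp(-((x - x0)/w)^(2n)) flattens the top of each view's intensity distribution and steepens its edges compared with an ordinary Gaussian, which is what suppresses crosstalk between adjacent views. A small numpy sketch of that general shape (the width and order parameters are hypothetical, not the paper's design values):

```python
import numpy as np

def super_gaussian(x, x0, w, n):
    """Super-Gaussian profile exp(-((x - x0)/w)^(2n)); larger n gives a flatter top
    and steeper edges."""
    return np.exp(-(np.abs(x - x0) / w) ** (2 * n))

x = np.linspace(-2.0, 2.0, 401)
gauss = super_gaussian(x, 0.0, 1.0, 1)  # ordinary Gaussian (n = 1)
flat = super_gaussian(x, 0.0, 1.0, 4)   # flat-topped, sharper roll-off (n = 4)
# Both profiles fall to 1/e of the peak at |x| = w, but inside the view
# the super-Gaussian stays close to 1 and outside it decays much faster.
```
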


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2234
Author(s):  
Sebastian Kapp ◽  
Michael Barz ◽  
Sergey Mukhametov ◽  
Daniel Sonntag ◽  
Jochen Kuhn

Currently, an increasing number of head-mounted displays (HMDs) for virtual and augmented reality (VR/AR) are equipped with integrated eye trackers. Use cases of these integrated eye trackers include rendering optimization and gaze-based user interaction. In addition, visual attention in VR and AR is interesting for applied eye-tracking research, for example in the cognitive or educational sciences. While some research toolkits for VR already exist, only a few target AR scenarios. In this work, we present an open-source eye tracking toolkit for reliable gaze data acquisition in AR based on Unity 3D and the Microsoft HoloLens 2, as well as an R package for seamless data analysis. Furthermore, we evaluate the spatial accuracy and precision of the integrated eye tracker for fixation targets at different distances and angles to the user (n = 21). On average, we found that gaze estimates are reported with an angular accuracy of 0.83 degrees and a precision of 0.27 degrees while the user is at rest, which is on par with state-of-the-art mobile eye trackers.
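Angular accuracy and precision are computed from gaze direction vectors: accuracy as the mean angular offset from the known target direction, and precision as the dispersion of those offsets. A minimal numpy sketch using one common definition (standard deviation of offsets); the gaze samples are hypothetical, not data from the study:

```python
import numpy as np

def angle_deg(a, b):
    """Angle in degrees between two gaze direction vectors."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def accuracy_precision(gaze_dirs, target_dir):
    """Accuracy: mean angular offset from the target direction.
    Precision: standard deviation of those offsets (one common definition)."""
    offsets = np.array([angle_deg(g, target_dir) for g in gaze_dirs])
    return offsets.mean(), offsets.std()

# Hypothetical gaze samples recorded while fixating a target straight ahead.
target = np.array([0.0, 0.0, 1.0])
samples = np.array([[0.010, 0.000, 1.0],
                    [0.000, 0.015, 1.0],
                    [-0.012, 0.004, 1.0]])

acc, prec = accuracy_precision(samples, target)  # both in degrees
```
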

