Cross-Device Augmented Reality Annotations Method for Asynchronous Collaboration in Unprepared Environments

Information ◽  
2021 ◽  
Vol 12 (12) ◽  
pp. 519
Author(s):  
Inma García-Pereira ◽  
Pablo Casanova-Salas ◽  
Jesús Gimeno ◽  
Pedro Morillo ◽  
Dirk Reiners

Augmented Reality (AR) annotations are a powerful means of communication when collaborators cannot be present in a given environment at the same time. However, this situation presents several challenges, for example: how to record AR annotations for later consumption, how to align the virtual and real worlds in unprepared environments, or how to offer the annotations to users with different AR devices. In this paper, we present a cross-device AR annotation method that allows users to create and display annotations asynchronously in environments without the need for prior preparation (AR markers, point cloud capture, etc.). This is achieved through an easy user-assisted calibration process and a data model that allows any type of annotation to be stored on any device. The experimental study carried out with 40 participants verified our two hypotheses: that we are able to visualize AR annotations in indoor environments without prior preparation regardless of the device used, and that the overall usability of the system is satisfactory.
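The abstract does not reproduce the paper's data model, but a device-agnostic annotation record along the following lines illustrates the idea; every field name below is an assumption made for illustration, not the authors' schema.

```python
from dataclasses import dataclass, field
from typing import Any, Dict
import uuid

@dataclass
class ARAnnotation:
    """Device-agnostic AR annotation record (hypothetical sketch, not the paper's schema)."""
    annotation_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    kind: str = "text"                      # e.g. "text", "audio", "sketch", "3d_model"
    position: tuple = (0.0, 0.0, 0.0)       # anchor point in the user-calibrated frame
    rotation: tuple = (0.0, 0.0, 0.0, 1.0)  # orientation as a quaternion (x, y, z, w)
    author: str = ""
    created_utc: float = 0.0                # POSIX timestamp, for asynchronous replay
    payload: Dict[str, Any] = field(default_factory=dict)  # type-specific content
```

Keeping the type-specific content in an open payload field is one way any kind of annotation could be stored and later rendered on whichever device the consumer happens to use.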

2021 ◽  
Vol 18 (2) ◽  
pp. 1-16
Author(s):  
Holly C. Gagnon ◽  
Carlos Salas Rosales ◽  
Ryan Mileris ◽  
Jeanine K. Stefanucci ◽  
Sarah H. Creem-Regehr ◽  
...  

Augmented reality (AR) is important for training complex tasks, such as navigation, assembly, and medical procedures. The effectiveness of such training may depend on accurate spatial localization of AR objects in the environment. This article presents two experiments that test egocentric distance perception in augmented reality within and at the boundaries of action space (up to 35 m) in comparison with distance perception in a matched real-world (RW) environment. Using the Microsoft HoloLens, in Experiment 1, participants in two different RW settings judged egocentric distances (ranging from 10 to 35 m) to an AR avatar or a real person using a visual matching measure. Distances to augmented targets were underestimated compared to real targets in the two indoor, RW contexts. Experiment 2 aimed to generalize the results to an absolute distance measure using verbal reports in one of the indoor environments. Similar to Experiment 1, distances to augmented targets were underestimated compared to real targets. We discuss these findings with respect to the importance of methodologies that directly compare performance in real and mediated environments, as well as the inherent differences present in mediated environments that are “matched” to the real world.


Author(s):  
Taemin Lee ◽  
Changhun Jung ◽  
Kyungtaek Lee ◽  
Sanghyun Seo

As augmented reality technologies develop, real-time interaction between objects in the real world and virtual space is required. Generally, recognition and location estimation in augmented reality are carried out using tracking techniques, typically markers. However, using markers imposes spatial constraints on the simultaneous tracking of space and objects. Therefore, we propose a system that tracks the camera in the real world and visualizes virtual information through the recognition and positioning of objects. We scanned the space using an RGB-D camera. A three-dimensional (3D) dense point cloud map is created from the point clouds generated from the video images. Objects are then detected in the generated point cloud and retrieved based on pre-learned data. Finally, using the predicted poses of the detected objects, additional information can be augmented. Our system performs object recognition and 3D pose estimation from simple camera information, enabling virtual visual information to be displayed at object locations.
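As a rough illustration of the first step of such a pipeline, the sketch below fuses a single RGB-D frame into a point cloud with the Open3D library; the file names and default intrinsics are placeholder assumptions, and this is not the authors' implementation.

```python
import open3d as o3d

# Minimal sketch: back-project one RGB-D frame into a point cloud.
# "frame_color.png"/"frame_depth.png" and the PrimeSense default
# intrinsics are placeholder assumptions for illustration.
color = o3d.io.read_image("frame_color.png")
depth = o3d.io.read_image("frame_depth.png")
rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
    color, depth, convert_rgb_to_intensity=False)
intrinsics = o3d.camera.PinholeCameraIntrinsic(
    o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)
cloud = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsics)

# Accumulating such clouds over successive tracked frames yields the dense
# 3D map in which objects are then detected and their poses estimated.
```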


2019 ◽  
Vol 2019 (1) ◽  
pp. 237-242
Author(s):  
Siyuan Chen ◽  
Minchen Wei

Color appearance models have been extensively studied for characterizing and predicting the perceived color appearance of physical color stimuli under different viewing conditions. These stimuli are either surface colors reflecting illumination or self-luminous stimuli emitting radiation. With the rapid development of augmented reality (AR) and mixed reality (MR), it is critically important to understand how the color appearance of objects produced by AR and MR is perceived, especially when these objects are overlaid on the real world. In this study, nine lighting conditions, with different correlated color temperature (CCT) levels and light levels, were created in a real-world environment. Under each lighting condition, human observers adjusted the color appearance of a virtual stimulus, which was overlaid on a real-world luminous environment, until it appeared the whitest. It was found that the CCT and light level of the real-world environment significantly affected the color appearance of the white stimulus, especially when the light level was high. Moreover, a lower degree of chromatic adaptation was found when viewing the virtual stimulus overlaid on the real world.
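For readers unfamiliar with the CCT quantity manipulated here, McCamy's cubic approximation recovers an approximate CCT from CIE 1931 chromaticity coordinates; this standard background formula is shown for orientation only and is not part of the study's method.

```python
def mccamy_cct(x: float, y: float) -> float:
    """Approximate correlated color temperature (K) from CIE 1931 (x, y)
    chromaticity using McCamy's cubic formula (background material, not
    taken from the paper itself)."""
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

# Example: CIE standard illuminant D65 (x = 0.3127, y = 0.3290) -> about 6504 K.
print(round(mccamy_cct(0.3127, 0.3290)))
```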


2018 ◽  
Author(s):  
Kyle Plunkett

This manuscript provides two demonstrations of how Augmented Reality (AR), which is the projection of virtual information onto a real-world object, can be applied in the classroom and in the laboratory. Using only a smartphone and the free HP Reveal app, content-rich AR notecards were prepared. The physical notecards are based on Organic Chemistry I reactions and show only a reagent and substrate. Upon interacting with the HP Reveal app, an AR video projection shows the product of the reaction as well as a real-time, hand-drawn curved-arrow mechanism of how the product is formed. Thirty AR notecards based on common Organic Chemistry I reactions and mechanisms are provided in the Supporting Information and are available for widespread use. In addition, the HP Reveal app was used to create AR video projections onto laboratory instrumentation so that a virtual expert can guide the user during equipment setup and operation.


10.28945/2207 ◽  
2015 ◽  
Vol 10 ◽  
pp. 021-035 ◽  
Author(s):  
Yan Lu ◽  
Joseph T. Chao ◽  
Kevin R. Parker

This project shows a creative approach to the familiar scavenger hunt game. It involved the implementation of an iPhone application, HUNT, with Augmented Reality (AR) capability for users to play the game, as well as an administrative website that game organizers can use to create and publish games. Using the HUNT mobile app, users first select from a list of games and are then shown a list of objects that they must seek. Once the user finds a correct object and scans it with the smartphone's built-in camera, the application attempts to verify that it is the correct object and then displays the associated multimedia AR content, which may include images and videos overlaid on top of real-world views. HUNT not only provides entertaining activities within an environment that players can explore, but its AR content can also serve as an educational tool. The project is designed to increase user involvement by using a familiar and enjoyable game as a basis and adding an educational dimension by incorporating AR technology and engaging, interactive multimedia to provide users with facts about the objects that they have located.
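The abstract outlines the scan-verify-overlay loop without implementation detail; a minimal sketch of that flow might look as follows, where `HuntTarget`, `marker_id`, and `verify_scan` are hypothetical names invented for illustration rather than the app's real model.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class HuntTarget:
    """One object the player must find (illustrative schema only)."""
    name: str
    marker_id: str          # identifier the image recognizer returns for this object
    media_urls: List[str]   # AR images/videos to overlay once the object is verified

def verify_scan(recognized_id: str, targets: List[HuntTarget]) -> Optional[HuntTarget]:
    """Return the matching target if the scanned object belongs to the current game."""
    for target in targets:
        if target.marker_id == recognized_id:
            return target               # correct object: show its AR content
    return None                         # wrong object: prompt the player to keep looking
```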


Author(s):  
Kaori Kashimura ◽  
Takafumi Kawasaki Jr. ◽  
Nozomi Ikeya ◽  
Dave Randall

This chapter provides an ethnography of a complex scenario involving the construction of a power plant and, in so doing, tries to show the importance of a practice-based approach to the problem of technical and organizational change. The chapter reports on fieldwork conducted in a highly complex and tightly coupled environment: power plant construction. The ethnography describes work practices on three different sites and describes and analyses their interlocking dependencies, showing the difficulties encountered at each location and the way in which the delays that result cascade through the different sites. It goes on to describe some technological solutions that are associated with augmented reality and that are being designed in response to the insights gained from the fieldwork. The chapter also reflects more generally on the relationship between fieldwork and design in real-world contexts.


2021 ◽  
Vol 11 (4) ◽  
pp. 1953
Author(s):  
Francisco Martín ◽  
Fernando González ◽  
José Miguel Guerrero ◽  
Manuel Fernández ◽  
Jonatan Ginés

The perception and identification of visual stimuli from the environment are fundamental capacities of autonomous mobile robots. Current deep learning techniques make it possible to identify and segment objects of interest in an image. This paper presents a novel algorithm to segment an object's space from a deep segmentation of an image taken by a 3D camera. The proposed approach solves the boundary pixel problem that appears when a direct mapping from segmented pixels to their correspondences in the point cloud is used. We validate our approach against baseline approaches using real images taken by a 3D camera, showing that our method outperforms them in terms of accuracy and reliability. As an application of the proposed algorithm, we present a semantic mapping approach for a mobile robot in indoor environments.
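To make the boundary pixel problem concrete: naively back-projecting every masked pixel keeps edge pixels whose depth actually belongs to the background, scattering spurious 3D points behind the object. The sketch below rejects depth outliers around the mask's median as one simple heuristic; it assumes pinhole intrinsics and depth in metres, and it is an illustration, not the paper's algorithm.

```python
import numpy as np

def mask_to_points(depth, mask, fx, fy, cx, cy, max_dev=0.15):
    """Back-project masked pixels to 3D, dropping boundary outliers.

    depth: HxW array in metres; mask: HxW boolean segmentation mask;
    fx, fy, cx, cy: pinhole intrinsics. All assumptions for illustration.
    """
    vs, us = np.nonzero(mask)
    z = depth[vs, us]
    valid = z > 0                                   # discard missing depth readings
    vs, us, z = vs[valid], us[valid], z[valid]
    keep = np.abs(z - np.median(z)) < max_dev       # reject boundary depth outliers
    vs, us, z = vs[keep], us[keep], z[keep]
    x = (us - cx) * z / fx                          # pinhole back-projection
    y = (vs - cy) * z / fy
    return np.stack([x, y, z], axis=1)              # Nx3 points for the object only
```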


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 642
Author(s):  
Luis Miguel González de Santos ◽  
Ernesto Frías Nores ◽  
Joaquín Martínez Sánchez ◽  
Higinio González Jorge

Nowadays, unmanned aerial vehicles (UAVs) are extensively used for multiple purposes, such as infrastructure inspection or surveillance. This paper presents a real-time path planning algorithm for indoor environments designed to perform contact inspection tasks using UAVs. The only input used by this algorithm is the point cloud of the building where the UAV is going to navigate. The algorithm is divided into two main parts. The first is a pre-processing algorithm that processes the point cloud, segmenting it into rooms and discretizing each room. The second is the path planning algorithm, which has to be executed in real time. In this way, all the computational load falls on the pre-processed first step, making the path calculation faster. The method has been tested in different buildings, measuring the execution time for different path calculations. As can be seen in the results section, the developed algorithm is able to calculate a new path in 8–9 milliseconds. The developed algorithm fulfils the execution time restrictions and has proven to be reliable for route calculation.
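The split between an expensive offline discretization and a cheap online query is what makes millisecond-level replanning plausible. A toy 2D version of the online part could look like the following A* search over a precomputed occupancy grid; the paper's actual planner and its room segmentation are not reproduced here.

```python
import heapq
import numpy as np

def plan_path(occupied, start, goal):
    """A* over a pre-discretized 2D occupancy grid (a toy stand-in for the
    paper's planner). The heavy work of building `occupied` happens offline."""
    h = lambda a: abs(a[0] - goal[0]) + abs(a[1] - goal[1])  # Manhattan heuristic
    g = {start: 0}
    parent = {start: None}
    heap = [(h(start), start)]
    while heap:
        _, node = heapq.heappop(heap)
        if node == goal:                     # rebuild the path by walking parents
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if (0 <= nxt[0] < occupied.shape[0] and 0 <= nxt[1] < occupied.shape[1]
                    and not occupied[nxt] and g[node] + 1 < g.get(nxt, float("inf"))):
                g[nxt] = g[node] + 1
                parent[nxt] = node
                heapq.heappush(heap, (g[nxt] + h(nxt), nxt))
    return None                              # no route between start and goal

# Toy usage: a 20x20 grid with one wall.
grid = np.zeros((20, 20), dtype=bool)        # False = free space
grid[5:15, 10] = True                        # an obstacle
print(plan_path(grid, (0, 0), (19, 19)))
```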

