AR-PDA: A Personal Digital Assistant for VR/AR Content

Author(s):  
Juergen Fruend ◽  
Carsten Matysczok ◽  
Peter Ebbesmeyer ◽  
Joerg Maciej

This paper describes the development of an AR-based hardware and software system for a mobile digital assistant (AR-PDA). The target group for this system is the large consumer market, where the AR-PDA uses AR technology to efficiently support users in their daily tasks. The technical realization of the system is based on third-generation, video-capable mobile phones. The user views real objects through the AR-PDA: an integrated camera captures the scene, and the AR-PDA sends the video stream via mobile radio communication (e.g. UMTS) to an AR server. The server recognizes the objects by analyzing the images, compiles the relevant context-sensitive information, adds it to the video stream as multimedia elements (e.g. sound, video, text, images, or virtual objects), and sends the result back to the AR-PDA. The system's functionality is demonstrated using practice-oriented application scenarios involving household appliances.
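The abstract describes a round-trip client-server video pipeline but publishes no protocol. The sketch below illustrates one plausible client loop under invented assumptions: a length-prefixed JPEG framing scheme and a placeholder server address, with OpenCV for capture and display.

```python
# Hypothetical sketch of an AR-PDA-style client loop (not the authors' code):
# capture camera frames, stream them to an AR server, display the augmented
# frames the server returns. Framing protocol and address are assumptions.
import socket
import struct

import cv2  # OpenCV for camera capture and display
import numpy as np

SERVER = ("ar-server.example.com", 9000)  # placeholder address

def send_frame(sock, frame):
    # JPEG-compress the frame and send it length-prefixed.
    ok, buf = cv2.imencode(".jpg", frame)
    if ok:
        data = buf.tobytes()
        sock.sendall(struct.pack(">I", len(data)) + data)

def recv_frame(sock):
    # Receive a length-prefixed JPEG (the server's augmented frame) and decode it.
    header = sock.recv(4, socket.MSG_WAITALL)
    (length,) = struct.unpack(">I", header)
    data = sock.recv(length, socket.MSG_WAITALL)
    return cv2.imdecode(np.frombuffer(data, dtype=np.uint8), cv2.IMREAD_COLOR)

def main():
    cam = cv2.VideoCapture(0)
    with socket.create_connection(SERVER) as sock:
        while True:
            ok, frame = cam.read()
            if not ok:
                break
            send_frame(sock, frame)        # upstream: raw video
            augmented = recv_frame(sock)   # downstream: annotated video
            cv2.imshow("AR-PDA", augmented)
            if cv2.waitKey(1) == 27:       # Esc quits
                break
    cam.release()

if __name__ == "__main__":
    main()
```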

2008 ◽  
Vol 02 (02) ◽  
pp. 207-233
Author(s):  
SATORU MEGA ◽  
YOUNES FADIL ◽  
ARATA HORIE ◽  
KUNIAKI UEHARA

Human-computer interaction systems have been developed in recent years. These systems use multimedia techniques to create mixed-reality environments where users can train themselves. Although most of these systems rely strongly on interactivity with users and take users' states into account, they still lack the ability to consider users' preferences when helping them. In this paper, we introduce an Action Support System for Interactive Self-Training (ASSIST) in cooking. ASSIST focuses on recognizing users' cooking actions, as well as the real objects related to these actions, in order to provide accurate and useful assistance. Before the recognition and instruction processes, it takes users' cooking preferences and, via collaborative filtering, suggests one or more recipes that are likely to satisfy those preferences. When the cooking process starts, ASSIST recognizes the user's hand movements using a similarity-measure algorithm called AMSS. When the recognized cooking action is correct, ASSIST instructs the user on the next cooking procedure through virtual objects. When a cooking action is incorrect, the cause of the failure is analyzed and ASSIST provides support information tailored to that cause so the user can correct the action. Furthermore, we construct parallel transition models from cooking recipes for more flexible instruction. This enables users to perform the necessary cooking actions in any order they want, allowing more flexible learning.
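The recipe-suggestion step is described as collaborative filtering; AMSS itself is not specified in the abstract and is not reproduced here. Below is a minimal user-based collaborative filter as a stand-in, with a made-up ratings matrix; neighbourhood size, scoring rule, and all data are illustrative assumptions.

```python
# Toy user-based collaborative filtering for recipe suggestion (illustrative
# stand-in; the paper's exact method and data are not reproduced).
import numpy as np

def cosine(u, v):
    den = float(np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.dot(u, v)) / den if den else 0.0

def suggest(ratings, target, k=2, top_n=3):
    """ratings: users x recipes matrix, 0 = unrated; target: user row index."""
    sims = np.array([cosine(ratings[target], ratings[u]) if u != target else -1.0
                     for u in range(ratings.shape[0])])
    neighbours = sims.argsort()[::-1][:k]
    # Predict scores for unrated recipes as a similarity-weighted average.
    scores = {}
    for r in range(ratings.shape[1]):
        if ratings[target, r] == 0:
            num = sum(sims[u] * ratings[u, r] for u in neighbours if ratings[u, r])
            den = sum(sims[u] for u in neighbours if ratings[u, r])
            if den:
                scores[r] = num / den
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Example: 4 users x 5 recipes, preference scores 1-5 (fabricated numbers).
R = np.array([[5, 0, 3, 0, 2],
              [4, 1, 3, 0, 0],
              [0, 5, 0, 4, 1],
              [5, 0, 4, 1, 0]])
print(suggest(R, target=0))  # indices of recipes to suggest to user 0
```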


2003 ◽  
Vol 12 (6) ◽  
pp. 615-628 ◽  
Author(s):  
Benjamin Lok ◽  
Samir Naik ◽  
Mary Whitton ◽  
Frederick P. Brooks

Immersive virtual environments (VEs) provide participants with computer-generated environments filled with virtual objects to assist in learning, training, and practicing dangerous and/or expensive tasks. But does having every object be virtual inhibit interactivity and the level of immersion? If participants spend most of their time and cognitive load on learning and adapting to interacting with virtual objects, does this reduce the effectiveness of the VE? We conducted a study that investigated how handling real objects and self-avatar visual fidelity affect performance and sense of presence on a spatial cognitive manual task. We compared participants' performance of a block arrangement task in a real-space environment and in several virtual and hybrid environments. The results showed that manipulating real objects in a VE brings task performance closer to that of real space, compared to manipulating virtual objects. There was no significant difference in reported sense of presence, regardless of the self-avatar's visual fidelity or the presence of real objects.


2013 ◽  
Vol 12 (1) ◽  
pp. 30-43
Author(s):  
Bruno Eduardo Madeira ◽  
Luiz Velho

We describe a new architecture composed of software and hardware for displaying stereoscopic images over a horizontal surface. It works as a "Virtual Table and Teleporter", in the sense that virtual objects depicted over a table have the appearance of real objects. This system can be used for visualization and interaction. We propose two basic configurations: the Virtual Table, consisting of a single display surface, and the Virtual Teleporter, consisting of a pair of tables for image capture and display. The Virtual Table displays either 3D computer-generated images or previously captured stereoscopic video and can be used for interactive applications. The Virtual Teleporter captures and transmits stereoscopic video from one table to the other and can be used for telepresence applications. In both configurations the images are properly deformed and displayed for horizontal 3D stereo. In the Virtual Teleporter, two cameras are pointed at the first table, capturing a stereoscopic image pair. These images are shown on the second table, which is, in fact, a stereoscopic display positioned horizontally. Many applications can benefit from this technology, such as virtual reality, games, teleconferencing, and distance learning. We present some interactive applications that we developed using this architecture.
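The key step the abstract mentions, deforming each eye's image for a horizontal display, can be modeled as a planar homography that maps the rendered frame onto the quadrilateral it must occupy on the tabletop. The sketch below uses OpenCV's perspective warp; the corner correspondences are invented placeholders, not the paper's calibration.

```python
# Illustrative per-eye deformation for a horizontal stereoscopic display
# (a sketch with hypothetical corner coordinates, not the authors' pipeline).
import cv2
import numpy as np

def table_warp(eye_image, src_quad, dst_quad, table_px=(1920, 1080)):
    """src_quad/dst_quad: 4x2 float32 corner arrays (rendered frame -> tabletop)."""
    H = cv2.getPerspectiveTransform(src_quad, dst_quad)
    return cv2.warpPerspective(eye_image, H, table_px)

# Hypothetical correspondence: the full rendered frame maps to a trapezoid
# that compensates for the oblique viewing angle onto the horizontal screen.
src = np.float32([[0, 0], [1920, 0], [1920, 1080], [0, 1080]])
dst = np.float32([[300, 80], [1620, 80], [1920, 1080], [0, 1080]])

# Dummy per-eye images standing in for rendered or captured stereo frames.
left_src = np.full((1080, 1920, 3), 220, dtype=np.uint8)
right_src = np.full((1080, 1920, 3), 180, dtype=np.uint8)

left = table_warp(left_src, src, dst)
right = table_warp(right_src, src, dst)
# left/right would then be shown by the stereoscopic display lying on the table.
```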


2018 ◽  
pp. 777-793
Author(s):  
Srinivasa K. G. ◽  
Satvik Jagannath ◽  
Aakash Nidhi

Mobile devices are changing the way people live. Users have everything at their fingertips, and to support them there are scores of applications that add usability and comfort. "Know Your World Better" is an augmented reality application developed for Android that helps the user find friends and locate places in close proximity. In this paper we describe a method for augmenting points of interest (POIs) on a mobile device: the user points the phone in a direction of their choice, and any POIs in that direction are shown in real time. The user's interest with respect to the environment is inferred from speech or from a selection among given choices; this data is used for information retrieval from the cloud. The result of the context-sensitive retrieval is augmented onto the mobile device's view and also provided as speech output.
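The core overlay decision in such an app reduces to geometry: a POI is drawn when the bearing from the user's GPS position to the POI falls within the camera's horizontal field of view around the device heading. A minimal sketch follows; the coordinates, heading, and 60-degree field of view are assumed example values, not taken from the paper.

```python
# Sketch of a POI visibility test for an AR viewfinder (illustrative only).
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def visible(user, poi, heading_deg, fov_deg=60.0):
    """True if the POI lies within the camera's horizontal field of view."""
    b = bearing_deg(user[0], user[1], poi[0], poi[1])
    diff = (b - heading_deg + 180.0) % 360.0 - 180.0  # signed angular difference
    return abs(diff) <= fov_deg / 2.0

# Example: user in Bangalore pointing the phone roughly north-east.
user = (12.9716, 77.5946)
poi = (12.9750, 77.5990)
print(visible(user, poi, heading_deg=45.0))  # True if the POI is in view
```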


Author(s):  
Usman Naeem ◽  
Richard Anthony ◽  
Abdel-Rahman Tawil ◽  
Muhammad Awais Azam ◽  
David Preston

We live in a ubiquitous world where we are surrounded by context-sensitive information and smart devices that are able to capture information about our surroundings unobtrusively. Making use of such rich information can enable recognition of activities conducted by elderly users, and in turn can allow the possibility of tracking any functional decline. This chapter highlights current methods for unobtrusively recognising activities of daily living within a home environment for people with physical or cognitive disabilities. One main group for whom this is important is Alzheimer's patients. The chapter also discusses what makes an environment suitable for carrying out accurate activity recognition, followed by a proposed taxonomy of the key characteristics required for robust activity recognition within a smart environment, contextualised with real-life scenarios.


2020 ◽  
Vol 10 (16) ◽  
pp. 5436 ◽  
Author(s):  
Dong-Hyun Kim ◽  
Yong-Guk Go ◽  
Soo-Mi Choi

A drone must be able to fly without colliding with obstacles, both to preserve its surroundings and for its own safety. In addition, it should incorporate numerous features of interest for drone users. In this paper, an aerial mixed-reality environment for first-person-view drone flying is proposed to provide an immersive experience and a safe environment for drone users by creating additional virtual obstacles when flying a drone in an open area. The proposed system is effective in perceiving the depth of obstacles, and it enables bidirectional interaction between the real and virtual worlds using a drone equipped with a stereo camera modeled on human binocular vision. In addition, it synchronizes the parameters of the real and virtual cameras to effectively and naturally create virtual objects in a real space. Based on user studies that included both general and expert users, we confirm that the proposed system successfully creates a mixed-reality environment using a flying drone by quickly recognizing real objects and stably combining them with virtual objects.
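The camera-parameter synchronization the abstract mentions typically means giving the virtual camera the same field of view, aspect ratio, and pose as the real one so that rendered obstacles register with the live video. The sketch below derives a vertical field of view from pinhole intrinsics; all calibration and telemetry numbers are hypothetical.

```python
# Illustrative sketch (not the authors' code) of synchronizing a virtual
# camera with a drone's real camera: copy field of view, aspect ratio, and
# pose so virtual obstacles are rendered consistently with the video feed.
import math
from dataclasses import dataclass

@dataclass
class VirtualCamera:
    fov_y_deg: float        # vertical field of view, degrees
    aspect: float           # width / height
    position: tuple         # world-space (x, y, z), e.g. from drone telemetry
    yaw_pitch_roll: tuple   # orientation, degrees

def fov_from_intrinsics(fy, image_height):
    """Vertical field of view implied by the pinhole focal length fy (pixels)."""
    return math.degrees(2.0 * math.atan(image_height / (2.0 * fy)))

def sync_virtual_camera(fy, width, height, pose, orientation):
    return VirtualCamera(
        fov_y_deg=fov_from_intrinsics(fy, height),
        aspect=width / height,
        position=pose,
        yaw_pitch_roll=orientation,
    )

# Hypothetical calibration and telemetry values.
cam = sync_virtual_camera(fy=1000.0, width=1280, height=720,
                          pose=(0.0, 12.5, 3.0), orientation=(45.0, -10.0, 0.0))
print(round(cam.fov_y_deg, 1))  # ~39.6 degrees for these numbers
```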


2015 ◽  
Vol 82 (5) ◽  
Author(s):  
Max-Gerd Retzlaff ◽  
Josua Stabenow ◽  
Jürgen Beyerer ◽  
Carsten Dachsbacher

When designing or improving systems for automated optical inspection (AOI), systematic evaluation is an important but costly necessity to achieve and ensure high quality. Computer graphics methods can be used to quickly create large virtual sets of samples of test objects and to simulate image acquisition setups. We use procedural modeling techniques to generate virtual objects with varying appearance and properties, mimicking real objects and sample sets. Physical simulation of rigid bodies is deployed to simulate the placement of virtual objects, and using physically-based rendering techniques we create synthetic images. These are used as input to an AOI system instead of physically acquired images. This enables the development, optimization, and evaluation of the image processing and classification steps of an AOI system independently of a physical realization. We demonstrate this approach for shards of glass, as sorting glass is one challenging practical application for AOI.
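The pipeline described (procedural modeling, simulated placement, rendering, then feeding the synthetic image to the inspection system) can be mocked up end to end in a few lines. The toy version below substitutes convex-hull outlines for procedural shard geometry, a random pose for rigid-body simulation, and a binary raster for physically-based rendering; it is a simplified stand-in, not the paper's implementation.

```python
# Toy synthetic-sample pipeline for AOI evaluation (simplified stand-in for
# procedural modeling + physics simulation + physically-based rendering).
import numpy as np
from matplotlib.path import Path
from scipy.spatial import ConvexHull

rng = np.random.default_rng(42)

def procedural_shard(n_points=8, scale=20.0):
    """A glass-shard outline as the convex hull of random points (toy model)."""
    pts = rng.normal(0.0, scale, size=(n_points, 2))
    return pts[ConvexHull(pts).vertices]

def place(outline, tray=(256, 256)):
    """Stand-in for rigid-body placement: random rotation plus translation."""
    a = rng.uniform(0, 2 * np.pi)
    R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    t = rng.uniform([40, 40], [tray[0] - 40, tray[1] - 40])
    return outline @ R.T + t

def render(outlines, tray=(256, 256)):
    """Binary occupancy image instead of physically-based rendering."""
    ys, xs = np.mgrid[0:tray[1], 0:tray[0]]
    grid = np.column_stack([xs.ravel(), ys.ravel()])
    img = np.zeros((tray[1], tray[0]), dtype=bool)
    for o in outlines:
        img |= Path(o).contains_points(grid).reshape(img.shape)
    return img

shards = [place(procedural_shard()) for _ in range(5)]
image = render(shards)
print(image.sum(), "foreground pixels")  # `image` would feed the AOI classifier
```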

