Authoring and user interaction for the production of wave field synthesis content in an augmented reality system

Author(s):  
F. Melchior ◽  
T. Laubach ◽  
D. de Vries
Author(s):  
Goh Eg Su ◽  
Mohd Sharizal Sunar ◽  
Rino Andias ◽  
Ajune Wanis Ismail ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3061
Author(s):  
Alice Lo Valvo ◽  
Daniele Croce ◽  
Domenico Garlisi ◽  
Fabrizio Giuliano ◽  
Laura Giarré ◽  
...  

In recent years, we have witnessed impressive advances in augmented reality systems and in computer vision algorithms based on image processing and artificial intelligence. Thanks to these technologies, mainstream smartphones can estimate their own motion in 3D space with high accuracy. In this paper, we exploit such technologies to support the autonomous mobility of people with visual disabilities, identifying pre-defined virtual paths and providing context information, thus reducing the distance between the digital and real worlds. In particular, we present ARIANNA+, an extension of ARIANNA, a system explicitly designed for indoor and outdoor localization and navigation by visually impaired people. While ARIANNA assumes that landmarks, such as QR codes, and physical paths (composed of colored tapes, painted lines, or tactile paving) are deployed in the environment and recognized by the camera of a common smartphone, ARIANNA+ eliminates the need for any physical support thanks to the ARKit library, which we exploit to build a completely virtual path. Moreover, ARIANNA+ lets users interact more richly with the surrounding environment through convolutional neural networks (CNNs) trained to recognize objects or buildings, enabling access to content associated with them. By using a common smartphone as a mediation instrument with the environment, ARIANNA+ leverages augmented reality and machine learning to enhance physical accessibility. The proposed system allows visually impaired people to navigate indoor and outdoor scenarios easily by loading a previously recorded virtual path; it then provides automatic guidance along the route through haptic, speech, and sound feedback.
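
As a rough illustration of the guidance step described in this abstract, the sketch below compares a device position, as an ARKit-style tracker would report it, against a pre-recorded virtual path and decides between an "on path" confirmation and a correction cue. This is a minimal sketch under stated assumptions, not the ARIANNA+ implementation; the Waypoint type, the 0.5 m threshold, and the cue labels are illustrative.

```python
# Minimal sketch (not the ARIANNA+ code): following a pre-recorded virtual path
# given 3D device positions, as an ARKit-style tracker would report them.
# Waypoint, the threshold, and the cue labels are illustrative assumptions.
import math
from dataclasses import dataclass

@dataclass
class Waypoint:
    x: float
    y: float
    z: float

def distance(p, w):
    # Euclidean distance between a device position tuple and a waypoint
    return math.sqrt((p[0] - w.x) ** 2 + (p[1] - w.y) ** 2 + (p[2] - w.z) ** 2)

def guidance(position, path, on_path_threshold=0.5):
    """Return a coarse feedback cue for the current device position (metres)
    relative to the recorded virtual path (list of Waypoint)."""
    deviation = min(distance(position, w) for w in path)
    if deviation <= on_path_threshold:
        return "on_path"   # e.g. continuous haptic confirmation
    return "off_path"      # e.g. speech or sound correction cue

# Example: a straight 3 m corridor recorded as three waypoints.
path = [Waypoint(0.0, 0.0, 0.0), Waypoint(0.0, 0.0, -1.5), Waypoint(0.0, 0.0, -3.0)]
print(guidance((0.2, 0.0, -1.4), path))   # on_path
print(guidance((1.5, 0.0, -1.4), path))   # off_path
```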


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2234
Author(s):  
Sebastian Kapp ◽  
Michael Barz ◽  
Sergey Mukhametov ◽  
Daniel Sonntag ◽  
Jochen Kuhn

Currently, an increasing number of head-mounted displays (HMDs) for virtual and augmented reality (VR/AR) are equipped with integrated eye trackers. Use cases of these integrated eye trackers include rendering optimization and gaze-based user interaction. In addition, visual attention in VR and AR is of interest for applied eye-tracking research, for example in the cognitive or educational sciences. While some research toolkits for VR already exist, only a few target AR scenarios. In this work, we present an open-source eye tracking toolkit for reliable gaze data acquisition in AR, based on Unity 3D and the Microsoft HoloLens 2, as well as an R package for seamless data analysis. Furthermore, we evaluate the spatial accuracy and precision of the integrated eye tracker for fixation targets at different distances and angles to the user (n = 21). On average, we found that gaze estimates are reported with an angular accuracy of 0.83 degrees and a precision of 0.27 degrees while the user is resting, which is on par with state-of-the-art mobile eye trackers.
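
For context, accuracy and precision in this sense are usually derived from gaze samples collected while the user fixates a known target: accuracy as the mean angular offset between gaze and target directions, and precision as the sample-to-sample dispersion. The sketch below illustrates these commonly used definitions; it is an assumption-level illustration, not code from the presented toolkit or its R package.

```python
# Minimal sketch (assumed standard definitions, not the toolkit's code):
# angular accuracy and precision from gaze/target directions as 3D unit vectors.
import numpy as np

def angle_deg(u, v):
    """Angle between two 3D direction vectors, in degrees."""
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    return np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))

def accuracy_and_precision(gaze_dirs, target_dir):
    """Accuracy: mean angular offset from the fixation target.
    Precision: RMS of sample-to-sample angular distances (RMS-S2S)."""
    offsets = [angle_deg(g, target_dir) for g in gaze_dirs]
    s2s = [angle_deg(gaze_dirs[i], gaze_dirs[i + 1])
           for i in range(len(gaze_dirs) - 1)]
    accuracy = float(np.mean(offsets))
    precision = float(np.sqrt(np.mean(np.square(s2s))))
    return accuracy, precision

# Example: noisy gaze samples around a fixation target straight ahead.
rng = np.random.default_rng(0)
target = np.array([0.0, 0.0, 1.0])
samples = [target + rng.normal(scale=0.005, size=3) for _ in range(100)]
print(accuracy_and_precision(samples, target))
```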


2018 ◽  
Vol 131 ◽  
pp. 220-229 ◽  
Author(s):  
Alessandro Lapini ◽  
Francesco Borchi ◽  
Monica Carfagni ◽  
Fabrizio Argenti

2013 ◽  
Vol 60 (9) ◽  
pp. 2636-2644 ◽  
Author(s):  
Hussam Al-Deen Ashab ◽  
Victoria A. Lessoway ◽  
Siavash Khallaghi ◽  
Alexis Cheng ◽  
Robert Rohling ◽  
...  

2009 ◽  
Vol 5 (4) ◽  
pp. 415-422 ◽  
Author(s):  
Ramesh Thoranaghatte ◽  
Jaime Garcia ◽  
Marco Caversaccio ◽  
Daniel Widmer ◽  
Miguel A. Gonzalez Ballester ◽  
...  
