A Collaborative Augmented Campus Based on Location-Aware Mobile Technology

2012 ◽ Vol 10 (1) ◽ pp. 55-73
Author(s): A. De Lucia, R. Francese, I. Passero, G. Tortora

Mobile devices are changing the way people work and communicate. Most innovative devices offer the opportunity to integrate augmented reality into mobile applications, combining the real world with virtual information. This capability can be particularly useful for enhancing informal and formal didactic activities based on student collaboration. This paper describes a "collaborative campus" that originates in the physical architectural space but exposes learning content and social information structured as augmented virtual areas. We propose ACCampus, a mobile augmented reality system supporting the sharing of contextualized information. The system combines the world perceived by the phone camera with information on student location and community, enabling users to share multimedia information in location-based content areas. User localization is initially obtained by scanning QR codes; successive positions are then estimated from the mobile device's sensors. Each augmented area is uniquely associated with a representative real wall area. Selective content sharing and collaboration are supported, enabling a user to distribute his/her augmented content to specific users or groups. An evaluation of the proposed environment is also conducted, considering that learning in collaborative environments is related to perceived member contribution, enjoyment, motivation, and student participation.
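The localization scheme in the abstract — an absolute fix from a scanned QR code, then dead reckoning from the device sensors — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the anchor table, the step-detection event, and the default stride length are all assumptions.

```python
import math

# Hypothetical table mapping QR code IDs to known (x, y) positions in metres.
QR_ANCHORS = {
    "qr-hall-entrance": (0.0, 0.0),
    "qr-room-101": (12.5, 4.0),
}

class DeadReckoner:
    """Tracks position from an absolute QR fix plus per-step sensor updates."""

    def __init__(self, step_length=0.7):
        self.x = self.y = 0.0
        self.step_length = step_length  # assumed average stride, metres

    def fix_from_qr(self, code_id):
        """Absolute re-localization when the camera scans a known QR code."""
        self.x, self.y = QR_ANCHORS[code_id]

    def on_step(self, heading_deg):
        """Advance the estimate by one detected step along the compass heading
        (0 degrees = north = +y, 90 degrees = east = +x)."""
        rad = math.radians(heading_deg)
        self.x += self.step_length * math.sin(rad)
        self.y += self.step_length * math.cos(rad)
        return self.x, self.y
```

In this sketch each QR scan resets the accumulated drift, which is why periodic re-scanning matters: dead reckoning error grows with every step taken between fixes.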

2008 ◽ Vol 10 (4) ◽ pp. 585-595
Author(s): Yen-Hsu Chen, Tsorng-Lin Chia, Yeuan-Kuen Lee, Shih-Yu Huang, Ran-Zan Wang

Sensors ◽ 2021 ◽ Vol 21 (9) ◽ pp. 3061
Author(s): Alice Lo Valvo, Daniele Croce, Domenico Garlisi, Fabrizio Giuliano, Laura Giarré, ...

In recent years, we have witnessed impressive advances in augmented reality systems and computer vision algorithms based on image processing and artificial intelligence. Thanks to these technologies, mainstream smartphones are able to estimate their own motion in 3D space with high accuracy. In this paper, we exploit such technologies to support the autonomous mobility of people with visual disabilities, identifying pre-defined virtual paths and providing context information, reducing the distance between the digital and real worlds. In particular, we present ARIANNA+, an extension of ARIANNA, a system explicitly designed for the indoor and outdoor localization and navigation of visually impaired people. While ARIANNA assumes that landmarks, such as QR codes, and physical paths (composed of colored tapes, painted lines, or tactile paving) are deployed in the environment and recognized by the camera of a common smartphone, ARIANNA+ eliminates the need for any physical support thanks to the ARKit library, which we exploit to build a completely virtual path. Moreover, ARIANNA+ lets users interact more richly with the surrounding environment through convolutional neural networks (CNNs) trained to recognize objects or buildings, enabling access to content associated with them. By using a common smartphone as a mediation instrument with the environment, ARIANNA+ leverages augmented reality and machine learning to enhance physical accessibility. The proposed system allows visually impaired people to navigate easily in indoor and outdoor scenarios simply by loading a previously recorded virtual path; it then provides automatic guidance along the route through haptic, speech, and sound feedback.
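Guidance along a previously recorded virtual path, as described for ARIANNA+, can be sketched with a simple cross-track check: the device pose would come from visual-inertial tracking (ARKit in the paper), and the deviation from the recorded polyline selects the feedback cue. The waypoint format, the tolerance value, and the cue names below are illustrative assumptions, not the system's actual interface.

```python
import math

def nearest_segment_deviation(pos, path):
    """Perpendicular distance (2D) from pos to the closest segment of the
    recorded path, given as a list of (x, y) waypoints."""
    best = float("inf")
    for (ax, ay), (bx, by) in zip(path, path[1:]):
        dx, dy = bx - ax, by - ay
        # Project pos onto the segment, clamped to its endpoints.
        t = ((pos[0] - ax) * dx + (pos[1] - ay) * dy) / (dx * dx + dy * dy)
        t = max(0.0, min(1.0, t))
        px, py = ax + t * dx, ay + t * dy
        best = min(best, math.hypot(pos[0] - px, pos[1] - py))
    return best

def guidance_cue(pos, path, tolerance=0.5):
    """Choose a feedback cue: a confirming haptic pulse while on the path,
    a spoken correction once the user drifts beyond the tolerance (metres)."""
    dev = nearest_segment_deviation(pos, path)
    return "haptic:on-path" if dev <= tolerance else "speech:return-to-path"
```

Running this check on every pose update gives continuous guidance without any physical tape or tactile paving: the "path" exists only as recorded coordinates in the tracker's reference frame.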


2013 ◽ Vol 60 (9) ◽ pp. 2636-2644
Author(s): Hussam Al-Deen Ashab, Victoria A. Lessoway, Siavash Khallaghi, Alexis Cheng, Robert Rohling, ...

2009 ◽ Vol 5 (4) ◽ pp. 415-422
Author(s): Ramesh Thoranaghatte, Jaime Garcia, Marco Caversaccio, Daniel Widmer, Miguel A. Gonzalez Ballester, ...
