Evaluation of human-computer interface for optical see-through augmented reality system

Author(s):  
Qianying Wang ◽  
Dayuan Yan ◽  
Dongdong Weng ◽  
Zeyong Qi

Author(s):  
Yi-Ting Tu ◽  
Shana Smith

In this paper, we apply augmented reality technology to make the human-computer interface more capable and engaging. We present a real-time face tracking system for augmented reality, intended for electronic commerce, that gives shoppers more direct interaction with the products they purchase; real-time face tracking enhances the realism of online shopping. To keep the system convenient and practical, we use plug-in functions and Principal Component Analysis (PCA) for real-time face tracking, and neural networks (NN) to reduce training time and achieve reliable recognition. Convenience and uniqueness are the system's other main strengths.
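To make the pipeline concrete, here is a minimal sketch of the PCA ("eigenfaces") plus neural-network recognition stage the abstract describes, assuming OpenCV's bundled Haar cascade for face detection and scikit-learn for PCA and the classifier. The dataset layout, patch size, and component count are illustrative assumptions, not the authors' settings.

```python
# Sketch of PCA ("eigenfaces") + small neural-network recognition,
# in the spirit of the pipeline the abstract describes.
import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_face(gray, size=(64, 64)):
    """Detect the largest face in a grayscale frame, return it flattened."""
    faces = CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    return cv2.resize(gray[y:y+h, x:x+w], size).flatten()

# PCA reduces each 4096-pixel patch to 50 eigenface coefficients,
# which keeps the classifier small and the training time short.
pca = PCA(n_components=50, whiten=True)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)

def train(X_train, y_train):
    """X_train: one flattened face patch per row; y_train: identities."""
    clf.fit(pca.fit_transform(X_train), y_train)

def recognize(frame_bgr):
    """Return the predicted identity for the face in a video frame, if any."""
    patch = extract_face(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY))
    if patch is None:
        return None
    return clf.predict(pca.transform(patch.reshape(1, -1)))[0]
```

Running `recognize` per camera frame after a one-off `train` call is what keeps the loop real-time: the heavy work happens once in `fit`, and per-frame cost is one detection plus two small matrix products.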


2000 ◽  
Author(s):  
Marius S. Vassiliou ◽  
Venkataraman Sundareswaran ◽  
S. Chen ◽  
Reinhold Behringer ◽  
Clement K. Tam ◽  
...  

Author(s):  
João Luís Antunes ◽  
Jose Bidarra ◽  
Mauro Figueiredo

Despite all the potential of augmented reality (AR) to improve the human-computer interface (HCI) and the user experience, its usage remains below expectations. One reason may be that, until recently, AR implementations were mostly marker-based or GPS-based, triggering additional content (video, 3D, or other) over the reality identified by the camera. The research in this paper focuses on marker-less AR solutions that allow AR content to be shared between users across the Cloud, based on anchor identification. This technological paradigm shift opens up new functional environments and an unprecedented degree of HCI enrichment. Beyond the operations tied to an application's own functionality, it opens the door for media-art artists to create AR models that can be shared in a multi-user environment across the Cloud.
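The abstract does not specify the cloud API, so the following is a hypothetical Python sketch of the anchor round trip it implies: one user hosts an anchor (a pose plus attached content) and another resolves it by its identifier. All names, fields, and the in-memory "cloud" store are illustrative stand-ins for the real service, not the paper's implementation.

```python
# Hypothetical cloud-anchor round trip: host on one device, resolve on another.
import uuid
from dataclasses import dataclass

@dataclass
class CloudAnchor:
    anchor_id: str     # key shared between users
    pose: list         # 4x4 world transform, row-major
    content_url: str   # 3D model / video the artist attached

CLOUD: dict[str, CloudAnchor] = {}   # stand-in for the real cloud service

def host_anchor(pose, content_url):
    """User A anchors content to a real-world feature and uploads it."""
    anchor = CloudAnchor(str(uuid.uuid4()), pose, content_url)
    CLOUD[anchor.anchor_id] = anchor
    return anchor.anchor_id          # shared out-of-band with other users

def resolve_anchor(anchor_id):
    """User B's device looks up the anchor and places the same content."""
    return CLOUD.get(anchor_id)

# Example: both users end up rendering model.glb at the same world pose.
anchor_id = host_anchor(pose=[[1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]],
                        content_url="https://example.org/model.glb")
shared = resolve_anchor(anchor_id)
```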


Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3061
Author(s):  
Alice Lo Valvo ◽  
Daniele Croce ◽  
Domenico Garlisi ◽  
Fabrizio Giuliano ◽  
Laura Giarré ◽  
...  

In recent years, we have witnessed impressive advances in augmented reality systems and computer vision algorithms based on image processing and artificial intelligence. Thanks to these technologies, mainstream smartphones can estimate their own motion in 3D space with high accuracy. In this paper, we exploit such technologies to support the autonomous mobility of people with visual disabilities, identifying pre-defined virtual paths and providing context information, thereby reducing the distance between the digital and real worlds. In particular, we present ARIANNA+, an extension of ARIANNA, a system explicitly designed for indoor and outdoor localization and navigation by visually impaired people. While ARIANNA assumes that landmarks, such as QR codes, and physical paths (composed of colored tapes, painted lines, or tactile pavings) are deployed in the environment and recognized by the camera of a common smartphone, ARIANNA+ eliminates the need for any physical support thanks to the ARKit library, which we exploit to build a completely virtual path. Moreover, ARIANNA+ lets users interact more richly with the surrounding environment through convolutional neural networks (CNNs) trained to recognize objects or buildings, enabling access to content associated with them. Using a common smartphone as a mediation instrument with the environment, ARIANNA+ leverages augmented reality and machine learning to enhance physical accessibility. The proposed system allows visually impaired people to navigate easily in indoor and outdoor scenarios simply by loading a previously recorded virtual path; it provides automatic guidance along the route through haptic, speech, and sound feedback.
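As an illustration of the CNN recognition step, the sketch below classifies a camera frame and returns the context message tied to the recognized landmark. A pretrained torchvision ResNet stands in for the authors' trained network, and the label-to-content table is an assumption for demonstration only.

```python
# Illustrative CNN step: classify what the camera sees and return
# the context message tied to that landmark.
import torch
from torchvision import models, transforms
from PIL import Image

# Pretrained ResNet as a stand-in for the system's trained recognizer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical mapping from recognized class index to spoken content.
LANDMARK_CONTENT = {0: "Main entrance ahead.", 1: "Ticket office on the left."}

def describe(image_path):
    """Return the context message for the landmark in the camera frame."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        class_idx = model(x).argmax(dim=1).item()
    return LANDMARK_CONTENT.get(class_idx, "No known landmark recognized.")
```

In the real system the returned message would be rendered as speech or sound feedback alongside the haptic guidance on the virtual path.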

