Augmented reality (AR) applications have attracted considerable research and industry attention. Its mobile counterpart, mobile augmented reality (MAR), is one of the fastest-growing areas for AR applications in mobile environments (e.g., smartphones). Hardware improvements in smartphones, tablets, and smart glasses enable the wide adoption of mobile AR in the real world, letting users experience AR applications anywhere. However, the mobile nature of MAR applications can limit users' interaction capabilities, such as input and haptic feedback. In this survey, we analyze current research issues in human-computer interaction for haptic technologies in MAR scenarios. The survey first presents human sensing capabilities and their applicability in AR applications. We then classify haptic devices into two groups according to the triggered sense: tactile (touch, active surfaces, and mid-air) and kinesthetic (manipulandum, grasp, and exoskeleton). Because of MAR applications' mobile nature, we focus our study mainly on wearable haptic devices in each category and their possibilities for AR. To conclude, we discuss the future paths that haptic feedback should follow for MAR applications and the associated challenges.
Interaction design for Augmented Reality (AR) is gaining attention from both academia and industry. This survey discusses 260 articles (68.8% of articles published between 2015–2019) to review the field of human interaction in connected cities with emphasis on augmented reality-driven interaction. We provide an overview of Human-City Interaction and related technological approaches, followed by reviewing the latest trends of information visualization, constrained interfaces, and embodied interaction for AR headsets. We highlight under-explored issues in interface design and input techniques that warrant further research and conjecture that AR with complementary Conversational User Interfaces (CUIs) is a crucial enabler for ubiquitous interaction with immersive systems in smart cities. Our work helps researchers understand the current potential and future needs of AR in Human-City Interaction.
Mobile Augmented Reality (AR), which overlays digital content on the real-world scene surrounding a user, enables immersive interactive experiences in which the real and virtual worlds are tightly coupled. Seamless and precise AR experiences require an image recognition system that can accurately recognize the object in the camera view with low system latency. However, due to the pervasiveness and severity of image distortions, an effective and robust image recognition solution for "in the wild" mobile AR remains elusive. In this article, we present CollabAR, an edge-assisted system that provides distortion-tolerant image recognition for mobile AR with imperceptible system latency. CollabAR incorporates both distortion-tolerant and collaborative image recognition modules in its design. The former enables distortion-adaptive image recognition that improves robustness against image distortions, while the latter exploits the spatial-temporal correlation among mobile AR users to improve recognition accuracy. Moreover, as it is difficult to collect a large-scale image distortion dataset, we propose a Cycle-Consistent Generative Adversarial Network-based data augmentation method to synthesize realistic image distortions. Our evaluation demonstrates that CollabAR achieves over 85% recognition accuracy for "in the wild" images with severe distortions, while reducing the end-to-end system latency to as low as 18.2 ms.
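To make the data-augmentation idea concrete: CollabAR synthesizes distortions with a CycleGAN, but a much simpler sketch (not the paper's method, and using hypothetical helper names) can illustrate what "synthesizing image distortions for training data" means, using analytic models of two common mobile-camera distortions, additive Gaussian sensor noise and motion blur:

```python
import numpy as np

def add_gaussian_noise(img, sigma=10.0, rng=None):
    """Additive Gaussian noise: a simple model of camera sensor noise."""
    rng = rng or np.random.default_rng(0)
    noisy = img.astype(np.float64) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def add_motion_blur(img, length=7):
    """Horizontal motion blur: average each row with a 1-D box kernel,
    a crude stand-in for camera shake during exposure."""
    kernel = np.ones(length) / length
    blurred = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"),
        axis=1, arr=img.astype(np.float64))
    return np.clip(blurred, 0, 255).astype(np.uint8)

# Augment one clean (grayscale) image into several distorted variants,
# which would then be added to the recognition model's training set.
clean = np.full((8, 8), 128, dtype=np.uint8)
augmented = [add_gaussian_noise(clean), add_motion_blur(clean)]
```

A learned generator such as a CycleGAN goes further than these analytic models by producing distortions whose statistics match real "in the wild" captures, which is why the paper adopts it instead.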