How to Use Camera View Changing and Morphing Technology to Embody the Visual Representation of Metaphor in Augmented Reality

Author(s):  
Hong-Yi Pai ◽  
Chun-Ming Huang


2021 ◽ 
Vol 7 ◽  
pp. e704
Author(s):  
Wei Ma ◽  
Shuai Zhang ◽  
Jincai Huang

Unlike traditional visualization methods, augmented reality (AR) inserts virtual objects and information directly into digital representations of the real world, which makes these objects and data easier to understand and interact with. The integration of AR and GIS is a promising way to display spatial information in context. However, most existing AR-GIS applications only provide local spatial information at a fixed location, which leads to limited legibility, information clutter, and incomplete spatial relationships. In addition, indoor space structures are complex and GPS is unavailable indoors, so indoor AR systems are further impeded by their limited capacity to detect and display location and semantic information. To address these problems, camera positions are tracked by a localization technique that fuses Bluetooth Low Energy (BLE) and pedestrian dead reckoning (PDR); the multi-sensor fusion algorithm employs a particle filter. Based on the direction and position of the phone, spatial information is automatically registered onto the live camera view. The proposed algorithm extracts a bounding box of the indoor map and matches it to the real-world scene. Finally, the indoor map and semantic information are rendered into the real world, based on the spatial relationship between the indoor map and the live camera view, which is computed in real time. Experimental results demonstrate that the average positioning error of our approach is 1.47 m, and 80% of the positioning errors are within approximately 1.8 m. This positioning accuracy effectively supports the AR and indoor-map fusion technique, which links rich indoor spatial information to real-world scenes. The method is not only suitable for traditional indoor-navigation tasks, but is also promising for crowdsourced data collection and indoor map reconstruction.
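The abstract names the fusion components (BLE observations, PDR motion updates, a particle filter) without giving implementation details. The sketch below is a minimal, generic particle-filter cycle for that kind of fusion; the step length, noise levels, Gaussian BLE likelihood, and the 20 m × 20 m initialization area are illustrative assumptions, not values from the paper.

```python
import numpy as np

def pdr_predict(particles, step_length, heading, noise_std=(0.15, 0.1)):
    """Propagate each (x, y) particle by one pedestrian step with noisy length/heading."""
    n = len(particles)
    lengths = step_length + np.random.normal(0.0, noise_std[0], n)
    headings = heading + np.random.normal(0.0, noise_std[1], n)
    particles[:, 0] += lengths * np.cos(headings)
    particles[:, 1] += lengths * np.sin(headings)
    return particles

def ble_update(weights, particles, ble_fix, sigma=2.0):
    """Re-weight particles by their agreement with a BLE-derived position fix."""
    dist = np.linalg.norm(particles - ble_fix, axis=1)
    weights = weights * np.exp(-0.5 * (dist / sigma) ** 2)
    weights += 1e-300                                  # guard against all-zero weights
    return weights / weights.sum()

def resample(particles, weights):
    """Systematic resampling to counter particle degeneracy."""
    n = len(weights)
    positions = (np.arange(n) + np.random.rand()) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
    return particles[idx].copy(), np.full(n, 1.0 / n)

# One filter cycle: PDR prediction, BLE correction, resampling, position estimate.
particles = np.random.uniform(0.0, 20.0, size=(1000, 2))   # hypothetical 20 m x 20 m floor
weights = np.full(1000, 1.0 / 1000)
particles = pdr_predict(particles, step_length=0.7, heading=np.pi / 4)
weights = ble_update(weights, particles, ble_fix=np.array([5.0, 5.0]))
particles, weights = resample(particles, weights)
estimate = np.average(particles, axis=0, weights=weights)  # position used to register AR content
```

Each cycle predicts particle motion from a detected step, corrects the weights against the latest BLE fix, resamples, and takes the weighted mean as the phone position onto which the indoor map is registered.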


2021 ◽  
Author(s):  
Alex Ufkes

Augmented Reality (AR) combines a live camera view of a real-world environment with computer-generated virtual content. The two viewpoints are aligned by recognizing artificial fiducial markers or, more recently, natural features already present in the environment; these approaches are known as marker-based and markerless AR, respectively. We present a markerless AR system that is not limited to artificial markers but is capable of rendering augmentations over user-selected textured surfaces, or 'maps'. The system stores and differentiates between multiple maps, all created online. Once recognized, maps are tracked using a hybrid algorithm based on feature matching and inlier tracking. With the increasing ubiquity and capability of mobile devices, we believe it is possible to perform robust, markerless AR on current-generation tablets and smartphones. The proposed system is shown to operate in real time on mobile devices and to generate robust augmentations under a wide range of map compositions and viewing conditions.
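The hybrid tracker itself is not reproduced here. As a rough illustration of the recognition half (feature matching followed by RANSAC inlier filtering), the sketch below uses OpenCV ORB features and a homography; the detector choice, thresholds, and helper names are assumptions rather than the paper's implementation.

```python
import cv2
import numpy as np

# Assumed setup: ORB features and brute-force Hamming matching on grayscale images.
orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def register_map(map_image):
    """Extract and store keypoints/descriptors of a user-selected map (grayscale image)."""
    return orb.detectAndCompute(map_image, None)

def locate_map(frame, map_kp, map_desc, min_inliers=15):
    """Find a stored map in a live camera frame; return the map-to-frame homography or None."""
    frame_kp, frame_desc = orb.detectAndCompute(frame, None)
    if frame_desc is None:
        return None
    matches = matcher.match(map_desc, frame_desc)
    if len(matches) < min_inliers:
        return None
    src = np.float32([map_kp[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([frame_kp[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None or int(inlier_mask.sum()) < min_inliers:
        return None
    return H   # warp virtual content with H to overlay it on the frame
```

Once a map is located, the surviving inliers can be tracked frame to frame so that full matching only needs to run when tracking is lost, which is the usual motivation for a hybrid scheme on mobile hardware.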


In recent years there have been rapid advances in Web 2.0 content (text, images, and video) and in the computational capability of mobile devices. The next generation of computing is predictably in the mobile arena, and many applications are being developed to meet the needs of mobile device users. This research is driven by the question: how can we benefit from the rapidly proliferating Web 2.0 content on mobile devices given these technological developments? This paper presents ARM, a novel dynamic multimedia encyclopedia that augments reality on the mobile device and supersedes the traditional retrieval of text or maps. A group of estimation techniques and sensors is used to track objects and keep information on them. The system is dynamic in that it mines resources on the web, and we demonstrate how it can bring relevant information to users in a much more enriched manner than traditional text. ARM is distinctive in three ways: it fetches multimedia content for users automatically and dynamically updates that content from the web; it uses recent multimedia content to produce versatile answers; and new media, such as videos and additional languages, can be incorporated into the framework. The system takes advantage of the mobile camera, location, and orientation sensors to augment the live camera view and retrieve dynamic data, and a combination of algorithms is used to analyze and demonstrate ARM.
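ARM's data sources and query pipeline are not specified in the abstract. Purely as an illustration of fetching encyclopedia content dynamically from a GPS fix, the sketch below calls the public MediaWiki geosearch API; the endpoint is real, but using it (and the helper name nearby_articles) is our assumption, not necessarily what ARM does.

```python
import requests

WIKIPEDIA_API = "https://en.wikipedia.org/w/api.php"

def nearby_articles(lat, lon, radius_m=500, limit=10):
    """Query the MediaWiki geosearch API for encyclopedia entries near a GPS fix."""
    params = {
        "action": "query",
        "list": "geosearch",
        "gscoord": f"{lat}|{lon}",
        "gsradius": radius_m,      # metres; the API accepts 10-10000
        "gslimit": limit,
        "format": "json",
    }
    resp = requests.get(WIKIPEDIA_API, params=params, timeout=10)
    resp.raise_for_status()
    return [hit["title"] for hit in resp.json()["query"]["geosearch"]]

# Example: article titles near the Eiffel Tower, to be rendered over the live camera view.
titles = nearby_articles(48.8584, 2.2945)
```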


2020 ◽  
Vol 11 (12) ◽  
pp. 5809-5824
Author(s):  
Marc Schickler ◽  
Manfred Reichert ◽  
Philip Geiger ◽  
Jens Winkler ◽  
Thomas Funk ◽  
...  

Mobile applications have garnered a lot of attention in recent years. The computational capabilities of mobile devices are the mainstay for developing completely new application types, and the provision of augmented reality experiences on mobile devices is one avenue in this field. For example, in the automotive domain, augmented reality applications are used to experience, inter alia, the interior of a car by moving a mobile device around: the device's camera detects interior parts and shows additional information to the customer within the camera view. Another application type that is increasingly utilized combines serious games with mobile augmented reality functions. Although this combination is promising for many scenarios, it is technically a complex endeavor. In the AREA (Augmented Reality Engine Application) project, a kernel was implemented that enables location-based mobile augmented reality applications. Importantly, this kernel provides a flexible architecture that fosters the development of individual location-based mobile augmented reality applications. The work at hand shows the flexibility of AREA on the basis of a serious game developed with it. Furthermore, the algorithm framework and its major features are presented. The paper concludes that mobile augmented reality applications require high development effort, and that flexible frameworks like AREA are therefore crucial to developing such applications in a reasonable time.
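AREA's kernel algorithms are not reproduced here. The sketch below shows the basic geometry a location-based AR view typically needs: the bearing from the device to a point of interest is compared with the compass heading and the camera's horizontal field of view, then mapped to a screen x-coordinate. The field of view, screen width, and example coordinates are illustrative assumptions.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial bearing (degrees clockwise from north) from the device to a point of interest."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def screen_x(dev_lat, dev_lon, heading_deg, poi_lat, poi_lon,
             screen_width_px=1080, fov_deg=66.0):
    """Map a POI to a horizontal pixel position in the camera view (None if outside the FOV)."""
    delta = bearing_deg(dev_lat, dev_lon, poi_lat, poi_lon) - heading_deg
    delta = (delta + 180.0) % 360.0 - 180.0        # signed angle between view direction and POI
    if abs(delta) > fov_deg / 2.0:
        return None
    return (delta / fov_deg + 0.5) * screen_width_px

# Example: device facing north-east (heading 45 degrees) toward a nearby POI; coordinates are made up.
x = screen_x(48.3984, 9.9916, 45.0, 48.3990, 9.9925)
```

A full engine such as AREA additionally smooths the noisy compass and GPS readings and handles the vertical axis, but the heading-versus-bearing comparison above is the core of placing a location-based overlay in the camera view.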


Author(s):  
Serhii Tsyrulnyk

Modern teaching methods are implemented with information technologies that facilitate and accelerate the transfer of knowledge to students, increase student motivation, and raise the level of assimilation of information through the diversity and interactivity of its visual representation. Today there are many approaches to using augmented reality technology in education. Such learning systems can be divided into three basic groups: visualization of 3D images for a visual representation of training material; recognition and marking of real objects oriented in space; and real-time interaction between a person and a virtual object constructed in the computer (smartphone). The article describes the concept of augmented reality and focuses on the use of augmented reality technology in the educational process for training students of technical specialties. The relevance and benefits of using this technology in the educational process are outlined. The AR apps «AR Circuits 4D» and «Electricity AR» are analyzed with a view to adapting them to the training of specialists in electronics. The algorithm for creating AR apps on the Vuforia platform with the Unity3D software is described, along with the requirements for image targets, 3D models and their formats, and the Unity3D software. Practical experience in creating educational AR applications on the Vuforia platform with Unity3D is provided: an application that renders a 3D image of a DC-AC converter to illustrate educational material. The 3D model of the converter was created in the Proteus Design Suite CAD system and converted to the .fbx format with Autodesk FBX Converter. The AR app is downloaded to the Android device by any known method.


ASHA Leader ◽  
2013 ◽  
Vol 18 (9) ◽  
pp. 14-14 ◽  
Keyword(s):  

Amp Up Your Treatment With Augmented Reality


2003 ◽  
Vol 15 (2) ◽  
pp. 141-156 ◽  
Author(s):  
Ève Coste-Manière ◽ 
Louai Adhami ◽  
Fabien Mourgues ◽  
Alain Carpentier
