Consistent real-time lighting for virtual objects in augmented reality

Author(s):  
Ryan Christopher Yeoh ◽  
Steven ZhiYing Zhou


Author(s):  
Yulia Fatma ◽  
Armen Salim ◽  
Regiolina Hayami

Alongside technological development, applications can be used as a medium for learning. Augmented Reality is a technology that combines two- and three-dimensional virtual objects with a real three-dimensional environment, projecting the virtual objects in real time. In introducing the Solar System, students are invited to get to know the planets, which directly encourages them to imagine conditions in the Solar System. Textbook explanations of the planets' forms and of how the planets revolve and rotate are considered insufficient because they display objects only in 2D. In addition, students cannot directly practice arranging the layout of the planets in the Solar System. By applying Augmented Reality technology, the delivery of learning information can be clarified, because these applications combine the real world and the virtual world. Beyond presenting the material, the application also displays the planets as animated 3D objects with audio.


Author(s):  
Kevin Lesniak ◽  
Conrad S. Tucker

The method presented in this work reduces the frequency with which virtual objects incorrectly occlude real-world objects in Augmented Reality (AR) applications. Current AR rendering methods cannot properly represent occlusion between real and virtual objects because the objects are not represented in a common coordinate system. These occlusion errors can give users an incorrect perception of the environment around them when using an AR application: not knowing that a real-world object is present because a virtual object incorrectly occludes it, and misjudging depth or distance because of incorrect occlusions. The authors of this paper present a method that brings both real-world and virtual objects into a common coordinate system so that distant virtual objects do not obscure nearby real-world objects in an AR application. This method captures and processes RGB-D data in real time, allowing it to be used in a variety of environments and scenarios. A case study shows the effectiveness and usability of the proposed method in correctly occluding real-world and virtual objects and providing a more realistic representation of the combined real and virtual environments in an AR application. The results of the case study show that the proposed method can detect at least 20 real-world objects at risk of being incorrectly occluded while processing and fixing occlusion errors at least 5 times per second.
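The per-pixel depth test behind this kind of occlusion fixing can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the function name and array layout are assumptions:

```python
import numpy as np

def fix_occlusion(virtual_rgba, virtual_depth, real_depth):
    """Hide virtual pixels that lie behind real-world geometry.

    virtual_rgba : (H, W, 4) rendered virtual layer with alpha channel
    virtual_depth: (H, W) depth of each virtual pixel (metres; inf = empty)
    real_depth   : (H, W) depth from the RGB-D sensor (metres; 0 = no data)
    """
    out = virtual_rgba.copy()
    # A real surface closer to the camera than the virtual pixel occludes it.
    occluded = (real_depth > 0) & (real_depth < virtual_depth)
    out[occluded, 3] = 0  # make those virtual pixels transparent
    return out

# Toy 2x2 example: a real object at 1.0 m occludes a virtual pixel at 2.0 m.
rgba = np.ones((2, 2, 4))
vdepth = np.array([[2.0, 2.0], [np.inf, np.inf]])
rdepth = np.array([[1.0, 3.0], [0.0, 0.0]])
fixed = fix_occlusion(rgba, vdepth, rdepth)
```

Only the pixel whose sensed real depth is smaller than its virtual depth is made transparent; pixels with no sensor data are left untouched.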


2015 ◽  
Vol 75 (4) ◽  
Author(s):  
Ajune Wanis Ismail ◽  
Mark Billinghurst ◽  
Mohd Shahrizal Sunar

In this paper, we describe a new tracking approach for object handling in Augmented Reality (AR). Our approach improves the standard vision-based tracking system at the marker extraction and detection stage. It transforms a unique tracking pattern into a set of vertices that can support interactions such as translate, rotate, and copy. This is based on a robust real-time computer vision algorithm that tracks a paddle the person uses for input. A paddle pose pattern is constructed in a one-time calibration process, and through vertex-based calculation of the camera pose relative to the paddle we can show 3D graphics on top of it. This allows the user to look at virtual objects from different viewing angles in the AR interface and perform 3D object manipulation. This approach was implemented using marker-based tracking to improve tracking accuracy and robustness when manipulating 3D objects in real time. We demonstrate our improved tracking system with a sample Tangible AR application, and describe how the system could be improved in the future.
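Once the camera pose relative to the paddle has been estimated from the tracking pattern, rendering graphics "on top of" the paddle reduces to a standard pinhole projection of paddle-frame vertices. The sketch below is an illustrative simplification, not the paper's algorithm; the function and parameter names are assumptions:

```python
import numpy as np

def project_points(pts_paddle, R, t, fx, fy, cx, cy):
    """Project paddle-frame 3D vertices into the image given the
    estimated camera pose.

    pts_paddle: (N, 3) vertices in the paddle coordinate frame
    R, t      : rotation (3, 3) and translation (3,), paddle -> camera
    fx, fy    : focal lengths in pixels; cx, cy: principal point
    """
    cam = pts_paddle @ R.T + t           # rigid transform into camera frame
    u = fx * cam[:, 0] / cam[:, 2] + cx  # pinhole projection
    v = fy * cam[:, 1] / cam[:, 2] + cy
    return np.stack([u, v], axis=1)

# Identity orientation, paddle origin 1 m in front of the camera:
pts = np.array([[0.0, 0.0, 0.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 1.0])
uv = project_points(pts, R, t, fx=800, fy=800, cx=320, cy=240)
# the paddle origin projects to the principal point
```

In a full system, (R, t) would come from the marker/paddle pose estimation each frame, so the projected vertices follow the paddle as it is translated and rotated.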


2021 ◽  
Author(s):  
Ezgi Pelin Yildiz

Augmented reality is defined as a technology in which virtual objects are blended with the real world and interact with each other. Although augmented reality applications are used in many areas, one of the most important is education. AR technology allows real objects and virtual information to be combined in order to increase students' interaction with physical environments and facilitate their learning. Developing technology enables students to learn complex topics in a fun and easy way through virtual reality devices. Students interact with objects in the virtual environment and can learn more about them. For example, by organizing digital tours of a museum or zoo in a completely different country, lessons can be taught, guided by a teacher, as if the students were there at that moment. In light of all this, this study is a compilation (review) study. In this context, augmented reality technologies are introduced and attention is drawn to their use in different fields of education, with examples. As a suggestion at the end of the study, it is emphasized that educators should read the prepared sections carefully and put them into practice in their lessons. It is also pointed out that AR should be preferred for communicating effectively with students through real-time interaction, especially during the pandemic.


2017 ◽  
Vol 17 (02) ◽  
pp. e20 ◽  
Author(s):  
Kevin E. Soulier ◽  
Matías Nicolás Selzer ◽  
Martín Leonardo Larrea

In recent years, Augmented Reality has become a very popular topic, both as a research and a commercial field. This trend originated with the use of mobile devices as both computational core and display. The appearance of virtual objects and their interaction with the real world is a key element in the success of Augmented Reality software. A common issue in this type of software is the visual inconsistency between virtual and real objects due to wrong illumination. Although illumination is a common research topic in Computer Graphics, few studies have addressed real-time estimation of illumination direction. In this work we present a low-cost approach to detecting the direction of the environmental illumination, allowing virtual objects to be lit according to the real ambient light and improving the integration of the scene. Our solution is open source and based on Arduino hardware, and the presented system was developed on Android.
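One plausible way such a low-cost sensor rig could estimate light direction is to take the intensity-weighted sum of the sensors' facing normals. This is a sketch under a Lambertian assumption, not the paper's exact algorithm; the sensor layout and names are illustrative:

```python
import numpy as np

def estimate_light_direction(normals, readings):
    """Estimate the dominant light direction from several light sensors
    whose facing normals are known. Lambertian assumption: a sensor's
    reading is roughly proportional to max(0, n . L).

    normals : (N, 3) unit normals of the sensors
    readings: (N,) intensities reported by the microcontroller
    """
    d = (readings[:, None] * normals).sum(axis=0)  # intensity-weighted sum
    return d / np.linalg.norm(d)                   # unit direction

# Four sensors facing +x, -x, +y, -y; light arriving mostly from +x.
normals = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0]], float)
readings = np.array([0.9, 0.0, 0.3, 0.3])
L = estimate_light_direction(normals, readings)
```

The estimated direction can then be fed to the renderer as a directional light so virtual objects are shaded consistently with the real scene.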


2018 ◽  
Vol 8 (10) ◽  
pp. 1860 ◽  
Author(s):  
Joolekha Joolee ◽  
Md Uddin ◽  
Jawad Khan ◽  
Taeyeon Kim ◽  
Young-Koo Lee

Mobile Augmented Reality merges virtual objects with the real world on mobile devices, while video retrieval brings out similar-looking videos from a large-scale video dataset. Since mobile augmented reality applications demand real-time interaction and operation, we need to process and interact in real time. Furthermore, augmented-reality virtual objects can be poorly textured. To resolve these issues, in this research we propose a novel, fast, and robust approach for retrieving videos in a mobile augmented reality environment using image and video queries. First, top-K key-frames are extracted from the videos, which significantly increases efficiency. Secondly, we introduce a novel frame-based feature extraction method, the Pyramid Ternary Histogram of Oriented Gradient (PTHOG), to extract shape features from the virtual objects effectively and efficiently. Thirdly, we utilize Double-Bit Quantization (DBQ)-based hashing to accomplish the nearest-neighbor search efficiently, producing a candidate list of videos. Lastly, a similarity measure is applied to re-rank the videos obtained from the candidate list. An extensive experimental analysis is performed to verify our claims.
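As a minimal sketch of the Double-Bit Quantization idea used in the hashing step (illustrative thresholds and encoding, not the paper's exact scheme), each feature dimension is split into three regions by two thresholds and encoded with two bits so that the Hamming distance between codes grows with region separation:

```python
import numpy as np

def dbq_encode(x, lo, hi):
    """Double-Bit Quantization sketch: each dimension falls into one of
    three regions and is encoded with two bits
    (left -> 01, middle -> 00, right -> 10), so adjacent regions differ
    by 1 bit and the two outer regions differ by 2 bits.
    """
    bits = []
    for v in x:
        if v < lo:
            bits += [0, 1]
        elif v <= hi:
            bits += [0, 0]
        else:
            bits += [1, 0]
    return np.array(bits, dtype=np.uint8)

def hamming(a, b):
    """Hamming distance between two binary codes."""
    return int(np.count_nonzero(a != b))

q = dbq_encode([-0.8, 0.1, 0.9], lo=-0.5, hi=0.5)   # regions: left, mid, right
db = dbq_encode([-0.7, 0.2, 0.8], lo=-0.5, hi=0.5)  # same regions as q
```

In a retrieval pipeline, database codes within a small Hamming radius of the query code would form the candidate list that is then re-ranked by the similarity measure.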


Author(s):  
Hiroyuki Mitsuhara ◽  
Keisuke Iguchi ◽  
Masami Shishibori

Disaster education focusing on how to take immediate action after a disaster strikes is essential for protecting lives. However, children find such disaster education difficult to understand. Rather than delivering disaster education to children directly, adults should properly instruct them to take immediate action in the event of a disaster. We refer to such adults as Immediate-Action Commanders (IACers) and attach importance to technology-enhanced IACer training programs with high situational and audio-visual realism. To realize such programs, we focused on digital games, augmented reality (AR), and head-mounted displays (HMDs). We prototyped three AR systems that superimpose interactive virtual objects onto an HMD's real-time view, or onto a trainee's actual view, based on interactive fictional scenarios. In addition, the systems are designed to support voice-based interactions between the virtual objects (i.e., virtual children) and the trainee. According to a brief comparative survey, the AR system equipped with a smartphone-based binocular opaque HMD (Google Cardboard) is the most promising practical system for technology-enhanced IACer training programs.


Author(s):  
Yuzhu Lu ◽  
Shana Smith

In this paper, we present a prototype system, which uses CAVE-based virtual reality to enhance immersion in an augmented reality environment. The system integrates virtual objects into a real scene captured by a set of stereo remote cameras. We also present a graphic processing unit (GPU)-based method for computing occlusion between real and virtual objects in real time. The method uses information from the captured stereo images to determine depth of objects in the real scene. Results and performance comparisons show that the GPU-based method is much faster than prior CPU-based methods.
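The depth-from-stereo step underlying this occlusion computation can be sketched with the standard rectified-stereo relation Z = f·B/d. The function below is an illustrative CPU sketch of that relation, not the authors' GPU implementation:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth of a real-scene point from stereo disparity, as needed to
    decide per-pixel occlusion between real and virtual objects.

    For a rectified stereo pair: Z = f * B / d, where f is the focal
    length in pixels, B the camera baseline in metres, and d the
    disparity in pixels.
    """
    if disparity_px <= 0:
        return float("inf")  # no stereo match: treat as infinitely far
    return focal_px * baseline_m / disparity_px

# f = 700 px, baseline 10 cm, disparity 35 px -> 2 m
z = depth_from_disparity(35, 700, 0.10)
```

A GPU version would evaluate this per pixel over the disparity map and compare the result against the virtual object's depth buffer to resolve occlusion.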


Author(s):  
Pranav Jain ◽  
Conrad Tucker

Abstract In this paper, a mobile-based augmented reality (AR) method is presented that is capable of accurate occlusion between digital and real-world objects in real time. AR occlusion is the process of hiding or showing virtual objects behind physical ones. Existing approaches that address occlusion in AR applications typically require markers or depth sensors, coupled with compute machines (e.g., a laptop or desktop). Furthermore, real-world environments are cluttered and contain motion artifacts that result in occlusion errors and improperly rendered virtual objects relative to the real-world environment. These occlusion errors can give users an incorrect perception of the environment around them while using an AR application, namely not knowing that a real-world object is present. Moving the technology to mobile-based AR environments is necessary to reduce the cost and complexity of these technologies. This paper presents a mobile-based AR method that brings real and virtual objects into a similar coordinate system so that virtual objects do not obscure nearby real-world objects in an AR environment. This method captures and processes visual data in real time, allowing it to be used in a variety of non-static environments and scenarios. The results of the case study show that the method has the potential to reduce compute complexity, maintain high frame rates to run in real time, and maintain occlusion efficacy.

