Object-based illumination transferring and rendering for applications of mixed reality

Author(s): Di Xu, Zhen Li, Qi Cao

Abstract: In applications of augmented reality or mixed reality, rendering virtual objects in real scenes with consistent illumination is crucial for a realistic visualization experience. Prior learning-based methods reported in the literature usually attempt to reconstruct a complicated high dynamic range environment map from limited input and rely on a separate rendering pipeline to light the virtual object. In this paper, an object-based illumination transferring and rendering algorithm is proposed to tackle this problem within a unified framework. Given a single low dynamic range image, instead of recovering the lighting environment of the entire scene, the proposed algorithm directly infers the relit virtual object by transferring implicit illumination features extracted from its nearby planar surfaces. A generative adversarial network is adopted for implicit illumination feature extraction and transferring. Compared to previous works in the literature, the proposed algorithm is more robust, as it efficiently recovers spatially varying illumination in both indoor and outdoor scene environments. Quantitative and qualitative experiments and comparisons in different environments show notable results, demonstrating the effectiveness and robustness of the proposed algorithm for realistic virtual object insertion and improved realism.
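To make the unified framework concrete, here is a minimal PyTorch sketch of such a relighting generator: one encoder extracts implicit illumination features from an LDR crop of a planar surface near the insertion point, another encodes an unlit rendering of the virtual object, and a decoder fuses both into the relit object. All layer sizes, names, and the fusion scheme are illustrative assumptions rather than the authors' implementation; the adversarial discriminator and training losses are omitted.

```python
# A minimal sketch of an illumination-transfer generator (assumed
# architecture, not the paper's code). PyTorch is assumed.
import torch
import torch.nn as nn

class IlluminationTransferGenerator(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        # Encodes a planar-surface patch into implicit illumination
        # features instead of an explicit environment map.
        self.surface_encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim * 2, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Encodes the virtual object rendered without scene lighting.
        self.object_encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim * 2, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Fuses both feature maps and decodes the relit object image.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(feat_dim * 4, feat_dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(feat_dim, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, surface_patch, unlit_object):
        illum_feat = self.surface_encoder(surface_patch)
        obj_feat = self.object_encoder(unlit_object)
        return self.decoder(torch.cat([illum_feat, obj_feat], dim=1))

# Example: relight a 128x128 object from a 128x128 surface crop.
gen = IlluminationTransferGenerator()
surface = torch.rand(1, 3, 128, 128)  # LDR crop near the insertion point
unlit = torch.rand(1, 3, 128, 128)    # object without scene lighting
relit = gen(surface, unlit)           # (1, 3, 128, 128) relit object
```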

2021, Vol. 15
Author(s): Lakshmi Annamalai, Anirban Chakraborty, Chetan Singh Thakur

Event-based cameras are bio-inspired novel sensors that asynchronously record changes in illumination in the form of events. This principle yields significant advantages over conventional cameras, such as low power consumption, high dynamic range, and no motion blur. Moreover, by design, such cameras encode only the relative motion between the scene and the sensor, not the static background, yielding a very sparse data structure. In this paper, we leverage these advantages of an event camera for a critical vision application: video anomaly detection. We propose an anomaly detection solution in the event domain with a conditional Generative Adversarial Network (cGAN) built from sparse submanifold convolution layers. Video analytics tasks such as anomaly detection depend on the motion history at each pixel. To enable this, we also put forward a generic unsupervised deep learning solution to learn a novel memory surface, termed the Deep Learning (DL) memory surface, which encodes the temporal information readily available from these sensors while retaining the sparsity of event data. Since no dataset exists for anomaly detection in the event domain, we also provide an anomaly detection event dataset with a set of anomalies. We empirically validate our anomaly detection architecture, composed of sparse convolutional layers, on the proposed dataset and an online dataset. Careful analysis of the anomaly detection network reveals that the presented method achieves a massive reduction in computational complexity with good performance compared to previous state-of-the-art conventional frame-based anomaly detection networks.
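As a point of reference for the learned DL memory surface, the sketch below builds the classical hand-crafted analogue it generalizes: an exponentially decaying time surface that encodes per-pixel motion history from raw (x, y, t, polarity) events while staying zero wherever no events occurred. The decay constant and array shapes are illustrative assumptions, not the paper's learned representation.

```python
# Hand-crafted exponentially decaying time surface: the classical
# analogue of a learned memory surface (illustrative assumption).
import numpy as np

def time_surface(events, height, width, t_ref, tau=50e-3):
    """events: (N, 4) array of (x, y, t, polarity); t in seconds."""
    surface = np.zeros((height, width), dtype=np.float32)
    last_t = np.full((height, width), -np.inf, dtype=np.float32)
    for x, y, t, _ in events:
        last_t[int(y), int(x)] = t  # most recent timestamp per pixel
    active = np.isfinite(last_t)    # pixels that saw at least one event
    # Recent events decay toward zero with time constant tau.
    surface[active] = np.exp(-(t_ref - last_t[active]) / tau)
    return surface  # sparse in content: zero where no events occurred

# Example: three events on a 4x4 sensor, queried at t_ref = 0.10 s.
ev = np.array([[0, 0, 0.02, 1], [1, 2, 0.08, -1], [3, 3, 0.10, 1]])
print(time_surface(ev, 4, 4, t_ref=0.10))
```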


Author(s):  
Anderson Stephanie

Low latency, high temporal resolution, and high dynamic range are just a few of the benefits of event cameras over conventional cameras. Existing methods and algorithms cannot be applied directly because event cameras output streams of asynchronous events rather than precise pixel intensities; as a result, reconstructing intensity images from events for other vision tasks is difficult. In this article, we use event-camera-based conditional deep convolutional networks to reconstruct images and videos from a variable-length window of the event data stream. The network is designed to reconstruct visuals from spatio-temporal intensity changes, taking packets of spatio-temporal event coordinates as input. We demonstrate the ability of event cameras to produce high dynamic range (HDR) images even in extreme lighting conditions, as well as blur-free images under rapid motion. Furthermore, because event cameras respond on the order of microseconds, we show that very high frame rate video can be generated, conceivably up to one million frames per second. The proposed algorithms are evaluated against intensity images recorded on the same pixel grid as the events, using publicly available real data and synthetic datasets generated with an event camera simulator.
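For context, a common way such reconstruction networks ingest events is to bin a window of the stream into a spatio-temporal voxel grid; the sketch below shows that standard preprocessing step. It is an assumed input encoding for illustration, not necessarily the exact one used in this work.

```python
# Bin a window of (x, y, t, polarity) events into a spatio-temporal
# voxel grid, a standard network input encoding (assumed here).
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """events: (N, 4) array of (x, y, t, polarity) with t in seconds."""
    grid = np.zeros((num_bins, height, width), dtype=np.float32)
    t = events[:, 2]
    # Normalize this window's timestamps to [0, num_bins - 1].
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9) * (num_bins - 1)
    for (x, y, _, p), tb in zip(events, t_norm):
        # Polarity-signed accumulation preserves the direction of the
        # brightness change at each pixel.
        grid[int(tb), int(y), int(x)] += 1.0 if p > 0 else -1.0
    return grid

# Example: a 5-bin grid for a 4x4 sensor over a 10 ms event window.
ev = np.array([[0, 0, 0.000, 1], [1, 2, 0.004, -1], [3, 3, 0.010, 1]])
print(events_to_voxel_grid(ev, num_bins=5, height=4, width=4).shape)  # (5, 4, 4)
```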


2021, Vol. 13 (2), pp. 1-10
Author(s): Xinglin Hou, Junchao Zhang, Peipei Zhou

Author(s): Pei-Ying Lu, Tz-Huan Huang, Meng-Sung Wu, Yi-Ting Cheng, Yung-Yu Chuang

2021, Vol. 23, pp. 176-188
Author(s): Yifei Huang, Sheng Qiu, Changbo Wang, Chenhui Li
