Segmentation of Illuminated Areas of Light Using CNN and Large-Scale RGB+D Dataset for Augmented and Mixed Reality Systems
This work is devoted to the problem of realistic rendering for augmented and mixed reality systems. Locating the light sources and restoring the correct distribution of scene brightness are key requirements for correct interaction between the virtual and real worlds. With the advent of datasets such as "Large-Scale RGB+D," it became possible to train neural networks to estimate the depth map of an image, which is essential for working with the environment in real time. In addition, in this work convolutional neural networks were trained on a synthesized dataset with realistic lighting. The results of the proposed methods are presented: the accuracy of restoring the positions of the light sources is estimated, along with the visual difference between the image of the scene rendered with the original light sources and the same scene rendered with the reconstructed ones. The method's speed allows it to be used in real-time AR/VR systems.
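To make the segmentation task concrete, the following is a minimal illustrative sketch, not the authors' network: it mimics a single convolutional layer (a 3x3 box filter implemented by hand in NumPy) followed by a luminance threshold to produce a binary mask of illuminated pixels in an RGB+D image. The function names, the 0.6 threshold, and the toy scene are assumptions for illustration only.

```python
import numpy as np

def conv2d(img, kernel):
    """Naive 'valid' 2D convolution of a single-channel image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def segment_illuminated(rgbd, threshold=0.6):
    """rgbd: H x W x 4 array (RGB in [0, 1] plus depth).

    Returns a binary mask of illuminated areas. A hypothetical
    stand-in for a trained CNN, not the paper's architecture.
    """
    rgb = rgbd[..., :3]
    luminance = rgb @ np.array([0.299, 0.587, 0.114])  # Rec. 601 weights
    smoothed = conv2d(luminance, np.full((3, 3), 1 / 9.0))  # 3x3 box filter
    return smoothed > threshold

# Toy scene: a dark 8x8 image with a bright 4x4 patch standing in
# for an illuminated area, over a constant depth plane.
scene = np.zeros((8, 8, 4))
scene[..., 3] = 1.0          # constant depth channel
scene[2:6, 2:6, :3] = 1.0    # illuminated patch
mask = segment_illuminated(scene)
print(mask.shape, int(mask.sum()))  # → (6, 6) 12
```

A real system would replace the hand-written filter with a CNN trained on RGB+D data, but the input/output contract (four-channel image in, per-pixel mask out) is the same.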