Investigation of Single Image Depth Prediction Under Different Lighting Conditions

2021 · Vol 14 (4) · pp. 1-17
Author(s):  
Aufaclav Zatu Kusuma Frisky ◽  
Agus Harjoko ◽  
Lukman Awaludin ◽  
Sebastian Zambanini ◽  
Robert Sablatnig

This article investigates the limitations of single image depth prediction (SIDP) under different lighting conditions. It also offers a new approach for obtaining the ideal condition for SIDP. To satisfy the data requirement, we exploit a photometric stereo dataset consisting of several images of an object under different light properties. In this work, we used a dataset of ancient Roman coins captured under 54 different lighting conditions to illustrate how the approach is affected by them. This dataset emulates many lighting variations, with different states of shading and reflectance common in natural environments. The ground truth depth data in the dataset was obtained using the photometric stereo method and used as training data. We investigated the capabilities of three different state-of-the-art methods to reconstruct ancient Roman coins under different lighting scenarios. The first investigation compares the performance of a given network using previously trained data to check cross-domain performance. Second, the model is fine-tuned from pre-trained weights and trained using 70% of the ancient Roman coin dataset. Both models are tested on the remaining 30% of the data. As evaluation metrics, root mean square error and visual inspection are used. As a result, the methods show different characteristic results depending on the lighting condition of the test data. Overall, they perform better at 51° and 71° angles of light, referred to as the ideal condition hereafter. However, they perform worse at 13° and 32° because of the high density of shadows. They also cannot reach the best performance at 82° because of reflections appearing in the image. Based on these findings, we propose a new approach that reduces shadows and reflections in the image using intrinsic image decomposition to achieve a synthetic ideal condition. The results on synthetic images show that this approach can enhance the performance of SIDP. For some state-of-the-art methods, it also achieves better results than the original RGB images.
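The abstract's quantitative metric is root mean square error between predicted and ground-truth depth maps. A minimal sketch of that metric is below; the optional validity mask is a hypothetical convenience, not something the paper specifies.

```python
import numpy as np

def depth_rmse(pred, gt, mask=None):
    """Root mean square error between a predicted and a ground-truth depth map.

    A minimal sketch of the RMSE metric the abstract reports. The optional
    boolean `mask` (hypothetical) restricts evaluation to valid depth pixels.
    """
    pred = np.asarray(pred, dtype=np.float64)
    gt = np.asarray(gt, dtype=np.float64)
    if mask is not None:
        pred, gt = pred[mask], gt[mask]
    return float(np.sqrt(np.mean((pred - gt) ** 2)))
```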

Author(s):  
Jiaojiao LENG ◽  
Tongzhou ZHAO ◽  
Hui LI ◽  
Xiang LI

Author(s):  
Carl Toft ◽  
Daniyar Turmukhambetov ◽  
Torsten Sattler ◽  
Fredrik Kahl ◽  
Gabriel J. Brostow

Author(s):  
Yueying Kao ◽  
Weiming Li ◽  
Zairan Wang ◽  
Dongqing Zou ◽  
Ran He ◽  
...  

Automatic object viewpoint estimation from a single image is an important but challenging problem in the machine intelligence community. Although impressive performance has been achieved, current state-of-the-art methods still have difficulty dealing with the visual ambiguity and structure ambiguity in real-world images. To tackle these problems, this paper proposes a novel Appearance-and-Structure Fusion network, called ASFnet, which estimates viewpoint by fusing both appearance and structure information. The structure information is encoded by precise semantic keypoints and helps address the visual ambiguity. Meanwhile, distinguishable appearance features contribute to overcoming the structure ambiguity. ASFnet integrates an appearance path and a structure path into an end-to-end network and allows deep features to effectively share supervision from both complementary aspects. A convolutional layer is learned to fuse the two path results adaptively. To balance the influence of the two supervision sources, a piecewise loss weight strategy is employed during training. Experimentally, the proposed network outperforms state-of-the-art methods on the public PASCAL 3D+ dataset, which verifies the effectiveness of the method and further corroborates the above proposition.
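The piecewise loss weight strategy mentioned above can be sketched as a step schedule over training epochs. The boundaries and weight values below are purely illustrative assumptions, not the values from the paper.

```python
def piecewise_loss_weight(epoch, boundaries=(10, 30), weights=(1.0, 0.5, 0.1)):
    """Piecewise-constant weight for the structure-path loss.

    Hypothetical schedule: weight is weights[i] while epoch < boundaries[i],
    and the last weight once all boundaries are passed. The specific values
    are illustrative, not taken from the ASFnet paper.
    """
    for boundary, weight in zip(boundaries, weights):
        if epoch < boundary:
            return weight
    return weights[-1]

def total_loss(appearance_loss, structure_loss, epoch):
    """Combine the two supervision sources with the scheduled weight."""
    return appearance_loss + piecewise_loss_weight(epoch) * structure_loss
```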


Author(s):  
Biao Duan ◽  
Jing Li ◽  
Huaimin Chen ◽  
Yi Ru ◽  
Ze Zhang

This paper focuses on dehazing a single image captured at nighttime. Current state-of-the-art nighttime dehazing approaches usually suffer from color shift because the assumptions that hold in daytime cannot be applied directly to nighttime images. Classical dehazing methods try to estimate the transmission map and the atmospheric light to dehaze a single image. The basic idea here is to first separate the light layer from the hazy image, after which the transmission map can be computed. A new layer separation method is proposed to solve the non-global atmospheric light problem. The method is validated on several real datasets, demonstrating its superior performance.
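The classical dehazing formulation referenced above is built on the standard haze imaging model I = J·t + A·(1 − t). Once a transmission map t and atmospheric light A have been estimated (the hard part, which this paper tackles via layer separation), the scene radiance J is recovered by inverting the model. A minimal sketch, assuming t and A are already given:

```python
import numpy as np

def dehaze(I, t, A, t_min=0.1):
    """Invert the standard haze model I = J*t + A*(1 - t) to recover J.

    A minimal sketch that assumes the transmission map `t` and atmospheric
    light `A` are already estimated; `t` is clamped to `t_min` to avoid
    amplifying noise where the haze is densest.
    """
    I = np.asarray(I, dtype=np.float64)
    t = np.clip(np.asarray(t, dtype=np.float64), t_min, 1.0)
    if I.ndim == 3 and t.ndim == 2:
        t = t[..., None]  # broadcast the transmission over color channels
    J = (I - A) / t + A
    return np.clip(J, 0.0, 1.0)
```

Note that the paper's contribution is precisely that A is not a single global constant at night; in this sketch A may equally be passed as a spatially varying array, which broadcasts through the same formula.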


Author(s):  
Xufeng Guo ◽  
Kien Nguyen ◽  
Simon Denman ◽  
Clinton Fookes ◽  
Sridha Sridharan
