Physics-based Shadow Image Decomposition for Shadow Removal

Author(s):  
Hieu Le ◽  
Dimitris Samaras
2019 ◽  
Vol 35 (6-8) ◽  
pp. 1091-1104 ◽  
Author(s):  
Ling Zhang ◽  
Qingan Yan ◽  
Yao Zhu ◽  
Xiaolong Zhang ◽  
Chunxia Xiao

2015 ◽  
Vol 734 ◽  
pp. 568-571
Author(s):  
Qi Chen ◽  
Xing Ben Yang ◽  
Lei Jin

In this paper, we present a novel method for shadow detection and removal in single images. Instead of a single Gaussian distribution, the shadow detection stage assumes a Gaussian Mixture Shadow Model (GMSM) whose parameters are estimated by model learning. In addition to considering individual regions separately, we predict the relative illumination conditions between shadow and non-shadow regions. The shadow image is recovered by relighting each pixel based on our paired lighting model. The experimental results confirm the effectiveness of the proposed method.
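The abstract's key idea is to model pixel intensities in candidate shadow regions with a Gaussian mixture rather than a single Gaussian. As an illustrative sketch only (not the paper's actual GMSM formulation), a two-component 1-D mixture fit by EM can separate dark (shadow) pixels from lit pixels; all function names and parameters here are hypothetical:

```python
import numpy as np

def fit_gmm_1d(x, n_iter=100):
    """Fit a 2-component 1-D Gaussian mixture with EM (illustrative
    stand-in for a Gaussian Mixture Shadow Model: the darker component
    models shadow pixels, the brighter one lit pixels).
    Returns (weights, means, stds)."""
    # Initialize means at the data extremes so components separate.
    mu = np.array([x.min(), x.max()], dtype=float)
    sigma = np.full(2, x.std() + 1e-6)
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: per-pixel responsibilities of each component.
        d = (x[:, None] - mu[None, :]) / sigma[None, :]
        log_p = -0.5 * d**2 - np.log(sigma[None, :]) + np.log(pi[None, :])
        log_p -= log_p.max(axis=1, keepdims=True)  # numerical stability
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances.
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu[None, :]) ** 2).sum(axis=0) / nk
        sigma = np.sqrt(var) + 1e-6
    return pi, mu, sigma

def shadow_mask(x, pi, mu, sigma):
    """Label a pixel as shadow when the darker component is more likely."""
    dark = int(np.argmin(mu))
    d = (x[:, None] - mu[None, :]) / sigma[None, :]
    log_p = -0.5 * d**2 - np.log(sigma[None, :]) + np.log(pi[None, :])
    return np.argmax(log_p, axis=1) == dark
```

Applied to the intensities of a region with a cast shadow, the darker mixture component then provides a per-pixel shadow probability, which is the role the GMSM plays in the detection stage.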


2014 ◽  
Vol 35 (5) ◽  
pp. 1190-1195
Author(s):  
Jian Bai ◽  
Xiang-chu Feng ◽  
Xu-dong Wang

Author(s):  
Hwanbok Mun ◽  
Gang-Joon Yoon ◽  
Jinjoo Song ◽  
Sang Min Yoon

2021 ◽  
Vol 183 ◽  
pp. 107986
Author(s):  
Yun Liu ◽  
Anzhi Wang ◽  
Hao Zhou ◽  
Pengfei Jia

Author(s):  
Anil S. Baslamisli ◽  
Partha Das ◽  
Hoang-An Le ◽  
Sezer Karaoglu ◽  
Theo Gevers

Abstract
In general, intrinsic image decomposition algorithms interpret shading as one unified component including all photometric effects. As shading transitions are generally smoother than reflectance (albedo) changes, these methods may fail to distinguish strong photometric effects from reflectance variations. Therefore, in this paper, we propose to decompose the shading component into direct (illumination) and indirect (ambient light and shadows) shading subcomponents. The aim is to distinguish strong photometric effects from reflectance variations. An end-to-end deep convolutional neural network (ShadingNet) is proposed that operates in a fine-to-coarse manner with a specialized fusion and refinement unit exploiting the fine-grained shading model. It is designed to learn specific reflectance cues separated from specific photometric effects to analyze the disentanglement capability. A large-scale dataset of scene-level synthetic images of outdoor natural environments is provided with fine-grained intrinsic image ground-truths. Large-scale experiments show that our approach using fine-grained shading decompositions outperforms state-of-the-art algorithms utilizing unified shading on NED, MPI Sintel, GTA V, IIW, MIT Intrinsic Images, 3DRMS and SRD datasets.
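The fine-grained model splits the unified shading term into direct and indirect parts. A minimal numpy sketch of this image formation, assuming a simple multiplicative model I = A · (S_dir + S_ind) with made-up toy arrays (not the paper's network or exact model):

```python
import numpy as np

# Toy 4x4 single-channel scene; all values are hypothetical stand-ins.
albedo = np.full((4, 4), 0.6)    # reflectance A
direct = np.ones((4, 4))         # direct illumination S_dir
direct[:, 2:] = 0.1              # a hard cast shadow on the right half
ambient = np.full((4, 4), 0.2)   # indirect/ambient light S_ind

# Fine-grained model: image = albedo * (direct + indirect shading).
image = albedo * (direct + ambient)

# A unified-shading model folds everything into one term S = I / A, so a
# strong shadow edge looks just like a reflectance edge. Keeping the split
# lets the shadow stay attributable to illumination rather than albedo.
unified_shading = image / albedo
recovered_direct = unified_shading - ambient
assert np.allclose(recovered_direct, direct)
```

The point of the sketch is that once shading is separated into the two subcomponents, a hard shadow boundary lives entirely in the direct term, which is the disentanglement the abstract describes.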

