Rich intrinsic image decomposition of outdoor scenes from multiple views

Author(s): Pierre-Yves Laffont, Adrien Bousseau, George Drettakis
2014, Vol 35 (5), pp. 1190-1195
Author(s): Jian Bai, Xiang-chu Feng, Xu-dong Wang

1999
Author(s): Martin A. Fischler, Robert C. Bolles

Author(s): Hwanbok Mun, Gang-Joon Yoon, Jinjoo Song, Sang Min Yoon

2021, Vol 183, pp. 107986
Author(s): Yun Liu, Anzhi Wang, Hao Zhou, Pengfei Jia

Author(s): Anil S. Baslamisli, Partha Das, Hoang-An Le, Sezer Karaoglu, Theo Gevers

Abstract: In general, intrinsic image decomposition algorithms interpret shading as a single unified component that includes all photometric effects. Because shading transitions are generally smoother than reflectance (albedo) changes, these methods may fail to distinguish strong photometric effects from reflectance variations. In this paper, we therefore propose to decompose the shading component into direct (illumination) and indirect (ambient light and shadows) shading subcomponents, with the aim of separating strong photometric effects from reflectance variations. We propose an end-to-end deep convolutional neural network (ShadingNet) that operates in a fine-to-coarse manner with a specialized fusion and refinement unit exploiting the fine-grained shading model. It is designed to learn specific reflectance cues separated from specific photometric effects, allowing us to analyze its disentanglement capability. We also provide a large-scale dataset of scene-level synthetic images of outdoor natural environments with fine-grained intrinsic image ground truths. Large-scale experiments show that our approach using fine-grained shading decompositions outperforms state-of-the-art algorithms that use unified shading on the NED, MPI Sintel, GTA V, IIW, MIT Intrinsic Images, 3DRMS and SRD datasets.
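
The abstract above refines the usual reflectance/shading factorization by splitting shading into direct and indirect parts. Below is a minimal sketch of such a fine-grained image-formation model; the composition I = R · (S_direct + S_indirect) and all names are illustrative assumptions, not the paper's exact formulation or code.

```python
import numpy as np

def compose_image(reflectance, shading_direct, shading_indirect):
    """Recompose an image from fine-grained intrinsic components.

    Assumes the common multiplicative intrinsic model I = R * S,
    with shading S split into a direct-illumination part and an
    indirect part (ambient light and shadows), as a toy stand-in
    for the fine-grained shading model described in the abstract.
    """
    shading = shading_direct + shading_indirect  # unified shading
    return reflectance * shading                 # per-pixel product

# Toy usage with random components of shape (H, W, 3) and (H, W, 1)
H, W = 4, 4
albedo = np.random.rand(H, W, 3)            # reflectance
s_direct = np.random.rand(H, W, 1)          # direct illumination
s_indirect = 0.2 * np.random.rand(H, W, 1)  # ambient light + shadows
image = compose_image(albedo, s_direct, s_indirect)
print(image.shape)  # (4, 4, 3)
```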


2021, Vol 11 (1)
Author(s): Sandro L. Wiesmann, Laurent Caplette, Verena Willenbockel, Frédéric Gosselin, Melissa L.-H. Võ

Abstract: Human observers can quickly and accurately categorize scenes. This remarkable ability is related to the use of information at different spatial frequencies (SFs) following a coarse-to-fine pattern: low SFs, conveying coarse layout information, are thought to be used earlier than high SFs, which represent more fine-grained information. Alternatives to this pattern have rarely been considered. Here, we probed all possible SF usage strategies randomly, with high resolution in both the SF and time dimensions, at two categorization levels. We show that correct basic-level categorizations of indoor scenes are linked to the sampling of relatively high SFs, whereas correct outdoor scene categorizations are predicted by an early use of high SFs and a later use of low SFs (a fine-to-coarse pattern of SF usage). Superordinate-level categorizations (indoor vs. outdoor scenes) rely on lower SFs early on, followed by a shift to higher SFs and a subsequent shift back to lower SFs in late stages. In summary, our results show no consistent pattern of SF usage across tasks and only partially replicate the diagnostic SFs found in previous studies. We therefore propose that the SF sampling strategies of observers differ with varying stimulus and task characteristics, thus favouring the notion of flexible SF usage.
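
To illustrate what the low- versus high-SF bands referred to above contain, here is a minimal sketch that splits an image into a coarse (low-SF) band and a fine (high-SF) residual using a Gaussian blur; the cutoff sigma and function names are arbitrary assumptions, not parameters or code from the study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_spatial_frequencies(image, sigma=4.0):
    """Split a grayscale image into low- and high-SF bands.

    The low-SF band (Gaussian-blurred image) carries coarse scene
    layout; the residual high-SF band carries fine-grained detail.
    """
    low_sf = gaussian_filter(image, sigma=sigma)
    high_sf = image - low_sf
    return low_sf, high_sf

# Toy usage on a random 128x128 "scene"
scene = np.random.rand(128, 128)
coarse, fine = split_spatial_frequencies(scene, sigma=4.0)
print(coarse.shape, fine.shape)  # (128, 128) (128, 128)
```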

