Texture Boundaries
Recently Published Documents


TOTAL DOCUMENTS: 19 (five years: 11)

H-INDEX: 5 (five years: 1)

2021 · Vol 11 (1)
Author(s): Christopher DiMattina, Curtis L. Baker

Abstract: Segmenting scenes into distinct surfaces is a basic visual perception task, and luminance differences between adjacent surfaces often provide an important segmentation cue. However, mean luminance differences between two surfaces may exist without any sharp change in albedo at their boundary, arising instead from differences in the proportion of small light and dark areas (e.g., texture elements) within each surface; we refer to such a boundary as a luminance texture boundary. Here we investigate the performance of human observers segmenting luminance texture boundaries. We demonstrate that a simple model involving a single stage of filtering cannot explain observer performance unless it incorporates contrast normalization. Performing additional experiments in which observers segment luminance texture boundaries while ignoring superimposed luminance step boundaries, we demonstrate that the one-stage model, even with contrast normalization, cannot explain performance. We then present a Filter-Rectify-Filter (FRF) model positing two cascaded stages of filtering, which fits our data well and explains observers' ability to segment luminance texture boundary stimuli in the presence of interfering luminance step boundaries. We propose that such computations may be useful for boundary segmentation in natural scenes, where shadows often give rise to luminance step edges that do not correspond to surface boundaries.
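The Filter-Rectify-Filter cascade described above can be sketched as follows. This is a minimal NumPy/SciPy illustration, not the authors' implementation: the filter scales, the difference-of-Gaussians stand-in for oriented first-stage filters, the squaring rectifier, and the semi-saturation constant in the divisive normalization are all illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def contrast_normalize(resp, sigma=4.0, eps=0.05):
    """Divisive contrast normalization (illustrative): divide each
    response by a local energy estimate plus a small constant."""
    local_energy = np.sqrt(gaussian_filter(resp ** 2, sigma))
    return resp / (eps + local_energy)

def frf_response(image, sigma1=1.0, sigma2=6.0):
    """Two cascaded filtering stages separated by a rectifying
    nonlinearity, with normalization after the first stage."""
    # Stage 1: fine-scale band-pass filtering (difference of Gaussians
    # as a stand-in for the model's oriented first-stage filters)
    stage1 = gaussian_filter(image, sigma1) - gaussian_filter(image, 2.0 * sigma1)
    stage1 = contrast_normalize(stage1)
    # Rectify: squaring nonlinearity
    rectified = stage1 ** 2
    # Stage 2: coarse-scale filtering of the rectified responses
    return gaussian_filter(rectified, sigma2)
```

Because the second stage operates on rectified first-stage energy rather than raw luminance, such a cascade can respond to a change in texture composition even when no luminance step is present at the boundary.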


2021 · Vol 13 (9) · pp. 1673
Author(s): Wanpeng Xu, Ling Zou, Lingda Wu, Zhipeng Fu

For the task of monocular depth estimation, self-supervised learning supervises training by penalizing the pixel difference between the target image and a warped reference image, obtaining results comparable to those of fully supervised methods. However, problematic pixels in low-texture regions are usually ignored: when stereo pairs are taken as the input, most researchers assume that no pixels violate the camera-motion assumption, which leads to an optimization problem in these regions. To tackle this problem, we compute the photometric loss on the lowest-level feature maps instead, and apply first- and second-order smoothing to the depth, ensuring consistent gradients during optimization. Given the shortcomings of ResNet as the backbone, we propose a new depth estimation network architecture to improve edge-location accuracy and obtain clear outline information even at smoothed low-texture boundaries. To acquire more stable and reliable quantitative evaluation results, we introduce a virtual data set into the self-supervised task, since it provides dense, pixel-aligned ground-truth depth maps. Taking stereo pairs as the input, we achieve performance exceeding that of prior methods on both the Eigen split of KITTI and the VKITTI2 data set.
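The two loss terms this abstract relies on, a photometric reconstruction loss and an edge-aware first- or second-order depth smoothness penalty, can be sketched in NumPy. This is a generic sketch of the standard self-supervised formulation, not the paper's exact losses; the L1 photometric term and the exponential edge weighting are common-practice assumptions.

```python
import numpy as np

def grad_x(a):
    return a[:, 1:] - a[:, :-1]

def grad_y(a):
    return a[1:, :] - a[:-1, :]

def photometric_l1(target, warped):
    """Pixelwise L1 difference between the target image and the
    warped reference image (the self-supervised training signal)."""
    return np.mean(np.abs(target - warped))

def smoothness_loss(depth, image, order=1):
    """Edge-aware depth smoothness: first- or second-order depth
    gradients are down-weighted where image gradients are large,
    so depth may change at intensity edges but stays smooth elsewhere."""
    dx, dy = grad_x(depth), grad_y(depth)
    ix, iy = grad_x(image), grad_y(image)
    if order == 2:
        dx, dy = grad_x(dx), grad_y(dy)
        ix, iy = ix[:, :-1], iy[:-1, :]  # crop weights to match shapes
    wx = np.exp(-np.abs(ix))
    wy = np.exp(-np.abs(iy))
    return np.mean(np.abs(dx) * wx) + np.mean(np.abs(dy) * wy)
```

A perfectly warped reference yields zero photometric loss, and a constant depth map yields zero smoothness loss at either order, so both terms bottom out at the desired solutions.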


2021
Author(s): Christopher DiMattina

Abstract: In natural scenes, two adjacent surfaces may differ in mean luminance without any sharp change in luminance at their boundary, but rather due to different relative proportions of light and dark regions within each surface. We refer to such boundaries as luminance texture boundaries (LTBs), and in this study we investigate interactions between luminance texture boundaries and luminance step boundaries (LSBs) in a segmentation task. Using a simple masking paradigm, we find very little influence of LSB maskers on LTB segmentation thresholds. Similarly, we find only modest effects of LTB maskers on LSB thresholds. By contrast, each kind of boundary strongly masks targets of the same kind. Our data are consistent with the possibility that luminance texture boundaries may be segmented using different mechanisms than those used to segment luminance step boundaries. At the same time, our work also suggests that LTB segmentation is subject to influences from LSBs. We suggest that the relative robustness of LTB segmentation to interference from LSBs may serve the ecologically important role of providing robustness to changes in luminance caused by cast shadows, and we propose future experimental work to investigate this hypothesis.
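The two boundary types contrasted above can be made concrete with a small stimulus generator. This is an illustrative sketch, not the authors' stimulus code: the image size, texel proportions, and step amplitude are hypothetical parameters chosen to show the defining property of each boundary.

```python
import numpy as np

def ltb_stimulus(size=64, p_left=0.4, p_right=0.6, seed=0):
    """Luminance texture boundary (LTB): two halves of binary texels
    whose mean luminance differs only via the proportion of light
    elements, with no sharp luminance step at the midline."""
    rng = np.random.default_rng(seed)
    half = size // 2
    left = rng.choice([0.0, 1.0], size=(size, half), p=[1 - p_left, p_left])
    right = rng.choice([0.0, 1.0], size=(size, half), p=[1 - p_right, p_right])
    return np.concatenate([left, right], axis=1)

def lsb_stimulus(size=64, step=0.2):
    """Luminance step boundary (LSB): two uniform halves separated
    by a sharp luminance step of the given amplitude."""
    img = np.full((size, size), 0.5)
    img[:, size // 2:] += step / 2.0
    img[:, : size // 2] -= step / 2.0
    return img
```

In the LTB case every pixel is either light or dark on both sides; only the proportions differ, so the mean luminance difference exists without any albedo step at the boundary, which is exactly the property cast shadows cannot mimic.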


2020
Author(s): Christopher DiMattina, Curtis L. Baker

Abstract: Segmenting scenes into distinct surfaces is a basic visual perception task, and luminance differences between adjacent surfaces often provide an important segmentation cue. However, mean luminance differences between two surfaces may exist without any sharp change in albedo at their boundary, arising instead from differences in the proportion of small light and dark areas (e.g., texture elements) within each surface; we refer to such a boundary as a luminance texture boundary. Here we investigate the performance of human observers segmenting luminance texture boundaries. We demonstrate that a simple model involving a single stage of filtering cannot explain observer performance unless it incorporates contrast normalization. Performing additional experiments in which observers segment luminance texture boundaries while ignoring superimposed luminance step boundaries, we demonstrate that the one-stage model, even with contrast normalization, cannot explain performance. We then present a Filter-Rectify-Filter (FRF) model positing two cascaded stages of filtering, which fits our data well and explains observers’ ability to segment luminance texture boundary stimuli in the presence of interfering luminance step boundaries. We propose that such computations may be useful for boundary segmentation in natural scenes, where shadows often give rise to luminance step edges that do not correspond to surface boundaries.


2020 · Vol 30 (8) · pp. 1397-1409.e7
Author(s): Chia-Hsuan Wang, Joseph D. Monaco, James J. Knierim
