indoor scene
Recently Published Documents


TOTAL DOCUMENTS

263
(FIVE YEARS 101)

H-INDEX

20
(FIVE YEARS 5)

2022 ◽  
Vol 183 ◽  
pp. 470-481
Author(s):  
Ning Zhang ◽  
Francesco Nex ◽  
Norman Kerle ◽  
George Vosselman

2021 ◽  
pp. 1-10
Author(s):  
Rui Cao ◽  
Feng Jiang ◽  
Zhao Wu ◽  
Jia Ren

With advances in computing performance, deep learning is playing a vital role across hardware platforms. Indoor scene segmentation is a challenging deep learning task because indoor objects tend to obscure each other, and dense layouts increase the difficulty of segmentation. Current networks, however, pursue accuracy gains at the cost of speed and increased memory usage. To address this problem and strike a compromise between accuracy, speed, and model size, this paper proposes the Multichannel Fusion Network (MFNet) for indoor scene segmentation, which consists mainly of a Dense Residual Module (DRM) and a Multi-scale Feature Extraction Module (MFEM). The MFEM uses depthwise separable convolution to cut the number of parameters and pairs convolution kernels of different sizes with different dilation rates to achieve an optimal receptive field; the DRM fuses feature maps at several resolutions to refine segmentation details. Experimental results on the NYU V2 dataset show that the proposed method is highly competitive with other advanced algorithms, reaching a segmentation speed of 38.47 fps, nearly twice that of DeepLab v3+, with only 1/5 of its parameters. Its segmentation results are close to those of advanced segmentation networks, making it well suited for real-time image processing.
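To illustrate the kind of multi-scale, depthwise-separable design the abstract describes, here is a minimal PyTorch sketch of a block that runs parallel depthwise separable convolutions with different kernel sizes and dilation rates and fuses them. The branch count, channel sizes, kernels, and dilation rates are illustrative assumptions, not MFNet's actual configuration.

```python
# A minimal sketch of a multi-scale feature extraction block in the spirit
# of the paper's MFEM. Branch count, kernel sizes, and dilation rates are
# illustrative guesses, not the published configuration.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise conv (one filter per channel) followed by a 1x1 pointwise
    conv; cuts parameters versus a standard convolution."""
    def __init__(self, in_ch, out_ch, kernel_size, dilation=1):
        super().__init__()
        pad = dilation * (kernel_size - 1) // 2  # keep spatial size
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, padding=pad,
                                   dilation=dilation, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

class MultiScaleBlock(nn.Module):
    """Parallel branches with different kernel sizes / dilation rates,
    fused by a 1x1 conv, to enlarge the effective receptive field."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branches = nn.ModuleList([
            DepthwiseSeparableConv(in_ch, out_ch, 3, dilation=1),
            DepthwiseSeparableConv(in_ch, out_ch, 3, dilation=2),
            DepthwiseSeparableConv(in_ch, out_ch, 5, dilation=1),
        ])
        self.fuse = nn.Conv2d(3 * out_ch, out_ch, 1, bias=False)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

x = torch.randn(1, 64, 120, 160)         # e.g. an encoder feature map
print(MultiScaleBlock(64, 64)(x).shape)  # torch.Size([1, 64, 120, 160])
```

Running the branches in parallel and fusing with a cheap 1x1 convolution is the standard way to widen the receptive field without the parameter cost of a single large dense kernel.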


2021 ◽  
Author(s):  
Xinpeng Wang ◽  
Chandan Yeshwanth ◽  
Matthias Nießner

2021 ◽  
Vol 13 (23) ◽  
pp. 4755
Author(s):  
Saishang Zhong ◽  
Mingqiang Guo ◽  
Ruina Lv ◽  
Jianguo Chen ◽  
Zhong Xie ◽  
...  

Rigid registration of 3D indoor scenes is a fundamental yet vital task in fields including remote sensing (e.g., 3D reconstruction of indoor scenes), photogrammetric measurement, and geometry modeling. Nevertheless, state-of-the-art registration approaches still struggle with low-quality indoor scene point clouds derived from consumer-grade RGB-D sensors. The major challenge is accurately extracting correspondences between a pair of low-quality point clouds containing considerable noise, outliers, or weak texture features. To solve this problem, we present a point cloud registration framework that exploits RGB-D information. First, we propose a point-normal filter that effectively removes noise while maintaining sharp geometric features and smooth transition regions. Second, we design a correspondence extraction scheme based on a novel descriptor encoding textural and geometric information, which robustly establishes dense correspondences between a pair of low-quality point clouds. Finally, we propose a point-to-plane registration technique with a nonconvex regularizer, which further diminishes the influence of false correspondences and produces an accurate rigid transformation between a pair of point clouds. Extensive experimental results demonstrate that our registration framework outperforms existing state-of-the-art techniques both visually and numerically, especially on low-quality indoor scenes.
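To make the final step concrete, here is a minimal NumPy sketch of one linearized point-to-plane registration update. The paper's nonconvex regularizer for suppressing false correspondences is not reproduced; this shows only the standard unregularized least-squares core, and the matched correspondence arrays are assumed inputs.

```python
# One linearized point-to-plane step: minimize
#   sum_i ((R @ src_i + t - dst_i) . n_i)^2
# under the small-angle approximation R ~ I + [omega]_x.
# src, dst, normals are assumed pre-matched (N x 3) correspondences.
import numpy as np

def point_to_plane_step(src, dst, normals):
    a = np.cross(src, normals)           # derivative of residual w.r.t. rotation
    J = np.hstack([a, normals])          # N x 6 Jacobian, columns [omega | t]
    r = np.einsum('ij,ij->i', src - dst, normals)  # current signed residuals
    x = np.linalg.lstsq(J, -r, rcond=None)[0]      # solve for [omega, t]
    wx, wy, wz = x[:3]
    # Rebuild a proper rotation from the small-angle update via SVD projection
    R = np.eye(3) + np.array([[0, -wz, wy], [wz, 0, -wx], [-wy, wx, 0]])
    U, _, Vt = np.linalg.svd(R)
    T = np.eye(4)
    T[:3, :3] = U @ Vt
    T[:3, 3] = x[3:]
    return T
```

In a full pipeline this step is iterated, re-extracting correspondences each round; a robust penalty (such as the nonconvex regularizer the abstract describes) would replace the plain squared residual to down-weight false matches.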


2021 ◽  
Vol 2021 (29) ◽  
pp. 193-196
Author(s):  
Anku ◽
Susan P. Farnand

White balance is one of the key processes in a camera pipeline, and accuracy can be challenging when a scene is illuminated by multiple colored light sources. We designed and built a studio with controllable multi-LED light sources that produced a range of correlated color temperatures (CCTs) with high color fidelity, which were used to illuminate test scenes. A two-alternative forced choice (2AFC) experiment was performed to evaluate white balance appearance preference for images containing a model in the foreground and target objects in the background of an indoor scene. The foreground and background were lit by different combinations of cool to warm sources, and observers were asked to pick the image that was most aesthetically appealing to them. The results show that when the background is warm, skin tones dominated observers' decisions, and when the background is cool, preference shifts to scenes with the same foreground and background CCT. The familiarity of objects in the background scene did not show a significant effect.
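For context on the operation being judged, here is a minimal NumPy sketch of diagonal (von Kries) white balancing; a single per-channel gain like this is exactly what becomes ambiguous under the mixed-CCT illumination the experiment studies. The illuminant estimate below is a made-up example, and none of the studio control or 2AFC analysis is reproduced.

```python
# Diagonal (von Kries) white balance: scale each channel so the estimated
# illuminant maps to neutral gray. The illuminant estimate is hypothetical.
import numpy as np

def white_balance(image, illuminant_rgb):
    """image: float H x W x 3 array in [0, 1];
    illuminant_rgb: length-3 estimate of the light source color."""
    illuminant = np.asarray(illuminant_rgb, dtype=float)
    gains = illuminant.mean() / illuminant   # per-channel correction gains
    return np.clip(image * gains, 0.0, 1.0)

# Example: correct a warm (reddish) cast estimated as (1.0, 0.85, 0.6)
img = np.random.rand(4, 4, 3)
balanced = white_balance(img, (1.0, 0.85, 0.6))
```

With two light sources of different CCT in one scene, no single gain vector can neutralize both regions at once, which is why observer preference, rather than a unique correct answer, is the relevant criterion.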


2021 ◽  
Author(s):  
Shibo Gong ◽  
Yansong Gong ◽  
Longfei Su ◽  
Jing Yuan ◽  
Fengchi Sun

Author(s):  
Yaning Wang ◽  
Weifeng Liu ◽  
Jianning Li ◽  
Zhangming Peng

Machines ◽  
2021 ◽  
Vol 9 (10) ◽  
pp. 230
Author(s):  
Huikai Liu ◽  
Gaorui Liu ◽  
Yue Zhang ◽  
Linjian Lei ◽  
Hui Xie ◽  
...  

This paper addresses the problem of instance-level 6DoF pose estimation from a single RGB-D image of an indoor scene. Many recent works have shown that a two-stage network, which first detects keypoints and then regresses the 6DoF pose from them, achieves remarkable performance. However, previous methods give little consideration to channel-wise attention, and their keypoints are not selected with comprehensive use of the RGB-D information, which limits network performance. To enhance RGB feature representation, a modular Split-Attention block that enables attention across feature-map groups is proposed. In addition, by combining Oriented FAST and Rotated BRIEF (ORB) keypoints with the Farthest Point Sampling (FPS) algorithm, a simple but effective keypoint selection method named ORB-FPS is presented to prevent keypoints from falling on non-salient regions. The proposed algorithm is tested on the Linemod and YCB-Video datasets; the experimental results demonstrate that our method outperforms current approaches, achieving ADD(S) accuracy of 94.5% on Linemod and 91.4% on YCB-Video.
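A minimal sketch of the ORB-FPS idea, assuming OpenCV and NumPy: detect ORB keypoints on the RGB image, lift them to 3D with the depth map and camera intrinsics, then greedily apply farthest point sampling so the retained keypoints are both salient and spatially spread out. The intrinsics handling and sample count are illustrative, not the paper's exact procedure.

```python
# ORB detection + depth back-projection + farthest point sampling (FPS).
# fx, fy, cx, cy are pinhole intrinsics; depth is metric, 0 where invalid.
import cv2
import numpy as np

def orb_fps_keypoints(rgb, depth, fx, fy, cx, cy, k=8):
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    kps = cv2.ORB_create(nfeatures=500).detect(gray, None)
    # Back-project each ORB keypoint to 3D, skipping invalid depth
    pts = []
    for kp in kps:
        u, v = int(round(kp.pt[0])), int(round(kp.pt[1]))
        z = depth[v, u]
        if z > 0:
            pts.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    pts = np.asarray(pts)
    if len(pts) == 0:
        return pts
    # FPS: greedily add the point farthest from everything chosen so far
    chosen = [0]
    d = np.linalg.norm(pts - pts[0], axis=1)
    for _ in range(1, min(k, len(pts))):
        nxt = int(np.argmax(d))
        chosen.append(nxt)
        d = np.minimum(d, np.linalg.norm(pts - pts[nxt], axis=1))
    return pts[chosen]
```

ORB alone clusters keypoints on textured patches, while FPS alone may land on featureless surfaces; chaining them keeps detections that are both repeatable and well distributed over the object.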


2021 ◽  
Author(s):  
Liguang Zhou ◽  
Jun Cen ◽  
Xingchao Wang ◽  
Zhenglong Sun ◽  
Tin Lun Lam ◽  
...  
