Sensors (2022), Vol. 22, No. 1, p. 337
Author(s): Hatem Ibrahem, Ahmed Salem, Hyun-Soo Kang

We propose Depth-to-Space Net (DTS-Net), an effective technique for semantic segmentation based on the efficient sub-pixel convolutional neural network. The technique is inspired by depth-to-space (DTS) image reconstruction, originally used for image and video super-resolution, combined with a mask-enhancement filtration step based on multi-label classification, namely Nearest Label Filtration. The proposed networks are built on depthwise separable convolution architectures: a deep network, DTS-Net, and a lightweight network, DTS-Net-Lite, for real-time semantic segmentation, which employ Xception and MobileNetV2 as feature extractors, respectively. In addition, we explore the joint semantic segmentation and depth estimation task and show that the proposed technique can perform both tasks simultaneously and efficiently, outperforming state-of-the-art (SOTA) methods. We train and evaluate the proposed method on the PASCAL VOC2012, NYUv2, and Cityscapes benchmarks, obtaining high mean intersection over union (mIoU) and mean pixel accuracy (Pix.acc.) values with simple and lightweight convolutional neural network architectures. Notably, the proposed method outperforms SOTA methods that rely on encoder-decoder architectures, even though our implementation and computation are far simpler.
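To illustrate the depth-to-space idea, the sketch below (PyTorch; not the authors' released code) predicts num_classes × r² channels at the backbone's low output resolution and rearranges them into a full-resolution label map with pixel_shuffle. The torchvision MobileNetV2 backbone, the 1×1 projection layer, and the upscale factor of 32 are illustrative assumptions, and the Nearest Label Filtration step is omitted.

```python
# Minimal sketch of a depth-to-space segmentation head, assuming
# torchvision's MobileNetV2 (stride 32, 1280 output channels) as backbone.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

class DTSSegHead(nn.Module):
    """Predict per-class logits at low resolution, then rearrange channels
    into space with pixel_shuffle (depth-to-space) instead of a decoder."""
    def __init__(self, num_classes: int, in_channels: int = 1280, upscale: int = 32):
        super().__init__()
        # Each class needs upscale**2 sub-pixel channels for pixel_shuffle.
        self.proj = nn.Conv2d(in_channels, num_classes * upscale ** 2, kernel_size=1)
        self.dts = nn.PixelShuffle(upscale)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # (B, C*r*r, h, w) -> (B, C, h*r, w*r)
        return self.dts(self.proj(feats))

backbone = mobilenet_v2(weights=None).features  # stride-32 feature extractor
head = DTSSegHead(num_classes=21)               # e.g., PASCAL VOC's 21 classes
x = torch.randn(1, 3, 224, 224)
logits = head(backbone(x))                      # -> (1, 21, 224, 224)
labels = logits.argmax(dim=1)                   # per-pixel class map
```

Because the upsampling here is a pure channel-to-space rearrangement, no learned decoder or transposed convolutions are needed, which is consistent with the abstract's claim of a far simpler implementation than encoder-decoder methods.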

