Scene Parsing: Recently Published Documents


Total documents: 175 (last five years: 88)
H-index: 18 (last five years: 6)

Author(s): Khadijah Khadijah, Sukmawati Nur Endah, Retno Kusumaningrum, Rismiyati Rismiyati, Priyo Sidik Sasongko, ...

2021, pp. 147-160
Author(s): Akhilesh Vikas Kakade, S Rajkumar, K Suganthi, L Ramanathan

2021
Author(s): Yanran Wu, Xiangtai Li, Chen Shi, Yunhai Tong, Yang Hua, ...
Keyword(s):

2021, Vol 18 (5), pp. 172988142110486
Author(s): Botao Zhang, Tao Hong, Rong Xiong, Sergey A Chepinskiy

Terrain segmentation is of great significance to robot navigation, cognition, and map building. However, existing vision-based methods struggle to achieve both high accuracy and real-time performance. A terrain segmentation method based on a novel lightweight pyramid scene parsing mobile network is proposed for terrain segmentation in robot navigation. It combines the feature extraction structure of MobileNet with the encoding path of the pyramid scene parsing network, and employs depthwise separable convolution, spatial pyramid pooling, and feature fusion to reduce the network's onboard computing time. A dedicated data set, the Hangzhou Dianzi University Terrain Dataset, is constructed for terrain segmentation; it contains more than 4000 images from 10 different scenes and was collected from a robot's perspective to make it more suitable for robotic applications. Experimental results show that the proposed method achieves high accuracy and real-time performance on an onboard computer, and its real-time performance surpasses that of most state-of-the-art terrain segmentation methods.
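To make the architecture described in this abstract more concrete, the following is a minimal PyTorch sketch of the general idea: a MobileNet-style backbone built from depthwise separable convolutions, followed by a PSPNet-style spatial pyramid pooling module and feature fusion by concatenation. This is not the authors' code; the layer widths, pooling scales, and 10-class output are illustrative assumptions.

```python
# Minimal sketch (assumed layer sizes), not the published implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 conv followed by a 1x1 pointwise conv (MobileNet building block)."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride, 1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(in_ch)
        self.bn2 = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        x = F.relu(self.bn1(self.depthwise(x)))
        return F.relu(self.bn2(self.pointwise(x)))


class PyramidPooling(nn.Module):
    """PSP-style module: pool at several scales, project, upsample, and concatenate."""
    def __init__(self, in_ch, pool_sizes=(1, 2, 3, 6)):
        super().__init__()
        self.stages = nn.ModuleList(
            nn.Sequential(nn.AdaptiveAvgPool2d(s),
                          nn.Conv2d(in_ch, in_ch // len(pool_sizes), 1, bias=False))
            for s in pool_sizes
        )

    def forward(self, x):
        h, w = x.shape[2:]
        pooled = [F.interpolate(stage(x), size=(h, w), mode="bilinear",
                                align_corners=False) for stage in self.stages]
        return torch.cat([x] + pooled, dim=1)  # feature fusion by concatenation


class LightweightPSPMobileNet(nn.Module):
    """Toy end-to-end model: lightweight backbone + pyramid pooling + segmentation head."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(32), nn.ReLU(inplace=True),
            DepthwiseSeparableConv(32, 64, stride=2),
            DepthwiseSeparableConv(64, 128, stride=2),
        )
        self.ppm = PyramidPooling(128)          # outputs 128 + 4 * 32 = 256 channels
        self.classifier = nn.Conv2d(256, num_classes, 1)

    def forward(self, x):
        size = x.shape[2:]
        feats = self.ppm(self.backbone(x))
        logits = self.classifier(feats)
        return F.interpolate(logits, size=size, mode="bilinear", align_corners=False)


if __name__ == "__main__":
    model = LightweightPSPMobileNet(num_classes=10)
    out = model(torch.randn(1, 3, 224, 224))
    print(out.shape)  # torch.Size([1, 10, 224, 224])
```

The depthwise separable blocks keep the parameter count and onboard compute low, while the pyramid pooling step restores global context before the per-pixel classification, which is the trade-off the abstract describes.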


2021, Vol 34 (1)
Author(s): Pai Peng, Keke Geng, Guodong Yin, Yanbo Lu, Weichao Zhuang, ...

Abstract: Current work on environmental perception for connected autonomous electrified vehicles (CAEVs) mainly focuses on object detection under good weather and illumination conditions; such systems often perform poorly in adverse scenarios and have limited scene parsing ability. This paper develops an end-to-end sharpening mixture of experts (SMoE) fusion framework to improve the robustness and accuracy of CAEV perception systems under complex illumination and weather conditions. Three original contributions distinguish this work from the existing literature. First, the Complex KITTI dataset is introduced, consisting of 7481 pairs of modified KITTI RGB images and generated LiDAR dense depth maps, annotated at the instance level with the proposed semi-automatic annotation method. Second, the SMoE fusion approach is devised to adaptively learn robust kernels from complementary modalities. Third, comprehensive comparative experiments show that the proposed SMoE framework yields significant improvements over other fusion techniques in adverse environmental conditions. Overall, this research proposes an SMoE fusion framework to improve the scene parsing ability of CAEV perception systems in adverse conditions.
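To illustrate the fusion idea, here is a minimal PyTorch sketch (not the paper's implementation) of a gated mixture-of-experts fusion of an RGB image and a LiDAR depth map: each modality passes through its own small expert network, and a learned per-pixel gate weights and sums the expert features. The sharpening mechanism, the expert architectures, and all channel sizes are assumptions made only for this illustration.

```python
# Minimal sketch of gated two-modality mixture-of-experts fusion (assumed design).
import torch
import torch.nn as nn


class ModalityExpert(nn.Module):
    """Small convolutional expert that maps one modality to a shared feature space."""
    def __init__(self, in_ch, feat_ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)


class MoEFusion(nn.Module):
    """Fuse per-modality expert features with per-pixel gating weights (softmax over experts)."""
    def __init__(self, feat_ch=32, num_experts=2):
        super().__init__()
        self.rgb_expert = ModalityExpert(3, feat_ch)
        self.depth_expert = ModalityExpert(1, feat_ch)
        # The gate looks at both raw inputs and predicts one weight per expert per pixel.
        self.gate = nn.Conv2d(3 + 1, num_experts, 1)

    def forward(self, rgb, depth):
        feats = torch.stack([self.rgb_expert(rgb),
                             self.depth_expert(depth)], dim=1)          # (B, E, C, H, W)
        weights = torch.softmax(self.gate(torch.cat([rgb, depth], dim=1)), dim=1)  # (B, E, H, W)
        fused = (weights.unsqueeze(2) * feats).sum(dim=1)                # (B, C, H, W)
        return fused  # a downstream detection/segmentation head would consume this


if __name__ == "__main__":
    fusion = MoEFusion()
    rgb = torch.randn(2, 3, 128, 128)
    depth = torch.randn(2, 1, 128, 128)
    print(fusion(rgb, depth).shape)  # torch.Size([2, 32, 128, 128])
```

The per-pixel gate is what lets such a fusion down-weight a degraded modality (for example, a rain-corrupted camera image) and lean on the complementary one, which is the robustness behavior the abstract attributes to adaptive expert fusion.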


Author(s): Qi Wang, Yuanshuai Wang, Yuan Zhou, Jing Wang, Wuming Jiang, ...
Keyword(s):

2021
Author(s): Jiaxu Miao, Yunchao Wei, Yu Wu, Chen Liang, Guangrui Li, ...

2021
Author(s): Raghava Modhugu, Harish Rithish Sethuram, Manmohan Chandraker, C.V. Jawahar
Keyword(s):
