DOBNET: Dynamic Object Boundary-Refinement Network for Real-Time Instance Segmentation

Author(s):  
Boxiang Zhang ◽  
Yuanyuan Guan ◽  
Hongru Liu ◽  
Wenhui Li ◽  
Ying Wang
Author(s):  
Ruben Panero Martinez ◽  
Ionut Schiopu ◽  
Bruno Cornelis ◽  
Adrian Munteanu
Sensors ◽  
2021 ◽  
Vol 21 (1) ◽  
pp. 275

The paper proposes a novel instance segmentation method for traffic videos devised for deployment on real-time embedded devices. A novel neural network architecture is proposed, using a multi-resolution feature extraction backbone and improved network designs for the object detection and instance segmentation branches. A novel post-processing method is introduced to reduce the rate of false detections by evaluating the quality of the output masks. An improved network training procedure is proposed based on a novel label assignment algorithm. An ablation study of the speed-versus-performance trade-off further modifies the two branches and replaces the conventional ResNet-based, performance-oriented backbone with a lightweight, speed-oriented design. The proposed architectural variations achieve real-time performance when deployed on embedded devices. The experimental results demonstrate that the proposed instance segmentation method for traffic videos outperforms the You Only Look At CoefficienTs (YOLACT) algorithm, a state-of-the-art real-time instance segmentation method. The proposed architecture achieves 31.57 average precision (AP) on the COCO dataset, while its speed-oriented variations achieve speeds of up to 66.25 frames per second on the Jetson AGX Xavier module.
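The mask-quality post-processing idea above can be sketched in a few lines: score each predicted mask, combine that score with the detection confidence, and discard detections falling below a threshold. This is a minimal illustration, not the paper's actual scoring; the function names (`score_mask`, `filter_detections`) and the compactness heuristic are assumptions made for the example.

```python
import numpy as np

def score_mask(mask: np.ndarray, box_score: float) -> float:
    """Combine detection confidence with a simple mask-compactness cue.

    A hypothetical quality measure: compact masks fill most of their
    bounding box, while noisy false detections tend to be sparse.
    """
    area = mask.sum()
    if area == 0:
        return 0.0
    ys, xs = np.nonzero(mask)
    bbox_area = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
    fill_ratio = area / bbox_area
    return box_score * fill_ratio

def filter_detections(masks, scores, threshold=0.3):
    """Keep only detections whose combined quality passes the threshold."""
    kept = []
    for mask, s in zip(masks, scores):
        if score_mask(mask, s) >= threshold:
            kept.append((mask, s))
    return kept
```

Any per-mask quality signal (e.g. a learned mask-IoU head) could replace the compactness heuristic; the filtering structure stays the same.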


Author(s):  
Zhiyong Gao ◽  
Jianhong Xiang

Background: When detecting objects directly from a 3D point cloud, the natural 3D patterns and invariances of 3D data are often obscured. Objective: In this work, we aim to study 3D object detection from discrete, disordered, and sparse 3D point clouds. Methods: The CNN is composed of the frustum sequence module, the 3D instance segmentation module S-NET, the 3D point cloud transformation module T-NET, and the 3D bounding box estimation module E-NET. The search space of the object is determined by the frustum sequence module. Instance segmentation of the point cloud is performed by the 3D instance segmentation module. The 3D coordinates of the object are confirmed by the transformation module and the 3D bounding box estimation module. Results: Evaluated on the KITTI benchmark dataset, our method outperforms the state of the art by remarkable margins while retaining real-time capability. Conclusion: We achieve real-time 3D object detection by proposing an improved convolutional neural network (CNN) based on image-driven point clouds.
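The frustum idea that opens this pipeline can be sketched geometrically: project camera-frame points through the intrinsics and keep those that land inside a 2D detection box, in front of the camera. This is a minimal sketch of the geometric selection only; the paper's modules (S-NET, T-NET, E-NET) are learned networks, and the function name `points_in_frustum` is an assumption made for the example.

```python
import numpy as np

def points_in_frustum(points: np.ndarray, K: np.ndarray, box) -> np.ndarray:
    """Select points inside the viewing frustum of a 2D detection box.

    points: (N, 3) xyz in the camera frame
    K:      (3, 3) camera intrinsics
    box:    (u1, v1, u2, v2) image-plane detection box
    """
    z = points[:, 2]
    valid = z > 0                 # only points in front of the camera
    uv = (K @ points.T).T         # project onto the image plane
    u = uv[:, 0] / uv[:, 2]       # assumes z != 0 for all points
    v = uv[:, 1] / uv[:, 2]
    u1, v1, u2, v2 = box
    inside = (u >= u1) & (u <= u2) & (v >= v1) & (v <= v2)
    return points[valid & inside]
```

The selected subset is what the downstream segmentation and box-estimation stages would then operate on.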


Author(s):  
B. Ravi Kiran ◽  
Luis Roldão ◽  
Beñat Irastorza ◽  
Renzo Verastegui ◽  
Sebastian Süss ◽  
...  

2013 ◽  
Vol 48 (1) ◽  
pp. 33-45 ◽  
Author(s):  
Jinwook Oh ◽  
Gyeonghoon Kim ◽  
Junyoung Park ◽  
Injoon Hong ◽  
Seungjin Lee ◽  
...  

2021 ◽  
Author(s):  
Haotian Liu ◽  
Rafael A. Rivera Soto ◽  
Fanyi Xiao ◽  
Yong Jae Lee

2007 ◽  
Author(s):  
Dongkyun Kim ◽  
Jung Uk Cho ◽  
Thien Cong Pham ◽  
Jae Wook Jeon
