3D Object
Recently Published Documents


TOTAL DOCUMENTS: 2472 (five years: 768)
H-INDEX: 65 (five years: 19)

2022 · Vol 473 · pp. 158
Author(s): A.A.M. Muzahid, Wan Wanggen, Ferdous Sohel, Mohammed Bennamoun, Li Hou, et al.

Cobot · 2022 · Vol 1 · pp. 2
Author(s): Hao Peng, Guofeng Tong, Zheng Li, Yaqi Wang, Yuyuan Shao

Background: 3D object detection from point clouds in road scenes has attracted much attention recently. Voxel-based methods voxelize the scene into regular grids, which can be processed by advanced convolutional feature-learning frameworks to extract semantic features. Point-based methods can extract geometric features of each point because the point coordinates are preserved. Combining the two is effective for 3D object detection. However, current methods use a voxel-based detection head with preset anchors for classification and localization. Although the preset anchors cover the entire scene, they are ill-suited to detection tasks with larger scenes and multiple object categories because of the limitation imposed by the voxel size. Additionally, the misalignment between predicted confidences and proposals during Region of Interest (RoI) selection hinders 3D object detection.
Methods: We investigate the combination of voxel-based and point-based methods for 3D object detection, and propose a voxel-to-point module that captures both semantic and geometric features. The voxel-to-point module aids the detection of small objects and avoids preset anchors in the inference stage. Moreover, a confidence adjustment module with center-boundary-aware confidence attention is proposed to resolve the misalignment between predicted confidences and proposals during RoI selection.
Results: The proposed method achieved state-of-the-art results for 3D object detection on the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) object detection dataset. As of September 19, 2021, our method ranked 1st in the 3D and Bird's Eye View (BEV) detection of cyclists at the 'easy' difficulty level, and 2nd in the 3D detection of cyclists at the 'moderate' level.
Conclusions: We propose an end-to-end, two-stage 3D object detector with a voxel-to-point module and a confidence adjustment module.
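The voxelization step underlying the voxel-based branch described above can be sketched as follows. This is a minimal illustration of mapping a point cloud onto a regular grid, not the paper's implementation; the voxel size and per-voxel point cap are arbitrary choices:

```python
import numpy as np

def voxelize(points, voxel_size=0.2, max_pts=32):
    """Assign each 3D point to a cell of a regular voxel grid.

    Returns a dict mapping integer voxel index (ix, iy, iz) to an array of
    the points that fall in that cell, capped at max_pts points per voxel.
    voxel_size and max_pts are illustrative, not values from the paper.
    """
    # Integer cell index of each point along each axis
    idx = np.floor(points / voxel_size).astype(np.int64)
    voxels = {}
    for p, i in zip(points, map(tuple, idx)):
        bucket = voxels.setdefault(i, [])
        if len(bucket) < max_pts:  # drop points beyond the per-voxel cap
            bucket.append(p)
    return {k: np.stack(v) for k, v in voxels.items()}

pts = np.array([[0.05, 0.10, 0.00],
                [0.07, 0.12, 0.02],
                [1.00, 1.00, 1.00]])
vox = voxelize(pts)
# the first two points share one 0.2 m cell; the third lands in another
```

Grouping points into fixed-size cells is what lets convolutional layers run over the scene; a voxel-to-point module as described in the abstract would then propagate the learned voxel features back to the original point coordinates.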


Author(s): Yifei Tian, Wei Song, Long Chen, Simon Fong, Yunsick Sung, et al.

2022 · pp. 103999
Author(s): Jing Li, Rui Li, Jiehao Li, Junzheng Wang, Qingbin Wu, et al.

2021 · Vol 57 (2) · pp. 025006
Author(s): Sigit Ristanto, Waskito Nugroho, Eko Sulistya, Gede B Suparta

Abstract Automatically measuring and recording the 3D position of an object in real time is essential for understanding physical phenomena. Using a stereo camera to build 3D images is intriguing, since the 3D perception generated from a stereo image pair can be reprojected to recover an object's 3D position at a specific time. This research aimed to develop a device that measures 3D object position in real time using a stereo camera. The device was constructed from a stereo camera, a tripod, and a mini-PC. Calibration was carried out for position measurement in the X, Y, and Z directions based on the disparity between the two images. A simple 3D position measurement was then performed based on the calibration results, and whether the measurement ran in real time was verified. By applying template-matching and triangulation algorithms to the two images, the object's position in 3D coordinates was calculated and recorded automatically. The disparity of the stereo camera was found to vary from 132 pixels to 58 pixels as the object's distance from the camera increased from 30 cm to 70 cm. The setup measured the 3D object position in real time with an average delay of less than 50 ms, using either a notebook or a mini-PC, with automatic documentation. For the stereo camera used in this experiment, the maximum accuracy of the measurement in the X, Y, and Z directions is ΔX = 0.6 cm, ΔY = 0.2 cm, and ΔZ = 0.8 cm over the measurement range of 30 cm–60 cm. This research is expected to provide new insights into the development of laboratory tools for learning physics, especially mechanics, in schools and colleges.
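The triangulation step described above follows the standard pinhole stereo model: for a rectified pair, depth is Z = f·B/d, where f is the focal length in pixels, B the baseline, and d the disparity. A minimal sketch, assuming illustrative camera parameters (f_px, baseline_cm, cx, cy below are hypothetical, not the paper's calibration values):

```python
def triangulate(u_left, u_right, v,
                f_px=700.0, baseline_cm=6.0, cx=320.0, cy=240.0):
    """Recover (X, Y, Z) in cm from a matched pixel pair on one scanline.

    u_left/u_right are the horizontal pixel coordinates of the match in the
    left and right images; v is the shared vertical coordinate.
    All camera parameters here are hypothetical, for illustration only.
    """
    d = u_left - u_right  # disparity in pixels
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front of the camera")
    Z = f_px * baseline_cm / d   # depth from similar triangles
    X = (u_left - cx) * Z / f_px  # back-project horizontal offset
    Y = (v - cy) * Z / f_px       # back-project vertical offset
    return X, Y, Z

X, Y, Z = triangulate(452.0, 320.0, 240.0)  # disparity of 132 px → Z ≈ 31.8 cm
```

Note that the reported figures are internally consistent with this model: Z·d ≈ 30 cm × 132 px ≈ 70 cm × 58 px ≈ 4000 cm·px, so the product f·B for the authors' camera is roughly constant at about 4000 cm·px, as the model requires.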

