PV-EncoNet: Fast Object Detection Based on Colored Point Cloud

Author(s): Zhenchao Ouyang, Xiaoyun Dong, Jiahe Cui, Jianwei Niu, Mohsen Guizani
2021

Author(s): Zhiyong Gao, Jianhong Xiang

Background: When objects are detected directly from a 3D point cloud, the natural 3D patterns and invariances of the data are often obscured. Objective: In this work, we aimed to study 3D object detection from discrete, disordered and sparse 3D point clouds. Methods: The CNN is composed of a frustum sequence module, a 3D instance segmentation module (S-NET), a 3D point cloud transformation module (T-NET), and a 3D bounding box estimation module (E-NET). The frustum sequence module determines the search space of the object, the 3D instance segmentation module segments the point cloud into instances, and the transformation module together with the bounding box estimation module confirms the 3D coordinates of the object. Results: Evaluated on the KITTI benchmark dataset, our method outperforms the state of the art by remarkable margins while retaining real-time capability. Conclusion: We achieve real-time 3D object detection by proposing an improved convolutional neural network (CNN) based on image-driven point clouds.
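The methods describe an image-driven, frustum-style flow: a 2D detection defines the frustum search space, S-NET segments the object points, T-NET re-centres them, and E-NET regresses the 3D box. The PyTorch sketch below illustrates only this data flow under those assumptions; the class names, layer widths and the seven-parameter box encoding (centre, size, yaw) are illustrative placeholders, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SNet(nn.Module):
    """3D instance segmentation: per-point object/clutter logits."""
    def __init__(self, in_dim=3):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv1d(in_dim, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 2, 1),              # 2 classes: object / clutter
        )

    def forward(self, pts):                    # pts: (B, 3, N) frustum points
        return self.mlp(pts)                   # (B, 2, N) segmentation logits

class TNet(nn.Module):
    """Transformation module: regress a coarse object centre."""
    def __init__(self, in_dim=3):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv1d(in_dim, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 256, 1), nn.ReLU(),
        )
        self.fc = nn.Linear(256, 3)            # predicted (x, y, z) centre

    def forward(self, pts):
        feat = self.mlp(pts).max(dim=2).values # global max pooling
        return self.fc(feat)                   # (B, 3)

class ENet(nn.Module):
    """Bounding box estimation: centre, size and heading of the 3D box."""
    def __init__(self, in_dim=3, box_dim=7):   # (x, y, z, l, w, h, yaw)
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv1d(in_dim, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 512, 1), nn.ReLU(),
        )
        self.fc = nn.Linear(512, box_dim)

    def forward(self, pts):
        feat = self.mlp(pts).max(dim=2).values
        return self.fc(feat)

def detect(frustum_pts, s_net, t_net, e_net):
    """frustum_pts: (B, 3, N) points inside one image-driven frustum."""
    logits = s_net(frustum_pts)                          # per-point logits
    mask = logits.argmax(dim=1, keepdim=True).float()    # (B, 1, N) object mask
    obj_pts = frustum_pts * mask                         # suppress clutter points
    centre = t_net(obj_pts)                              # coarse centre (B, 3)
    centred = obj_pts - centre.unsqueeze(-1)             # re-centre the points
    box = e_net(centred)                                 # residual box parameters
    return torch.cat([box[:, :3] + centre, box[:, 3:]], dim=1)  # (B, 7) boxes
```

For example, `detect(torch.rand(4, 3, 1024), SNet(), TNet(), ENet())` returns four candidate boxes, one per frustum in the batch.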


Author(s): Toivo Ylinampa, Hannu Saarenmaa

New innovations are needed to speed up the digitisation of insect collections. More than half of all specimens in scientific collections are pinned insects; in Europe this means 500-1,000 million specimens. Today's fastest mass-digitisation (i.e., imaging) systems for pinned insects achieve circa 70 specimens per hour and 500 per day with one operator (Tegelberg et al. 2014, Tegelberg et al. 2017), in contrast to the 5,000 per day achieved by state-of-the-art mass-digitisation systems for herbarium sheets (Oever and Gofferje 2012). The slowness of imaging pinned insects follows from the fact that they are essentially 3D objects. Although butterflies, moths, dragonflies and similar large-winged insects can be prepared (spread) as 2D objects, the labels pinned under the specimen make even these samples 3D. In imaging, the labels are often removed manually, which slows down the process. If manual handling of the labels can be skipped, the imaging speed can easily be multiplied.

ENTODIG-3D (Fig. 1) is an automated camera system that photographs insect collection boxes (units and drawers) and digitises them, minimising time-consuming manual handling of specimens. "Units" are small boxes or trays contained in the drawers of collection cabinets and are used in most major insect collections. A camera is mounted on motorised rails and moves in two dimensions over a unit or a drawer; its movement is guided by a machine-learning object detection program. QR codes are printed and placed underneath the unit or drawer and may contain additional information about each specimen, for example where it originated in the collection. The object detection program also detects each specimen and stores its coordinates. The camera mount rotates and tilts, so the camera can take photographs from all angles and positions. The pictures are transferred to a computer, which calculates a 3D model with photogrammetry, from which the label text beneath the specimen can be read.

This approach requires heavy computation for segmenting the top-view images, creating a 3D model of the unit, and extracting the label images of many specimens. First, a sparse point cloud is calculated; then a dense point cloud; finally, a textured mesh. With machine-learning object detection, the top layer, which consists of the insects, can be removed. This leaves the bottom layer with the labels visible for later processing by OCR (optical character recognition).

This is a new approach to digitising pinned insects in collections. The physical setup is not expensive, so many systems could be installed in parallel to work overnight and produce images of tens of drawers. The setup is not physically demanding for the specimens, as they can be left untouched in the unit or drawer. A digital object is created, consisting of the label text, the unit or drawer QR code, the specimen's coordinates in a drawer with a unique identifier, and a top-view photo of the specimen. The drawback of this approach is the heavy computing needed to create the 3D models. ENTODIG-3D can currently digitise one sample in five minutes, almost without manual work; the theoretically sustainable rate is approximately one hundred thousand samples per year. This rate is similar to that of the current insect digitisation system in Helsinki (Tegelberg et al. 2017), but without the need for manual handling of individual specimens. By adding more computing power, the rate can be increased linearly.
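To make the workflow and the quoted throughput concrete, the sketch below lists the processing steps paraphrased from the text and checks the rate arithmetic; the step wording and the function name are illustrative, not part of ENTODIG-3D's actual code.

```python
# Schematic outline of the ENTODIG-3D per-unit workflow described above,
# plus a check of the stated rate. The step names paraphrase the abstract;
# this is not the project's actual code or API.

PIPELINE = [
    "image the unit/drawer with the rail-mounted, rotating and tilting camera",
    "read the QR code placed underneath the unit or drawer",
    "detect each specimen and store its coordinates (ML object detection)",
    "compute a sparse point cloud from the photographs",
    "densify it into a dense point cloud",
    "build a textured mesh (the 3D model)",
    "remove the top layer (the insects) to expose the label layer",
    "read the label text with OCR",
]

def specimens_per_year(minutes_per_specimen: float = 5.0) -> int:
    """Sustainable yearly rate assuming round-the-clock, unattended operation."""
    per_hour = 60.0 / minutes_per_specimen      # 12 specimens per hour
    per_day = per_hour * 24                     # 288 specimens per day
    return round(per_day * 365)                 # about 105,000 per year

if __name__ == "__main__":
    for i, step in enumerate(PIPELINE, 1):
        print(f"{i}. {step}")
    print(f"Theoretical rate: ~{specimens_per_year():,} specimens/year")
```

At five minutes per sample, continuous operation gives 12 per hour, 288 per day and roughly 105,000 per year, consistent with the figure of approximately one hundred thousand samples per year quoted above.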


