PQ-Transformer: Jointly Parsing 3D Objects and Layouts from Point Clouds

Author(s):  
Xiaoxue Chen ◽  
Hao Zhao ◽  
Guyue Zhou ◽  
Ya-Qin Zhang
IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Daniel Varga ◽  
Janos Mark Szalai-Gindl ◽  
Bence Formanek ◽  
Peter Vaderna ◽  
Laszlo Dobos ◽  
...  

Actuators ◽  
2022 ◽  
Vol 11 (1) ◽  
pp. 13
Author(s):  
Hao Geng ◽  
Zhiyuan Gao ◽  
Guorun Fang ◽  
Yangmin Xie

Dense scanning is an effective solution for refined geometrical modeling applications. Previous studies in dense environment modeling have mostly focused on data acquisition techniques without emphasizing autonomous target recognition and accurate 3D localization, and therefore lacked the capability to output semantic information about the scene. This article aims to fill that gap. We solve two critical problems: (1) system calibration to ensure detail fidelity for 3D objects with fine structures, and (2) fast outlier exclusion to improve 3D bounding-box accuracy. A lightweight fuzzy neural network is proposed to remove most background outliers; experiments show it to be effective for various objects in different situations. With precise and clean data ensured by these two techniques, our system can extract target objects from the original point clouds and, more importantly, accurately estimate their center locations and orientations.
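The outlier-exclusion step above can be sketched in a few lines. The paper's lightweight fuzzy neural network is not reproduced here; this stand-in scores each point with a hand-written fuzzy membership function, so the `center`, `radius`, `steepness`, and `threshold` parameters are all hypothetical illustrations of the idea, not the authors' model.

```python
import numpy as np

def fuzzy_foreground_mask(points, center, radius, steepness=4.0, threshold=0.5):
    """Score each point with a fuzzy membership in [0, 1] that decays with
    distance from an assumed region of interest, then threshold the score.

    Illustrative stand-in for the paper's lightweight fuzzy neural network;
    the membership function and all parameters are assumptions.
    """
    d = np.linalg.norm(points - center, axis=1)
    # Sigmoid-shaped membership: ~1 inside the radius, ~0 well outside it.
    membership = 1.0 / (1.0 + np.exp(steepness * (d - radius)))
    return membership >= threshold

# Toy scene: a dense cluster (the target) plus scattered background points.
rng = np.random.default_rng(0)
target = rng.normal(0.0, 0.1, size=(200, 3))
background = rng.uniform(-5.0, 5.0, size=(50, 3))
cloud = np.vstack([target, background])
mask = fuzzy_foreground_mask(cloud, center=np.zeros(3), radius=0.5)
```

On this toy cloud the mask keeps essentially all target points and rejects almost all background points, which is the behavior the paper's network is trained to produce for real scenes.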


Sensors ◽  
2018 ◽  
Vol 18 (7) ◽  
pp. 2302 ◽  
Author(s):  
Fabio Oliveira ◽  
Anderson Souza ◽  
Marcelo Fernandes ◽  
Rafael Gomes ◽  
Luiz Goncalves

Technological innovations in the hardware of RGB-D sensors have enabled the acquisition of 3D point clouds in real time, and various applications related to the 3D world have consequently arisen and are receiving increasing attention from researchers. Nevertheless, one of the main remaining problems is the demand for computationally intensive processing, which requires optimized approaches to 3D vision modeling, especially when tasks must be performed in real time. A previously proposed multi-resolution 3D model known as foveated point clouds is a possible solution to this problem, but it is limited to a single foveated structure with context-dependent mobility. In this work, we propose a new solution for data reduction and feature detection using multifoveation in the point cloud. Applying several foveated structures, however, considerably increases processing, since intersections between regions of distinct structures are processed multiple times. To solve this problem, our proposal avoids processing redundant regions, which reduces processing time even further. This approach can be used to identify objects in 3D point clouds, one of the key tasks for real-time applications such as robot vision, with efficient synchronization allowing the validation of the model and verification of its applicability in the context of computer vision. Experimental results demonstrate a performance gain of at least 27.21% in processing time while retaining the main features of the original and maintaining the recognition quality rate in comparison with state-of-the-art 3D object recognition methods.
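The key idea of avoiding redundant processing in overlapping foveal regions can be sketched as follows: each point is decided exactly once, against the distance to its *nearest* fovea, rather than once per foveated structure. The exponential density falloff and the parameter names are hypothetical; the paper's actual multifoveation model is more elaborate.

```python
import numpy as np

def multifoveate(points, foveae, keep_prob_at_fovea=1.0, falloff=1.0, seed=0):
    """Keep points with a probability that decreases with distance to the
    NEAREST fovea, so each point is processed exactly once even when the
    regions of several foveated structures intersect.

    A minimal sketch of the single-pass idea; the falloff law and its
    parameters are assumptions, not the paper's model.
    """
    rng = np.random.default_rng(seed)
    # Distance from every point to every fovea; the minimum ensures that
    # overlapping foveal regions are not visited multiple times.
    d = np.linalg.norm(points[:, None, :] - foveae[None, :, :], axis=2)
    nearest = d.min(axis=1)
    keep_prob = keep_prob_at_fovea * np.exp(-falloff * nearest)
    return points[rng.random(len(points)) < keep_prob]

# Toy cloud with two foveae: density stays high near both fixation points.
rng = np.random.default_rng(2)
pts = rng.uniform(-5.0, 5.0, size=(2000, 3))
foveae = np.array([[0.0, 0.0, 0.0], [4.0, 4.0, 4.0]])
reduced = multifoveate(pts, foveae)
```

The reduced cloud is biased toward points near one of the foveae, which is the data-reduction behavior multifoveation is designed to give.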


2009 ◽  
Vol 22 (2) ◽  
pp. 235-243 ◽  
Author(s):  
Rainer Grimmer ◽  
Björn Eskofier ◽  
Heiko Schlarb ◽  
Joachim Hornegger

Author(s):  
K. Kawakami ◽  
K. Hasegawa ◽  
L. Li ◽  
H. Nagata ◽  
M. Adachi ◽  
...  

Abstract. The recent development of 3D scanning technologies has made it possible to quickly and accurately record various 3D objects in the real world. The scanned data take the form of large-scale point clouds that describe complex 3D structures of the target objects and the surrounding scenes. The complexity becomes significant in cases where a scanned object has internal 3D structures and the acquired point cloud merges the scanning results of both the interior and the surface shapes. To observe the whole 3D structure of such complex point-based objects, the point-based transparent visualization we recently proposed is useful, because internal 3D structures as well as surface shapes can be observed in high-quality see-through 3D images. However, transparent visualization sometimes shows too much information, making the generated images confusing. To address this problem, we propose combining “edge highlighting” with transparent visualization. This combination makes the created see-through images much more understandable, because the 3D edges of visualized shapes can be highlighted as high-curvature areas. To make the combination more effective, we also propose a new edge-highlighting method applicable to 3D scanned point clouds, which we call “opacity-based edge highlighting”; it exploits the effect of transparency to make the 3D edge regions look clearer. The proposed method works well for both sharp (high-curvature) and soft (low-curvature) 3D edges. We present several experiments on real 3D scanned point clouds that demonstrate the method’s effectiveness.
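A common way to obtain the high-curvature areas mentioned above is a PCA "surface variation" measure over each point's neighborhood, which can then be mapped to opacity. This sketch uses that generic measure with an assumed linear opacity mapping (`base_opacity`, `gain` are hypothetical); the paper's opacity-based edge highlighting is a more refined formulation.

```python
import numpy as np
from scipy.spatial import cKDTree

def edge_opacity(points, k=16, base_opacity=0.1, gain=5.0):
    """Assign each point an opacity that grows with a PCA-based surface
    variation estimate, so high-curvature (edge) regions render more
    opaquely in a transparent visualization.

    A minimal sketch: the opacity mapping and its parameters are
    assumptions, not the method from the paper.
    """
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    opacities = np.empty(len(points))
    for i, nb in enumerate(idx):
        cov = np.cov(points[nb].T)
        evals = np.sort(np.linalg.eigvalsh(cov))
        # Surface variation: smallest eigenvalue over the eigenvalue sum
        # (near 0 on flat patches, larger near sharp edges and corners).
        variation = evals[0] / max(evals.sum(), 1e-12)
        opacities[i] = np.clip(base_opacity + gain * variation, 0.0, 1.0)
    return opacities

# Two orthogonal planes meeting along a crease: the crease points should
# receive visibly higher opacity than the flat interiors.
t = np.linspace(0.0, 1.0, 15)
g1 = np.array([[x, y, 0.0] for x in t for y in t])        # plane z = 0
g2 = np.array([[0.0, y, z] for y in t for z in t[1:]])    # plane x = 0
cloud = np.vstack([g1, g2])
op = edge_opacity(cloud)
```

Rendering with these per-point opacities keeps flat regions see-through while the crease stands out, which is the intended effect of opacity-based edge highlighting.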


Author(s):  
S. Goebbels ◽  
C. Dalitz

Abstract. This paper deals with the 3D reconstruction of bridges from airborne laser scanning point clouds and cadastral footprints. The generated realistic 3D objects can be used to enhance city models. While other studies have focused on bridge decks to fill gaps in digital elevation models, this paper focuses on the decomposition of superstructures into construction elements such as pylons, cables, and arches. For this purpose, the bridge type is classified, and a combination of model-based and data-based methods is used that builds on the detection of arcs, catenaries, and line segments in the point clouds. The described techniques were successfully applied to create 3D models of the Rhine bridges in the German state of North Rhine-Westphalia.
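The catenary detection mentioned above can be illustrated with a simple least-squares fit of the catenary equation to projected cable points. The parameterization, initial guess, and synthetic data below are assumptions for illustration only, not the authors' detection pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_catenary(x, y):
    """Fit a catenary y = y0 + a*(cosh((x - x0)/a) - 1) to 2D points,
    e.g. a main cable of a suspension bridge projected onto a vertical
    plane. A generic sketch, not the paper's exact procedure.
    """
    def catenary(x, a, x0, y0):
        return y0 + a * (np.cosh((x - x0) / a) - 1.0)
    p0 = (1.0, x.mean(), y.min())  # crude but serviceable initial guess
    params, _ = curve_fit(catenary, x, y, p0=p0, maxfev=10000)
    return params  # (a, x0, y0)

# Synthetic cable points with slight noise (a=1.5, x0=0, y0=5 assumed).
rng = np.random.default_rng(1)
x = np.linspace(-2.0, 2.0, 80)
y = 5.0 + 1.5 * (np.cosh(x / 1.5) - 1.0) + rng.normal(0.0, 0.01, size=x.shape)
a, x0, y0 = fit_catenary(x, y)
```

The recovered parameters match the generating curve closely, which is what allows fitted catenaries (versus arcs or line segments) to discriminate cable elements during decomposition.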


Sensors ◽  
2021 ◽  
Vol 21 (17) ◽  
pp. 5850
Author(s):  
Wei Li ◽  
Hongtai Cheng ◽  
Xiaohua Zhang

Recognizing 3D objects and estimating their poses in a complex scene is a challenging task. Sample Consensus Initial Alignment (SAC-IA) is a commonly used point-cloud-based method for this goal, but its efficiency is low and it cannot be applied in real-time applications. This paper analyzes the most time-consuming parts of the SAC-IA algorithm, sample generation and evaluation, and proposes two improvements to increase efficiency. In the initial alignment stage, instead of sampling the key points, correspondence pairs between model and scene key points are generated in advance and chosen in each iteration, which reduces redundant correspondence-search operations. In addition, a geometric filter is proposed to keep invalid samples from reaching the evaluation process, the most time-consuming operation because it requires transforming one point cloud and calculating its distance to the other. The geometric filter significantly increases sample quality and reduces the number of required samples. Experiments are performed on our own datasets, captured with a Kinect v2 camera, and on the Bologna 1 dataset. The results show that the proposed method increases the efficiency of the original SAC-IA method by 10–30× without sacrificing accuracy.
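The geometric-filter idea rests on a simple invariant: a rigid transform preserves pairwise distances, so a sampled set of correspondences whose model-side and scene-side distances disagree cannot come from a valid alignment and can be rejected before the expensive transform-and-score step. The all-pairs check and the tolerance below are assumptions sketching that idea, not the paper's exact criterion.

```python
import numpy as np

def geometrically_consistent(model_pts, scene_pts, rel_tol=0.05):
    """Cheap pre-evaluation filter for a sampled set of correspondences:
    reject the sample if any model-side pairwise distance disagrees with
    the matching scene-side distance beyond a relative tolerance.

    A sketch of the filtering idea; tolerance and all-pairs check are
    assumptions, not the paper's exact criterion.
    """
    n = len(model_pts)
    for i in range(n):
        for j in range(i + 1, n):
            dm = np.linalg.norm(model_pts[i] - model_pts[j])
            ds = np.linalg.norm(scene_pts[i] - scene_pts[j])
            if abs(dm - ds) > rel_tol * max(dm, ds, 1e-12):
                return False  # inconsistent sample: skip costly evaluation
    return True

# A correct correspondence set (rigid motion of a triangle) passes the
# filter; corrupting one correspondence makes it fail.
model = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
th = np.deg2rad(30.0)
R = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0, 0.0, 1.0]])
scene = model @ R.T + np.array([1.0, 2.0, 3.0])
ok = geometrically_consistent(model, scene)
bad = scene.copy()
bad[2] = [9.0, 9.0, 9.0]
rejected = geometrically_consistent(model, bad)
```

Because the check is O(n²) on a handful of points, it is orders of magnitude cheaper than transforming and scoring a full cloud, which is why it pays to run it on every sample.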


2019 ◽  
Vol 13 (4) ◽  
pp. 464-474
Author(s):  
Shinichi Sumiyoshi ◽  
Yuichi Yoshida

While several methods have been proposed for detecting three-dimensional (3D) objects in semi-real time by sparsely acquiring features from 3D point clouds, the detection of strongly occluded objects still poses difficulties. Herein, we propose a method of detecting strongly occluded objects by setting up virtual auxiliary point clouds in the vicinity of the target object. By generating auxiliary point clouds only in the occluded space estimated from a detected object at the front of the sensor-observed region, i.e., the occluder, the processing efficiency and accuracy are improved. Experiments are performed with various strongly occluded scenes based on real environmental data, and the results confirm that the proposed method is capable of achieving a mean processing time of 0.5 s for detecting strongly occluded objects.
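Placing the virtual auxiliary points only in the occluded space can be sketched geometrically: extend each sensor-to-occluder ray a few steps past the occluder surface. The step size and count below are hypothetical; this only illustrates where the auxiliary cloud lives, not the paper's detection pipeline.

```python
import numpy as np

def auxiliary_points(sensor, occluder_pts, depth_step=0.1, n_steps=5):
    """Generate virtual auxiliary points inside the space occluded by a
    detected front object (the occluder): extend each sensor-to-occluder
    ray past the occluder surface by a few depth steps.

    Step size and count are assumptions used only for illustration.
    """
    dirs = occluder_pts - sensor
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    steps = depth_step * np.arange(1, n_steps + 1)
    # One auxiliary point per (occluder point, step) pair, strictly
    # behind the occluder along the viewing ray.
    aux = occluder_pts[:, None, :] + steps[None, :, None] * dirs[:, None, :]
    return aux.reshape(-1, 3)

# Sensor at the origin, a small occluder patch at depth z = 1: all
# auxiliary points land behind the patch, inside its shadow volume.
sensor = np.zeros(3)
occ = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0]])
aux = auxiliary_points(sensor, occ)
```

Restricting the auxiliary cloud to this shadow volume is what keeps both the extra processing and the false-positive surface small.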


Author(s):  
J. Yan ◽  
S. Zlatanova ◽  
M. Aleksandrov ◽  
A. A. Diakite ◽  
C. Pettit

Abstract. 3D modelling of precincts and cities has advanced significantly in recent decades as we move towards the concept of the Digital Twin. Many 3D city models have been created, but a large portion of them neglects to represent terrain and buildings accurately; very often the surface is either considered planar or is not represented at all. On the other hand, many digital terrain models (DTMs) have been created as 2.5D triangular irregular networks (TINs) or grids for applications such as water management, line-of-sight or shadow computation, tourism, land planning, telecommunication, and military operations and communications. 3D city models need to represent both the 3D objects and the terrain in one consistent model, but many challenges remain. A critical issue when integrating 3D objects and terrain is identifying the valid intersection between the 2.5D terrain and the 3D objects: commonly, 3D objects may partially float over or sink into the terrain, the depth of the underground parts might not be known, or the accuracy of the data sets might differ. This paper discusses some of these issues and presents an approach for a consistent 3D reconstruction of LOD1 models on the basis of 3D point clouds, a DTM, and 2D building footprints. Such models are widely used for urban planning, city analytics, and environmental analysis. The proposed method can be easily extended to higher LODs or BIM models.
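The core of an LOD1 reconstruction from footprint, DTM, and point cloud can be illustrated as a block extrusion: ground the base on the terrain under the footprint (a minimum keeps the block from floating) and take the roof height from a high percentile of the building's point heights. The percentile choice and min-based grounding are assumed conventions; the paper's consistency rules for the terrain intersection are considerably richer.

```python
import numpy as np

def lod1_block(footprint, dtm_samples, roof_points, roof_percentile=90):
    """Derive a simple LOD1 block model for one building.

    base   : minimum DTM elevation under the footprint, so the extruded
             block never floats above the terrain.
    height : distance from the base to a high percentile of the point
             cloud's z values (robust against roof outliers).

    A minimal sketch under assumed conventions, not the paper's method.
    """
    base = float(np.min(dtm_samples))
    roof = float(np.percentile(roof_points[:, 2], roof_percentile))
    return {"footprint": footprint, "base": base,
            "height": max(roof - base, 0.0)}

# Toy building: 10 m x 10 m footprint, terrain around 10 m elevation,
# roof points around 20 m elevation.
footprint = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])
dtm = np.array([10.0, 10.5, 10.2])
roof = np.array([[0.0, 0.0, 19.9], [1.0, 1.0, 20.0], [2.0, 2.0, 20.1]])
block = lod1_block(footprint, dtm, roof)
```

Extruding `footprint` from `base` by `height` yields the LOD1 solid; higher LODs would replace the flat roof with detected roof planes.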

