Embedded Processing and Compression of 3D Sensor Data for Large Scale Industrial Environments

Sensors, 2019, Vol. 19 (3), pp. 636
Author(s): Joacim Dybedal, Atle Aalerud, Geir Hovland

This paper presents a scalable embedded solution for processing and transferring 3D point cloud data. Sensors based on the time-of-flight principle generate data which are processed on a local embedded computer and compressed using an octree-based scheme. The compressed data are transferred to a central node, where the individual point clouds from several nodes are decompressed and filtered based on a novel method for generating intensity values for sensors which do not natively produce such a value. The paper presents experimental results from a relatively large industrial robot cell with an approximate size of 10 m × 10 m × 4 m. The main advantage of processing point cloud data locally on the nodes is scalability. The proposed solution could, with a dedicated Gigabit Ethernet local network, be scaled up to approximately 440 sensor nodes, limited only by the processing power of the central node that receives the compressed data from the local nodes. A compression ratio of 40.5 was obtained when compressing a point cloud stream from a single Microsoft Kinect V2 sensor using an octree resolution of 4 cm.
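
The core idea behind octree-based compression can be illustrated in a few lines of code. The sketch below (plain NumPy; the function names and the uniform-leaf simplification are ours, not the authors' implementation) quantizes points into 4 cm voxels and keeps only the occupied cells, which is essentially what an octree leaf encoding transmits:

```python
import numpy as np

def voxel_compress(points, resolution=0.04):
    """Quantize points to voxel indices at the given resolution (metres)
    and keep one occupied cell per voxel, as in an octree leaf encoding."""
    origin = points.min(axis=0)
    idx = np.floor((points - origin) / resolution).astype(np.int32)
    occupied = np.unique(idx, axis=0)          # duplicate points collapse
    return origin, occupied

def voxel_decompress(origin, occupied, resolution=0.04):
    """Recover voxel-centre coordinates from the compressed representation."""
    return origin + (occupied + 0.5) * resolution

# Example: a dense cloud compresses to far fewer voxel centres.
cloud = np.random.rand(100_000, 3) * [10.0, 10.0, 4.0]  # cell-sized volume
origin, occ = voxel_compress(cloud)
restored = voxel_decompress(origin, occ)
print(len(cloud), "->", len(occ), "voxels")
```

Decompression returns voxel centres, so the reconstruction error is bounded by half the octree resolution.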


Author(s): Alexander Bigalke, Lasse Hansen, Jasper Diesel, Mattias P. Heinrich

Purpose: Body weight is a crucial parameter for patient-specific treatments, particularly in the context of proper drug dosage. Contactless weight estimation from visual sensor data constitutes a promising approach to overcome challenges arising in emergency situations. Machine learning-based methods have recently been shown to perform accurate weight estimation from point cloud data. The proposed methods, however, are designed for controlled conditions in terms of visibility and position of the patient, which limits their practical applicability. In this work, we aim to decouple accurate weight estimation from such specific conditions by predicting the weight of covered patients from voxelized point cloud data. Methods: We propose a novel deep learning framework, which comprises two 3D CNN modules solving the given task in two separate steps. First, we train a 3D U-Net to virtually uncover the patient, i.e., to predict the patient's volumetric surface without a cover. Second, the patient's weight is predicted from this 3D volume by means of a 3D CNN architecture optimized for weight regression. Results: We evaluate our approach on a lying pose dataset (SLP) under two different cover conditions. The proposed framework considerably improves on the baseline model by up to 16% and reduces the gap between the accuracy of weight estimates for covered and uncovered patients by up to 52%. Conclusion: We present a novel pipeline to estimate the weight of patients who are covered by a blanket. Our approach relaxes the specific conditions that were required for accurate weight estimates by previous contactless methods and thus constitutes an important step towards fully automatic weight estimation in clinical practice.
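
To make the two-step design concrete, here is a minimal PyTorch sketch. The module names and layer choices are placeholders rather than the authors' architecture; the point is the composition of a volumetric "uncovering" network with a 3D CNN weight regressor:

```python
import torch
import torch.nn as nn

# Stage 1 stand-in: a volumetric encoder-decoder (the paper uses a 3D U-Net)
# mapping a voxelized covered-patient volume to an uncovered occupancy volume.
class Uncover3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 3, padding=1), nn.Sigmoid(),  # occupancy in [0, 1]
        )
    def forward(self, x):
        return self.net(x)

# Stage 2: a 3D CNN regressor from the predicted volume to a scalar weight.
class WeightRegressor3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(32, 1)
    def forward(self, x):
        return self.head(self.features(x).flatten(1))

voxels = torch.rand(2, 1, 32, 64, 32)        # batch of voxelized point clouds
weight = WeightRegressor3D()(Uncover3D()(voxels))
print(weight.shape)                          # torch.Size([2, 1])
```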


Sensors, 2021, Vol. 21 (3), pp. 884
Author(s): Chia-Ming Tsai, Yi-Horng Lai, Yung-Da Sun, Yu-Jen Chung, Jau-Woei Perng

Numerous sensors can obtain images or point cloud data on land; in water, however, the rapid attenuation of electromagnetic signals and the lack of light restrict such sensing. This study extends two- and three-dimensional detection technologies to the underwater detection of abandoned tires. A three-dimensional acoustic sensor, the BV5000, is used to collect underwater point cloud data. Pre-processing steps are proposed to remove noise and the seabed from the raw data. The point clouds are then processed into two data types: a 2D image and a 3D point cloud. Deep learning methods of matching dimensionality are used to train the models. In the two-dimensional method, the point cloud is transformed into a bird's-eye-view image, and the Faster R-CNN and YOLOv3 network architectures are used to detect tires. In the three-dimensional method, the point cloud associated with a tire is cut out from the raw data and used as training data, and the PointNet and PointConv network architectures are used for tire classification. The results show that both approaches provide good accuracy.
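
The bird's-eye-view conversion in the two-dimensional branch can be sketched as follows. This is a minimal NumPy version assuming a max-height encoding per cell; the ranges, cell size, and encoding are illustrative choices, not the study's exact parameters:

```python
import numpy as np

def point_cloud_to_bev(points, x_range=(0, 20), y_range=(-10, 10), cell=0.05):
    """Project (x, y, z) points onto a top-down image; each pixel stores the
    maximum height observed in that cell (a common BEV encoding)."""
    w = int((x_range[1] - x_range[0]) / cell)
    h = int((y_range[1] - y_range[0]) / cell)
    bev = np.zeros((h, w), dtype=np.float32)
    keep = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[keep]
    cols = ((pts[:, 0] - x_range[0]) / cell).astype(int)
    rows = ((pts[:, 1] - y_range[0]) / cell).astype(int)
    np.maximum.at(bev, (rows, cols), pts[:, 2])   # max height per cell
    return bev

cloud = np.random.rand(10_000, 3) * [20.0, 20.0, 2.0] - [0.0, 10.0, 0.0]
image = point_cloud_to_bev(cloud)
print(image.shape)
```

The resulting image can then be fed to standard 2D detectors such as Faster R-CNN or YOLOv3.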


Author(s): Y. Hori, T. Ogawa

The implementation of laser scanning in the field of archaeology provides an entirely new dimension in research and surveying. It allows us to digitally recreate individual objects, or entire cities, using millions of three-dimensional points grouped together into what are referred to as "point clouds". The visualization of the point cloud data used in final reports by archaeologists and architects is usually produced as a JPG or TIFF file. Beyond visualization, the re-examination of older data and new surveys of Roman construction, applying remote-sensing technology for precise and detailed measurements, yield information that may lead to revised drawings of ancient buildings, drawings previously adduced as evidence without any consideration of their degree of accuracy, and can thus open new lines of research on these buildings. We used laser scanners in the field because of their speed, comprehensive coverage, accuracy, and flexibility of data manipulation. We therefore skipped much of the post-processing and focused on the images created from the metadata, aligned simply using a tool that extends an automatic feature-matching algorithm, together with a popular renderer that can produce graphic results.


2021, Vol. 10 (9), pp. 617
Author(s): Su Yang, Miaole Hou, Ahmed Shaker, Songnian Li

The digital documentation of cultural relics plays an important role in archiving, protection, and management. In the field of cultural heritage, three-dimensional (3D) point cloud data are effective at expressing complex geometric structures and the geometric details on the surface of cultural relics, but lack semantic information. To elaborate the geometric information of cultural relics and add meaningful semantic information, we propose a modeling and processing method for smart point clouds of cultural relics with complex geometries. An information modeling framework for complex geometric cultural relics was designed based on the concept of smart point clouds, in which 3D point cloud data are organized along the time dimension and across spatial scales indicating different levels of geometric detail. The proposed model allows smart point clouds, or any subset of them, to be linked with semantic information or related documents, so the framework can describe rich semantic information together with high-level geometric detail. A case study of the Dazu Thousand-Hand Bodhisattva Statue, which is characterized by a variety of complex geometries, reveals that our proposed framework is capable of modeling and processing the statue with excellent applicability and expansibility. This work provides insights into the sustainable development of cultural heritage protection globally.
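
To illustrate the kind of organization the framework describes, a toy container might associate a point subset with its acquisition epoch, spatial scale, semantic labels, and linked documents. All names and fields below are illustrative, not the authors' schema:

```python
from dataclasses import dataclass, field

# Hypothetical container sketching the paper's idea: point cloud data
# organized by acquisition time and spatial scale, with semantic links.
@dataclass
class SmartPointCloudNode:
    epoch: str                                      # acquisition time
    scale: int                                      # level of detail (0 = coarsest)
    points: list = field(default_factory=list)      # (x, y, z) tuples
    semantics: dict = field(default_factory=dict)   # label -> value
    documents: list = field(default_factory=list)   # linked report paths
    children: list = field(default_factory=list)    # finer-scale subsets

statue_hand = SmartPointCloudNode(
    epoch="2015-09", scale=2,
    points=[(0.1, 0.2, 1.5)],
    semantics={"element": "hand", "material": "gilded stone"},
    documents=["conservation_report_2015.pdf"],
)
print(statue_hand.semantics["element"])
```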


2021, Vol. 65 (1), pp. 10501-1-10501-9
Author(s): Jiayong Yu, Longchen Ma, Maoyi Tian, Xiushan Lu

The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used for geomatics owing to its efficient data acquisition and convenient operation. However, due to the limited carrying capacity of a UAV, the sensors integrated into the ULS must be small and lightweight, which decreases the density of the collected scanning points and hampers registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds, converting the problem of registering point cloud data and image data into one of matching feature points between two images. First, a point cloud is selected to produce an intensity image. Subsequently, the corresponding feature points of the intensity image and the optical image are matched, and the exterior orientation parameters are solved using a collinearity equation based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical image, to generate a true-color point cloud. The experimental results demonstrate the high registration accuracy and fusion speed of the proposed method, confirming its accuracy and effectiveness.
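
The feature-matching step at the heart of the registration can be sketched with standard tools. The version below uses OpenCV's ORB detector on the rendered intensity image and the optical image; the function and variable names are ours, and the paper's actual pipeline continues by solving the exterior orientation from such correspondences:

```python
import cv2
import numpy as np

def match_intensity_to_optical(intensity_img, optical_img, max_matches=50):
    """Match ORB features between a LiDAR intensity image and an optical
    image; both inputs are expected as single-channel uint8 arrays."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(intensity_img, None)
    kp2, des2 = orb.detectAndCompute(optical_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches[:max_matches]])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches[:max_matches]])
    return pts1, pts2   # corresponding feature points in the two images
```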


2020, Vol. 9 (2), pp. 72
Author(s): Sami El-Mahgary, Juho-Pekka Virtanen, Hannu Hyyppä

The importance of being able to separate the semantics from the actual (X,Y,Z) coordinates in a point cloud has been actively brought up in recent research. However, there is still no widely used or accepted data layout paradigm for efficiently storing and managing such semantic point cloud data. In this paper, we present a simple data layout that makes use of the semantics and allows for quick queries. The underlying idea is especially suited for a programming approach (e.g., queries programmed via Python), but we also present an even simpler implementation of the underlying technique on a well-known relational database management system (RDBMS), namely PostgreSQL. The obtained query results suggest that the presented approach can successfully handle point and range queries on large point clouds.
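
As a rough illustration of the idea, one simple semantics-aware layout on PostgreSQL is a narrow table per semantic class, so that a class-restricted range query never touches points of other classes. The sketch below shows such a layout queried from Python via psycopg2; the table names, columns, and connection string are assumptions, not the paper's exact schema:

```python
import psycopg2

# Connect to a hypothetical point cloud database.
conn = psycopg2.connect("dbname=pointclouds user=postgres")
cur = conn.cursor()

# One narrow table per semantic class keeps class-restricted scans small.
cur.execute("""
    CREATE TABLE IF NOT EXISTS points_vegetation (
        x REAL, y REAL, z REAL
    );
""")

# A range query on one class avoids touching points of other classes.
cur.execute("""
    SELECT x, y, z FROM points_vegetation
    WHERE x BETWEEN %s AND %s AND y BETWEEN %s AND %s;
""", (100.0, 200.0, 50.0, 150.0))
rows = cur.fetchall()

conn.commit()
cur.close(); conn.close()
```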


2020, Vol. 12 (11), pp. 1800
Author(s): Maarten Bassier, Maarten Vergauwen

The processing of remote sensing measurements into Building Information Modeling (BIM) is a popular subject in current literature. An important step in the process is the enrichment of the geometry with the topology of the wall observations to create a logical model. However, this remains an unsolved task, as methods struggle to deal with the noise, incompleteness, and complexity of point cloud data of building scenes. Current methods impose severe abstractions, such as Manhattan-world assumptions and single-story procedures, to overcome these obstacles, so a general data processing approach is still missing. In this paper, we propose a method that solves these shortcomings and creates a logical BIM model in an unsupervised manner. More specifically, we propose a connection evaluation framework that takes as input a set of preprocessed point clouds of a building's wall observations and computes the best-fit topology between them. We transcend the current state of the art by processing point clouds of straight, curved, and polyline-based walls alike. We also consider multiple connection types in a novel reasoning framework that decides which operations best reconstruct the topology of the walls. The geometry and topology produced by our method are directly usable by BIM processes, as they are structured in conformance with the IFC data structure. Experimental results on the Stanford 2D-3D-Semantics dataset (2D-3D-S) show that the proposed method is a promising framework for reconstructing complex multi-story wall elements in an unsupervised manner.
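
One building block of such a connection evaluation can be sketched with simple 2D geometry: testing whether two wall axes form a corner ("L") candidate. The toy version below uses shapely, and the gap threshold is an assumption rather than a value from the paper:

```python
from shapely.geometry import LineString

# Toy stand-in for one connection-type test in a wall topology framework:
# two wall axes form a corner ("L") candidate if they intersect, or if
# their gap is small enough to close by extending one axis.
def is_corner_candidate(axis_a: LineString, axis_b: LineString, max_gap=0.3):
    return axis_a.intersects(axis_b) or axis_a.distance(axis_b) <= max_gap

wall_a = LineString([(0.0, 0.0), (5.0, 0.0)])
wall_b = LineString([(5.1, 0.2), (5.1, 4.0)])  # nearly touching endpoint
print(is_corner_candidate(wall_a, wall_b))     # True: small gap, corner fit
```

A full framework would score several connection types (corner, T-junction, collinear extension) per wall pair and keep the best-fitting one.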


Author(s): M. Bassier, R. Klein, B. Van Genechten, M. Vergauwen

The automated reconstruction of Building Information Modeling (BIM) objects from point cloud data is still ongoing research. A key aspect is the creation of accurate wall geometry, as it forms the basis for the further reconstruction of objects in a BIM. After segmenting and classifying the initial point cloud, the labelled segments are processed and the wall topology is reconstructed. However, the procedure is challenging due to noise, occlusions, and the complexity of the input data. In this work, a method is presented to automatically reconstruct consistent wall geometry from point clouds. More specifically, the use of room information is proposed to aid the creation of the wall topology. First, a set of partial walls is constructed based on classified planar primitives. Next, the rooms are identified using the retrieved wall information along with the floors and ceilings. The wall topology is computed by intersecting the partial walls, conditioned on the room information. The final wall geometry is defined by creating IfcWallStandardCase objects conforming to the IFC4 standard. The result is a set of walls matching the as-built conditions of a building. The experiments prove that the method is a reliable framework for wall reconstruction from unstructured point cloud data. The implementation of room information also reduces the rate of false positives in the wall topology. Given the walls, ceilings, and floors, 94% of the rooms are correctly identified. A key advantage of the proposed method is that it deals with complex rooms and is not bound to single storeys.
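
The room conditioning can be illustrated in the same toy 2D setting: a candidate wall intersection is retained only if it lies on (or near) a recovered room boundary. The polygon and tolerance below are illustrative assumptions:

```python
from shapely.geometry import Point, Polygon

# A recovered rectangular room footprint (illustrative).
room = Polygon([(0, 0), (5, 0), (5, 4), (0, 4)])

def supported_by_room(candidate_xy, room_polygon, tol=0.1):
    """Keep a candidate wall intersection only if it sits on the room
    boundary, within a small tolerance."""
    return room_polygon.exterior.distance(Point(candidate_xy)) <= tol

print(supported_by_room((5.0, 0.0), room))  # True: corner on room boundary
print(supported_by_room((2.5, 2.0), room))  # False: interior point
```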


Author(s): F. Poux, R. Neuville, P. Hallot, R. Billen

This paper proposes an interoperable model for managing high-dimensional point clouds while integrating semantics. Point clouds from sensors are a direct source of information that physically describes the 3D state of the recorded environment. As such, they are an exhaustive representation of the real world at every scale: 3D reality-based spatial data. Their generation is increasingly fast, but processing routines and data models lack the knowledge needed to reason from information extraction rather than interpretation. The enhanced Smart Point Cloud model developed here brings intelligence to point clouds via three connected meta-models, linking available knowledge and classification procedures to permit semantic injection. Interoperability drives the model's adaptation to potentially many applications through specialized domain ontologies. A first prototype, implemented in Python on a PostgreSQL database, allows semantic and spatial concepts to be combined for basic hybrid queries on different point clouds.
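
A basic hybrid query of the kind the prototype supports might combine a semantic predicate from a domain ontology with a spatial bounding-box predicate in one SQL statement. The schema, class label, and connection string below are illustrative assumptions, not the authors' implementation:

```python
import psycopg2

# Connect to a hypothetical Smart Point Cloud database.
conn = psycopg2.connect("dbname=smartpc user=postgres")
cur = conn.cursor()

# Hybrid query: a semantic filter (ontology class) joined with a spatial one.
cur.execute("""
    SELECT p.x, p.y, p.z
    FROM points AS p
    JOIN semantics AS s ON s.point_id = p.id
    WHERE s.class = %s                 -- semantic predicate
      AND p.x BETWEEN %s AND %s        -- spatial predicate
      AND p.y BETWEEN %s AND %s;
""", ("Wall", 0.0, 10.0, 0.0, 10.0))
walls_in_box = cur.fetchall()

cur.close(); conn.close()
```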

