Surface Reconstruction of Freeform Objects Based on Multiresolution Volumetric Method

2003 ◽  
Vol 3 (4) ◽  
pp. 334-338 ◽  
Author(s):  
Sergei Azernikov ◽  
Alex Miropolsky ◽  
Anath Fischer

Recently developed 3D scanning devices are capable of capturing point clouds as well as additional information, such as normals and texture. This paper describes a new and fast reverse engineering method for creating a 3D computerized model from data captured by such contemporary 3D scanning devices. The proposed method aggregates large-scale 3D scanned data into an extended Hierarchical Space Decomposition Model (HSDM) based on an octree data structure. This model can represent both an object’s boundary surface and its interior volume. The HSDM enables data reduction while preserving sharp geometrical features and object topology. As a result, the execution time of the reconstruction process is significantly reduced. Moreover, the proposed model naturally allows multiresolution surface reconstruction, represented by a mesh with regular properties. Based on the proposed volumetric model, the surface reconstruction process becomes more robust and stable with respect to sampling noise.
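The data-reduction idea behind a hierarchical space decomposition can be sketched as follows. This is only a minimal illustration of octree-style aggregation (bucket points into octants and collapse each leaf cell to its centroid); the function name, the fixed recursion depth, and centroid aggregation are assumptions, not the authors' HSDM implementation:

```python
# Illustrative octree-based point aggregation (assumed sketch, not the
# paper's HSDM): recursively bucket points into octants and, at the
# maximum depth, replace each bucket by its centroid.

def octree_reduce(points, lo, hi, depth):
    """Reduce a point set: one representative point per leaf cell."""
    if not points:
        return []
    if depth == 0:
        n = len(points)
        return [tuple(sum(p[i] for p in points) / n for i in range(3))]
    mid = tuple((l + h) / 2 for l, h in zip(lo, hi))
    buckets = {}
    for p in points:
        key = tuple(int(p[i] >= mid[i]) for i in range(3))
        buckets.setdefault(key, []).append(p)
    out = []
    for key, pts in buckets.items():
        clo = tuple(mid[i] if key[i] else lo[i] for i in range(3))
        chi = tuple(hi[i] if key[i] else mid[i] for i in range(3))
        out.extend(octree_reduce(pts, clo, chi, depth - 1))
    return out
```

Two nearby points falling in the same leaf cell collapse to a single centroid, while isolated points survive unchanged, which is the essence of data reduction with topology preserved at the chosen resolution.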

Information ◽  
2020 ◽  
Vol 11 (2) ◽  
pp. 79 ◽  
Author(s):  
Xiaoyu Han ◽  
Yue Zhang ◽  
Wenkai Zhang ◽  
Tinglei Huang

Relation extraction is a vital task in natural language processing. It aims to identify the relationship between two specified entities in a sentence. Besides the information contained in the sentence itself, additional information about the entities has been shown to be helpful in relation extraction. Additional information such as entity types obtained by NER (Named Entity Recognition) and descriptions provided by a knowledge base both have limitations. In Chinese relation extraction, however, there is another way to provide additional information that overcomes these limitations: since Chinese characters usually have explicit meanings and carry more information than English letters, we suggest that the characters that constitute the entities can provide information helpful for the relation extraction task, especially on large-scale datasets. This assumption has never been verified before; the main obstacle is the lack of large-scale Chinese relation datasets. In this paper, first, we generate a large-scale Chinese relation extraction dataset based on a Chinese encyclopedia. Second, we propose an attention-based model using the characters that compose the entities. The results on the generated dataset show that these characters can provide useful information for the Chinese relation extraction task. Using this information, the attention mechanism can recognize the crucial part of the sentence that expresses the relation. The proposed model outperforms other baseline models on our Chinese relation extraction dataset.
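The attention mechanism described above can be sketched in its simplest form: score each sentence token against a query vector built from the entity characters, then softmax-normalize so the model focuses on the part of the sentence expressing the relation. This is a generic dot-product attention illustration under assumed toy embeddings, not the authors' exact architecture:

```python
import math

def attention_weights(sentence_vecs, char_query):
    """Dot-product attention: score each token embedding against the
    entity-character query vector, then softmax-normalize the scores."""
    scores = [sum(a * b for a, b in zip(v, char_query))
              for v in sentence_vecs]
    m = max(scores)                      # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]
```

Tokens whose embeddings align with the entity-character query receive the largest weights, which is how the crucial span of the sentence is emphasized.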


Author(s):  
D. Craciun ◽  
A. Serna Morales ◽  
J.-E. Deschaud ◽  
B. Marcotegui ◽  
F. Goulette

Currently available mobile mapping systems equipped with active 3D sensors can acquire the environment at high sampling rates and high vehicle velocities. While providing an effective solution for environment sensing over large distances, such acquisition yields only a discrete representation of the geometry; a continuous map of the underlying surface must therefore be built. Mobile acquisition introduces several constraints for state-of-the-art surface reconstruction algorithms. Smoothing becomes a difficult task: sharp depth features must be recovered while mesh shrinkage is avoided. In addition, interpolation-based techniques are not suitable for the noisy datasets acquired by Mobile Laser Scanning (MLS) systems. Furthermore, scalability is a major concern for enabling real-time rendering over large distances while preserving geometric details. This paper presents a fully automatic ground surface reconstruction framework capable of dealing with the aforementioned constraints. The proposed method exploits the quasi-flat geometry of the ground through a morphological segmentation algorithm. Then, a planar Delaunay triangulation is applied in order to reconstruct the ground surface. A smoothing procedure eliminates high-frequency peaks while preserving geometric details in order to provide a regular ground surface. Finally, a decimation step is applied to cope with scalability constraints over large distances. Experimental results on real data acquired in large urban environments are presented, and a performance evaluation with respect to ground-truth measurements demonstrates the effectiveness of our method.
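The peak-elimination step can be illustrated with a simple neighborhood-median rule on quasi-flat ground points: an elevation that deviates strongly from its planar neighbors is treated as a high-frequency peak and replaced, while in-tolerance elevations are kept exactly, preserving detail. The radius, threshold, and median rule are assumptions for illustration, not the paper's actual smoothing procedure:

```python
import statistics

def smooth_peaks(points, radius, z_thresh):
    """Replace a point's elevation by the median elevation of its
    planar (x, y) neighbourhood only when it deviates by more than
    z_thresh -- a high-frequency peak; other points are untouched."""
    out = []
    for x, y, z in points:
        neigh = [pz for px, py, pz in points
                 if (px - x) ** 2 + (py - y) ** 2 <= radius ** 2]
        med = statistics.median(neigh)
        out.append((x, y, med if abs(z - med) > z_thresh else z))
    return out
```

Because only outliers are modified, the ground stays regular without the global shrinkage that uniform smoothing would cause.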


Author(s):  
S. Kim ◽  
H. G. Kim ◽  
T. Kim

The point cloud generated by multiple image matching is classified as an unstructured point cloud because its points are not regularly spaced and it combines multiple viewpoints. The surface reconstruction technique is used to generate a mesh model from unstructured point clouds. In the surface reconstruction process, it is important to calculate correct surface normals. The point cloud extracted from multiple images contains position and color information for each point, as well as geometric information about the images used during point cloud generation. Thus, surface normal estimation based on geometric constraints is possible. However, the direction of a surface normal may be incorrectly estimated in noisy vertical areas of the point cloud. In this paper, we propose an improved method to estimate surface normals of the vertical points within an unstructured point cloud. The proposed method detects the vertical points and adjusts their normal vectors by analyzing the surface normals of their nearest neighbors. As a result, we found almost all vertical points through point-type classification, detected the points with wrong normal vectors, and corrected the direction of those normal vectors. We compared the quality of mesh models generated with corrected and uncorrected surface normals. The comparison showed that our method successfully corrects wrong surface normals of vertical points and improves the quality of the mesh model.
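The neighbor-based correction can be sketched as a consensus test: a normal that points against the mean normal of its nearest neighbors (negative dot product) is flipped. The consensus rule and the precomputed neighbor lists are illustrative assumptions, not the authors' exact detection and adjustment procedure:

```python
def fix_normals(normals, neighbors):
    """neighbors[i] lists the indices of point i's nearest neighbours.
    A normal whose dot product with the neighbours' mean normal is
    negative is assumed wrongly oriented and is flipped."""
    fixed = []
    for i, n in enumerate(normals):
        idx = neighbors[i]
        mean = [sum(normals[j][k] for j in idx) / len(idx)
                for k in range(3)]
        dot = sum(n[k] * mean[k] for k in range(3))
        fixed.append(tuple(-c for c in n) if dot < 0 else n)
    return fixed
```

On smooth surfaces neighboring normals agree, so only isolated misoriented normals (typical of noisy vertical areas) are touched.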


2020 ◽  
Vol 68 (5) ◽  
pp. 337-346
Author(s):  
András Rövid ◽  
Viktor Remeli ◽  
Zsolt Szalay

Abstract: Environment perception plays a significant role in autonomous driving, since all traffic participants in the vehicle’s surroundings must be reliably recognized and localized in order to take any subsequent action. The main goal of this paper is to present a neural network approach for fusing camera images and LiDAR point clouds in order to detect traffic participants in the vehicle’s surroundings more reliably. Our approach primarily addresses the problem of sparse LiDAR data (point clouds of distant objects), where detection based on the point cloud alone can become ambiguous due to sparsity. In the proposed model, each 3D point in the LiDAR point cloud is augmented with semantically strong image features, injecting additional information for the network to learn from. Experimental results show that our method increases the number of correctly detected 3D bounding boxes in sparse point clouds by at least 13–21% and thus validates raw sensor fusion as a viable approach for enhancing autonomous driving safety in difficult sensory conditions.
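The point-augmentation step can be sketched with a pinhole projection: each 3D LiDAR point (in the camera frame) is projected to pixel coordinates, and the image feature at that pixel is appended to the point. The intrinsics and the simple nearest-pixel lookup are assumptions for illustration; the paper's actual feature extraction and calibration pipeline is more involved:

```python
def project_and_augment(points, feature_map, fx, fy, cx, cy):
    """Project 3D points (camera frame, z forward) into the image with
    a pinhole model and append the per-pixel image feature to each
    point; points behind the camera or outside the image are skipped."""
    h = len(feature_map)
    w = len(feature_map[0])
    out = []
    for x, y, z in points:
        if z <= 0:
            continue
        u = int(fx * x / z + cx)
        v = int(fy * y / z + cy)
        if 0 <= u < w and 0 <= v < h:
            out.append((x, y, z) + tuple(feature_map[v][u]))
    return out
```

Each surviving point then carries both geometry and semantics, which is what lets the detector disambiguate sparse, distant objects.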


2021 ◽  
Author(s):  
Kacper Pluta ◽  
Gisela Domej

<p>The process of transforming point cloud data into high-quality meshes or CAD objects is, in general, not a trivial task. Many problems, such as holes, enclosed pockets, or small tunnels, can occur during the surface reconstruction process, even if the point cloud is of excellent quality. These issues are often difficult to resolve automatically and may require detailed manual adjustments. Nevertheless, in this work, we present a semi-automatic pipeline that requires minimal user-provided input and still allows for high-quality surface reconstruction. Moreover, the presented pipeline can be successfully used by non-specialists and relies only on commonly available tools.</p><p>Our pipeline consists of the following main steps: First, a normal field over the point cloud is estimated, and Screened Poisson Surface Reconstruction is applied to obtain the initial mesh. At this stage, the reconstructed mesh usually contains holes, small tunnels, and excess parts, i.e., surface parts that do not correspond to the point cloud geometry. In the next step, we apply morphological and geometrical filtering in order to resolve the problems mentioned before. Some fine details are also removed during the filtering process; however, we show how these can be restored, without reintroducing the problems, using a distance-guided projection. In the last step, the filtered mesh is re-meshed to obtain a high-quality triangular mesh, which, if needed, can be converted to a CAD object represented by a small number of quadrangular NURBS patches.</p><p>Our workflow is designed for a point cloud recorded by a laser scanner inside one of seven artificially carved caves resembling chapels, with several niches and passages to the outside, in a sandstone hill slope in Georgia. We note that we have not tested the approach on other data. Nevertheless, we believe that a similar pipeline can be applied to other types of point cloud data, e.g., natural caves or mining shafts, geotechnical constructions, rock cliffs, geo-archeological sites, etc. This workflow was created independently; it is not part of a funded project and does not advertise particular software. The case study's point cloud data was used by courtesy of the Dipartimento di Scienze dell'Ambiente e della Terra of the Università degli Studi di Milano–Bicocca.</p>
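One reading of the distance-guided projection step is a nearest-point snap: each vertex of the filtered mesh is moved onto its closest original cloud point, but only when that point lies within a distance threshold, so fine detail is restored without pulling vertices toward distant excess geometry. This brute-force sketch is an assumed interpretation, not the authors' implementation:

```python
def distance_guided_projection(vertices, cloud, max_dist):
    """Snap each filtered-mesh vertex to its nearest original cloud
    point if that point is within max_dist; otherwise leave the vertex
    as filtered. Restores detail without reintroducing excess parts."""
    out = []
    for v in vertices:
        best, bd2 = None, max_dist ** 2
        for p in cloud:
            d2 = sum((a - b) ** 2 for a, b in zip(v, p))
            if d2 <= bd2:
                best, bd2 = p, d2
        out.append(best if best is not None else v)
    return out
```

A production version would use a spatial index (e.g., a k-d tree) instead of the quadratic scan, but the guarding threshold is the essential idea.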


Author(s):  
A. V. Ponomarev

Introduction: Large-scale human-computer systems that involve people of various skills and motivation in information processing are currently used in a wide spectrum of applications. An acute problem in such systems is assessing the expected quality of each contributor, for example, in order to penalize incompetent or inaccurate ones and to promote diligent ones. Purpose: To develop a method of assessing the expected contributor’s quality in community tagging systems. This method should use only the generally unreliable and incomplete information provided by contributors (with ground-truth tags unknown). Results: A mathematical model is proposed for community image tagging (including a model of a contributor), along with a method of assessing the expected contributor’s quality. The method is based on comparing tag sets provided by different contributors for the same images; it is a modification of the pairwise comparison method with the preference relation replaced by a special domination characteristic. Expected contributors’ quality is evaluated as a positive eigenvector of the pairwise domination characteristic matrix. Community tagging simulation has confirmed that the proposed method adequately estimates the expected quality of community tagging system contributors (provided that the contributors' behavior fits the proposed model). Practical relevance: The obtained results can be used in the development of systems based on coordinated community efforts (primarily, community tagging systems).
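The positive eigenvector of a nonnegative pairwise matrix can be computed with power iteration (guaranteed to be positive for an irreducible nonnegative matrix by the Perron-Frobenius theorem). The sketch below shows the generic computation only; the construction of the domination characteristic matrix itself is specific to the paper and is not reproduced here:

```python
def principal_eigenvector(M, iters=100):
    """Power iteration: repeatedly apply M to a positive start vector
    and renormalize; converges to the dominant (Perron) eigenvector
    for a nonnegative matrix with a unique dominant eigenvalue."""
    n = len(M)
    v = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        v = [x / s for x in w]
    return v
```

The resulting vector, normalized to sum to one, gives each contributor's relative expected quality score.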


2020 ◽  
Author(s):  
Anusha Ampavathi ◽  
Vijaya Saradhi T

Big data and its approaches are generally helpful for the healthcare and biomedical sectors in predicting disease. For trivial symptoms, it is difficult to meet a doctor in the hospital at any time. Thus, big data provides essential information regarding diseases on the basis of a patient’s symptoms. For many medical organizations, disease prediction is important for making the best feasible health care decisions. Conversely, the conventional medical care model offers structured input that requires more accurate and consistent prediction. This paper develops multi-disease prediction using an improvised deep learning concept. Datasets pertaining to “Diabetes, Hepatitis, lung cancer, liver tumor, heart disease, Parkinson’s disease, and Alzheimer’s disease” are gathered from the benchmark UCI repository for conducting the experiment. The proposed model involves three phases: (a) data normalization, (b) weighted normalized feature extraction, and (c) prediction. Initially, the dataset is normalized in order to bring the attributes into a common range. Further, weighted feature extraction is performed, in which a weight function is multiplied with each attribute value to emphasize large-scale deviations. Here, the weight function is optimized using a combination of two meta-heuristic algorithms, termed the Jaya Algorithm-based Multi-Verse Optimization (JA-MVO) algorithm. The optimally extracted features are fed to hybrid deep learning algorithms, namely the “Deep Belief Network (DBN) and Recurrent Neural Network (RNN)”. As a modification to the hybrid deep learning architecture, the weights of both the DBN and the RNN are optimized using the same hybrid optimization algorithm. Finally, a comparative evaluation of the proposed prediction against existing models confirms its effectiveness through various performance measures.
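The first two phases, normalization and weighted feature extraction, can be sketched as min-max scaling followed by a per-attribute weight multiplication. The fixed weight vector here stands in for the JA-MVO-optimized weight function, which is not reproduced; the scaling choice is an assumption for illustration:

```python
def normalize_and_weight(rows, weights):
    """Min-max normalize each attribute column to [0, 1], then multiply
    each normalized value by its attribute weight. In the paper the
    weights come from a hybrid JA-MVO optimizer; here they are fixed."""
    cols = list(zip(*rows))
    scaled = []
    for col in cols:
        lo, hi = min(col), max(col)
        rng = (hi - lo) or 1.0          # guard against constant columns
        scaled.append([(x - lo) / rng for x in col])
    return [[w * s[i] for w, s in zip(weights, scaled)]
            for i in range(len(rows))]
```

Normalization brings all attributes into a common range so that the learned weights, rather than raw units, determine each attribute's influence on the downstream DBN/RNN prediction.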

