PROBABILISTIC FEASIBILITY OF THE RECONSTRUCTION PROCESS OF RUSSIAN-ORTHODOX CHURCHES

Author(s):  
M. Chizhova ◽  
A. Brunn ◽  
U. Stilla

Cultural heritage is important for the identity of future generations and has to be preserved in a suitable manner. Over the course of time, much information about former cultural constructions has been lost because objects were heavily damaged by natural erosion or human activity, or were even destroyed. It is important to capture the still-available parts of former buildings, which are mostly ruins; these data can serve as the basis for a virtual reconstruction. Laser scanning offers, in principle, the possibility of extensively capturing building surfaces in their current state.

In this paper we assume a priori given 3D laser-scanner data, i.e., a 3D point cloud of a partly destroyed church. Many well-known algorithms describe different methods for the extraction and detection of geometric primitives, which are recognized separately in 3D point clouds. In our work we place them in a common probabilistic framework that guides the complete reconstruction process of complex buildings, in our case Russian Orthodox churches.

Churches are modeled with their functional volumetric components, enriched with a priori known probabilities deduced from a database of Russian Orthodox churches. Each set of components represents a complete church. The power of the new method is shown on a simulated dataset of 100 Russian Orthodox churches.
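The abstract does not specify the probabilistic framework itself. As a minimal illustrative sketch (the component names and prior values below are hypothetical, not taken from the authors' church database), one can imagine ranking component hypotheses by combining a database prior with a detection score from primitive extraction:

```python
# Hypothetical sketch: combine per-component detection evidence with
# a priori probabilities deduced from a database of churches.

PRIORS = {            # hypothetical priors for functional components
    "dome": 0.9,
    "apse": 0.8,
    "bell_tower": 0.6,
}

def component_posterior(component, detection_score, priors=PRIORS):
    """Unnormalized posterior: database prior times detection evidence."""
    return priors.get(component, 0.1) * detection_score

def rank_hypotheses(detections):
    """detections: list of (component, score) pairs from primitive extraction."""
    scored = [(c, component_posterior(c, s)) for c, s in detections]
    return sorted(scored, key=lambda t: t[1], reverse=True)

hypotheses = [("dome", 0.7), ("bell_tower", 0.9), ("apse", 0.2)]
ranked = rank_hypotheses(hypotheses)
```

In such a scheme, a component with a strong prior (here the hypothetical dome) can outrank one with a higher raw detection score, which is the point of guiding recognition with database knowledge.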



Author(s):  
Xiang Yang ◽  
Peter Meer ◽  
Hae Chang Gea

A robust method for surface fitting in 3D point clouds is presented as an application of the robust estimation of multiple inlier structures algorithm [1]. Geometric primitives such as planes, spheres, and cylinders are detected from the point samples in the noisy dataset, without requiring surface normals or a mesh to be generated. The inlier points of the different surfaces are classified and segmented, with the error tolerance for each surface estimated adaptively from the input data. From the segmented points, designers can conveniently interact with the geometric primitives. Direct modification of the 3D point cloud and inverse design of the solid model can be applied. Both synthetic and real point cloud datasets are used to test the robust algorithm.
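The abstract does not give the estimator's internals. As a hedged illustration of the general idea — classifying the inlier points of a surface in a noisy cloud — here is a minimal RANSAC-style plane detector with a fixed inlier tolerance (the cited algorithm estimates its tolerance adaptively, which is not reproduced here):

```python
import random

def plane_from_points(p1, p2, p3):
    """Plane through three points: unit normal n and offset d with n.p + d = 0."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    norm = sum(c * c for c in n) ** 0.5
    if norm == 0:                      # degenerate (collinear) sample
        return None
    n = tuple(c / norm for c in n)
    d = -sum(n[i] * p1[i] for i in range(3))
    return n, d

def ransac_plane(points, tol=0.05, iters=200, seed=0):
    """Return the largest set of points within tol of a sampled plane."""
    rng = random.Random(seed)
    best = []
    for _ in range(iters):
        model = plane_from_points(*rng.sample(points, 3))
        if model is None:
            continue
        n, d = model
        inliers = [p for p in points
                   if abs(sum(n[i] * p[i] for i in range(3)) + d) < tol]
        if len(inliers) > len(best):
            best = inliers
    return best

# Synthetic data: 50 points on the plane z = 0 plus 10 off-plane outliers.
plane_pts = [(0.1 * i, 0.1 * (i % 7), 0.0) for i in range(50)]
outliers = [(0.1 * k, 0.1 * k, 1.0 + 0.1 * k) for k in range(10)]
inliers = ransac_plane(plane_pts + outliers)
```

Spheres and cylinders can be handled the same way by swapping in a different minimal-sample model; the segmentation step then removes the detected inliers and repeats on the remainder.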


Author(s):  
Paolo Dulio ◽  
Andrea Frosini ◽  
Simone Rinaldi ◽  
Lama Tarsissi ◽  
Laurent Vuillon

Abstract A remarkable family of discrete sets that has recently attracted the attention of the discrete geometry community is the family of convex polyominoes, which are the discrete counterpart of Euclidean convex sets and combine the constraints of convexity and connectedness. In this paper we study the problem of their reconstruction from orthogonal projections, relying on the approach defined by Barcucci et al. (Theor Comput Sci 155(2):321–347, 1996). In particular, during the reconstruction process it may be necessary to expand a convex subset of the interior part of the polyomino, called the polyomino kernel, by adding points at specific positions of its contour without losing convexity. To reach this goal we consider convexity in terms of certain combinatorial properties of the boundary word encoding the polyomino. We first show conditions that allow us to extend the kernel while maintaining convexity. Then, we provide examples where the addition of one or two points causes a loss of convexity, which can be restored by adding other points whose number and positions cannot be determined a priori.
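A convex polyomino is a connected cell set whose rows and columns are each contiguous runs (HV-convexity). The paper's boundary-word machinery is not reproduced here, but a minimal sketch of the HV-convexity check illustrates how adding a single cell can break convexity:

```python
def is_hv_convex(cells):
    """Check that every row and every column of the cell set is a
    contiguous run of occupied positions (HV-convexity)."""
    cells = set(cells)
    for axis in (0, 1):                      # 0: group by row, 1: by column
        groups = {}
        for c in cells:
            groups.setdefault(c[axis], []).append(c[1 - axis])
        for vals in groups.values():
            # a contiguous run spans exactly max - min + 1 distinct cells
            if max(vals) - min(vals) + 1 != len(set(vals)):
                return False
    return True

staircase = {(0, 0), (0, 1), (1, 1), (1, 2)}
still_convex = is_hv_convex(staircase)            # True
broken = is_hv_convex(staircase | {(2, 0)})       # False: column 0 gains a gap
```

This mirrors the phenomenon described above: an expansion step must choose contour positions that keep every row and column interval intact, and a single badly placed point can force further additions to restore convexity.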


GigaScience ◽  
2021 ◽  
Vol 10 (5) ◽  
Author(s):  
Teng Miao ◽  
Weiliang Wen ◽  
Yinglun Li ◽  
Sheng Wu ◽  
Chao Zhu ◽  
...  

Abstract Background The 3D point cloud is the most direct and effective data form for studying plant structure and morphology. In point cloud studies, the segmentation of individual plants into organs directly determines the accuracy of organ-level phenotype estimation and the reliability of 3D plant reconstruction. However, highly accurate, automatic, and robust point cloud segmentation approaches for plants are unavailable, so the high-throughput segmentation of many shoots is challenging. Although deep learning could feasibly solve this issue, software tools for annotating 3D point clouds to construct the training dataset are lacking. Results We propose a top-down point cloud segmentation algorithm for maize shoots using the optimal transportation distance. We apply our point cloud annotation toolkit for maize shoots, Label3DMaize, to achieve semi-automatic point cloud segmentation and annotation of maize shoots at different growth stages through a series of operations, including stem segmentation, coarse segmentation, fine segmentation, and sample-based segmentation. The toolkit takes ∼4–10 minutes to segment a maize shoot and consumes 10–20% of the total time if only coarse segmentation is required. Fine segmentation is more detailed than coarse segmentation, especially at the organ connection regions, and the accuracy of coarse segmentation reaches 97.2% of that of fine segmentation. Conclusion Label3DMaize integrates point cloud segmentation algorithms with manual interactive operations, realizing semi-automatic point cloud segmentation of maize shoots at different growth stages. The toolkit provides a practical data annotation tool for further segmentation research based on deep learning and is expected to promote automatic point cloud processing of various plants.
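The optimal-transport formulation itself is not given in the abstract. As a hedged illustration of the sample-based refinement idea only — not Label3DMaize's actual algorithm — a naive approach assigns each unlabeled point the organ label of its nearest annotated sample point:

```python
def propagate_labels(labeled, unlabeled):
    """Nearest-neighbor label propagation.
    labeled:   dict mapping (x, y, z) sample points to organ labels
    unlabeled: list of (x, y, z) points to classify
    Returns a dict mapping each unlabeled point to a label."""
    out = {}
    for p in unlabeled:
        nearest = min(labeled,
                      key=lambda q: sum((p[i] - q[i]) ** 2 for i in range(3)))
        out[p] = labeled[nearest]
    return out

# Toy example: two annotated seed points, two points to classify.
seeds = {(0.0, 0.0, 0.0): "stem", (5.0, 0.0, 0.0): "leaf"}
labels = propagate_labels(seeds, [(1.0, 0.0, 0.0), (4.0, 0.0, 0.0)])
```

In an interactive toolkit, a user correcting a few sample points near an organ junction and re-running such a propagation step is what makes fine segmentation cheap relative to labeling every point by hand.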


2021 ◽  
Vol 11 (5) ◽  
pp. 2268
Author(s):  
Erika Straková ◽  
Dalibor Lukáš ◽  
Zdenko Bobovský ◽  
Tomáš Kot ◽  
Milan Mihola ◽  
...  

While repairing industrial machines or vehicles, recognizing components is a critical and time-consuming task for a human. In this paper, we propose to automate this task. We start with Principal Component Analysis (PCA), which fits the scanned point cloud with an ellipsoid by computing the eigenvalues and eigenvectors of a 3-by-3 covariance matrix. If there is a dominant eigenvalue, the point cloud is decomposed into two clusters, to which the PCA is applied recursively. If the matching is not unique, we continue to distinguish among several candidates: we decompose the point cloud into planar and cylindrical primitives and assign mutual features, such as distance or angle, to them. Finally, we refine the matching by comparing the matrices of mutual features of the primitives. This is computationally more demanding but very robust. We demonstrate the efficiency and robustness of the proposed methodology on a collection of 29 real scans and a database of 389 STL (Standard Triangle Language) models. As many as 27 scans are uniquely matched to their counterparts in the database, while in the remaining two cases there is only one additional candidate besides the correct model. The overall computational time is about 10 minutes in MATLAB.
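The first step described above — fitting the cloud with an ellipsoid via the eigen-decomposition of the 3-by-3 covariance matrix and testing for a dominant eigenvalue — can be sketched as follows (the dominance ratio of 3.0 is an assumed threshold for illustration, not a value from the paper):

```python
import numpy as np

def pca_shape(points, ratio=3.0):
    """Eigen-decompose the 3x3 covariance matrix of a point cloud.
    Returns (eigenvalues descending, eigenvectors as columns, and a flag
    that is True when the largest eigenvalue dominates the next one)."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    cov = centered.T @ centered / len(pts)        # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)        # ascending order
    dominant = eigvals[2] > ratio * eigvals[1]
    return eigvals[::-1], eigvecs[:, ::-1], dominant

# An elongated cloud along x has one dominant principal axis ...
elongated = [(float(t), 0.0, 0.0) for t in range(10)]
vals, axes, dominant = pca_shape(elongated)

# ... while the corners of a cube are isotropic, with equal eigenvalues.
cube = [(x, y, z) for x in (0.0, 1.0) for y in (0.0, 1.0) for z in (0.0, 1.0)]
vals2, axes2, dominant2 = pca_shape(cube)
```

When the flag fires, the recursive step in the paper splits the cloud into two clusters (e.g., on either side of the centroid along the dominant eigenvector) and repeats the analysis on each half.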

