object models
Recently Published Documents

TOTAL DOCUMENTS: 385 (five years: 39)
H-INDEX: 21 (five years: 2)

2021 ◽  
Vol 12 (1) ◽  
pp. 206-218
Author(s):  
Victor Gouveia de M. Lyra ◽  
Adam H. M. Pinto ◽  
Gustavo C. R. Lima ◽  
João Paulo Lima ◽  
Veronica Teichrieb ◽  
...  

With wider access to faster computers and more powerful cameras, 3D object reconstruction has become a major topic of research and demand. It is widely applied in creating virtual environments, building object models, and related activities. One technique for obtaining 3D features is photogrammetry, which maps objects and scenes using only images. However, the process is computationally costly and can be very time-consuming for large datasets. This paper proposes a robust, efficient reconstruction pipeline with low batch-processing runtime and permissively licensed code, which can even be commercialized without keeping the code open. We combine an improved structure-from-motion algorithm with recurrent multi-view stereo reconstruction, and use the Point Cloud Library for normal estimation, surface reconstruction, and texture mapping. We compare our results against state-of-the-art techniques on benchmarks and on our own datasets. The results show a 69.4% decrease in average execution time with high reconstruction quality, although more images are needed to achieve a complete reconstruction.
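The abstract names the Point Cloud Library for the normal estimation and surface reconstruction steps. As a rough illustration of that stage of such a pipeline, here is a minimal Python sketch using Open3D as a stand-in for PCL; the file names and parameters are illustrative assumptions, not the authors' settings.

```python
# Sketch of the post-MVS steps: normal estimation, then surface
# reconstruction. Open3D stands in for PCL; parameters are illustrative.
import open3d as o3d

def reconstruct_surface(cloud_path: str, mesh_path: str) -> None:
    pcd = o3d.io.read_point_cloud(cloud_path)  # dense cloud from MVS

    # Normal estimation: fit local planes over a hybrid radius/k-NN search.
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
    pcd.orient_normals_consistent_tangent_plane(30)  # consistent orientation

    # Surface reconstruction: screened Poisson; depth controls mesh detail.
    mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=9)

    o3d.io.write_triangle_mesh(mesh_path, mesh)

if __name__ == "__main__":
    reconstruct_surface("dense_cloud.ply", "model.ply")  # hypothetical paths
```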


2021 ◽  
Vol 11 (14) ◽  
pp. 6251
Author(s):  
Kirill Krinkin ◽  
Alexander Vodyaho ◽  
Igor Kulikov ◽  
Nataly Zhukova

The paper introduces a method for adaptive deductive synthesis of state models of complex objects with multilevel variable structures. The method makes it possible to predict the states of objects from the data coming from them, collected by sensors installed on the objects. Multilevel knowledge graphs (KGs) are used to describe the observed objects. The new adaptive synthesis method extends previously proposed inductive and deductive synthesis methods, allowing context to be taken into account when predicting the states of monitored objects from the data obtained from them. The article presents an algorithm for the proposed method together with an analysis of its computational complexity, and describes the software system for multilevel adaptive synthesis of object models built on them. The effectiveness of the proposed method is demonstrated by modeling the states of the telecommunication networks of cable television operators.
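As a toy illustration of deducing a higher-level object state from sensor data attached to a multilevel graph, the following Python sketch uses networkx; the graph layout, the rule, and the threshold are assumptions for illustration, not the authors' synthesis method.

```python
# Toy two-level knowledge graph: a router node whose state is deduced
# from the sensor-observed load on its child ports.
import networkx as nx

kg = nx.DiGraph()
kg.add_node("router-1", level=1)
for port, load in [("port-a", 0.2), ("port-b", 0.95)]:
    kg.add_node(port, level=0, load=load)   # leaf state from a sensor
    kg.add_edge("router-1", port)

def predict_state(graph: nx.DiGraph, node: str) -> str:
    """Deduce a parent node's state from its children's observed loads."""
    loads = [graph.nodes[c]["load"] for c in graph.successors(node)]
    return "degraded" if max(loads) > 0.9 else "nominal"

print(predict_state(kg, "router-1"))  # -> degraded
```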


2021 ◽  
Vol 20 ◽  
pp. 244-254
Author(s):  
Adnan A. Mustafa

The task of data matching arises frequently in many areas of science. It can become time-consuming when the data are matched against a huge database of thousands of candidates in search of the best match, and even more so when the data are big (> 100 MB). One approach to reducing the time complexity of matching is to shrink the search space with a pre-matching stage in which very dissimilar data are quickly discarded. In this paper we focus on matching big binary data and present two probabilistic models for quick dissimilarity detection: the Probabilistic Model for Quick Dissimilarity Detection of Binary vectors (PMQDD) and the Inverse-equality Probabilistic Model for Quick Dissimilarity Detection of Binary vectors (IPMQDD). Dissimilarity between binary vectors is detected quickly by random element mapping; because the technique is not a function of data size, detection remains fast. We treat any binary data, of any size and dimension, as a binary vector. PMQDD is based on a binary similarity distance that does not recognize data and its exact inverse as the same pattern, and hence considers them different. In some applications, however, data and its inverse are regarded as the same pattern and should be identified as such; IPMQDD handles these cases, as it is based on a similarity distance that does not treat data and its inverse as dissimilar. We present a comparative analysis of PMQDD and IPMQDD and of their similarity distances, and apply the models to a set of object models, demonstrating their effectiveness and power.
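The core pre-matching idea, comparing only a fixed number of randomly mapped elements so that cost is independent of data size, can be sketched in Python with NumPy as follows; the sample size and threshold are illustrative assumptions, not the paper's PMQDD/IPMQDD parameterization.

```python
# Quick dissimilarity pre-matching by random element mapping.
import numpy as np

rng = np.random.default_rng(0)

def quick_dissimilar(a: np.ndarray, b: np.ndarray,
                     samples: int = 64, tol: float = 0.25,
                     match_inverse: bool = False) -> bool:
    """Return True if a and b are very likely dissimilar.

    Compares only `samples` randomly mapped elements, so the cost does
    not grow with the vectors' size. With match_inverse=True (the
    IPMQDD-style variant), a vector and its bitwise inverse are not
    flagged as dissimilar.
    """
    idx = rng.integers(0, a.size, size=samples)
    mismatch = np.mean(a[idx] != b[idx])
    if match_inverse:
        # Treat x and ~x as the same pattern: dissimilar only if the
        # sample matches neither b nor b's inverse.
        return min(mismatch, 1.0 - mismatch) > tol
    return mismatch > tol

# Usage: prune obviously different candidates before full matching.
x = rng.integers(0, 2, size=1_000_000, dtype=np.uint8)
print(quick_dissimilar(x, 1 - x))                      # True  (PMQDD-style)
print(quick_dissimilar(x, 1 - x, match_inverse=True))  # False (IPMQDD-style)
```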


2021 ◽  
Vol 13 (7) ◽  
pp. 3905
Author(s):  
Vjačeslav Usmanov ◽  
Jan Illetško ◽  
Rostislav Šulc

The trend of using modern technologies in the construction industry has been growing stronger recently, particularly in additive construction and robotic bricklaying. Specifically for the purpose of robotic bricklaying, we created a digital layout plan for robotic construction works. This article presents a universal methodology for creating a bricklaying plan for various wall building systems. The method converts drawings from the BIM (Building Information Model) environment to a BREP (Boundary Representation) model via the IFC (Industry Foundation Classes) format, simultaneously dividing object models into layers, connecting discontinuous wall axes by means of an orthogonal arrangement, and inserting details at critical structural points. Among other aspects, the developed algorithm proposes the optimal placement of the robotic system inside objects under construction, in order to minimize the distance of the robot's movement and to reduce its electricity consumption. Digital layout plans created in this way are expected to serve as a stepping stone for robotic bricklaying.
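The abstract does not state the placement algorithm itself. One common formulation of "minimize the robot's travel" is the geometric median of the work positions, computable by Weiszfeld iteration, as in this hypothetical Python sketch; it is an illustrative stand-in, not necessarily the authors' method.

```python
# Place the robot base where the summed travel distance to the brick
# positions is minimal: the geometric median, via Weiszfeld iteration.
import numpy as np

def robot_base_placement(bricks: np.ndarray, iters: int = 100) -> np.ndarray:
    """Geometric median of 2D work positions (shape (n, 2))."""
    p = bricks.mean(axis=0)                   # start at the centroid
    for _ in range(iters):
        d = np.linalg.norm(bricks - p, axis=1)
        d = np.where(d < 1e-9, 1e-9, d)       # guard against division by zero
        w = 1.0 / d
        p = (bricks * w[:, None]).sum(axis=0) / w.sum()
    return p

# Hypothetical brick positions in one layer (meters).
bricks = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 3.0], [0.0, 3.0], [2.0, 6.0]])
print(robot_base_placement(bricks))           # placement minimizing travel
```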


2021 ◽  
Author(s):  
Raffaella Brumana ◽  
Chiara Stanga ◽  
Fabrizio Banfi

The paper focuses on new opportunities for knowledge sharing and comparison enabled by the circulation and re-use of heritage HBIM models through Object Libraries within a Common Data Environment (CDE) and remotely accessible Geospatial Virtual Hubs (GVH). HBIM requires a transparent, quality-controlled process of model generation and management to avoid misuse of such models once they are available in the cloud, moving away from object libraries oriented to new buildings. In the BIM construction process, the model is progressively enriched with details defined by the Level of Geometry (LOG) as it crosses the different phases of development (LOD), from pre-design to scheduled maintenance during the building's long life cycle and management (LLCM). In this context, the digitization process, from data acquisition through to the informative models (the scan-to-HBIM method), requires adapting the definition of LOGs to the different phases of heritage preservation and management, reversing the simple-to-complex logic of informative models for new construction. Accordingly, a deeper understanding of the geometry and the as-found state should take into account the complexity and uniqueness of the elements composing the architectural heritage from the earliest phases of analysis, adopting coherent object modeling that can be simplified for different purposes, such as on the construction site and in management over time. For these reasons, the study intends (i) to apply the well-known concept of scale to object model generation, defining different Grades of Accuracy (GOA) related to the scales; (ii) to start fixing sustainable roles that guarantee operators a free choice in the generation of object models; (iii) to validate the model generative process with transparent indicators describing the precision and accuracy of the geometric content, here applied to masonry walls and vaults; and (iv) to identify requirements for reliable Object Libraries.
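As a hypothetical illustration of point (i), tying a Grade of Accuracy to a drawing scale: surveying convention often assumes a graphical error of about 0.2 mm at plot scale, so a 1:50 drawing implies roughly a ±10 mm tolerance. The mapping below is an assumption for illustration, not the paper's GOA definition.

```python
# Illustrative scale-to-tolerance mapping; the 0.2 mm graphical-error
# rule is a common surveying convention, assumed here, not the paper's.
GRAPHICAL_ERROR_MM = 0.2  # conventional plottable error at output scale

def goa_tolerance_mm(scale_denominator: int) -> float:
    """Geometric tolerance implied by a 1:N drawing scale."""
    return GRAPHICAL_ERROR_MM * scale_denominator

for denom in (20, 50, 100):
    print(f"1:{denom} -> +/-{goa_tolerance_mm(denom):.0f} mm")
# 1:50 -> +/-10 mm: a coarser object model suffices at smaller scales.
```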


Author(s):  
Weimin Zhou ◽  
Sayantan Bhadra ◽  
Frank J. Brooks ◽  
Jason L. Granstedt ◽  
Hua Li ◽  
...  

2021 ◽  
Author(s):  
Michael C. Welle ◽  
Anastasiia Varava ◽  
Jeffrey Mahler ◽  
Ken Goldberg ◽  
Danica Kragic ◽  
...  

Caging grasps limit the mobility of an object to a bounded component of configuration space. We introduce a notion of partial cage quality based on the maximal clearance of an escaping path. As computing this is demanding even in a two-dimensional scenario, we propose a deep learning approach. We design two convolutional neural networks and construct a pipeline for real-time planar partial cage quality estimation directly from 2D images of object models and planar caging tools. One network, CageMaskNN, identifies caging tool locations that can support partial cages, while a second network, CageClearanceNN, is trained to predict the quality of those configurations. A partial caging dataset of 3811 object images and more than 19 million caging tool configurations is used to train and evaluate these networks on previously unseen objects and caging tool configurations. Experiments show that evaluating a given configuration on a GeForce GTX 1080 GPU takes less than 6 ms. Furthermore, an additional dataset focused on grasp-relevant configurations, consisting of 772 objects with 3.7 million configurations, is curated and also used for 2D cage acquisition on novel objects. We study how network performance depends on the datasets, as well as how to deal efficiently with unevenly distributed training data. In further analysis, we show that the evaluation pipeline can approximately identify connected regions of successful caging tool placements, and we evaluate the continuity of the cage quality score along caging tool trajectories. The influence of disturbances is investigated and quantitative results are provided.
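A schematic Python/PyTorch sketch of the two-stage pipeline described above follows; the layer choices and input encodings are assumptions for illustration, not the published CageMaskNN and CageClearanceNN architectures.

```python
# Two-stage evaluation: a mask network proposes feasible tool placements,
# a second network regresses the clearance-based quality of one placement.
import torch
import torch.nn as nn

class CageMaskNN(nn.Module):
    """Per-pixel map of tool placements that can support a partial cage."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1), nn.Sigmoid())  # placement feasibility map
    def forward(self, img):
        return self.net(img)

class CageClearanceNN(nn.Module):
    """Quality score for one object/tool configuration image pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1))
    def forward(self, obj_and_tool):
        return self.net(obj_and_tool)

obj = torch.rand(1, 1, 64, 64)    # 2D object image (hypothetical size)
tool = torch.rand(1, 1, 64, 64)   # rendered caging tool configuration
mask = CageMaskNN()(obj)          # where partial cages are possible
quality = CageClearanceNN()(torch.cat([obj, tool], dim=1))
print(mask.shape, quality.item())
```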

