Effectiveness Analysis of Spline Surfaces Creating Methods for Shell Structures Modelling

2021 ◽  
Vol 20 ◽  
pp. 41-53
Author(s):  
Grzegorz Lenda ◽  
Dominika Spytkowska

The shape of the surface of shell structures, measured by laser scanning, can be modelled using approximating spline functions. Since the 1990s, several modelling techniques have been developed, based on: points, meshes, areas outlined on meshes, and regions grouping areas with a similar structure. The most effective of them have been used in modern software, but their implementations differ significantly. The most important differences concern the accuracy of modelling, especially in places with rapid shape changes, including edges. The differences also affect the mathematical complexity of the created model (the number of unknowns) and the time of its development. These factors contribute to the effectiveness of modelling. Some methods work fully automatically, others allow manual selection of certain parameters, and there are also methods that require full manual control. Their selection and application are greatly affected by the user’s intuition and knowledge in the field of creating such surfaces. This study tested the influence of the above factors on modelling efficiency. A total of six methods of creating spline surfaces were analysed in three software packages of different classes: Geomagic Design X, Solidworks and RhinoResurf. The analyses were carried out on a shell structure of complex shape, consisting of seven patches separated by edges. The created models were assessed in terms of the accuracy of their fit to the point cloud. Additionally, the complexity of the model, expressed in the number of control points, and the time of its development were determined. The results confirmed the validity of four of the methods in terms of model fitting accuracy. The best results were achieved using the semi-automatic method in the most advanced software package and the manual method in the simplest package. This confirms the great importance of user experience with the theoretical properties of spline functions.
However, complexity and development time did not show a direct relationship with the accuracy of the models created.
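The fit-accuracy and complexity measures used above (deviations of scan points from the fitted surface, and the number of control points) can be illustrated with a small sketch. This is a hypothetical example on synthetic data using SciPy's smoothing bivariate spline, not any of the six methods tested in the paper:

```python
# Sketch: approximate a scattered "point cloud" with a smoothing bicubic
# spline patch and assess the fit by point-to-surface deviations.
# The data are synthetic; no real scan is assumed.
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

rng = np.random.default_rng(0)
x, y = rng.uniform(-1, 1, 400), rng.uniform(-1, 1, 400)
z = np.cos(np.pi * x) * np.cos(np.pi * y) + rng.normal(0, 0.01, 400)  # smooth shell + noise

# s controls the smoothing/accuracy trade-off; kx=ky=3 gives a bicubic patch
spl = SmoothBivariateSpline(x, y, z, kx=3, ky=3, s=400 * 0.01**2)

dev = np.abs(spl.ev(x, y) - z)     # vertical deviations at the scan points
mean_dev, max_dev = dev.mean(), dev.max()
n_ctrl = len(spl.get_coeffs())     # model complexity: number of coefficients
```

Raising `s` yields a simpler, smoother model at the cost of larger deviations, which mirrors the accuracy-versus-complexity trade-off discussed in the abstract.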

1989 ◽  
Vol 19 (1) ◽  
pp. 91-122 ◽  
Author(s):  
G. C. Taylor

Abstract The paper gives details of a case study in the premium rating of a Householders Contents insurance portfolio. The rating is performed by fitting bivariate spline functions to a version of the operating ratio described in Section 3. The use of bivariate splines requires a small amount of mathematical equipment, which is developed in Section 4. The fitting of splines, using regression, is carried out in Sections 5 and 6, where the numerical results are given, including some assessment of goodness-of-fit. Contour maps of the spline surfaces are also given, and used for the selection of geographic areas used for premium rating purposes. These are compared with the areas, past and present, actually used by the insurer concerned.
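Regression-based fitting of a bivariate spline can be sketched as follows. This is an illustrative reconstruction on synthetic data, assuming a tensor-product B-spline basis and ordinary least squares; the knots, rating variables and "operating ratio" values are invented stand-ins for the paper's portfolio data:

```python
# Sketch: fit a bivariate spline by regression. Build univariate B-spline
# design matrices in each rating variable, form row-wise tensor products,
# and solve a least-squares problem for the coefficients.
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(5)
x, y = rng.uniform(0, 1, 500), rng.uniform(0, 1, 500)   # two rating variables
ratio = 1 + 0.5 * x**2 - 0.3 * x * y + rng.normal(0, 0.02, 500)  # response stand-in

k = 3
t = np.r_[[0.0] * (k + 1), [0.5], [1.0] * (k + 1)]  # clamped cubic knots, one interior knot
Bx = BSpline.design_matrix(x, t, k).toarray()       # 500 x 5 basis in x
By = BSpline.design_matrix(y, t, k).toarray()       # 500 x 5 basis in y
A = np.einsum("ni,nj->nij", Bx, By).reshape(len(x), -1)  # tensor-product basis (500 x 25)

coef, *_ = np.linalg.lstsq(A, ratio, rcond=None)
resid_rms = float(np.sqrt(np.mean((A @ coef - ratio) ** 2)))
```

Evaluating the fitted coefficients on a grid would give the contour-map view of the surface used in the paper for selecting rating areas.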


2021 ◽  
Vol 112 (1) ◽  
pp. 27-33
Author(s):  
Grzegorz Lenda ◽  
Katarzyna Abrachamowicz

Abstract This research paper tackles the problem of determining displacements of complex-shaped shell structures, measured periodically using laser scanning. Point clouds obtained during different measurement epochs can be compared with each other directly, or they can be converted into continuous models in the form of a triangle mesh or smooth patches (spline functions). The accuracy of the direct comparison of point clouds depends on the scanning density, while the accuracy of comparing a point cloud to a model depends on the approximation errors formed during its creation. Modelling with triangle meshes flattens the local structure of the object compared to the spline model. However, if the shell has edges in its structure, their exact representation by spline models is impossible due to undulations of the functions along them. The mesh model can also distort edges by chamfering them with transverse triangles. These surface modelling errors can lead to pseudo-deformation of the structure, which is difficult to distinguish from real deformation. In order to assess whether deformation can be correctly determined using the above-mentioned methods, laser scanning of a complex shell structure was performed in two epochs. Then, modelling and comparison of the results of the periodic measurements were carried out. As a result of the research, the advantages and disadvantages of each method were identified. None of the methods made it possible to correctly represent all deformations while suppressing pseudo-deformation; however, combining their best qualities made it possible to determine the actual deformation of the structure.
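The "direct comparison" of epoch point clouds mentioned above can be sketched with a nearest-neighbour query. The clouds and the simulated 5 mm displacement below are synthetic, and the accuracy limit imposed by scanning density corresponds to how closely neighbouring points are spaced:

```python
# Sketch: direct cloud-to-cloud comparison between two measurement epochs.
# Each epoch-2 point is matched to its nearest epoch-1 point, and that
# distance is taken as a displacement estimate.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
epoch1 = rng.uniform(0, 1, (2000, 3))            # epoch-1 scan (synthetic)
shift = np.array([0.0, 0.0, 0.005])              # simulated 5 mm deformation
epoch2 = epoch1 + shift + rng.normal(0, 0.0005, (2000, 3))  # epoch-2 scan + noise

dist, _ = cKDTree(epoch1).query(epoch2)          # nearest-neighbour distances
est = float(dist.mean())                         # mean displacement estimate
```

With denser clouds the estimate converges on the true displacement; with sparse clouds the nearest neighbour may not be the physically corresponding point, which is the density limitation the abstract refers to.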


Author(s):  
Karolina Parkitna ◽  
Grzegorz Krok ◽  
Stanisław Miścicki ◽  
Krzysztof Ukalski ◽  
Marek Lisańczuk ◽  
...  

Abstract Airborne laser scanning (ALS) is one of the most innovative remote sensing tools, with recognized utility for characterizing forest stands. Currently, the most common ALS-based method applied in the estimation of forest stand characteristics is the area-based approach (ABA). The aim of this study was to analyse how three ABA methods affect growing stock volume (GSV) estimates at the sample plot and forest stand levels. We examined (1) an ABA with point cloud metrics, (2) an ABA with canopy height model (CHM) metrics and (3) an ABA with aggregated individual tree CHM-based metrics. Moreover, three different modelling techniques (multiple linear regression, boosted regression trees and random forest) were applied to all ABA methods, which yielded a total of nine combinations to report. An important element of this work is also the empirical verification of the methods for estimating the GSV error for individual forest stands. All nine combinations of the ABA methods and modelling techniques yielded very similar predictions of GSV for both sample plots and forest stands. The root mean squared error (RMSE) of estimated GSV ranged from 75 to 85 m3 ha−1 (RMSE% = 20.5–23.4 per cent) for plots and from 57 to 64 m3 ha−1 (RMSE% = 16.4–18.3 per cent) for stands. It can be concluded that GSV modelling with different ALS processing approaches and statistical methods leads to very similar results. Therefore, the choice of a GSV prediction method may be determined more by the availability of data and competences than by the requirement to use a particular method.
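One of the nine combinations (plot-level ALS metrics feeding a random forest, scored by RMSE) might be sketched as follows. The height metrics, canopy cover and volumes are synthetic stand-ins, not the study's data:

```python
# Sketch of an area-based approach (ABA): plot-level ALS metrics predict
# growing stock volume (GSV) with a random forest, evaluated by RMSE on
# a held-out set of plots.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 300
hmean = rng.uniform(5, 30, n)             # mean canopy height (m)
hp95 = hmean + rng.uniform(1, 5, n)       # 95th height percentile (m)
cover = rng.uniform(0.3, 1.0, n)          # canopy cover fraction
gsv = 12 * hmean * cover + rng.normal(0, 20, n)   # volume stand-in (m3 ha-1)

X = np.column_stack([hmean, hp95, cover])
Xtr, Xte, ytr, yte = train_test_split(X, gsv, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(Xtr, ytr)
rmse = float(np.sqrt(np.mean((model.predict(Xte) - yte) ** 2)))
```

Swapping the regressor for a linear model or gradient boosting reproduces, in miniature, the comparison of modelling techniques reported in the study.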


Author(s):  
Amrita Goswamy ◽  
Shauna Hallmark ◽  
Theresa Litteral ◽  
Michael Pawlovich

Intersection crashes during nighttime hours may occur because of poor driver visual cognition of conflicting traffic or of the intersection’s presence. In rural areas, the only source of lighting is typically vehicle headlights. Roadway lighting enhances driver recognition of intersection presence and the visibility of signs and markings. Destination lighting provides some illumination for the intersection but is not intended to fully illuminate all approaches. Destination lighting has been widely used in Iowa, but its effectiveness has not been well documented. This study therefore sought to evaluate the safety effect of destination lighting at rural intersections. As part of an extensive data collection effort, locations with destination/street lighting were gathered with the assistance of several state agencies. After manual selection of a similar number of control intersections, propensity score matching using the caliper width technique was used to match 245 treatment sites with 245 control sites. Negative binomial regression was used to model crash frequency. The presence of destination lighting at stop-controlled cross-intersections generally reduced the night-to-day crash ratio by 19%. The presence of destination lighting was associated with a 33%–39% increase in daytime crashes across all models but with an 18%–33% reduction in nighttime crashes. Injuries in nighttime crashes decreased by 24%, total nighttime crashes were reduced by 33%, and property damage crashes were reduced by 18%.
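The caliper-based propensity score matching described above can be sketched as follows. The covariate, the logistic propensity model and the 0.2-SD caliper are illustrative assumptions, not the study's specification:

```python
# Sketch: caliper-based propensity score matching. Each treated site is
# matched to the nearest-score control within a caliper, without
# replacement; treated sites with no control inside the caliper are dropped.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 600
aadt = rng.normal(0, 1, n)                          # standardized traffic volume (hypothetical covariate)
treated = rng.random(n) < 1 / (1 + np.exp(-aadt))   # lighting more likely at busier sites

# Propensity score: modelled probability of treatment given the covariate
ps = LogisticRegression().fit(aadt.reshape(-1, 1), treated).predict_proba(
    aadt.reshape(-1, 1))[:, 1]

caliper = 0.2 * ps.std()      # rule-of-thumb width (often applied on the logit scale)
controls = list(np.flatnonzero(~treated))
pairs = []
for t in np.flatnonzero(treated):
    if not controls:
        break
    j = min(controls, key=lambda c: abs(ps[c] - ps[t]))  # nearest-score control
    if abs(ps[j] - ps[t]) <= caliper:
        pairs.append((int(t), int(j)))
        controls.remove(j)    # match without replacement
```

The matched pairs would then feed the crash-frequency model (negative binomial regression in the study) so that treated and control sites are comparable on the covariates.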


Author(s):  
S. Artese ◽  
J. L. Lerma ◽  
J. Aznar Molla ◽  
R. M. Sánchez ◽  
R. Zinno

Abstract. The three-dimensional (3D) documentation and surveying of cultural heritage can be carried out with several geomatics techniques, such as laser scanning and thermography, in order to recover the original 3D shape after applying reverse engineering solutions. In almost all cases, the integration of data collected by different instruments is needed to achieve a successful and comprehensive 3D model of the as-built architectural shape of a historical building. This paper describes the operations carried out by the authors to determine the as-built 3D model of the Escuelas Pias Church, namely of the dome and circular nave. After a description of the church and historical notes, attention will be drawn to the indirect registration results obtained with three different laser scanning software packages, highlighting similarities and differences and their consequences for generating meshes. The resulting 3D model will then be described, and the results of some investigations regarding the hypotheses about the design of the dome and the origin of the alterations will be presented.


Spatium ◽  
2016 ◽  
pp. 30-36 ◽  
Author(s):  
Petar Pejic ◽  
Sonja Krasic

Digital three-dimensional models of existing architectural structures are created for the purpose of digitising archive documents, presenting buildings or urban entities, or conducting various analyses and tests. Traditional methods for creating 3D models of existing buildings assume manual measurement of their dimensions, use of the photogrammetry method, or laser scanning. Such approaches require considerable time spent on data acquisition or the application of specific instruments and equipment. The goal of this paper is to present a procedure for creating 3D models of existing structures using globally available web resources and free software packages on standard PCs. This considerably shortens the time needed to produce a digital three-dimensional model of the structure and removes the need for physical presence at the location. In addition, the precision of this method was tested and compared with the results acquired in previous research.


2018 ◽  
Vol 2018 ◽  
pp. 1-8 ◽  
Author(s):  
Guanghui Liang ◽  
Jianmin Pang ◽  
Zheng Shan ◽  
Runqing Yang ◽  
Yihang Chen

To address emerging security threats, various malware detection methods are proposed every year. A small but representative set of malware samples is therefore usually needed to build a detection model, especially for machine-learning-based malware detection models. However, the current manual selection of representative samples from large unknown file collections is labor-intensive and not scalable. In this paper, we first propose a framework that can automatically generate a small data set for malware detection. With this framework, we extract behavior features from a large initial data set and then use a hierarchical clustering technique to identify different types of malware. An improved genetic algorithm based on roulette wheel sampling is implemented to generate the final test data set. The final data set is only one-eighteenth the volume of the initial data set, and evaluations show that the data set selected by the proposed framework, while much smaller than the original one, loses almost no semantic information.
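The hierarchical-clustering step of such a framework might be sketched like this. The behaviour features are synthetic, the cluster count is an assumption, and the genetic-algorithm refinement described in the paper is omitted:

```python
# Sketch: hierarchical clustering on behaviour-feature vectors, then one
# representative sample per cluster (the point closest to its cluster mean).
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(4)
centers = rng.uniform(-5, 5, (6, 8))      # 6 hypothetical malware families
feats = np.vstack([c + rng.normal(0, 0.3, (50, 8)) for c in centers])  # 300 samples

# Ward linkage, cut into (at most) 6 flat clusters
labels = fcluster(linkage(feats, method="ward"), t=6, criterion="maxclust")

reps = []
for k in range(1, labels.max() + 1):
    idx = np.flatnonzero(labels == k)
    centre = feats[idx].mean(axis=0)
    reps.append(int(idx[np.argmin(np.linalg.norm(feats[idx] - centre, axis=1))]))
```

Keeping only the representatives (plus any refinement step) is what shrinks the collection while preserving the behavioural variety of the original set.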


Author(s):  
Mehmet Bilge Kağan Önaçan ◽  
Mesut Uluağ ◽  
Tolga Önel ◽  
Tunç Durmuş Medeni

Plagiarism detection software packages play an important role in detecting plagiarism in exams, assignments, projects, and scientific research. The main goal of this chapter is the selection of plagiarism detection software (PDS) and its integration into Moodle, an open source learning management system (LMS), for use by a higher education institution. First, the selection criteria are determined by the nominal group technique (NGT), and then the most appropriate PDS is selected. At the end of the study, Crot, an open source PDS, is chosen and integrated into Moodle. The suggested selection criteria would be useful for other higher education institutions in Turkey and in other countries that rely on open source software.

