A Synthetic LiDAR Scanner for VTK

2009 ◽  
Author(s):  
David Doria

In recent years, Light Detection and Ranging (LiDAR) scanners have become more prevalent in the scientific community. They capture a “2.5-D” image of a scene by sending out thousands of laser pulses and using time-of-flight calculations to determine the distance to the first reflecting surface in the scene. Rather than setting up a collection of objects in real life and actually sending lasers into the scene, one can simply create a scene out of 3D models and “scan” it by casting rays at the models. This is a valuable resource for researchers who work with 3D model/surface/point data and LiDAR data. The synthetic scanner can be used to produce data sets for which a ground truth is known, in order to ensure algorithms are behaving properly before moving to “real” LiDAR scans. Noise can also be added to the points to approximate a real LiDAR scan for researchers who do not have access to the very expensive equipment required to obtain real scans.
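To make the ray-casting idea concrete, the following is a minimal sketch (not the article's implementation) of a synthetic scan using VTK's Python bindings: a mesh stands in for the scene, rays are intersected with it via vtkOBBTree, and optional Gaussian range noise is added along the ray. The scene, angular grid, and noise level are illustrative choices.

```python
import vtk
import numpy as np

# A sphere stands in for an arbitrary 3D model in the synthetic scene.
sphere = vtk.vtkSphereSource()
sphere.SetRadius(1.0)
sphere.SetThetaResolution(64)
sphere.SetPhiResolution(64)
sphere.Update()

# Spatial locator used to intersect rays with the scene geometry.
obb_tree = vtk.vtkOBBTree()
obb_tree.SetDataSet(sphere.GetOutput())
obb_tree.BuildLocator()

def cast_ray(origin, direction, max_range=100.0, noise_sigma=0.0):
    """Return the first intersection point along a ray, or None on a miss.

    Optionally perturbs the hit point along the ray to mimic range noise.
    """
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    direction = direction / np.linalg.norm(direction)
    end = origin + max_range * direction

    hits = vtk.vtkPoints()
    if obb_tree.IntersectWithLine(origin.tolist(), end.tolist(), hits, None) == 0:
        return None  # ray did not hit the scene

    first_hit = np.array(hits.GetPoint(0))  # intersections are ordered from the origin
    if noise_sigma > 0.0:
        first_hit += np.random.normal(0.0, noise_sigma) * direction
    return first_hit

# "Scan" the scene from a fixed position over a small angular grid.
scanner_pos = np.array([5.0, 0.0, 0.0])
points = []
for az in np.linspace(-0.3, 0.3, 50):
    for el in np.linspace(-0.3, 0.3, 50):
        d = np.array([-np.cos(el) * np.cos(az), np.sin(az), np.sin(el)])
        hit = cast_ray(scanner_pos, d, noise_sigma=0.005)
        if hit is not None:
            points.append(hit)
```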


2021 ◽  
Vol 10 (88) ◽  

With the rapid advances in visual perception and processing technologies, it has become easier to create 3D models (three-dimensional visuals with width, height, and depth data) of objects by processing 2D images (two-dimensional images with only width and height data, such as photographs) obtained from real life with the help of certain algorithms. These systems, which convert two-dimensional images into three-dimensional model formats, now describe and translate most objects correctly. Like photogrammetry and laser scanning, they are used to quickly transfer large areas to 3D media, especially together with coating (texture) materials. 3D models obtained by scanning 2D images differ in terms of model quality and polygon density. This approach, which yields 3D models very quickly, is frequently used in computer game development, digital art, production and cinema work, painting, sculpture, ceramics, and photography to obtain a specific result. This article reviews image-based 3D model creation technologies: their types, intended uses, methods, and problems, as well as the problems encountered when transferring the models produced by these methods to other platforms. In this context, the aim of the study is to introduce the new scanning and modeling processes and algorithms supported by artificial intelligence and to determine the areas in which these modeling techniques are used in art. Keywords: Art, 3D Model, A.I., LIDAR, Photogrammetry, Digital Art



2021 ◽  
Vol 2086 (1) ◽  
pp. 012077
Author(s):  
P D Badillo ◽  
V A Parfenov ◽  
N L Shchegoleva

Abstract: 3D scanning is widely used in multiple applications to obtain high-precision, non-destructive documentation of real-life objects, which is especially important in Cultural Heritage (CH) preservation. However, some issues (in particular missing parts, commonly known as “holes”) affect the accuracy of the obtained 3D model after the scanning procedure and require time-consuming post-processing, including manual editing by highly trained personnel. In this article, an automatic method to reconstruct the scanned surface of 3D models is proposed, improving on previously obtained results for high-density point clouds.
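The paper's specific reconstruction method is not reproduced here; as a point of reference, the sketch below shows one common automatic way to close holes in a scanned surface, namely Poisson surface reconstruction over a high-density point cloud using the Open3D library. The file names and parameters are placeholders.

```python
import numpy as np
import open3d as o3d

# Load a scanned point cloud (placeholder path); Poisson reconstruction
# needs normals, so estimate and orient them if the scan lacks them.
pcd = o3d.io.read_point_cloud("scan.ply")
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(30)

# Poisson reconstruction produces a watertight mesh, implicitly filling
# small holes left by occlusions during scanning.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)

# Optionally trim low-density regions where the surface is poorly supported.
densities = np.asarray(densities)
mesh.remove_vertices_by_mask(densities < np.quantile(densities, 0.01))

o3d.io.write_triangle_mesh("reconstructed.ply", mesh)
```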



2015 ◽  
Vol 733 ◽  
pp. 931-934 ◽  
Author(s):  
Ji Lai Zhou ◽  
Ming Quan Zhou ◽  
Guo Hua Geng

This paper presents a new algorithm for 3D model retrieval based on a distance classification histogram. First, we select a fixed number of random points on the model surface and compute the distances between pairs of random points. Secondly, we classify the distances into two types based on their different geometric properties and construct the distance classification histogram. Finally, we measure the similarity of 3D models by comparing their distance classification histograms. Experimental results on the Princeton Shape Benchmark (PSB) show that our method achieves good performance in terms of precision and computational complexity.
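The paper's rule for splitting the distances into two types is not reproduced here; the sketch below only illustrates the underlying shape-distribution idea: sample random surface points, histogram their pairwise distances, and compare models by a histogram distance. It assumes the trimesh library for surface sampling, and the file names and bin counts are illustrative.

```python
import numpy as np
import trimesh

def distance_histogram(mesh_path, n_points=1024, n_bins=64):
    """Histogram of distances between random surface points, scale-normalized
    so that models of different size and tessellation remain comparable."""
    mesh = trimesh.load(mesh_path, force='mesh')
    points, _ = trimesh.sample.sample_surface(mesh, n_points)

    # Pairwise distances between the sampled points (unique pairs only).
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    dists = dists[np.triu_indices(n_points, k=1)]

    dists /= dists.max()
    hist, _ = np.histogram(dists, bins=n_bins, range=(0.0, 1.0))
    return hist / hist.sum()

def histogram_distance(h1, h2):
    """L1 distance between two normalized histograms (smaller = more similar)."""
    return np.abs(h1 - h2).sum()

# Example: rank a small collection against a query model (placeholder paths).
# query = distance_histogram("query.off")
# scores = {p: histogram_distance(query, distance_histogram(p))
#           for p in ["model_a.off", "model_b.off"]}
```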



2014 ◽  
Vol 487 ◽  
pp. 389-393
Author(s):  
Jian Zhao Zhou ◽  
Li Qun Han ◽  
Xiao Pan Xu ◽  
Wei Jun Chu

The NURBS modeling method can construct more realistic and vivid 3D models because it offers far better control over the curvature of the model surface than traditional polygon-mesh modeling. Therefore, after analyzing the primary principles of NURBS, this paper uses the professional modeling software 3DS MAX to explore NURBS methods and techniques for building complex surfaces. These techniques and methods were then applied to construct practical engineering machinery, resulting in a complete 3D model of a bulldozer as a case study.
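To make the extra control that NURBS offers concrete, the following is a minimal numeric sketch (independent of 3DS MAX) that evaluates a NURBS curve directly from its rational basis definition; the control points, weights, and knot vector are purely illustrative.

```python
import numpy as np

def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for the B-spline basis function N_{i,p}(u)."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + p] != knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) \
               * bspline_basis(i, p - 1, u, knots)
    right = 0.0
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) \
                * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

def nurbs_point(u, ctrl_pts, weights, knots, degree):
    """Evaluate one NURBS curve point: a weighted rational combination of
    control points, where raising a weight pulls the curve toward that point."""
    ctrl_pts = np.asarray(ctrl_pts, dtype=float)
    numer = np.zeros(ctrl_pts.shape[1])
    denom = 0.0
    for i in range(len(ctrl_pts)):
        b = bspline_basis(i, degree, u, knots) * weights[i]
        numer += b * ctrl_pts[i]
        denom += b
    return numer / denom

# Illustrative quadratic NURBS arc: four 2D control points, a clamped knot
# vector, and raised middle weights that pull the curve toward P1 and P2.
ctrl = [[0, 0], [1, 2], [3, 2], [4, 0]]
w = [1.0, 2.0, 2.0, 1.0]
knots = [0, 0, 0, 0.5, 1, 1, 1]
# Stop just below u = 1 to stay inside the half-open interval of the last knot span.
curve = [nurbs_point(u, ctrl, w, knots, degree=2) for u in np.linspace(0, 0.999, 50)]
```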



Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1299
Author(s):  
Honglin Yuan ◽  
Tim Hoogenkamp ◽  
Remco C. Veltkamp

Deep learning has achieved great success on robotic vision tasks. However, compared with other vision-based tasks, it is difficult to collect a representative and sufficiently large training set for six-dimensional (6D) object pose estimation, due to the inherent difficulty of data collection. In this paper, we propose the RobotP dataset, consisting of commonly used objects, for benchmarking 6D object pose estimation. To create the dataset, we apply a 3D reconstruction pipeline to produce high-quality depth images, ground truth poses, and 3D models for well-selected objects. Subsequently, based on the generated data, we automatically produce object segmentation masks and two-dimensional (2D) bounding boxes. To further enrich the data, we synthesize a large number of photo-realistic color-and-depth image pairs with ground truth 6D poses. Our dataset is freely distributed to research groups through the Shape Retrieval Challenge benchmark on 6D pose estimation. Based on our benchmark, different learning-based approaches are trained and tested on this unified dataset. The evaluation results indicate that there is considerable room for improvement in 6D object pose estimation, particularly for objects with dark colors, and that photo-realistic images help increase the performance of pose estimation algorithms.
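The benchmark's exact evaluation protocol is not reproduced here; as background, the sketch below computes the ADD metric commonly used to score a predicted 6D pose against ground truth (the average distance between model points transformed by the two poses), together with the widely used 10%-of-diameter correctness threshold. Variable names are illustrative.

```python
import numpy as np

def add_metric(model_points, R_gt, t_gt, R_pred, t_pred):
    """Average Distance of model points (ADD) between two rigid poses.

    model_points : (N, 3) array of 3D points sampled from the object model
    R_*, t_*     : 3x3 rotation matrices and 3-vector translations
    """
    pts_gt = model_points @ R_gt.T + t_gt
    pts_pred = model_points @ R_pred.T + t_pred
    return np.linalg.norm(pts_gt - pts_pred, axis=1).mean()

def is_correct(model_points, R_gt, t_gt, R_pred, t_pred, threshold=0.1):
    """A pose is often counted correct when ADD is below a fraction of the
    object diameter (maximum pairwise distance between model points)."""
    diameter = np.max(np.linalg.norm(
        model_points[:, None, :] - model_points[None, :, :], axis=-1))
    return add_metric(model_points, R_gt, t_gt, R_pred, t_pred) < threshold * diameter
```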



2021 ◽  
Vol 7 (2) ◽  
pp. 21
Author(s):  
Roland Perko ◽  
Manfred Klopschitz ◽  
Alexander Almer ◽  
Peter M. Roth

Many scientific studies deal with person counting and density estimation from single images, and convolutional neural networks (CNNs) have recently been applied to these tasks. Even though better results are often reported, it is frequently unclear where the improvements come from and whether the proposed approaches would generalize. Thus, the main goal of this paper is to identify the critical aspects of these tasks and to show how they limit state-of-the-art approaches. Based on these findings, we show how to mitigate these limitations. To this end, we implemented a CNN-based baseline approach, which we extended to deal with the identified problems: bias in the reference data sets, ambiguity in ground truth generation, and a mismatch between the evaluation metrics and the training loss function. The experimental results show that our modifications significantly outperform the baseline in terms of the accuracy of person counts and density estimation. In this way, we gain a deeper understanding of CNN-based person density estimation beyond the network architecture. Furthermore, our insights can help advance the field of person density estimation in general by highlighting current limitations in the evaluation protocols.
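One recurring source of the ground-truth ambiguity mentioned above is how point annotations are turned into density maps. A common recipe, sketched below, places a Gaussian at each annotated head position so that the map integrates to the person count; the kernel width and image size are illustrative choices, and the choice of sigma is itself one of the ambiguities in question.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def density_map_from_points(points, shape, sigma=4.0):
    """Convert point annotations (one per person) into a density map.

    points : iterable of (row, col) head positions
    shape  : (height, width) of the image
    sigma  : Gaussian kernel width in pixels (an ambiguous, dataset-dependent choice)
    """
    density = np.zeros(shape, dtype=np.float32)
    for r, c in points:
        r, c = int(round(r)), int(round(c))
        if 0 <= r < shape[0] and 0 <= c < shape[1]:
            density[r, c] += 1.0
    # Smoothing preserves the integral, so density.sum() stays close to the count.
    return gaussian_filter(density, sigma=sigma)

# Example: three annotated people in a 480x640 image.
annotations = [(100.2, 200.7), (150.0, 320.5), (400.9, 50.1)]
dmap = density_map_from_points(annotations, shape=(480, 640))
print(dmap.sum())  # approximately 3.0
```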



Algorithms ◽  
2021 ◽  
Vol 14 (7) ◽  
pp. 212
Author(s):  
Youssef Skandarani ◽  
Pierre-Marc Jodoin ◽  
Alain Lalande

Deep learning methods are the de facto solutions to a multitude of medical image analysis tasks. Cardiac MRI segmentation is one such application, which, like many others, requires a large amount of annotated data so that a trained network can generalize well. Unfortunately, the process of having a large number of images manually curated by medical experts is both slow and extremely expensive. In this paper, we set out to explore whether expert knowledge is a strict requirement for the creation of annotated data sets on which machine learning can successfully be trained. To do so, we gauged the performance of three segmentation models, namely U-Net, Attention U-Net, and ENet, trained with different loss functions on expert and non-expert ground truth for cardiac cine-MRI segmentation. Evaluation was done with classic segmentation metrics (Dice index and Hausdorff distance) as well as clinical measurements, such as the ventricular ejection fractions and the myocardial mass. The results reveal that the generalization performance of a segmentation neural network trained on non-expert ground truth data is, for all practical purposes, as good as that of a network trained on expert ground truth data, particularly when the non-expert receives a decent level of training, highlighting an opportunity for the efficient and cost-effective creation of annotations for cardiac data sets.
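For reference, the sketch below computes the two segmentation metrics mentioned above, the Dice index and a symmetric Hausdorff distance, between binary masks; it is an illustrative implementation, not the paper's evaluation code.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_index(pred, gt):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

def hausdorff_distance(pred, gt):
    """Symmetric Hausdorff distance between the two foreground pixel sets."""
    pred_pts = np.argwhere(pred)
    gt_pts = np.argwhere(gt)
    if len(pred_pts) == 0 or len(gt_pts) == 0:
        return np.inf
    return max(directed_hausdorff(pred_pts, gt_pts)[0],
               directed_hausdorff(gt_pts, pred_pts)[0])

# Example on tiny synthetic masks.
gt = np.zeros((64, 64), dtype=np.uint8); gt[20:40, 20:40] = 1
pred = np.zeros((64, 64), dtype=np.uint8); pred[22:42, 20:40] = 1
print(dice_index(pred, gt), hausdorff_distance(pred, gt))
```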



2016 ◽  
Vol 2016 ◽  
pp. 1-18 ◽  
Author(s):  
Mustafa Yuksel ◽  
Suat Gonul ◽  
Gokce Banu Laleci Erturkmen ◽  
Ali Anil Sinaci ◽  
Paolo Invernizzi ◽  
...  

Depending mostly on voluntarily submitted spontaneous reports, pharmacovigilance studies are hampered by the low quantity and quality of patient data. Our objective is to improve postmarket safety studies by enabling safety analysts to seamlessly access a wide range of EHR sources for collecting deidentified medical data sets of selected patient populations and tracing the reported incidents back to the original EHRs. We have developed an ontological framework in which EHR sources and target clinical research systems can continue using their own local data models, interfaces, and terminology systems, while structural and semantic interoperability are handled through rule-based reasoning on formal representations of the different models and terminology systems maintained in the SALUS Semantic Resource Set. The SALUS Common Information Model, at the core of this set, acts as the common mediator. We demonstrate the capabilities of our framework through one of the SALUS safety analysis tools, namely the Case Series Characterization Tool, which has been deployed on top of the regional EHR Data Warehouse of the Lombardy Region, containing about 1 billion records from 16 million patients, and validated by several pharmacovigilance researchers with real-life cases. The results confirm significant improvements in signal detection and evaluation compared to traditional methods, which lack such background information.
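The SALUS framework itself is ontology- and rule-based; the snippet below is only a toy illustration of the mediator idea it describes: each EHR source keeps its local terminology, and a shared mapping translates records into one common representation before analysis. All code systems, codes, and field names here are placeholder examples, not the SALUS data model.

```python
# Toy common-information-model mediator: local codes stay local, and a
# mapping translates them into a shared terminology for analysis.
TERMINOLOGY_MAP = {
    ("LOCAL-ICD", "I21.9"): ("SNOMED-CT", "22298006"),  # myocardial infarction
    ("LOCAL-LAB", "GLU"):   ("LOINC", "2345-7"),         # serum glucose
}

def to_common_model(local_record):
    """Translate a source-specific record into the common representation."""
    key = (local_record["code_system"], local_record["code"])
    system, code = TERMINOLOGY_MAP.get(key, (None, None))
    return {
        "patient_id": local_record["pid"],  # assumed deidentified upstream
        "concept_system": system,
        "concept_code": code,
        "source": local_record["source_name"],
    }

record = {"pid": "P-001", "code_system": "LOCAL-ICD", "code": "I21.9",
          "source_name": "ehr_site_a"}
print(to_common_model(record))
```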



2021 ◽  
Author(s):  
Shikha Suman ◽  
Ashutosh Karna ◽  
Karina Gibert

Hierarchical clustering is one of the preferred choices for understanding the underlying structure of a dataset and defining typologies, with multiple applications in real life. Among existing clustering algorithms, the hierarchical family is among the most popular because it reveals the inner structure of the dataset and yields the number of clusters as an output, unlike popular methods such as k-means, and the granularity of the final clustering can be adjusted to the goals of the analysis. The number of clusters in a hierarchical method relies on analysis of the resulting dendrogram: experts have criteria to visually inspect the dendrogram and determine the number of clusters, and finding automatic criteria that imitate experts in this task is still an open problem. This dependence on the expert to cut the tree is a limitation in real applications, for instance in Industry 4.0 and additive manufacturing. This paper analyses several cluster validity indexes in the context of determining a suitable number of clusters in hierarchical clustering. A new Cluster Validity Index (CVI) is proposed that properly captures the implicit criteria used by experts when analyzing dendrograms. The proposal has been applied to a range of datasets and validated against experts' ground truth, outperforming the state of the art while significantly reducing the computational cost.
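The proposed CVI itself is not reproduced here; the sketch below only illustrates the general workflow it plugs into: build a hierarchical clustering, cut the dendrogram at several candidate numbers of clusters, and keep the cut that optimizes a validity index (the silhouette score is used here purely as a stand-in for the paper's CVI).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.metrics import silhouette_score

def choose_k(X, k_range=range(2, 11), method="ward"):
    """Cut a hierarchical clustering at several k and keep the best cut
    according to a cluster validity index (silhouette as a stand-in)."""
    Z = linkage(X, method=method)
    best_k, best_score, best_labels = None, -np.inf, None
    for k in k_range:
        labels = fcluster(Z, t=k, criterion="maxclust")
        if len(np.unique(labels)) < 2:
            continue  # silhouette is undefined for a single cluster
        score = silhouette_score(X, labels)
        if score > best_score:
            best_k, best_score, best_labels = k, score, labels
    return best_k, best_labels

# Example on synthetic data with three well-separated groups.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 2)) for c in (0, 3, 6)])
k, labels = choose_k(X)
print(k)  # expected to be 3 for this toy data
```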



Author(s):  
M. Abdelaziz ◽  
M. Elsayed

Abstract: Underwater photogrammetry in archaeology in Egypt is a completely new experience, applied for the first time on the submerged archaeological site of the lighthouse of Alexandria, situated on the eastern extremity of the ancient island of Pharos at the foot of Qaitbay Fort at a depth of 2 to 9 metres. In 2009/2010, the CEAlex launched a 3D photogrammetry data-gathering programme for the virtual reassembly of broken artefacts. In 2013 and the beginning of 2014, with the support of the Honor Frost Foundation, methods were developed and refined to acquire manual photographic data of the entire underwater site of Qaitbay using a DSLR camera and simple, low-cost materials, in order to obtain a digital surface model (DSM) of the submerged site of the lighthouse and also to create 3D models of the objects themselves, such as statues, bases of statues and architectural elements. In this paper we present the methodology used for underwater data acquisition, data processing and modelling in order to generate a DSM of the submerged site of Alexandria's ancient lighthouse. Until 2016, only about 7200 m² of the submerged site, which extends over more than 13,000 m², had been covered. One of our main objectives in this project is to georeference the site, since this would allow for a very precise 3D model and for correcting the orientation of the site with respect to real-world space.


