Automated and adaptive geometry preparation for AR/VR applications

2021 ◽  
Author(s):  
Maximilian Peter Dammann ◽  
Wolfgang Steger ◽  
Ralph Stelzer

Abstract Product visualization in AR/VR applications requires a largely manual process of data preparation. Previous publications focus on error-free triangulation or transformation of product structure data and display attributes for AR/VR applications. This paper focuses on the preparation of the required geometry data. In this context, a significant reduction in effort can be achieved through automation. The steps of geometry preparation are identified and examined with respect to their automation potential. In addition, possible couplings of sub-steps are discussed. Based on these explanations, a structure for the geometry preparation process is proposed. With this structured preparation process, it becomes possible to consider the available computing power of the target platform during geometry preparation. The number of objects to be rendered, the tessellation quality, and the level of detail can be controlled by the automated choice of transformation parameters. Through this approach, tedious preparation tasks and iterative performance optimization can be avoided, which also simplifies the integration of AR/VR applications into product development and use. A software tool is presented in which partial steps of the automatic preparation are already implemented. After an analysis of the product structure of a CAD file, the transformation is executed for each component. Functions implemented so far allow, for example, the selection of assemblies and parts based on filter options, the transformation of geometries in batch mode, the removal of certain details, and the creation of UV maps. Flexibility, transformation quality, and time savings are described and discussed.
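The idea of deriving transformation parameters from a target platform's rendering budget can be sketched in a few lines. Everything below is an illustrative assumption, not the paper's tool: the heuristic that triangle count scales roughly with surface area divided by (curvature radius × chord tolerance), and all function names, are hypothetical.

```python
def chord_tolerance(area: float, radius: float, budget: int) -> float:
    """Rough chord tolerance for a curved surface of the given area and
    typical curvature radius, so that tessellation stays near `budget`
    triangles. Model (assumption): edge length ~ sqrt(8 * radius * tol),
    hence triangle count ~ area / (8 * radius * tol)."""
    return area / (8.0 * radius * budget)

def allocate_budget(areas: list, total_budget: int) -> list:
    """Split a platform-wide triangle budget across parts in proportion
    to their surface area."""
    total = sum(areas)
    return [max(1, round(total_budget * a / total)) for a in areas]
```

A larger budget yields a smaller (finer) tolerance, so the same pipeline can target a workstation or a mobile AR device by changing a single number.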


2018 ◽  
Vol 11 (11) ◽  
pp. 6203-6230 ◽  
Author(s):  
Simon Ruske ◽  
David O. Topping ◽  
Virginia E. Foot ◽  
Andrew P. Morse ◽  
Martin W. Gallagher

Abstract. Primary biological aerosol including bacteria, fungal spores and pollen have important implications for public health and the environment. Such particles may have different concentrations of chemical fluorophores and will respond differently in the presence of ultraviolet light, potentially allowing for different types of biological aerosol to be discriminated. Development of ultraviolet light induced fluorescence (UV-LIF) instruments such as the Wideband Integrated Bioaerosol Sensor (WIBS) has allowed size, morphology and fluorescence measurements to be collected in real time. However, without studying instrument responses in the laboratory, it is unclear to what extent different types of particles can be discriminated. Collection of laboratory data is vital to validate any approach used to analyse the data and to ensure that the available data are utilized as effectively as possible. In this paper a variety of methodologies are tested on a range of particles collected in the laboratory. Hierarchical agglomerative clustering (HAC) has previously been applied to UV-LIF data in a number of studies and is tested alongside other algorithms that could be used to solve the classification problem: Density-Based Spatial Clustering of Applications with Noise (DBSCAN), k-means and gradient boosting. Whilst HAC was able to effectively discriminate between reference narrow-size-distribution PSL particles, yielding a classification error of only 1.8 %, similar results were not obtained when testing on laboratory-generated aerosol, where the classification error was found to be between 11.5 % and 24.2 %. Furthermore, there is a large uncertainty in this approach in terms of the data preparation and the cluster index used, and we were unable to attain consistent results across the different sets of laboratory-generated aerosol tested. The lowest classification errors were obtained using gradient boosting, where the misclassification rate was between 4.38 % and 5.42 %. The largest contribution to the error, in the case of the higher misclassification rate, came from the pollen samples, where 28.5 % of the samples were incorrectly classified as fungal spores. The technique was robust to changes in data preparation provided a fluorescence threshold was applied to the data. In the event that laboratory training data are unavailable, DBSCAN was found to be a potential alternative to HAC. In the case of one of the data sets, where 22.9 % of the data were left unclassified, we were able to produce three distinct clusters, obtaining a classification error of only 1.42 % on the classified data. These results could not be replicated for the other data set, where 26.8 % of the data were not classified and a classification error of 13.8 % was obtained. This method, like HAC, also appeared to be heavily dependent on data preparation, requiring a different selection of parameters depending on the preparation used. Further analysis will also be required to confirm our selection of the parameters when using this method on ambient data. There is a clear need for the collection of additional laboratory-generated aerosol to improve interpretation of current databases and to aid in the analysis of data collected from an ambient environment. New instruments with greater resolution are likely to improve on current discrimination between pollen, bacteria and fungal spores, and even between different species; however, the need for extensive laboratory data sets will grow as a result.
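For readers unfamiliar with the algorithms compared in this abstract, DBSCAN can be sketched minimally as follows. This is a toy version on 2D points standing in for particle features (e.g. size and fluorescence channels); it is not the implementation used in the study, and the `eps`/`min_pts` values in the usage example are arbitrary.

```python
from math import dist

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: assign each point a cluster id, or -1 for noise."""
    labels = [None] * len(points)

    def neighbors(i):
        # all points within eps of point i (including i itself)
        return [j for j in range(len(points)) if dist(points[i], points[j]) <= eps]

    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1  # noise (may later be claimed as a border point)
            continue
        labels[i] = cluster
        seeds = [j for j in nbrs if j != i]
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point: joins, but is not expanded
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:
                seeds.extend(jn)  # core point: expand the cluster through it
        cluster += 1
    return labels
```

Because cluster membership is driven by local density rather than a preset cluster count, points that fit no dense region stay labelled -1, which mirrors the fraction of "unclassified" data reported above.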


2015 ◽  
Vol 56 (1) ◽  
pp. 59-83
Author(s):  
Dafydd Gibbon ◽  
Katarzyna Klessa ◽  
Jolanta Bachan

Abstract The study of speech timing, i.e. the duration and speed or tempo of speech events, has increased in importance over the past twenty years, in particular in connection with increased demands for accuracy, intelligibility and naturalness in speech technology, with applications in language teaching and testing, and with the study of speech timing patterns in language typology. However, the methods used in such studies are very diverse, and so far there is no accessible overview of these methods. Since the field is too broad for us to provide an exhaustive account, we have made two choices: first, to provide a framework of paradigmatic (classificatory), syntagmatic (compositional) and functional (discourse-oriented) dimensions for duration analysis; and second, to provide worked examples of a selection of methods associated primarily with these three dimensions. Some of the methods which are covered are established state-of-the-art approaches (e.g. the paradigmatic Classification and Regression Trees, CART, analysis), others are discussed in a critical light (e.g. so-called ‘rhythm metrics’). A set of syntagmatic approaches applies to the tokenisation and tree parsing of duration hierarchies, based on speech annotations, and a functional approach describes duration distributions with sociolinguistic variables. Several of the methods are supported by a new web-based software tool for analysing annotated speech data, the Time Group Analyser.
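As a concrete example of the ‘rhythm metrics’ mentioned above, the normalised Pairwise Variability Index (nPVI) of a sequence of durations is easily computed. This is the standard Grabe and Low formulation, not code from the Time Group Analyser tool:

```python
def npvi(durations):
    """Normalised Pairwise Variability Index over successive durations:
    100 * mean of |d_k - d_{k+1}| / ((d_k + d_{k+1}) / 2)."""
    if len(durations) < 2:
        raise ValueError("need at least two durations")
    terms = [abs(a - b) / ((a + b) / 2.0)
             for a, b in zip(durations, durations[1:])]
    return 100.0 * sum(terms) / len(terms)
```

A perfectly isochronous sequence scores 0, while alternating long-short sequences score high, which is why the metric has been read (controversially, as the abstract notes) as an index of syllable- vs. stress-timed rhythm.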


BMC Genomics ◽  
2019 ◽  
Vol 20 (1) ◽  
Author(s):  
Kang-Jian Hua ◽  
Bin-Guang Ma

Abstract Background More and more 3C/Hi-C experiments on prokaryotes have been published. However, most published modeling tools for 3D chromosome structures target eukaryotes. How to transform prokaryotic experimental chromosome interaction data into spatial structure models is an important and much-needed task. Results We have developed a new reconstruction program for bacterial chromosome 3D structure models called EVR, which exploits a simple Error-Vector Resultant (EVR) algorithm. This software tool is particularly optimized for the closed-loop structural features of prokaryotic chromosomes. The parallel implementation of the program can utilize the computing power of both multi-core CPUs and GPUs. Conclusions EVR can be used to reconstruct bacterial 3D chromosome structures quickly and precisely from the contact frequency matrices derived from 3C/Hi-C experimental data.
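The error-vector idea can be illustrated with a toy relaxation: map contact frequencies to target distances (here simply d = 1/f, a simplifying assumption), then repeatedly move each bead along the resultant of its pairwise error vectors. This sketch is illustrative only and is not the EVR implementation.

```python
import random
from math import dist

def reconstruct(freq, steps=1000, lr=0.1, seed=0):
    """Toy 3D reconstruction from a symmetric contact-frequency matrix.
    Target distance between beads i and j is 1/freq[i][j] (assumption);
    each step moves every bead along the sum of its error vectors."""
    n = len(freq)
    target = [[1.0 / freq[i][j] if i != j else 0.0 for j in range(n)]
              for i in range(n)]
    rng = random.Random(seed)
    pos = [[rng.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(n)]
    for _ in range(steps):
        for i in range(n):
            resultant = [0.0, 0.0, 0.0]
            for j in range(n):
                if i == j:
                    continue
                d = dist(pos[i], pos[j]) or 1e-9
                err = (target[i][j] - d) / d  # >0 push apart, <0 pull together
                for k in range(3):
                    resultant[k] += err * (pos[i][k] - pos[j][k])
            for k in range(3):
                pos[i][k] += lr * resultant[k] / (n - 1)
    return pos
```

With all pairwise frequencies equal, three beads relax toward an equilateral triangle; a real tool additionally handles noisy, sparse matrices and the closed-loop topology the abstract mentions.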


2020 ◽  
pp. 107699862097855
Author(s):  
Takashi Yamashita ◽  
Thomas J. Smith ◽  
Phyllis A. Cummins

In order to promote the use of increasingly available large-scale assessment data in education and expand the scope of analytic capabilities among applied researchers, this study provides step-by-step guidance and practical examples of syntax and data analysis using Mplus. A concise overview of the key unique aspects of large-scale assessment data from the 2012/2014 Programme for the International Assessment of Adult Competencies (PIAAC) is given. Using commonly used statistical software including SAS and R, a simple macro program and syntax are developed to streamline the data preparation process. Then, two examples of structural equation models are demonstrated using Mplus. The suggested data preparation and analytic approaches can be immediately applied to existing large-scale assessment data.
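The data-preparation step described above can be sketched language-agnostically. The snippet below uses Python rather than the SAS/R macros the article develops, and the column names and missing codes are hypothetical; it writes selected survey columns to the header-free, space-delimited .dat layout that Mplus reads, recoding non-numeric missing codes to a numeric flag.

```python
def to_mplus_dat(rows, columns, out_path,
                 missing_codes=("D", "R", "N", ""), missing_flag="-99"):
    """Write selected columns of survey records as a space-delimited,
    header-free .dat file (the free format Mplus expects). Values matching
    a missing code are replaced by a numeric flag, which the Mplus input
    file should then declare, e.g. MISSING ARE ALL (-99)."""
    with open(out_path, "w") as f:
        for row in rows:
            vals = [missing_flag if row.get(c, "") in missing_codes
                    else str(row.get(c))
                    for c in columns]
            f.write(" ".join(vals) + "\n")
```

Keeping this step scripted (in whatever language) is what makes the workflow repeatable across the many plausible-value and replicate-weight columns that large-scale assessment files carry.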


1987 ◽  
Vol 2 (1) ◽  
pp. 55-63 ◽  
Author(s):  
Brian P. Bloomfield

Abstract This paper examines the claim that machine induction can alleviate the current knowledge engineering bottleneck in expert system construction. It presents a case study of the rule induction software tool known as Expert-Ease and proposes a set of criteria which might guide the selection of appropriate domains.


2005 ◽  
Vol 60 (5-6) ◽  
pp. 459-466 ◽  
Author(s):  
Rüdiger Kaspera ◽  
Ulrich Krings ◽  
Michael Pescheck ◽  
Dieter Sell ◽  
Jens Schrader ◽  
...  

Selective transformations of limonene by asco- and basidiomycetes were investigated. On the shake-flask scale, Penicillium citrinum hydrated R-(+)-limonene to α-terpineol [83% regioselectivity (rs), more than 80 mg l⁻¹ product yield], and Gongronella butleri catalysed the terminal oxidation to yield perillyl alcohol (60% rs, 16 mg l⁻¹). On the laboratory bioreactor scale, Penicillium digitatum produced a peak concentration of 506 mg α-terpineol l⁻¹ in fed-batch mode, equivalent to a theoretical yield of 67%, and no volatile by-products were found. Fusarium proliferatum transformed R-(+)-limonene enantiospecifically to cis-(+)-carveol (98.6% ee, more than 35 mg l⁻¹ product yield) and S-(-)-limonene predominantly to trans-(-)-carveol (96.3% ee). Pleurotus sapidus selectively dehydrogenised the accumulating trans-(-)-carveol to the corresponding enantiopure R-(-)-carvone. The results show that a careful selection of strain and bioprocess parameters may improve both the yield and the optical purity of a desired product.

