Semi-Automated Roller Parameters Extraction from Terrestrial Lidar

2021 ◽  
Vol 87 (12) ◽  
pp. 879-890
Author(s):  
Sagar S. Deshpande ◽  
Mike Falk ◽  
Nathan Plooster

Rollers are an integral part of a hot-rolling steel mill: they transport hot metal from one end of the mill to the other, and the quality of the steel depends heavily on the surface quality of the rollers. This paper presents semi-automated methodologies to extract roller parameters from terrestrial lidar points. The procedure was divided into two steps. First, the three-dimensional points were converted to a two-dimensional image, and the extents of the rollers were detected using fast Fourier transform image matching; erroneously identified rollers were flagged by moving-average filters. In the second step, roller parameters were determined from the filtered roller points: the lidar points of every roller were iteratively fitted to a circle, and the radius and center of the fitted circle were taken as the average radius and average rotation axis of the roller, respectively. These parameters were also extracted manually and compared with the measured parameters for accuracy analysis. The proposed methodology was able to extract roller parameters at the millimeter level. Two data sets were used to validate the proposed methodologies. In the first data set, 366 out of 372 rollers (97.3%) were identified and modeled. The second, smaller data set consisted of 18 rollers, all of which were identified and modeled accurately.
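The iterative circle fitting described above can be sketched with an algebraic (Kåsa-style) least-squares fit plus residual-based outlier rejection. This is a minimal illustration, not the authors' implementation; the function names and the residual tolerance are assumptions.

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit to 2-D points.

    Solves a*x + b*y + c = -(x^2 + y^2); the circle center is
    (-a/2, -b/2) and the radius is sqrt((a^2 + b^2)/4 - c).
    """
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    r = np.sqrt(cx**2 + cy**2 - c)
    return cx, cy, r

def fit_circle_robust(x, y, tol=0.005, max_iter=10):
    """Iteratively refit, excluding points whose radial residual
    exceeds `tol`, to filter outlier lidar returns."""
    keep = np.ones(len(x), dtype=bool)
    for _ in range(max_iter):
        cx, cy, r = fit_circle(x[keep], y[keep])
        resid = np.abs(np.hypot(x - cx, y - cy) - r)
        new_keep = resid < tol
        if np.array_equal(new_keep, keep):
            break
        keep = new_keep
    return cx, cy, r, keep
```

Because residuals are recomputed against the full point set on every pass, points dropped by an early, outlier-skewed fit can be recovered once the fit stabilizes.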

2019 ◽  
Vol 44 (7) ◽  
pp. 738-744
Author(s):  
Isabel Graul ◽  
Ivan Marintschev ◽  
Sascha Rausch ◽  
Niklas Eckart ◽  
Gunther O. Hofmann ◽  
...  

Different multiplanar reformation (MPR-512 and MPR-256) algorithms for intraoperatively acquired 3-D fluoroscopy data exist, without recommendations for their use in the literature. To compare the algorithms, 3-D fluoroscopic data sets of 46 radius fractures were blinded and processed using MPR-256 and MPR-512 (Ziehm Vision Vario 3D). Each reformatted data set was analysed for image quality, fracture reduction quality and screw misplacements. Overall image quality was rated higher with MPR-512 than with MPR-256 (3.2 vs. 2.2 points on a 1–5 scale), accompanied by a smaller proportion of scans that could not be analysed (10 vs. 19%). Interobserver agreement on fracture reduction quality was fair to moderate, independent of the algorithm. In contrast, for screw misplacements, MPR-dependent ratings were found (MPR-256: fair to moderate; MPR-512: moderate to substantial). Optimization of post-processing algorithms, rather than modification of image acquisition, may increase the image quality for assessing implant positioning, but limitations in evaluating fracture reduction quality remain.


Author(s):  
S. S. Deshpande ◽  
M. Falk ◽  
N. Plooster

Abstract. Terrestrial lidar scanners are increasingly being used in numerous indoor mapping applications. This paper presents a methodology to model rollers used in hot-rolling steel mills. Hot-rolling steel mills are large facilities where steel is processed into different shapes. In a steel sheet manufacturing process, a steel slab is reheated at one end of the mill and is passed through multiple presses to achieve the desired cross-section. Hundreds of steel rollers are used to transport the steel slab from one end of the mill to the other. Over a period of use, these rollers wear out and need replacement. Manual determination of the damage to the rollers is a time-consuming task. Moreover, manual measurements can be influenced by the operator’s judgment. This paper presents a methodology to model rollers in a hot-rolling steel mill using lidar points. A terrestrial lidar scanner was used to collect lidar points over the roller surfaces. Data from several stations were merged to create a single point cloud. Using a bounding box, lidar points on all the rollers were clipped and used in this paper. The clipped data consisted of roller points as well as outlier points. Depending on the scan angles of the scanner stations, only partial surfaces of the rollers were captured. A right-handed coordinate frame was used in which the X-axis passed through the centers of all the rollers, the Y-axis was parallel to the length of the first roller, and the Z-axis was in the plumb direction. Using a standard roller diameter, model roller points were created to extract the rollers. Both the lidar data and the model points were converted to rectangular prism-shaped voxels of dimensions 15.24 mm (0.05 ft) × 15.24 mm in the X and Z directions, extending over the entire width of the roller in the Y direction. Voxels containing at least 40 lidar points were considered valid. Binary images of both the lidar points and the model points were created in the X-Z plane using the valid voxels.
The rollers in the lidar image were located by performing 2D FFT image matching with the model roller image. The roller points at the shortlisted locations were fitted with a circle equation to determine the mean roller diameters and mean center locations (the rollers’ rotation axes). The outlier points were filtered in this process for each roller. The elevation at the top of every roller was determined by adding its radius to the Z-coordinate of its center. Incorrectly located and/or modeled rollers were identified by applying moving-average filters. Positively identified roller points were further analyzed to determine surface erosion and tilt. This methodology showed that the rollers can be effectively modeled using lidar points.
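The rasterization and FFT-matching steps above can be sketched roughly as follows. This is a simplified illustration, not the authors' code: `binary_image` and `fft_match` are hypothetical names, while the 15.24 mm cell size and the 40-point validity threshold come from the text. Points are collapsed to the X-Z plane, since the voxels span the full roller width in Y.

```python
import numpy as np

def binary_image(points_xz, cell=0.01524, min_pts=40):
    """Rasterize X-Z lidar points into a binary occupancy image.

    A cell is 'on' only if it holds at least `min_pts` returns,
    which suppresses sparse outlier points.
    """
    xz = np.asarray(points_xz, dtype=float)
    origin = xz.min(axis=0)
    idx = np.floor((xz - origin) / cell).astype(int)
    counts = np.zeros(idx.max(axis=0) + 1, dtype=int)
    np.add.at(counts, (idx[:, 0], idx[:, 1]), 1)
    return (counts >= min_pts).astype(float), origin

def fft_match(image, template):
    """Cross-correlate a (smaller) template with an image via the FFT
    and return the offset (in cells) of the best match."""
    pad = np.zeros_like(image)
    pad[:template.shape[0], :template.shape[1]] = template
    corr = np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(pad))).real
    return np.unravel_index(np.argmax(corr), corr.shape)
```

In practice the correlation peaks would be shortlisted per expected roller spacing before circle fitting.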


2011 ◽  
pp. 24-32 ◽  
Author(s):  
Nicoleta Rogovschi ◽  
Mustapha Lebbah ◽  
Younès Bennani

Most traditional clustering algorithms are limited to handling data sets that contain either continuous or categorical variables, yet data sets with mixed types of variables are common in the data mining field. In this paper we introduce a weighted self-organizing map for clustering, analysis and visualization of mixed (continuous/binary) data. The weights and prototypes are learned simultaneously, ensuring an optimized data clustering. The higher a variable's weight, the more the clustering algorithm takes into account the information carried by that variable. The learning of these topological maps is combined with a weighting process over the different variables, computing weights that influence the quality of the clustering. We illustrate the power of this method with data sets taken from a public data set repository: a handwritten digit data set, the Zoo data set and three other mixed data sets. The results show a good quality of the topological ordering and homogeneous clustering.
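The abstract does not give the exact weighted objective, but the idea of a per-variable weighted distance for mixed continuous/binary data can be illustrated with a minimal sketch (all names are assumptions): squared-Euclidean terms for continuous variables, mismatch terms for binary ones, each scaled by its learned weight.

```python
import numpy as np

def weighted_mixed_distance(x_cont, x_bin, proto_cont, proto_bin,
                            w_cont, w_bin):
    """Weighted distance between an observation and a map prototype.

    Continuous variables contribute weighted squared differences;
    binary variables contribute weighted mismatches (Hamming terms).
    """
    d_cont = np.sum(w_cont * (x_cont - proto_cont) ** 2)
    d_bin = np.sum(w_bin * (x_bin != proto_bin))
    return d_cont + d_bin

def best_matching_unit(x_cont, x_bin, protos_cont, protos_bin,
                       w_cont, w_bin):
    """Index of the map cell whose prototype is closest to x."""
    dists = [weighted_mixed_distance(x_cont, x_bin, pc, pb, w_cont, w_bin)
             for pc, pb in zip(protos_cont, protos_bin)]
    return int(np.argmin(dists))
```

With this formulation, raising a variable's weight directly increases its influence on cell assignment, which is the behaviour the abstract describes.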


2017 ◽  
Vol 6 (3) ◽  
pp. 71 ◽  
Author(s):  
Claudio Parente ◽  
Massimiliano Pepe

The purpose of this paper is to investigate the impact of weights in pan-sharpening methods applied to satellite images. Different sets of weights have been considered and compared in the IHS and Brovey methods. The first set assigns the same weight to each band, while the second uses weights obtained from the spectral radiance response; these two sets are the most common in pan-sharpening applications. The third set results from a new method, which computes the first-order moment of inertia of each band, taking into account the spectral response. To test the impact of the different weight sets, WorldView-3 satellite images have been considered. In particular, two different scenes (the first an urban landscape, the second a rural landscape) have been investigated. The quality of the pan-sharpened images has been analysed using three quality indexes: root mean square error (RMSE), relative average spectral error (RASE) and Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS).
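As one concrete instance of how per-band weights enter a pan-sharpening method, a weighted Brovey transform can be sketched as follows. This is a generic textbook formulation, not necessarily the authors' exact implementation; the intensity is a weighted sum of the multispectral bands, and each band is scaled by the pan-to-intensity ratio.

```python
import numpy as np

def brovey_pansharpen(ms, pan, weights):
    """Weighted Brovey pan-sharpening.

    ms      : (bands, H, W) multispectral stack resampled to the pan grid
    pan     : (H, W) panchromatic band
    weights : per-band weights used to synthesize the intensity image
    """
    w = np.asarray(weights, dtype=float).reshape(-1, 1, 1)
    intensity = np.sum(w * ms, axis=0)           # weighted intensity
    ratio = pan / np.maximum(intensity, 1e-12)   # guard divide-by-zero
    return ms * ratio                            # inject pan detail
```

Changing `weights` from equal values to spectral-response-derived values is exactly the kind of variation the paper evaluates with RMSE, RASE and ERGAS.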


2005 ◽  
Vol 5 (7) ◽  
pp. 1835-1841 ◽  
Author(s):  
S. Noël ◽  
M. Buchwitz ◽  
H. Bovensmann ◽  
J. P. Burrows

Abstract. A first validation of water vapour total column amounts derived from measurements of the SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY (SCIAMACHY) in the visible spectral region has been performed. For this purpose, SCIAMACHY water vapour data have been determined for the year 2003 using an extended version of the Differential Optical Absorption Spectroscopy (DOAS) method, called Air Mass Corrected DOAS (AMC-DOAS). The SCIAMACHY results are compared with corresponding water vapour measurements by the Special Sensor Microwave Imager (SSM/I) and with model data from the European Centre for Medium-Range Weather Forecasts (ECMWF). Confirming previous results, SCIAMACHY-derived water vapour columns are typically slightly lower than both SSM/I and ECMWF data, especially over ocean areas. However, these deviations are much smaller than the observed scatter of the data, which is caused by the different temporal and spatial sampling and resolution of the data sets. For example, the overall difference with ECMWF data is only −0.05 g/cm², whereas the typical scatter is on the order of 0.5 g/cm². Both values show almost no variation over the year. In addition, first monthly means of SCIAMACHY water vapour data have been computed. The quality of these monthly means is currently limited by the availability of calibrated SCIAMACHY spectra. Nevertheless, first comparisons with ECMWF data show that SCIAMACHY (and similar instruments) are able to provide a new independent global water vapour data set.
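The two summary statistics quoted above, the overall difference (mean bias) and the scatter, can be computed for collocated column data sets with a trivial helper (illustrative only; the paper's collocation procedure is not described in the abstract):

```python
import numpy as np

def bias_and_scatter(a, b):
    """Mean bias and scatter (sample standard deviation of the
    differences) between two collocated column data sets."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    return d.mean(), d.std(ddof=1)
```

A bias much smaller than the scatter, as reported here (−0.05 vs. ~0.5 g/cm²), indicates agreement in the mean despite sampling-driven noise.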


Author(s):  
MUSTAPHA LEBBAH ◽  
YOUNÈS BENNANI ◽  
NICOLETA ROGOVSCHI

This paper introduces a probabilistic self-organizing map for topographic clustering, analysis and visualization of multivariate binary data, or of categorical data using binary coding. We propose a probabilistic formalism dedicated to binary data in which cells are represented by a Bernoulli distribution. Each cell is characterized by a prototype with the same binary coding as used in the data space and by the probability of differing from this prototype. The proposed learning algorithm, a Bernoulli self-organizing map, is an application of the standard EM algorithm. We illustrate the power of this method with six data sets taken from a public data set repository. The results show a good quality of the topological ordering and homogeneous clustering.
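The cell model described above — a binary prototype plus a probability of differing from it — yields a simple per-cell likelihood, which is the quantity an EM-style learning algorithm would work with. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def bernoulli_cell_loglik(x, prototype, eps):
    """Log-likelihood of a binary vector x under a cell described by a
    binary prototype w and a probability eps of differing from it:
    P(x_j) = eps if x_j != w_j, else (1 - eps)."""
    x = np.asarray(x)
    prototype = np.asarray(prototype)
    mismatch = int(np.sum(x != prototype))
    match = x.size - mismatch
    return mismatch * np.log(eps) + match * np.log(1.0 - eps)
```

With eps < 0.5, the likelihood is maximal for vectors identical to the prototype and decays with each mismatching component.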


2016 ◽  
Vol 25 (3) ◽  
pp. 431-440 ◽  
Author(s):  
Archana Purwar ◽  
Sandeep Kumar Singh

Abstract. Data quality is an important concern in data mining; the validity of mining algorithms is reduced if the data are not of good quality. Data quality can be assessed in terms of missing values (MV) as well as noise present in the data set. Various imputation techniques have been studied for MV, but little attention has been given to noise in earlier work. Moreover, to the best of our knowledge, no one has used density-based spatial clustering of applications with noise (DBSCAN) for MV imputation. This paper proposes a novel technique, density-based imputation (DBSCANI), built on density-based clustering to deal with incomplete values in the presence of noise. The density-based clustering algorithm proposed by Kriegel groups objects according to their density in spatial databases: the high-density regions are known as clusters, and the low-density regions contain the noise objects in the data set. Experiments were performed on the Iris data set from the life-science domain and on Jain’s (2D) data set from the shape data sets. The performance of the proposed method is evaluated using root mean square error (RMSE) and compared with existing K-means imputation (KMI). Results show that our method is more noise resistant than KMI on the data sets under study.
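The abstract does not detail the DBSCANI algorithm, but the underlying idea — impute missing entries from DBSCAN clusters rather than from noise-contaminated global statistics — can be sketched as follows. This is a simplified interpretation using scikit-learn's `DBSCAN`; assigning incomplete rows to the nearest cluster centroid and falling back to column means are assumptions, not the paper's stated procedure.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def dbscan_impute(X, eps=0.5, min_samples=5):
    """Impute missing values (NaN) using DBSCAN clusters.

    Complete rows are clustered; each incomplete row is assigned to the
    cluster whose centroid is nearest on its observed features, and its
    missing entries are filled with that cluster's feature means.
    Noise points (label -1) are excluded from the centroids, which is
    what makes the imputation noise resistant.
    """
    X = np.asarray(X, dtype=float).copy()
    complete = ~np.isnan(X).any(axis=1)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X[complete])
    col_mean = np.nanmean(X, axis=0)
    centroids = {k: X[complete][labels == k].mean(axis=0)
                 for k in set(labels) if k != -1}
    for i in np.where(~complete)[0]:
        obs = ~np.isnan(X[i])
        if centroids:
            best = min(centroids,
                       key=lambda c: np.sum((X[i, obs] - centroids[c][obs]) ** 2))
            fill = centroids[best]
        else:
            fill = col_mean  # no clusters found: global fallback
        X[i, ~obs] = fill[~obs]
    return X
```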


2016 ◽  
Author(s):  
Brecht Martens ◽  
Diego G. Miralles ◽  
Hans Lievens ◽  
Robin van der Schalie ◽  
Richard A. M. de Jeu ◽  
...  

Abstract. The Global Land Evaporation Amsterdam Model (GLEAM) is a set of algorithms dedicated to the estimation of terrestrial evaporation and root-zone soil moisture from satellite data. Ever since its development in 2011, the model has been regularly revised aiming at the optimal incorporation of new satellite-observed geophysical variables, and improving the representation of physical processes. In this study, the next version of this model (v3) is presented. Key changes relative to the previous version include: (1) a revised formulation of the evaporative stress, (2) an optimized drainage algorithm, and (3) a new soil moisture data assimilation system. GLEAM v3 is used to produce three new data sets of terrestrial evaporation and root-zone soil moisture, including a 35-year data set spanning the period 1980–2014 (v3.0a, based on satellite-observed soil moisture, vegetation optical depth and snow water equivalents, reanalysis air temperature and radiation, and a multi-source precipitation product), and two fully satellite-based data sets. The latter two share most of their forcing, except for the vegetation optical depth and soil moisture products, which are based on observations from different passive and active C- and L-band microwave sensors (European Space Agency Climate Change Initiative data sets) for the first data set (v3.0b, spanning the period 2003–2015) and observations from the Soil Moisture and Ocean Salinity satellite in the second data set (v3.0c, spanning the period 2011–2015). These three data sets are described in detail, compared against analogous data sets generated using the previous version of GLEAM (v2), and validated against measurements from 64 eddy-covariance towers and 2338 soil moisture sensors across a broad range of ecosystems. 
Results indicate that the quality of the v3 soil moisture is consistently better than that of v2: average correlations against in situ surface soil moisture measurements increase from 0.61 to 0.64 in the case of the v3.0a data set, and the representation of soil moisture in the second layer improves as well, with correlations increasing from 0.47 to 0.53. Similar improvements are observed for the two fully satellite-based data sets. Despite regional differences, the quality of the evaporation fluxes remains broadly similar to that obtained with the previous version of GLEAM, with average correlations against eddy-covariance measurements between 0.78 and 0.80 for the three data sets. These global data sets of terrestrial evaporation and root-zone soil moisture are now openly available at http://GLEAM.eu and may be used for large-scale hydrological applications, climate studies and research on land-atmosphere feedbacks.
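The validation metric quoted above, an average Pearson correlation across a network of in situ sensors, can be sketched with a small helper (illustrative; the study's collocation and screening steps are not described here):

```python
import numpy as np

def average_validation_correlation(model_series, insitu_series):
    """Average Pearson correlation between modelled and in situ time
    series across a network of sensors (one pair of series per site)."""
    rs = [np.corrcoef(m, o)[0, 1]
          for m, o in zip(model_series, insitu_series)]
    return float(np.mean(rs))
```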


2021 ◽  
Author(s):  
Alexander K. Bartella ◽  
Josefine Laser ◽  
Mohammad Kamal ◽  
Dirk Halama ◽  
Michael Neuhaus ◽  
...  

Abstract. Introduction: Three-dimensional facial scan images play an increasingly important role in the peri-therapeutic management of oral and maxillofacial and head and neck surgery cases. Face scan images can be obtained with optical facial scanners utilizing a line laser, stereophotography or structured light, or from volumetric data acquired by cone beam computed tomography (CBCT). The aim of this study was to evaluate whether two low-cost procedures for creating three-dimensional face scan images produce data sets sufficient for clinical analysis. Materials and methods: 50 healthy volunteers were included in the study. Two test objects with defined dimensions were attached to the forehead and the left cheek. Anthropometric values were first measured manually; consecutively, face scans were performed with a smart device and with manual photogrammetry and compared to the manually measured data sets. Results: Anthropometric distances deviated on average 2.17 mm from the manual measurement (smart-device scanning 3.01 mm vs. photogrammetry 1.34 mm), with 7 out of 8 deviations being statistically significant. Of a total of 32 angles, 19 values showed a significant difference from the original 90° angles. The average deviation was 6.5° (smart-device scanning 10.1° vs. photogrammetry 2.8°). Conclusion: Manual photogrammetry with a regular photo camera shows higher accuracy than scanning with a smart device. However, the smart device was more intuitive to handle, and further technical improvement of the cameras used should be watched closely.


2019 ◽  
Author(s):  
Jacob Schreiber ◽  
Jeffrey Bilmes ◽  
William Stafford Noble

Abstract. Motivation: Recent efforts to describe the human epigenome have yielded thousands of uniformly processed epigenomic and transcriptomic data sets. These data sets characterize a rich variety of biological activity in hundreds of human cell lines and tissues (“biosamples”). Understanding these data sets, and specifically how they differ across biosamples, can help explain many cellular mechanisms, particularly those driving development and disease. However, due primarily to cost, the total number of assays that can be performed is limited. Previously described imputation approaches, such as Avocado, have sought to overcome this limitation by predicting genome-wide epigenomic experiments using learned associations among available epigenomic data sets. However, these previous imputations have focused primarily on measurements of histone modification and chromatin accessibility, despite other biological activity being crucially important. Results: We applied Avocado to a data set of 3,814 tracks of data derived from the ENCODE compendium, spanning 400 human biosamples and 84 assays. The resulting imputations cover measurements of chromatin accessibility, histone modification, transcription, and protein binding. We demonstrate the quality of these imputations by comprehensively evaluating the model’s predictions and by showing significant improvements in protein binding performance compared to the top models in an ENCODE-DREAM challenge. Additionally, we show that the Avocado model allows for efficient addition of new assays and biosamples to a pre-trained model, achieving high accuracy at predicting protein binding even with only a single track of training data. Availability: Tutorials and source code are available under an Apache 2.0 license at https://github.com/jmschrei/[email protected] or [email protected]

