3D LEAST SQUARES MATCHING APPLIED TO MICRO-TOMOGRAPHY DATA

Author(s):  
F. Liebold ◽  
R. Lorenzoni ◽  
I. Curosu ◽  
F. Léonard ◽  
V. Mechtcherine ◽  
...  

Abstract. The paper introduces 3D least squares matching as a technique to analyze multi-temporal micro-tomography data in civil engineering material testing. Time series of tomography voxel data sets are recorded during an in-situ tension test of a strain-hardening cement-based composite probe at consecutive load steps. 3D least squares matching tracks cuboids across consecutive voxel data sets by minimizing the sum of squared voxel value differences after a 12-parameter 3D affine transformation. For a regular grid of locations in each voxel data set of the deformed states, a subvoxel-precise 3D displacement vector field is computed. Discontinuities in these displacement vector fields indicate the occurrence of cracks in the probes during the load tests. These cracks are detected and quantitatively described by computing the principal strains of tetrahedrons in a tetrahedral mesh generated between the matching points. The subvoxel-accuracy potential of the technique allows the detection of very small cracks with a width much smaller than the actual voxel size.
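A minimal sketch of the matching step described above: for a cuboid around a grid point, estimate the 12-parameter 3D affine transformation that minimizes the sum of squared voxel value differences between the reference and the deformed volume. Trilinear interpolation and a generic least-squares solver are assumed here; the function names are illustrative and not taken from the paper's implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates
from scipy.optimize import least_squares

def match_cuboid(ref_vol, def_vol, center, half_size=8):
    """Estimate a 12-parameter affine transform mapping a cuboid around
    `center` in ref_vol onto def_vol (voxel arrays of equal shape)."""
    # local grid of voxel coordinates inside the cuboid
    r = np.arange(-half_size, half_size + 1)
    zz, yy, xx = np.meshgrid(r, r, r, indexing="ij")
    local = np.stack([zz.ravel(), yy.ravel(), xx.ravel()], axis=0).astype(float)
    ref_coords = local + np.asarray(center, float)[:, None]
    g_ref = map_coordinates(ref_vol, ref_coords, order=1)  # trilinear sampling

    def residuals(p):
        # p = [tz, ty, tx, a11..a33]: translation plus 3x3 linear part
        t = p[:3][:, None]
        A = p[3:].reshape(3, 3)
        def_coords = A @ local + np.asarray(center, float)[:, None] + t
        g_def = map_coordinates(def_vol, def_coords, order=1)
        return g_def - g_ref  # voxel value differences to be minimized

    p0 = np.concatenate([np.zeros(3), np.eye(3).ravel()])  # start at identity
    sol = least_squares(residuals, p0)
    return sol.x  # 12 affine parameters; sol.x[:3] is the subvoxel displacement
```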

2006 ◽  
Vol 58 (4) ◽  
pp. 567-574 ◽  
Author(s):  
M.G.C.D. Peixoto ◽  
J.A.G. Bergmann ◽  
C.G. Fonseca ◽  
V.M. Penna ◽  
C.S. Pereira

Data on 1,294 superovulations of Brahman, Gyr, Guzerat and Nellore females were used to evaluate the effects of: breed; herd; year of birth; inbreeding coefficient and age at superovulation of the donor; month, season and year of superovulation; hormone source and dose; and the number of previous treatments on the superovulation results. Four data sets were considered to study the effect of eliminating donors after each consecutive superovulation: each contained only records of the first, the first two, the first three, or all superovulations. The average number of palpated corpora lutea per superovulation varied from 8.6 to 12.6. The total number of recovered structures and viable embryos ranged from 4.1 to 7.3 and from 7.3 to 13.8, respectively. Least squares means of the number of viable embryos at first superovulation were 7.8 ± 6.6 (Brahman), 3.7 ± 4.5 (Gyr), 6.1 ± 5.9 (Guzerat) and 5.2 ± 5.9 (Nellore). The numbers of viable embryos of the second and third superovulations were not different from those of the first superovulation. The mean intervals between the first and second superovulations were 91.8 days for Brahman, 101.8 days for Gyr, 93.1 days for Guzerat and 111.3 days for Nellore donors. Intervals between the second and third superovulations were 134.3, 110.3, 116.4 and 108.5 days for Brahman, Gyr, Guzerat and Nellore donors, respectively. Effects of herd nested within breed and dose nested within hormone affected all traits. For some data sets, the effects of month and order of superovulation on three traits were important. The maximum number of viable embryos was observed for 7-8 year-old donors. The best responses for corpora lutea and recovered structures were observed for 4-5 year-old donors. The inbreeding coefficient was positively associated with the number of recovered structures when the data set on all superovulations was considered.


Geophysics ◽  
2016 ◽  
Vol 81 (3) ◽  
pp. S87-S100 ◽  
Author(s):  
Hao Hu ◽  
Yike Liu ◽  
Yingcai Zheng ◽  
Xuejian Liu ◽  
Huiyi Lu

Least-squares migration (LSM) can be effective in mitigating the limitations of finite seismic acquisition, balancing the subsurface illumination, and improving the spatial resolution of the image, but it requires iterations of migration and demigration to obtain the desired subsurface reflectivity model. The computational efficiency and accuracy of the migration and demigration operators are crucial for applying the algorithm. We have tested the feasibility of using the Gaussian beam as the wavefield extrapolation operator for LSM, denoted as least-squares Gaussian beam migration. Our method combines the advantages of LSM with the efficiency of the Gaussian beam propagator. Our numerical evaluations, including two synthetic data sets and one marine field data set, illustrate that the proposed approach could be used to obtain amplitude-balanced images and to broaden the bandwidth of the migrated images, in particular for the low-wavenumber components.
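As a rough illustration of the iterative migration/demigration loop the abstract refers to, the sketch below runs a steepest-descent least-squares fit of a reflectivity model with abstract forward (demigration) and adjoint (migration) operators. The operator callables are placeholders; in the paper they would be Gaussian-beam demigration and migration, which are not reproduced here.

```python
import numpy as np

def lsm_steepest_descent(L, Lt, d_obs, m0, n_iter=10):
    """L: callable model -> data (demigration), Lt: callable data -> model (migration)."""
    m = m0.copy()
    for _ in range(n_iter):
        r = d_obs - L(m)        # data residual
        g = Lt(r)               # gradient direction (migrated residual)
        Lg = L(g)               # demigrated gradient, used for the step length
        alpha = np.vdot(r, Lg).real / (np.vdot(Lg, Lg).real + 1e-12)
        m = m + alpha * g       # update the reflectivity model
    return m
```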


2010 ◽  
Vol 62 (4) ◽  
pp. 875-882 ◽  
Author(s):  
A. Dembélé ◽  
J.-L. Bertrand-Krajewski ◽  
B. Barillon

Regression models are among the most frequently used models to estimate pollutant event mean concentrations (EMCs) in wet weather discharges in urban catchments. Two main questions dealing with the calibration of EMC regression models are investigated: i) the sensitivity of models to the size and the content of the data sets used for their calibration, ii) the change in modelling results when models are re-calibrated as data sets grow and change over time with the collection of new experimental data. Based on an experimental data set of 64 rain events monitored in a densely urbanised catchment, four TSS EMC regression models (two log-linear and two linear models) with two or three explanatory variables have been derived and analysed. Model calibration with the iteratively re-weighted least squares method is less sensitive and leads to more robust results than the ordinary least squares method. Three calibration options have been investigated: two options accounting for the chronological order of the observations, and one option using random samples of events from the whole available data set. Results obtained with the best-performing non-linear model clearly indicate that the model is highly sensitive to the size and the content of the data set used for its calibration.
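For readers unfamiliar with the calibration method mentioned above, the following is a minimal sketch of iteratively re-weighted least squares applied to a (log-)linear EMC regression model. The bisquare weight function and the variable names are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def irls(X, y, n_iter=20, c=4.685):
    """X: (n_events, n_explanatory + 1) design matrix with intercept column,
    y: observed (log-)EMC values. Returns robust coefficient estimates."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]          # OLS starting point
    for _ in range(n_iter):
        resid = y - X @ beta
        s = np.median(np.abs(resid)) / 0.6745 + 1e-12    # robust scale (MAD)
        u = resid / (c * s)
        w = np.where(np.abs(u) < 1, (1 - u**2) ** 2, 0.0)  # bisquare weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
    return beta
```

Events with large residuals receive low weights, which is what makes the calibration less sensitive to individual rain events than ordinary least squares.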


2020 ◽  
Vol 633 ◽  
pp. A46
Author(s):  
L. Siltala ◽  
M. Granvik

Context. The bulk density of an asteroid informs us about its interior structure and composition. To constrain the bulk density, one needs an estimated mass of the asteroid. The mass is estimated by analyzing an asteroid’s gravitational interaction with another object, such as another asteroid during a close encounter. An estimate for the mass has typically been obtained with linearized least-squares methods, despite the fact that this family of methods is not able to properly describe non-Gaussian parameter distributions. In addition, the uncertainties reported for asteroid masses in the literature are sometimes inconsistent with each other and are suspected to be unrealistically low. Aims. We aim to present a Markov-chain Monte Carlo (MCMC) algorithm for the asteroid mass estimation problem based on asteroid-asteroid close encounters. We verify that our algorithm works correctly by applying it to synthetic data sets. We use astrometry available through the Minor Planet Center to estimate masses for a select few example cases and compare our results with results reported in the literature. Methods. Our mass-estimation method is based on the robust adaptive Metropolis algorithm that has been implemented into the OpenOrb asteroid orbit computation software. Our method has the built-in capability to analyze multiple perturbing asteroids and test asteroids simultaneously. Results. We find that our mass estimates for the synthetic data sets are fully consistent with the ground truth. The nominal masses for real example cases typically agree with the literature but tend to have greater uncertainties than what is reported in recent literature. Possible reasons for this include different astrometric data sets and weights, different test asteroids, different force models, or different algorithms. For (16) Psyche, the target of NASA’s Psyche mission, our maximum likelihood mass is approximately 55% of what is reported in the literature. Such a low mass would imply that the bulk density is significantly lower than previously expected and thus disagrees with the theory of (16) Psyche being the metallic core of a protoplanet. We do, however, note that masses reported in recent literature remain within our 3-sigma limits. Conclusions. The new MCMC mass-estimation algorithm performs as expected, but a rigorous comparison with results from a least-squares algorithm with the exact same data set remains to be done. The matters of uncertainties in comparison with other algorithms and correlations of observations also warrant further investigation.
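To make the MCMC idea concrete, the sketch below implements a generic random-walk Metropolis sampler with a crude proposal-scale adaptation. It is not the robust adaptive Metropolis implementation in OpenOrb; the log-posterior and the adaptation rule are placeholders for illustration only.

```python
import numpy as np

def metropolis(log_posterior, x0, n_samples=10000, scale=0.1, target=0.234):
    """Sample parameters (e.g. perturber masses) from an arbitrary log-posterior."""
    x = np.asarray(x0, float)
    lp = log_posterior(x)
    chain = np.empty((n_samples, x.size))
    rng = np.random.default_rng(0)
    for i in range(n_samples):
        prop = x + scale * rng.standard_normal(x.size)   # random-walk proposal
        lp_prop = log_posterior(prop)
        accept = np.log(rng.random()) < lp_prop - lp      # Metropolis test
        if accept:
            x, lp = prop, lp_prop
        # crude adaptation of the proposal scale toward the target acceptance rate
        scale *= np.exp(0.01 * ((1.0 if accept else 0.0) - target))
        chain[i] = x
    return chain
```

The resulting chain approximates the full, possibly non-Gaussian, posterior distribution of the mass, which is what a linearized least-squares fit cannot provide.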


2013 ◽  
Vol 2013 ◽  
pp. 1-11 ◽  
Author(s):  
Xun Chen ◽  
Aiping Liu ◽  
Z. Jane Wang ◽  
Hu Peng

Corticomuscular activity modeling based on multiple data sets such as electroencephalography (EEG) and electromyography (EMG) signals provides a useful tool for understanding human motor control systems. In this paper, we propose modeling corticomuscular activity by combining partial least squares (PLS) and canonical correlation analysis (CCA). The proposed method takes advantage of both PLS and CCA to ensure that the extracted components are maximally correlated across the two data sets while also explaining the information within each data set well. This complementary combination generalizes the statistical assumptions beyond both the PLS and CCA methods. Simulations were performed to illustrate the performance of the proposed method. We also applied the proposed method to concurrent EEG and EMG data collected in a Parkinson’s disease (PD) study. The results reveal several highly correlated temporal patterns between the EEG and EMG signals and indicate meaningful corresponding spatial activation patterns. In PD subjects, enhanced connections between the occipital region and other regions are noted, which is consistent with previous medical knowledge. The proposed framework is a promising technique for performing multisubject and bimodal data analysis.
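The two building blocks of the proposed method can be illustrated with standard operators: PLS and CCA each extract paired components from two data sets, and the pairs can be compared by the correlation of their scores. The sketch below uses scikit-learn on random placeholder matrices; it demonstrates the individual techniques only, not the authors' specific hybrid objective.

```python
import numpy as np
from sklearn.cross_decomposition import CCA, PLSCanonical

rng = np.random.default_rng(0)
n_samples, n_eeg, n_emg = 200, 32, 8
X = rng.standard_normal((n_samples, n_eeg))   # stand-in for EEG features
Y = rng.standard_normal((n_samples, n_emg))   # stand-in for EMG features

for name, model in [("PLS", PLSCanonical(n_components=2)),
                    ("CCA", CCA(n_components=2))]:
    Xs, Ys = model.fit_transform(X, Y)        # paired component scores
    corr = [np.corrcoef(Xs[:, k], Ys[:, k])[0, 1] for k in range(2)]
    print(name, "component correlations:", np.round(corr, 3))
```

CCA maximizes the correlation between paired scores, whereas PLS also rewards components that carry variance within each data set; the paper's combination aims to balance these two criteria.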


2021 ◽  
Author(s):  
Tiziano Tirabassi ◽  
Daniela Buske

The recording of air pollution concentration values produces a large volume of data. Statistics generally provides automatic selectors and explicators for such data. The use of the Representative Day allows large amounts of data to be compiled in a compact format that supplies meaningful information on the whole data set. The Representative Day (RD) is the real day that best represents (in the least squares sense) the set of daily trends of the considered time series. The Least Representative Day (LRD), on the contrary, is the real day that worst represents (in the least squares sense) the set of daily trends of the same time series. The identification of the RD and LRD can prove to be a very important tool for identifying both anomalous and standard behaviors of pollutants within the selected period and for establishing measures of prevention, limitation and control. Two application examples, in two different areas, are presented, related to meteorological and SO2 and O3 concentration data sets.
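One plausible reading of the RD/LRD definition is sketched below: select the real day with the smallest (RD) or largest (LRD) total squared deviation from all other daily trends in the series. The exact least-squares criterion used in the chapter may differ; this is an illustration only.

```python
import numpy as np

def representative_days(daily):
    """daily: array of shape (n_days, n_hours), e.g. hourly SO2 or O3 concentrations.
    Returns the indices of the Representative Day and the Least Representative Day."""
    diffs = daily[:, None, :] - daily[None, :, :]       # day-to-day differences
    cost = (diffs ** 2).sum(axis=(1, 2))                # total squared deviation per day
    return int(np.argmin(cost)), int(np.argmax(cost))   # (RD index, LRD index)
```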


2016 ◽  
Vol 72 (2) ◽  
pp. 250-260 ◽  
Author(s):  
Bertrand Fournier ◽  
Jesse Sokolow ◽  
Philip Coppens

Two methods for scaling of multicrystal data collected in time-resolved photocrystallography experiments are discussed. The WLS method is based on a weighted least-squares refinement of laser-ON/laser-OFF intensity ratios. The other, previously applied, is based on the average absolute system response to light exposure. A more advanced application of these methods for scaling within a data set, necessary because of frequent anisotropy of light absorption in crystalline samples, is proposed. The methods are applied to recently collected synchrotron data on the tetra-nuclear compound Ag2Cu2L4 with L = 2-diphenylphosphino-3-methylindole. A statistical analysis of the weighted least-squares refinement residual terms is performed to test the importance of the scaling procedure.
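As a loose illustration of the weighted least-squares scaling idea, the sketch below refines one multiplicative scale factor per data set so that observed laser-ON/laser-OFF intensity ratios match a reference set. The simple model ratio_obs ≈ k · ratio_ref is an assumption made purely for illustration; the WLS formulation in the paper is more detailed.

```python
import numpy as np

def refine_scales(ratio_obs, ratio_ref, sigma, set_id, n_sets):
    """All inputs are 1D arrays over observations; set_id maps each observation
    to its data set. Returns one weighted least-squares scale factor per data set."""
    w = 1.0 / sigma**2                      # weights from estimated uncertainties
    k = np.ones(n_sets)
    for j in range(n_sets):
        m = set_id == j
        # closed-form WLS solution of ratio_obs = k_j * ratio_ref for data set j
        k[j] = np.sum(w[m] * ratio_obs[m] * ratio_ref[m]) / np.sum(w[m] * ratio_ref[m]**2)
    return k
```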


2018 ◽  
Vol 52 (25) ◽  
pp. 3523-3538 ◽  
Author(s):  
Andrew Ellison ◽  
Hyonny Kim

X-ray computed tomography has recently become an increasingly popular non-destructive imaging method in composites research. However, due to the complexity of 3D computed tomography data sets, it can be difficult to accurately and quantitatively assess the damage state of a composite structure without additional post-processing. A new segmentation procedure has been developed that takes a 3D computed tomography data set of an impacted composite laminate and separates internal damage into information about intraply and interlaminar damage within each ply and at each interface. Impacted flat T800/3900-2 unidirectional carbon/epoxy composite panels were scanned and then segmented to create comprehensible maps of internal damage states. Based on the types of data extracted by the developed computed tomography segmentation, techniques to input these datasets into numerical modeling have been developed. Additionally, various damage visualization and interpretation techniques made possible by the computed tomography segmentation have been explored.


2018 ◽  
Vol 7 (6) ◽  
pp. 33
Author(s):  
Morteza Marzjarani

Selecting a proper model for a data set is a challenging task. In this article, an attempt was made to address this challenge and to find a suitable model for a given data set. A general linear model (GLM) was introduced along with three different methods for estimating the parameters of the model. The three estimation methods considered in this paper were ordinary least squares (OLS), generalized least squares (GLS), and feasible generalized least squares (FGLS). In the case of GLS, two different weights were selected for reducing the severity of heteroscedasticity and the proper weight(s) were deployed. The third weight was selected through the application of FGLS. Analyses showed that only two of the three weights, including the FGLS weight, were effective in reducing the severity of heteroscedasticity. In addition, each data set was divided into Training, Validation, and Testing sets, producing a more reliable set of estimates for the parameters in the model. Partitioning data is a relatively new approach in statistics, borrowed from the field of machine learning. Stepwise and forward selection methods, along with a number of statistics including the average square error (ASE) for testing, Adj. R-Sq, AIC, AICC, and the validation ASE, together with proper hierarchies, were deployed to select a more appropriate model(s) for a given data set. Furthermore, the response variable in both data files was transformed using the Box-Cox method to meet the assumption of normality. Analysis showed that the logarithmic transformation solved this issue in a satisfactory manner. Since the issues of heteroscedasticity, model selection, and partitioning of data have not been addressed in fisheries, for introduction and demonstration purposes only, the 2015 and 2016 shrimp data in the Gulf of Mexico (GOM) were selected and the above methods were applied to these data sets. In conclusion, some variations of the GLM were identified as possible leading candidates for the above data sets.
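A compact sketch of the OLS-to-FGLS workflow described above: fit OLS, model the squared residuals to estimate a variance function, and re-fit with the implied weights. The log-variance regression is one common FGLS choice, assumed here for illustration; it is not necessarily the weight structure used in the article.

```python
import numpy as np

def fgls(X, y):
    """X: (n, p) design matrix with an intercept column, y: response vector."""
    beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]                  # OLS fit
    resid = y - X @ beta_ols
    # regress log squared residuals on the same regressors to estimate variances
    gamma = np.linalg.lstsq(X, np.log(resid**2 + 1e-12), rcond=None)[0]
    var_hat = np.exp(X @ gamma)
    w = 1.0 / np.sqrt(var_hat)                                       # FGLS weights
    beta_fgls = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)[0]
    return beta_fgls
```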


2006 ◽  
Vol 82 (4) ◽  
pp. 463-468 ◽  
Author(s):  
N.P.P. Macciotta ◽  
C. Dimauro ◽  
N. Bacciu ◽  
P. Fresi ◽  
A. Cappio-Borlino

A model able to predict missing test day data for milk, fat and protein yields on the basis of a few recorded tests was proposed, based on the partial least squares (PLS) regression technique, a multivariate method that is able to solve problems related to high collinearity among predictors. A data set of 1731 lactations of Sarda breed dairy goats was split into two data sets, one for model estimation and the other for the evaluation of PLS prediction capability. Eight scenarios of simplified recording schemes for fat and protein yields were simulated. Correlations between predicted and observed test day yields were quite high (from 0·50 to 0·88 and from 0·53 to 0·96 for fat and protein yields, respectively, in the different scenarios). The results highlight the great flexibility and accuracy of this multivariate technique.
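In the spirit of the model described above, the sketch below fits a PLS regression that predicts several unrecorded test-day yields from a few recorded ones. The data shapes, the synthetic yields, and the estimation/evaluation split are illustrative assumptions, not the paper's recording scenarios.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
n_lactations, n_recorded, n_missing = 500, 3, 5
X = rng.gamma(2.0, 1.0, size=(n_lactations, n_recorded))     # recorded test-day yields
Y = X.mean(axis=1, keepdims=True) + rng.normal(0, 0.2, (n_lactations, n_missing))

pls = PLSRegression(n_components=2)
pls.fit(X[:400], Y[:400])                      # estimation data set
Y_pred = pls.predict(X[400:])                  # predict the held-out lactations
corr = np.corrcoef(Y_pred[:, 0], Y[400:, 0])[0, 1]
print("correlation of predicted vs observed (first missing test):", round(corr, 2))
```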

