Analysis of Down Looking GPS Occultation Simulated Data Using Least Squares and Abel Inversions

Author(s):  
Ashraf El-Kutb Mousa ◽  
Toshitaka Tsuda


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Camilo Broc ◽  
Therese Truong ◽  
Benoit Liquet

Background: The increasing number of genome-wide association studies (GWAS) has revealed several loci that are associated with multiple distinct phenotypes, suggesting the existence of pleiotropic effects. Highlighting these cross-phenotype genetic associations could help to identify and understand common biological mechanisms underlying some diseases. Common approaches test the association between genetic variants and multiple traits at the SNP level. In this paper, we propose a novel gene- and pathway-level approach for the case where several independent GWAS on independent traits are available. The method is based on a generalization of the sparse group Partial Least Squares (sgPLS) that takes groups of variables into account, together with a Lasso penalization that links all of the independent data sets. This method, called joint-sgPLS, convincingly detects signal at both the variable level and the group level.

Results: Our method has the advantage of proposing a globally readable model while respecting the architecture of the data. It can outperform traditional methods and provides wider insight by exploiting a priori information. We compared the performance of the proposed method with benchmark methods on simulated data and gave an example of application to real data, with the aim of highlighting common susceptibility variants for breast and thyroid cancers.

Conclusion: The joint-sgPLS shows interesting properties for detecting a signal. As an extension of PLS, the method is suited to data with a large number of variables. The chosen Lasso penalization copes with architectures of variable groups and observation sets. Furthermore, although the method has been applied to a genetic study, its formulation is suited to any data in other application fields with a large number of variables and an explicit a priori structure.
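For orientation, here is a schematic of the kind of penalized criterion such a method optimizes, assuming R studies with design matrices X_r, outcomes y_r, per-study weight vectors w_r, and variable groups g of size p_g. This generic sparse-group form is our illustration, not the authors' exact objective:

$$
\max_{w_1,\dots,w_R}\;\sum_{r=1}^{R}\operatorname{cov}\!\left(X_r w_r,\, y_r\right)
\;-\;\lambda_1\sum_{g}\sqrt{p_g}\,\bigl\lVert\left(w_{1,g},\dots,w_{R,g}\right)\bigr\rVert_2
\;-\;\lambda_2\sum_{j}\bigl\lVert\left(w_{1,j},\dots,w_{R,j}\right)\bigr\rVert_2 .
$$

The first penalty zeroes out whole groups of variables at once; the second, a lasso-type penalty taken jointly across the R studies for each variable j, is what couples the otherwise independent data sets toward a common support.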


1992 ◽  
Vol 288 (2) ◽  
pp. 533-538 ◽  
Author(s):  
M E Jones

An algorithm for the least-squares estimation of the enzyme parameters Km and Vmax is proposed and its performance analysed. The problem is non-linear, but the algorithm is algebraic and does not require initial parameter estimates. In a statistical package such as MINITAB, it may be coded in as few as ten instructions. The algorithm derives an intermediate estimate of Km and Vmax appropriate to data with a constant coefficient of variation and then applies a single reweighting. Its performance on simulated data with a variety of error structures is compared with that of the classical reciprocal transforms and with both appropriately and inappropriately weighted direct least-squares estimators. Three approaches to estimating the standard errors of the parameter estimates are discussed, and one suitable for spreadsheet implementation is illustrated.
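A minimal sketch of an algebraic fit in this spirit: multiplying v = Vmax·S/(Km + S) through by (Km + S) makes the model linear in (Vmax, Km), so no initial estimates are needed; a first pass with constant-coefficient-of-variation weights is followed by a single reweighting. The weighting details are our assumption, not necessarily the paper's exact scheme.

```python
import numpy as np

def fit_michaelis_menten(S, v):
    """Algebraic weighted least-squares estimate of Km and Vmax.

    Rearranging v = Vmax*S/(Km+S) gives Vmax*S - Km*v = v*S, which is
    linear in (Vmax, Km) and solvable directly by weighted least squares.
    """
    S, v = np.asarray(S, float), np.asarray(v, float)
    A = np.column_stack([S, -v])            # design for (Vmax, Km)
    b = v * S
    w = 1.0 / v**2                          # constant-CV first pass
    for _ in range(2):                      # initial fit + one reweighting
        r = np.sqrt(w)
        Vmax, Km = np.linalg.lstsq(A * r[:, None], b * r, rcond=None)[0]
        vhat = Vmax * S / (Km + S)
        w = 1.0 / (vhat * (Km + S))**2      # reweight using fitted values
    return Km, Vmax

S = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
print(fit_michaelis_menten(S, 10.0 * S / (2.0 + S)))  # recovers Km=2, Vmax=10
```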


Author(s):  
Giuseppe De Luca ◽  
Jan R. Magnus

In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares estimator developed by Magnus, Powell, and Prüfer (2010, Journal of Econometrics 154: 139–153). Unlike standard pretest estimators that are based on some preliminary diagnostic test, these model-averaging estimators provide a coherent way of making inference on the regression parameters of interest by taking into account the uncertainty due to both the estimation and the model-selection steps. Special emphasis is given to several practical issues that users are likely to face in applied work: equivariance to certain transformations of the explanatory variables, stability, accuracy, computing speed, and out-of-memory problems. The performance of our bma and wals commands is illustrated using simulated data and empirical applications from the literature on model-averaging estimation.
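The exact computations live in the Stata commands themselves; as a language-neutral illustration of the model-averaging idea, here is a toy Python sketch that enumerates every subset of auxiliary regressors and averages the coefficient estimates under approximate (BIC-based) posterior model weights. The weighting is our simplification, not the priors implemented by bma or wals:

```python
import numpy as np
from itertools import combinations

def bma_ols(y, X, focus=0):
    """Average OLS estimates over all subsets of auxiliary regressors.

    The regressor in column `focus` is kept in every model; posterior
    model probabilities are approximated by BIC weights.
    """
    n, k = X.shape
    aux = [j for j in range(k) if j != focus]
    bics, betas = [], []
    for r in range(len(aux) + 1):
        for subset in combinations(aux, r):
            cols = [focus] + list(subset)
            Xi = X[:, cols]
            beta, *_ = np.linalg.lstsq(Xi, y, rcond=None)
            rss = float(np.sum((y - Xi @ beta) ** 2))
            bics.append(n * np.log(rss / n) + len(cols) * np.log(n))
            full = np.zeros(k)
            full[cols] = beta
            betas.append(full)
    w = np.exp(-0.5 * (np.array(bics) - min(bics)))   # stabilized weights
    w /= w.sum()
    return np.array(betas).T @ w    # model-averaged coefficient vector
```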


2014 ◽  
Vol 3 (2) ◽  
pp. 174
Author(s):  
Yaser Abdelhadi

Linear transformations are performed for selected exponential engineering functions. The optimum values of the parameters of the linear model equation that fits the set of experimental or simulated data points are determined by the linear least-squares method. The classical and matrix forms of ordinary least squares are illustrated.

Keywords: Exponential Functions; Linear Modeling; Ordinary Least Squares; Parametric Estimation; Regression Steps.
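As a concrete instance of the procedure described, a short sketch: fit y = a·exp(b·x) by linearizing with a logarithm and solving the ordinary least-squares normal equations in matrix form (the function and data here are illustrative):

```python
import numpy as np

# Linearize y = a * exp(b * x) as ln(y) = ln(a) + b * x,
# then solve the normal equations (X'X) theta = X'z.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.5 * np.exp(0.8 * x)                  # synthetic data points

X = np.column_stack([np.ones_like(x), x])  # design matrix [1, x]
z = np.log(y)                              # linearized response
theta = np.linalg.solve(X.T @ X, X.T @ z)  # matrix form of OLS
a, b = np.exp(theta[0]), theta[1]
print(a, b)                                # recovers 2.5 and 0.8
```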


2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Ni Putu Ayu Mirah Mariati ◽  
I. Nyoman Budiantara ◽  
Vita Ratnasari

So far, most researchers have developed a single type of estimator in nonparametric regression. In practice, however, data with mixed patterns are often encountered, especially patterns that change on certain subintervals while others follow a recurring trend. The estimator suited to such data is a mixed estimator combining a smoothing spline and a Fourier series. The regression model was approached by smoothing spline and Fourier series components, and the mixed estimator was obtained in two estimation stages: the first using penalized least squares (PLS) and the second using least squares (LS). The estimators were then applied to simulated data generated from two different functions, a polynomial and a trigonometric one, with a sample size of 100 and 50 replications. The two functions were modeled using a mixture of the smoothing spline and Fourier series estimators over a range of smoothing and oscillation parameters, and the model with the minimum generalized cross-validation (GCV) value was selected as the best. The simulations gave a minimum GCV value of 11.98, with a corresponding mean square error (MSE) of 0.71 and an R2 of 99.48%, indicating that the mixed smoothing spline and Fourier series estimator models the data well.
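A compressed sketch of the ingredients, assuming a penalized truncated-power basis as a stand-in for the smoothing spline, cosine terms for the Fourier component, and a single penalized system instead of the paper's two-stage PLS/LS scheme; only the GCV-based selection mirrors the abstract directly:

```python
import numpy as np

def fit_mixed(t, y, knots, K, lam):
    """Mixed spline + Fourier fit with roughness penalty lam; returns GCV."""
    n = len(t)
    B = np.column_stack([np.ones(n), t] +
                        [np.maximum(t - k, 0.0) ** 3 for k in knots])
    F = np.column_stack([np.cos((j + 1) * t) for j in range(K)])
    X = np.hstack([B, F])
    P = np.zeros((X.shape[1], X.shape[1]))           # penalize spline part only
    P[2:B.shape[1], 2:B.shape[1]] = np.eye(B.shape[1] - 2)
    H = X @ np.linalg.solve(X.T @ X + lam * P, X.T)  # hat matrix
    yhat = H @ y
    gcv = n * np.sum((y - yhat) ** 2) / (n - np.trace(H)) ** 2
    return yhat, gcv

# pick the smoothing parameter with minimum GCV on a small grid
t = np.linspace(0, np.pi, 100)
rng = np.random.default_rng(0)
y = t ** 2 / 3 + np.sin(4 * t) + rng.normal(0, 0.1, t.size)  # mixed pattern
knots = np.quantile(t, [0.25, 0.5, 0.75])
best = min((fit_mixed(t, y, knots, K=4, lam=lam)[1], lam)
           for lam in (0.01, 0.1, 1.0, 10.0))
print("lambda with minimum GCV:", best[1])
```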


1997 ◽  
Vol 19 (3) ◽  
pp. 195-208 ◽  
Author(s):  
Faouzi Kallel ◽  
Jonathan Ophir

A least-squares strain estimator (LSQSE) for elastography is proposed. It is shown that such an estimator significantly improves the signal-to-noise ratio in an elastogram (SNRe). This improvement is illustrated theoretically using a modified strain filter and experimentally using a homogeneous gel phantom. It is demonstrated that the LSQSE increases the elastographic sensitivity (the smallest strain that can be detected), thereby increasing the strain dynamic range. Using simulated data, it is shown that a tradeoff exists between the improvement in SNRe and the reduction of strain contrast and spatial resolution.
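The estimator itself is simple to state: instead of differencing adjacent displacement estimates, fit a straight line to the displacements inside a sliding window and take its slope as the local strain. A minimal sketch (window length and data are illustrative):

```python
import numpy as np

def lsq_strain(displacement, window=9):
    """Least-squares strain estimate: slope of a local linear fit."""
    d = np.asarray(displacement, float)
    half = window // 2
    x = np.arange(window) - half            # centered sample positions
    denom = np.sum(x * x)
    strain = np.full(d.size, np.nan)
    for i in range(half, d.size - half):
        seg = d[i - half:i + half + 1]
        strain[i] = np.sum(x * (seg - seg.mean())) / denom  # LS slope
    return strain

# Longer windows raise SNRe but smooth away contrast and resolution,
# which is the tradeoff reported in the abstract.
print(lsq_strain(0.01 * np.arange(50.0) ** 1.5)[10:13])
```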


2002 ◽  
Vol 56 (5) ◽  
pp. 615-624 ◽  
Author(s):  
David K. Melgaard ◽  
David M. Haaland ◽  
Christine M. Wehlburg

A significant extension to the classical least-squares (CLS) algorithm called concentration residual augmented CLS (CRACLS) has been developed. Previously, unmodeled sources of spectral variation have rendered CLS models ineffective for most types of problems, but with the new CRACLS algorithm, CLS-type models can be applied to a significantly wider range of applications. This new quantitative multivariate spectral analysis algorithm iteratively augments the calibration matrix of reference concentrations with concentration residuals estimated during CLS prediction. Because these residuals represent linear combinations of the unmodeled spectrally active component concentrations, the effects of these components are removed from the calibration of the analytes of interest. This iterative process allows the development of a CLS-type calibration model comparable in prediction ability to implicit multivariate calibration methods such as partial least squares (PLS) even when unmodeled spectrally active components are present in the calibration sample spectra. In addition, CRACLS retains the improved qualitative spectral information of the CLS algorithm relative to PLS. More importantly, CRACLS provides a model compatible with the recently presented prediction-augmented CLS (PACLS) method. The CRACLS/PACLS combination generates an adaptable model that can achieve excellent prediction ability for samples of unknown composition that contain unmodeled sources of spectral variation. The CRACLS algorithm is demonstrated with both simulated and real data derived from a system of dilute aqueous solutions containing glucose, ethanol, and urea. The simulated data demonstrate the effectiveness of the new algorithm and help elucidate the principles behind the method. Using experimental data, we compare the prediction abilities of CRACLS and PLS during cross-validated calibration. In combination with PACLS, the CRACLS predictions are comparable to PLS for the prediction of the glucose, ethanol, and urea components for validation samples collected when significant instrument drift was present. However, the PLS predictions required recalibration using nonstandard cross-validated rotations while CRACLS/PACLS was rapidly updated during prediction without the need for time-consuming cross-validated recalibration. The CRACLS/PACLS algorithm provides a more general approach to removing the detrimental effects of unmodeled components.
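A schematic of the central loop as we read the abstract: predict concentrations with classical least squares, take the concentration residuals, and append them to the calibration matrix so that unmodeled spectrally active components are absorbed. This is a simplified illustration, not the published implementation:

```python
import numpy as np

def cracls(A, C, n_aug=3):
    """Concentration-residual-augmented CLS (simplified sketch).

    A: calibration spectra (samples x wavelengths)
    C: reference concentrations (samples x analytes)
    """
    C_aug = C.copy()
    for _ in range(n_aug):
        K = np.linalg.pinv(C_aug) @ A        # estimated pure-component spectra
        C_hat = A @ np.linalg.pinv(K)        # CLS concentration prediction
        resid = C_hat[:, :C.shape[1]] - C    # concentration residuals
        C_aug = np.hstack([C_aug, resid])    # augment the calibration matrix
    return np.linalg.pinv(C_aug) @ A         # final CLS-type model
```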


1985 ◽  
Vol 39 (3) ◽  
pp. 463-470 ◽  
Author(s):  
Yong-Chien Ling ◽  
Thomas J. Vickers ◽  
Charles K. Mann

A study has been made to compare the effectiveness of thirteen methods of spectroscopic background correction in quantitative measurements. These include digital filters, least-squares fitting, and cross-correlation, as well as peak area and height measurements. Simulated data sets with varying S/N and degrees of background curvature were used. The results were compared with the results of corresponding treatments of Raman spectra of dimethyl sulfone, sulfate, and bisulfate. The range of variation of the simulated sets was greater than was possible with the experimental data, but where conditions were comparable, the agreement between them was good. This supports the conclusion that the simulations were valid. Best results were obtained by a least-squares fit with the use of simple polynomials to generate the background correction. Under the conditions employed, limits of detection were about 80 ppm for dimethyl sulfone and sulfate and 420 ppm for bisulfate.
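Since the best-performing method was a least-squares polynomial fit to the background, here is a common generic implementation of that idea, iteratively clipping peaks so the polynomial settles onto the baseline; the details are our assumption, not the paper's exact procedure:

```python
import numpy as np

def subtract_background(wavenumber, intensity, deg=3, n_iter=10):
    """Iterative polynomial baseline estimation and subtraction."""
    y = np.asarray(intensity, float)
    for _ in range(n_iter):
        coef = np.polyfit(wavenumber, y, deg)   # LS polynomial fit
        base = np.polyval(coef, wavenumber)
        y = np.minimum(y, base)                 # clip peaks down to the fit
    return np.asarray(intensity, float) - base  # corrected spectrum
```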


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Made Ayu Dwi Octavanny ◽  
I Nyoman Budiantara ◽  
Heri Kuswanto ◽  
Dyah Putri Rahmawati

We introduce a new method for estimating the nonparametric regression curve for longitudinal data. The method combines two estimators: a truncated spline and a Fourier series. The estimation is completed by minimizing penalized weighted least squares and weighted least squares. This paper also establishes the properties of the new mixed estimator, which is biased but linear in the observations. The best model is selected using the smallest value of generalized cross-validation. The performance of the new method is demonstrated by a simulation study with a variety of time points, and the proposed approach is then applied to a stroke-patient dataset. The simulated data and the real data yield consistent findings.
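For concreteness, a sketch of the basis and the weighted penalized criterion, using a truncated linear spline plus cosine terms, with the weights standing in for the longitudinal error structure. This is an illustration of the ingredients, not the authors' estimator:

```python
import numpy as np

def truncated_spline_fourier(t, y, weights, knots, K, lam):
    """Penalized weighted least squares on a truncated-spline + Fourier basis."""
    n = len(t)
    X = np.column_stack(
        [np.ones(n), t]
        + [np.maximum(t - k, 0.0) for k in knots]    # truncated spline terms
        + [np.cos((j + 1) * t) for j in range(K)])   # Fourier terms
    W = np.diag(np.asarray(weights, float))
    P = lam * np.eye(X.shape[1])
    P[0, 0] = P[1, 1] = 0.0                          # leave the trend unpenalized
    beta = np.linalg.solve(X.T @ W @ X + P, X.T @ W @ y)
    return X @ beta                                  # fitted curve
```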


Author(s):  
Jean-Rémi King ◽  
François Charton ◽  
David Lopez-Paz ◽  
Maxime Oquab

Identifying causes solely from observations can be particularly challenging when (i) potential factors are difficult to manipulate independently and (ii) observations are multi-dimensional. To address this issue, we introduce "Back-to-Back" regression (B2B), a linear method designed to efficiently measure, from a set of correlated factors, those that most plausibly account for multidimensional observations. First, we prove the consistency of B2B, characterize its links to other linear approaches, and show how it provides a robust, unbiased, and interpretable scalar estimate for each factor. Second, we use a variety of simulated data to show that B2B outperforms least-squares regression and cross-decomposition techniques (e.g., canonical correlation analysis and partial least squares) on causal identification when the factors and the observations are partially collinear. Finally, we apply B2B to magnetoencephalography recordings of 102 subjects during a reading task, to test whether our method appropriately disentangles the respective contributions of word length and word frequency, two correlated factors known to cause early and late brain responses, respectively. The results show that these two factors are better disentangled with B2B than with other standard techniques.
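A sketch of the two regressions as we understand them from the abstract: learn a decoder from observations to factors on one half of the trials, decode on the other half, and regress the decoded factors on the true ones; the diagonal of the second regression matrix gives one scalar per factor. The ridge shrinkage and the half-split are assumptions of this sketch, not necessarily the paper's exact procedure:

```python
import numpy as np

def b2b(X, Y, alpha=1.0):
    """Back-to-back regression sketch.

    X: factors (trials x factors), Y: observations (trials x channels).
    Returns one scalar influence estimate per factor.
    """
    def ridge(A, B):   # solve min ||B - A W||^2 + alpha * ||W||^2
        return np.linalg.solve(A.T @ A + alpha * np.eye(A.shape[1]), A.T @ B)

    half = X.shape[0] // 2
    G = ridge(Y[:half], X[:half])    # decode factors from observations
    X_hat = Y[half:] @ G             # decoded factors on held-out trials
    H = ridge(X[half:], X_hat)       # regress decoded factors on true ones
    return np.diag(H)                # scalar estimate per factor
```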

