Research and Realization of Highly Accurate Data Generation Based on Channel Simulator

2012 ◽  
Vol 588-589 ◽  
pp. 1316-1319
Author(s):  
Zhe Zheng ◽  
Li Hong Lv ◽  
Jie Jiang ◽  
Yang Zhou

With high accuracy, the channel simulator plays an important role in the docking experiment between the ground station and the responder beacon. This paper first introduces the data generation algorithm, covering data generation based on simulation technology and the principle of the linear least squares algorithm, and then proposes a least squares quadratic spline method to generate highly accurate data in the channel simulator. It then presents the system design that realizes the data generation. Finally, a case study of the approximation, together with an error analysis of the data generation algorithm, is carried out. The algorithm proves accurate, and its source data are easy to obtain: the core of the method is to use data from the Satellite Tool Kit to generate distance and speed sequences, approximate the real data by least squares, and fit a quadratic spline to obtain highly accurate data.
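A minimal sketch of the least squares quadratic spline idea is shown below, using SciPy's LSQUnivariateSpline (a quadratic spline whose coefficients are determined by linear least squares). The synthetic distance sequence standing in for Satellite Tool Kit output, the knot spacing, and the noise level are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

# Synthetic stand-in for a Satellite Tool Kit distance sequence
rng = np.random.default_rng(0)
t = np.linspace(0.0, 100.0, 501)                 # time samples, s
distance = 4.0e4 - 7.5 * t + 0.01 * t**2         # smooth range trend, m
distance += rng.normal(0.0, 0.5, t.size)         # measurement noise

# Interior knots every 10 s; k=2 gives a quadratic spline whose
# coefficients are found by linear least squares
knots = np.arange(10.0, 100.0, 10.0)
spline = LSQUnivariateSpline(t, distance, knots, k=2)

t_fine = np.linspace(0.0, 100.0, 5001)
dist_fine = spline(t_fine)                       # densified, smoothed distance
speed_fine = spline.derivative()(t_fine)         # analytic derivative -> speed

print("RMS residual:", np.sqrt(np.mean((spline(t) - distance) ** 2)))
```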

Geophysics ◽  
2014 ◽  
Vol 79 (1) ◽  
pp. V1-V11 ◽  
Author(s):  
Amr Ibrahim ◽  
Mauricio D. Sacchi

We adopted the robust Radon transform to eliminate erratic incoherent noise that arises in common receiver gathers when simultaneous source data are acquired. The proposed robust Radon transform was posed as an inverse problem using an ℓ1 misfit that is not sensitive to erratic noise. The latter permitted us to design Radon algorithms that are capable of eliminating incoherent noise in common receiver gathers. We also compared nonrobust and robust Radon transforms that are implemented via a quadratic (ℓ2) or a sparse (ℓ1) penalty term in the cost function. The results demonstrated the importance of incorporating a robust misfit functional in the Radon transform to cope with simultaneous source interferences. Synthetic and real data examples proved that the robust Radon transform produces more accurate data estimates than least-squares and sparse Radon transforms.
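A common way to realize such a robust (ℓ1-type) misfit is iteratively reweighted least squares (IRLS). The sketch below uses a generic dense matrix as a stand-in for the Radon operator and a small damping term for stability; both are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def irls_robust(L, d, n_iter=20, eps=1e-6, mu=1e-3):
    """Approximately minimize ||L m - d||_1 + mu ||m||_2^2 by iteratively
    reweighted least squares; small weights damp erratic residuals."""
    m = np.zeros(L.shape[1])
    for _ in range(n_iter):
        r = L @ m - d
        w = 1.0 / np.sqrt(r**2 + eps**2)          # robust reweighting
        A = L.T @ (w[:, None] * L) + mu * np.eye(L.shape[1])
        m = np.linalg.solve(A, L.T @ (w * d))
    return m

# toy usage: erratic spikes contaminate an otherwise consistent system
rng = np.random.default_rng(0)
L = rng.normal(size=(200, 50))
m_true = rng.normal(size=50)
d = L @ m_true
d[::25] += 50.0                                   # simulated interferences
print(np.linalg.norm(irls_robust(L, d) - m_true)) # small despite the spikes
```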


Methodology ◽  
2015 ◽  
Vol 11 (3) ◽  
pp. 110-115 ◽  
Author(s):  
Rand R. Wilcox ◽  
Jinxia Ma

Abstract. The paper compares methods that allow both within-group and between-group heteroscedasticity when performing all pairwise comparisons of the least squares lines associated with J independent groups. The methods are based on a simple extension of results derived by Johansen (1980) and Welch (1938) in conjunction with the HC3 and HC4 estimators. The probability of one or more Type I errors is controlled using the improvement on the Bonferroni method derived by Hochberg (1988). Results are illustrated using data from the Well Elderly 2 study, which motivated this paper.
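For flavor, a sketch of one such pairwise-comparison pipeline in Python: HC3 standard errors from statsmodels, a Welch-type statistic for each pair of slopes, and Hochberg's step-up correction. The simulated data, the Satterthwaite degrees-of-freedom approximation, and the use of statsmodels are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.multitest import multipletests
from scipy import stats
from itertools import combinations

rng = np.random.default_rng(0)
groups = []
for j in range(3):                               # J = 3 independent groups
    x = rng.uniform(0, 10, 60)
    y = 1.0 + (0.5 + 0.2 * j) * x + rng.normal(0, 0.5 + 0.1 * x)  # heteroscedastic
    fit = sm.OLS(y, sm.add_constant(x)).fit(cov_type="HC3")       # robust SEs
    groups.append((fit.params[1], fit.bse[1], fit.df_resid))

pvals = []
for (b1, se1, df1), (b2, se2, df2) in combinations(groups, 2):
    stat = (b1 - b2) / np.sqrt(se1**2 + se2**2)   # Welch-type slope contrast
    df = (se1**2 + se2**2) ** 2 / (se1**4 / df1 + se2**4 / df2)
    pvals.append(2 * stats.t.sf(abs(stat), df))

# Hochberg's step-up improvement on Bonferroni
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="simes-hochberg")
print(p_adj, reject)
```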


Author(s):  
Parisa Torkaman

The generalized inverted exponential distribution is introduced as a lifetime model with good statistical properties. In this paper, estimation of the probability density function and the cumulative distribution function of this distribution is considered using five different estimation methods: the uniformly minimum variance unbiased (UMVU), maximum likelihood (ML), least squares (LS), weighted least squares (WLS), and percentile (PC) estimators. The performance of these estimation procedures is compared by numerical simulations based on the mean squared error (MSE). The simulation studies show that the UMVU estimator performs better than the others, and that when the sample size is large enough the ML and UMVU estimators are almost equivalent and more efficient than the LS, WLS, and PC estimators. Finally, results obtained on a real data set are analyzed.
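A minimal sketch of how such an MSE comparison can be set up for the ML plug-in estimator of the density, using the standard parameterization F(x) = 1 − (1 − e^(−λ/x))^α; the true parameter values, sample size, and evaluation point are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Generalized inverted exponential: F(x) = 1 - (1 - exp(-lam/x))**a, x > 0
def rvs(a, lam, n, rng):
    u = rng.uniform(size=n)
    return -lam / np.log(1.0 - (1.0 - u) ** (1.0 / a))   # inverse-cdf sampling

def pdf(x, a, lam):
    e = np.exp(-lam / x)
    return a * lam / x**2 * e * (1.0 - e) ** (a - 1.0)

def neg_loglik(theta, x):
    a, lam = theta
    if a <= 0 or lam <= 0:
        return np.inf
    return -np.sum(np.log(pdf(x, a, lam)))

rng = np.random.default_rng(1)
a0, lam0, x0 = 2.0, 1.5, 1.0          # true parameters, evaluation point
reps, mse = 200, 0.0
for _ in range(reps):                  # Monte Carlo replications
    x = rvs(a0, lam0, 100, rng)
    res = minimize(neg_loglik, x0=[1.0, 1.0], args=(x,), method="Nelder-Mead")
    mse += (pdf(x0, *res.x) - pdf(x0, a0, lam0)) ** 2 / reps
print("ML plug-in MSE of f(1):", mse)
```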


2021 ◽  
Vol 5 (1) ◽  
pp. 59
Author(s):  
Gaël Kermarrec ◽  
Niklas Schild ◽  
Jan Hartmann

Terrestrial laser scanners (TLS) capture a large number of 3D points rapidly, with high precision and spatial resolution. These scanners are used for applications as diverse as modeling architectural or engineering structures and high-resolution mapping of terrain. The noise of the observations cannot be assumed to be strictly white: besides being heteroscedastic, correlations between observations are likely to appear due to the high scanning rate. Unfortunately, while the variance can sometimes be modeled based on physical or empirical considerations, the correlations are more often neglected. Trustworthy knowledge of them is, however, mandatory to avoid overestimating the precision of the point cloud and, potentially, failing to detect deformation between scans recorded at different epochs using statistical testing strategies. TLS point clouds can be approximated with parametric surfaces, such as planes, using the Gauss–Helmert model, or with the newly introduced T-splines surfaces. In both cases, the goal is to minimize the squared distance between the observations and the approximating surface in order to estimate parameters such as the normal vector or the control points. In this contribution, we show how the residuals of the surface approximation can be used to derive the correlation structure of the noise of the observations. We estimate the correlation parameters using the Whittle maximum likelihood and use simulations and real data to validate our methodology. Using the least-squares adjustment as a “filter of the geometry” paves the way for the determination of a correlation model for many sensors recording 3D point clouds.
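As a minimal illustration of the Whittle step, the sketch below estimates a single correlation parameter from (simulated) residuals by matching an AR(1) spectral density to the periodogram; the AR(1) model and the one-dimensional residual stream are simplifying assumptions, not the authors' full methodology.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def whittle_ar1(residuals):
    """Estimate an AR(1) correlation parameter from surface-fit
    residuals with the Whittle likelihood (illustrative sketch)."""
    n = residuals.size
    r = residuals - residuals.mean()
    I = np.abs(np.fft.rfft(r)) ** 2 / n           # periodogram
    w = 2 * np.pi * np.arange(I.size) / n         # Fourier frequencies
    I, w = I[1:], w[1:]                           # drop the zero frequency

    def nll(phi):
        S = 1.0 / np.abs(1.0 - phi * np.exp(-1j * w)) ** 2  # AR(1) shape
        s2 = np.mean(I / S)                       # profiled innovation variance
        return np.sum(np.log(s2 * S) + I / (s2 * S))

    res = minimize_scalar(nll, bounds=(-0.99, 0.99), method="bounded")
    return res.x

rng = np.random.default_rng(2)
e = np.zeros(2048)
phi_true = 0.6
for i in range(1, e.size):                        # simulate AR(1) residuals
    e[i] = phi_true * e[i - 1] + rng.normal()
print("Whittle estimate of phi:", whittle_ar1(e))
```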


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Camilo Broc ◽  
Therese Truong ◽  
Benoit Liquet

Abstract Background The increasing number of genome-wide association studies (GWAS) has revealed several loci that are associated with multiple distinct phenotypes, suggesting the existence of pleiotropic effects. Highlighting these cross-phenotype genetic associations could help to identify and understand the common biological mechanisms underlying some diseases. Common approaches test the association between genetic variants and multiple traits at the SNP level. In this paper, we propose novel gene- and pathway-level approaches for the case where several independent GWAS on independent traits are available. The method is based on a generalization of the sparse group Partial Least Squares (sgPLS) that takes groups of variables into account, with a Lasso penalization that links all of the independent data sets. This method, called joint-sgPLS, convincingly detects signal at both the variable level and the group level. Results Our method has the advantage of proposing a globally readable model while coping with the architecture of the data. It can outperform traditional methods and provides wider insight in terms of a priori information. We compared the performance of the proposed method to other benchmark methods on simulated data and gave an example of application on real data, with the aim of highlighting common susceptibility variants to breast and thyroid cancers. Conclusion The joint-sgPLS shows interesting properties for detecting a signal. As an extension of PLS, the method is suited to data with a large number of variables. The chosen Lasso penalization copes with architectures of groups of variables and sets of observations. Furthermore, although the method has been applied to a genetic study, its formulation is adapted to any data with a high number of variables and a known a priori architecture, in other application fields.
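The variable- and group-level selection at the core of sgPLS-type methods can be pictured with the sparse group proximal step below, applied to a loading vector. The toy numbers and the particular penalty weighting are illustrative assumptions, and this shows the penalization idea only, not the joint-sgPLS algorithm.

```python
import numpy as np

def sparse_group_prox(v, groups, lam1, lam2):
    """Proximal step combining a Lasso penalty (lam1) with a group
    penalty (lam2): soft-threshold coordinates, then shrink whole groups.
    A sketch of the sparse-group idea behind sgPLS, not joint-sgPLS itself."""
    u = np.sign(v) * np.maximum(np.abs(v) - lam1, 0.0)   # element-wise Lasso
    out = np.zeros_like(u)
    for g in groups:                   # g: index array per gene/pathway
        norm = np.linalg.norm(u[g])
        if norm > 0:
            out[g] = u[g] * max(0.0, 1.0 - lam2 * np.sqrt(len(g)) / norm)
    return out

# toy loading vector with two groups of variables (e.g., SNPs in two genes)
v = np.array([0.9, 0.7, 0.05, -0.6, 0.02, 0.01])
groups = [np.arange(0, 3), np.arange(3, 6)]
print(sparse_group_prox(v, groups, lam1=0.05, lam2=0.3))
```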


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. U67-U76 ◽  
Author(s):  
Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because of the computation of the Hessian, so an efficient approximation is introduced in which only a limited number of diagonals in the operators involved are computed. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts when compared to a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at approximately two orders of magnitude lower cost, but it is dip limited, though in a controllable way, compared to the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data have highly irregular sampling along the shot coordinate and suffer from significant near-surface effects. Approximate regularization/datuming returns common receiver data that are superior in appearance to conventionally datumed data.
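In matrix form, the approach amounts to a weighted, damped least-squares solve, with the cost saving coming from keeping only a few diagonals of the Hessian. The sketch below uses a generic complex matrix as a stand-in for the extrapolation operator; the banding of the Hessian loosely mimics the limited-diagonal approximation described above and is an illustrative assumption, not the paper's operator.

```python
import numpy as np

def damped_wls(G, d, w, mu, n_diags=None):
    """Weighted, damped least squares: m = (G^H W G + mu I)^-1 G^H W d.
    If n_diags is given, keep only the central diagonals of the Hessian,
    mimicking the cost-saving approximation (illustrative sketch)."""
    H = G.conj().T @ (w[:, None] * G)             # Hessian G^H W G
    if n_diags is not None:
        idx = np.arange(H.shape[0])
        mask = np.abs(np.subtract.outer(idx, idx)) <= n_diags
        H = np.where(mask, H, 0.0)                # banded approximation
    rhs = G.conj().T @ (w * d)
    return np.linalg.solve(H + mu * np.eye(H.shape[0]), rhs)

# toy usage: n_diags=None gives the full solve; n_diags=2 the banded one
rng = np.random.default_rng(1)
G = rng.normal(size=(120, 80)) + 1j * rng.normal(size=(120, 80))
d = G @ rng.normal(size=80)
w = np.ones(120)
m_full = damped_wls(G, d, w, mu=1e-3)
m_band = damped_wls(G, d, w, mu=1e-3, n_diags=2)
```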


1994 ◽  
Vol 21 (6) ◽  
pp. 1074-1080 ◽  
Author(s):  
J. Llamas ◽  
C. Diaz Delgado ◽  
M.-L. Lavertu

In this paper, an improved probabilistic method for flood analysis using the probable maximum flood, the beta function, and orthogonal Jacobi polynomials is proposed. The shape of the beta function depends on the sample's characteristics and the bounds of the phenomenon. A series of Jacobi polynomials is then used to improve the beta function, increasing its degree of convergence toward the true flood probability density function. This mathematical model has been tested using a sample of 1000 generated beta random data. Finally, some practical applications with real data series from major rivers in Quebec have been performed; the model solutions for these rivers showed the accuracy of this new method in flood frequency estimation. Key words: probable maximum flood, beta function, orthogonal polynomials, distribution function, flood frequency estimation, data generation, convergence.
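The flavor of the construction: fit a beta-like base density on the bounded support, then estimate Jacobi coefficients as sample means of the polynomials, since they are orthogonal under the beta-type weight. The weight exponents, truncation degree, and simulated sample below are illustrative assumptions, not the paper's calibration to the probable maximum flood.

```python
import numpy as np
from scipy.special import eval_jacobi, gammaln

def jacobi_norm_u(k, a, b):
    """L2 norm of P_k^(a,b)(2u-1) under the weight (1-u)^a * u^b on [0, 1]."""
    return np.exp(gammaln(k + a + 1) + gammaln(k + b + 1)
                  - gammaln(k + a + b + 1) - gammaln(k + 1)
                  - np.log(2 * k + a + b + 1))

# Stand-in for flood peaks rescaled to [0, 1] by assumed lower/PMF bounds
rng = np.random.default_rng(3)
u = rng.beta(2.0, 5.0, 1000)

a, b = 3.0, 1.5      # weight exponents, deliberately off the true shape
deg = 4              # truncation degree of the Jacobi series

# f(u) ~ (1-u)^a u^b * sum_k d_k P_k(2u-1), with d_k = E[P_k(2U-1)] / norm_k
d = [np.mean(eval_jacobi(k, a, b, 2 * u - 1)) / jacobi_norm_u(k, a, b)
     for k in range(deg + 1)]

def density(u0):
    base = (1 - u0) ** a * u0 ** b
    return base * sum(dk * eval_jacobi(k, a, b, 2 * u0 - 1)
                      for k, dk in enumerate(d))

print(density(np.linspace(0.1, 0.9, 5)))
```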


2021 ◽  
Author(s):  
Yonghua Huo ◽  
Yingjun Shang ◽  
Bo Xu ◽  
Yuting Li ◽  
Yang Yang

Geophysics ◽  
2018 ◽  
Vol 83 (6) ◽  
pp. V345-V357 ◽  
Author(s):  
Nasser Kazemi

Given noise-corrupted seismic recordings, blind deconvolution simultaneously solves for the reflectivity series and the wavelet. Blind deconvolution can be formulated as a fully perturbed linear regression model and solved by the total least-squares (TLS) algorithm. However, this algorithm performs poorly when the data matrix is structured and ill-conditioned; in blind deconvolution, the data matrix has a Toeplitz structure and is ill-conditioned. Accordingly, we develop a fully automatic single-channel blind-deconvolution algorithm to improve the performance of the TLS method. The proposed algorithm, called Toeplitz-structured sparse TLS, makes no assumptions about the phase of the wavelet but assumes that the reflectivity series is sparse. In addition, to reduce the model space and the number of unknowns, the algorithm benefits from the structural constraints on the data matrix. Our algorithm is an alternating minimization method and uses a generalized cross validation function to define the optimum regularization parameter automatically. Because the generalized cross validation function does not require any prior information about the noise level of the data, our approach is suitable for real-world applications. We validate the proposed technique using synthetic examples. In noise-free data, we achieve a near-optimal recovery of the wavelet and the reflectivity series. For noise-corrupted data with a moderate signal-to-noise ratio (S/N), the algorithm successfully accounts for the noise in its model, resulting in satisfactory performance; however, the results deteriorate as the S/N and the sparsity level of the data decrease. We also successfully apply the algorithm to real data examples from 2D and 3D data sets of the Teapot Dome seismic survey.
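A conceptual sketch of the alternating-minimization structure (not the TLS/GCV machinery of the paper): ISTA-style sparse updates for the reflectivity, followed by a linear least-squares update of the wavelet through the Toeplitz convolution matrix. Signal lengths, step sizes, and the fixed threshold are illustrative assumptions.

```python
import numpy as np

def blind_deconv(d, nw, n_outer=15, n_ista=30, lam=0.02):
    """Toy alternating minimization for single-channel blind deconvolution:
    sparse (ISTA) updates of the reflectivity r, then least squares for the
    wavelet w, with convolution acting as the Toeplitz-structured operator."""
    n = d.size
    w = np.zeros(nw); w[0] = 1.0                    # initial spike wavelet
    r = d.copy()
    for _ in range(n_outer):
        step = 1.0 / max(np.sum(np.abs(w)) ** 2, 1e-8)  # safe Lipschitz bound
        for _ in range(n_ista):                     # reflectivity step
            resid = np.convolve(w, r)[:n] - d
            grad = np.correlate(resid, w, mode="full")[nw - 1:nw - 1 + n]
            r = r - step * grad
            r = np.sign(r) * np.maximum(np.abs(r) - lam * step, 0.0)
        # wavelet step: columns of R are shifted copies of r (Toeplitz)
        R = np.column_stack([np.convolve(r, np.eye(nw)[k])[:n]
                             for k in range(nw)])
        w, *_ = np.linalg.lstsq(R, d, rcond=None)
        s = np.linalg.norm(w) + 1e-12               # fix the scale ambiguity
        w, r = w / s, r * s
    return r, w

# toy usage: sparse reflectivity convolved with a short decaying wavelet
rng = np.random.default_rng(4)
r_true = np.zeros(200)
r_true[rng.choice(200, 8, replace=False)] = rng.normal(size=8)
w_true = np.exp(-np.arange(12) / 3.0) * np.cos(np.arange(12) / 2.0)
d = np.convolve(w_true, r_true)[:200] + rng.normal(0.0, 0.01, 200)
r_est, w_est = blind_deconv(d, nw=12)
```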

