A partition-enhanced least-squares collocation approach (PE-LSC)

2021
Vol 95 (8)
Author(s):
P. Zingerle
R. Pail
M. Willberg
M. Scheinert

Abstract: We present a partition-enhanced least-squares collocation (PE-LSC) method which comprises several modifications to the classical LSC method. Our goal is to circumvent various problems in the practical application of LSC. While these investigations focus on the modeling of the exterior gravity field, the elaborated methods can also be used in other applications. One of the main drawbacks and current limitations of LSC is its high computational cost, which grows cubically with the number of observation points. A common way to mitigate this problem is to tile the target area into sub-regions and solve each tile individually. This procedure assumes a certain locality of the LSC kernel functions which is generally not given and therefore results in fringe effects. To avoid this, it is proposed to localize the LSC kernels such that locality is preserved and the estimated variances are not notably increased in comparison with the classical LSC method. Using global covariance models involves the calculation of a large number of Legendre polynomials, which is usually a time-consuming task. Hence, to accelerate the creation of the covariance matrices, as an intermediate step we pre-calculate the covariance function on a two-dimensional grid of isotropic coordinates. Based on this grid, and under the assumption that the covariances are sufficiently smooth, the final covariance matrices are then obtained by a simple and fast interpolation algorithm. Applying the generalized multivariate chain rule, cross-covariance matrices among arbitrary linear spherical harmonic functionals can also be obtained by this technique. Together with some further minor alterations, these modifications are implemented in the PE-LSC method. The new PE-LSC is tested using selected data sets in Antarctica, where altogether more than 800,000 observations are available for processing. In this case, PE-LSC yields a speed-up in computation time by a factor of about 55 (i.e., the computation needs only hours instead of weeks) in comparison with the classical unpartitioned LSC. Likewise, the memory requirement is reduced by a factor of about 360 (i.e., allocating memory on the order of GB instead of TB).
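
The interpolation shortcut can be illustrated in a few lines. The following Python sketch (all models and sizes are illustrative choices, not the paper's implementation) pre-tabulates an isotropic covariance function, given as a Legendre series, on a grid of spherical distances and then fills covariance matrices by spline interpolation instead of re-evaluating the series for every point pair; the paper's two-dimensional grid of isotropic coordinates is simplified here to one dimension.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Toy degree-variance model: covariance as a Legendre series
# C(psi) = sum_n c_n * P_n(cos psi). Evaluating this series for every
# pair of points is the expensive step the grid interpolation avoids.
def covariance_series(psi, degree_variances):
    t = np.cos(psi)
    # evaluates sum_n c_n P_n(t) in one call
    return np.polynomial.legendre.legval(t, degree_variances)

# Step 1: pre-tabulate the covariance on a grid of spherical distances
# (a 1-D grid over psi is enough to illustrate the idea).
degree_variances = 1.0 / (np.arange(1, 201) ** 3)   # made-up c_n, decaying
psi_grid = np.linspace(0.0, np.pi, 2000)
cov_grid = covariance_series(psi_grid, degree_variances)
cov_interp = CubicSpline(psi_grid, cov_grid)

# Step 2: build the covariance matrix for arbitrary point pairs by
# fast interpolation instead of re-summing Legendre polynomials.
def covariance_matrix(points_a, points_b):
    # points_* : (n, 3) unit vectors on the sphere
    cos_psi = np.clip(points_a @ points_b.T, -1.0, 1.0)
    return cov_interp(np.arccos(cos_psi))
```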

Author(s):  
V.A. Belyaev

The capabilities of the numerical least-squares collocation (LSC) method for the piecewise polynomial solution of the Dirichlet problem for the Poisson and diffusion-convection equations are investigated. Examples of problems with singularities such as large gradients and discontinuity of the solution at the interface between two subdomains are considered. New hp-versions of the LSC method are proposed and implemented, based on merging small and/or elongated irregular cells, cut off by a curvilinear interface from the original rectangular grid cells, with neighboring independent cells inside the domain. Taking the singularity of the problem into account, matching conditions between the pieces of the solution in cells adjacent to the interface from different sides are written out. The results obtained by the LSC method are compared with those of other high-accuracy methods, and the advantages of the LSC method are shown. To accelerate the iterative process, modern algorithms and techniques are applied: preconditioning; the properties of the local coordinate system in the LSC method; Krylov-subspace acceleration; the prolongation operation on a multigrid complex; and parallelization. The influence of these techniques on the number of iterations and on the computation time is investigated for approximations by polynomials of various degrees.
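
As a rough illustration of the collocation-plus-least-squares idea (not the paper's hp-algorithm), the following Python sketch solves a 1D Poisson problem with a piecewise polynomial on two cells: collocation equations inside each cell, matching conditions for u and u' at the interface, and boundary conditions are assembled into one overdetermined system solved in the least-squares sense. Degrees, cell counts, and the test problem are arbitrary choices.

```python
import numpy as np

deg = 4                                        # polynomial degree per cell
n = deg + 1                                    # coefficients per cell
f = lambda x: np.pi ** 2 * np.sin(np.pi * x)   # -u'' = f, so u = sin(pi x)

def basis(x, d=0):
    # monomial basis x^k (k = 0..deg) and its d-th derivative at points x
    cols = []
    for k in range(deg + 1):
        if k - d < 0:
            cols.append(np.zeros_like(x))
        else:
            c = np.prod(np.arange(k, k - d, -1)) if d else 1.0
            cols.append(c * x ** (k - d))
    return np.column_stack(cols)

rows, rhs = [], []
xc = [np.linspace(0.05, 0.45, 8), np.linspace(0.55, 0.95, 8)]
for i, pts in enumerate(xc):                   # collocation equations per cell
    block = np.zeros((len(pts), 2 * n))
    block[:, i * n:(i + 1) * n] = -basis(pts, d=2)
    rows.append(block); rhs.append(f(pts))
for d in (0, 1):                               # matching of u, u' at x = 0.5
    row = np.zeros((1, 2 * n))
    row[0, :n] = basis(np.array([0.5]), d)[0]
    row[0, n:] = -basis(np.array([0.5]), d)[0]
    rows.append(row); rhs.append(np.zeros(1))
for i, xb in ((0, 0.0), (1, 1.0)):             # Dirichlet conditions u(0)=u(1)=0
    row = np.zeros((1, 2 * n))
    row[0, i * n:(i + 1) * n] = basis(np.array([xb]))[0]
    rows.append(row); rhs.append(np.zeros(1))

A, b = np.vstack(rows), np.concatenate(rhs)
coef, *_ = np.linalg.lstsq(A, b, rcond=None)   # overdetermined least squares
print(basis(np.array([0.25]))[0] @ coef[:n], np.sin(np.pi * 0.25))
```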


Geophysics
2020
pp. 1-61
Author(s):
Janaki Vamaraju
Jeremy Vila
Mauricio Araya-Polo
Debanjan Datta
Mohamed Sidahmed
...  

Migration techniques are an integral part of seismic imaging workflows. Least-squares reverse time migration (LSRTM) overcomes some of the shortcomings of conventional migration algorithms by compensating for illumination and removing sampling artifacts to increase spatial resolution. However, the computational cost associated with iterative LSRTM is high, and convergence can be slow in complex media. We implement pre-stack LSRTM in a deep learning framework and adopt strategies from the data science domain to accelerate convergence. The proposed hybrid framework leverages existing physics-based models and machine learning optimizers to achieve better and cheaper solutions. Using a time-domain formulation, we show that mini-batch gradients can reduce the computational cost by using a subset of the total shots in each iteration. The mini-batch approach not only reduces source crosstalk but is also less memory-intensive. Combining mini-batch gradients with deep learning optimizers and loss functions can improve the efficiency of LSRTM. Deep learning optimizers such as adaptive moment estimation (Adam) are generally well suited for noisy and sparse data. We compare different optimizers and demonstrate their efficacy in mitigating migration artifacts. To accelerate the inversion further, we adopt the regularized Huber loss function in conjunction with these optimizers. We apply these techniques to the 2D Marmousi and 3D SEG/EAGE salt models and show improvements over conventional LSRTM baselines. The proposed approach achieves higher spatial resolution in less computation time, as measured by various qualitative and quantitative evaluation metrics.
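
The mini-batch/Adam/Huber combination can be sketched independently of any wave-equation code. In the following Python toy example, random linear operators stand in for the per-shot Born modeling operators the paper actually uses; only the optimization loop (random shot subsets, Huber residual gradient, Adam moments) reflects the strategy described above, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_model, n_data, n_shots, batch = 200, 150, 64, 8
m_true = rng.normal(size=n_model)
L = [rng.normal(size=(n_data, n_model)) / np.sqrt(n_model)
     for _ in range(n_shots)]                 # stand-in modeling operators
d = [Ls @ m_true for Ls in L]                 # observed data per shot

def huber_grad(r, delta=1.0):
    # gradient of the Huber loss with respect to the residual r
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

m = np.zeros(n_model)
v1 = np.zeros(n_model); v2 = np.zeros(n_model)   # Adam moment estimates
lr, b1, b2, eps = 0.05, 0.9, 0.999, 1e-8
for t in range(1, 301):
    idx = rng.choice(n_shots, size=batch, replace=False)
    g = np.zeros(n_model)
    for s in idx:                              # mini-batch gradient over shots
        r = L[s] @ m - d[s]
        g += L[s].T @ huber_grad(r)            # adjoint of the modeling op
    g /= batch
    v1 = b1 * v1 + (1 - b1) * g                # Adam update with bias correction
    v2 = b2 * v2 + (1 - b2) * g * g
    m -= lr * (v1 / (1 - b1**t)) / (np.sqrt(v2 / (1 - b2**t)) + eps)

print("relative model error:",
      np.linalg.norm(m - m_true) / np.linalg.norm(m_true))
```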


2021
Vol 16 (4)
pp. 251-260
Author(s):
Marcos Vinicius de Oliveira Peres
Ricardo Puziol de Oliveira
Edson Zangiacomi Martinez
Jorge Alberto Achcar

In this paper, we evaluate via Monte Carlo simulations the sample properties of the estimates for the Sushila distribution, introduced by Shanker et al. (2013). We consider estimates obtained by six estimation methods: the well-known approaches of maximum likelihood, moments, and the Bayesian method, and the less traditional methods of L-moments, ordinary least squares, and weighted least squares. As comparison criteria, the biases and the root mean-squared errors were used across nine scenarios with sample sizes ranging from 30 to 300 (in steps of 30). In addition, we also considered a simulation and a real data application to illustrate the applicability of the proposed estimators, as well as the computation time needed to obtain the estimates. In this case, the Bayesian method was also considered. The aim of the study was to find an estimation method that can be considered a better alternative to, or at least interchangeable with, the traditional maximum likelihood method for small or large sample sizes and with low computational cost.
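
The simulation design can be sketched as follows. Since the Sushila density is not reproduced here, the exponential distribution stands in for it, and only three of the six methods (maximum likelihood, moments, and ordinary least squares on plotting positions) are shown; everything else is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 2.0                     # true rate parameter of the stand-in model
reps = 2000                     # Monte Carlo replications per scenario

def mle(x):     return 1.0 / x.mean()   # exponential MLE
def moments(x): return 1.0 / x.mean()   # coincides with the MLE in this toy case
def ols_cdf(x):
    # fit F(x) = 1 - exp(-theta x) by least squares on plotting positions:
    # linearize -log(1 - p) = theta * x and fit the slope through the origin
    xs = np.sort(x)
    p = (np.arange(1, len(x) + 1) - 0.375) / (len(x) + 0.25)
    y = -np.log1p(-p)
    return (xs @ y) / (xs @ xs)

for n in range(30, 301, 30):    # sample sizes from 30 to 300 in steps of 30
    est = {"ML": [], "MOM": [], "OLS": []}
    for _ in range(reps):
        x = rng.exponential(1.0 / theta, size=n)
        est["ML"].append(mle(x)); est["MOM"].append(moments(x))
        est["OLS"].append(ols_cdf(x))
    for name, vals in est.items():
        vals = np.asarray(vals)
        bias = vals.mean() - theta
        rmse = np.sqrt(np.mean((vals - theta) ** 2))
        print(f"n={n:3d} {name}: bias={bias:+.4f} rmse={rmse:.4f}")
```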


2014
Vol 2014
pp. 1-8
Author(s):
Peng Fangfang
Sun Shuli

This paper studies the fusion estimation problem for a class of multisensor multirate systems with multiplicative observation noises. The dynamic system is sampled uniformly. The sampling period of each sensor is uniform and an integer multiple of the state update period. Moreover, different sensors have different sampling rates, and the observations of the sensors are subject to the stochastic uncertainties of multiplicative noises. First, local filters at the observation sampling points are obtained based on the observations of each sensor. Further, local estimators at the state update points are obtained by prediction from the local filters at the observation sampling points; these estimators have a reduced computational cost and good real-time properties. Then, the cross-covariance matrices between any two local estimators are derived at the state update points. Finally, using the matrix-weighted optimal fusion estimation algorithm in the linear minimum variance sense, the distributed optimal fusion estimator is obtained based on the local estimators and the cross-covariance matrices. An example shows the effectiveness of the proposed algorithms.
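
The final fusion step admits a compact sketch. The Python snippet below implements matrix-weighted fusion of two local estimates in the linear minimum-variance sense, using the standard formula with stacked identity matrices; the covariances are made-up numbers, not the paper's example.

```python
import numpy as np

# Given two local estimates x1, x2 of the same n-dimensional state, their
# error covariances P11, P22, and the cross-covariance P12, the fused
# estimate is x = A1 @ x1 + A2 @ x2 with
#   [A1 A2] = (E^T S^-1 E)^-1 E^T S^-1,
# where S is the joint error covariance and E stacks two identities.
def fuse_two(x1, x2, P11, P22, P12):
    n = len(x1)
    S = np.block([[P11, P12], [P12.T, P22]])   # joint error covariance
    E = np.vstack([np.eye(n), np.eye(n)])
    W = np.linalg.solve(E.T @ np.linalg.solve(S, E),
                        E.T @ np.linalg.solve(S, np.eye(2 * n)))
    A1, A2 = W[:, :n], W[:, n:]
    x = A1 @ x1 + A2 @ x2
    P = np.linalg.inv(E.T @ np.linalg.solve(S, E))  # fused error covariance
    return x, P

# toy usage: correlated local estimates of a 2-state system
P11 = np.array([[2.0, 0.3], [0.3, 1.5]])
P22 = np.array([[1.0, 0.1], [0.1, 2.5]])
P12 = 0.2 * np.eye(2)
x, P = fuse_two(np.array([1.1, -0.4]), np.array([0.9, -0.6]), P11, P22, P12)
print(x, np.diag(P))
```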


2021
Vol 2021
pp. 1-16
Author(s):
Maroua Said
Okba Taouali

We suggest in this article a dynamic reduced algorithm to enhance the monitoring abilities of nonlinear processes. Dynamic fault detection using data-driven methods is among the key technologies that can improve the performance of dynamic systems. Among the data-driven techniques we find kernel partial least squares (KPLS), an interesting method for fault detection and monitoring in industrial systems. The dynamic reduced KPLS method is proposed for the fault detection procedure in order to exploit the advantages of reduced KPLS models in online mode. Furthermore, the suggested method is developed to monitor time-varying dynamic systems and to update the reduced reference model. The reduced model is used to minimize the computational cost and time and to choose a reduced set of kernel functions. Indeed, the dynamic reduced KPLS allows adaptation of the reduced model, observation by observation, without the risk of losing or deleting important information. For each observation, the model is updated if and only if a further normal observation containing new pertinent information is present. The general principle is to retain only the normal and important new observations in the feature space. The reduced set is then used for fault detection in the online phase based on a quadratic prediction error chart. Thereafter, the Tennessee Eastman process and an air quality dataset are used to assess the performance of the suggested methods. The simulation results of the dynamic reduced KPLS method are compared with those of the standard method.
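
The online selection logic can be sketched as follows. In this Python toy, the kernel width, thresholds, and the stand-in SPE function are illustrative assumptions, and an approximate-linear-dependence test replaces the paper's exact selection rule: an observation is first screened by the quadratic prediction error chart and, only if normal and sufficiently novel in the feature space, appended to the reduced set.

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    # Gaussian (RBF) kernel matrix between row sets X and Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def novelty(x, D, gamma=0.5):
    # delta = k(x,x) - k_x^T K_DD^{-1} k_x: residual of projecting phi(x)
    # onto the span of the reduced set D in feature space
    K = rbf(D, D, gamma) + 1e-10 * np.eye(len(D))
    kx = rbf(D, x[None, :], gamma)[:, 0]
    return 1.0 - kx @ np.linalg.solve(K, kx)

def process_stream(stream, spe_of, spe_limit, D, nu=0.1):
    alarms = []
    for x in stream:
        spe = spe_of(x)                    # SPE from the current model
        if spe > spe_limit:
            alarms.append(True)            # fault detected: do not adapt
            continue
        alarms.append(False)
        if novelty(x, D) > nu:             # normal AND informative
            D = np.vstack([D, x])          # update the reduced set
    return np.array(alarms), D

# toy usage with a stand-in SPE (squared distance to the training mean)
rng = np.random.default_rng(2)
train = rng.normal(size=(50, 3))
mu = train.mean(0)
spe_of = lambda x: float(((x - mu) ** 2).sum())
alarms, D = process_stream(rng.normal(size=(100, 3)), spe_of,
                           spe_limit=12.0, D=train[:5])
print(alarms.sum(), "alarms;", len(D), "points in reduced set")
```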


2021
Vol 1715
pp. 012029
Author(s):
Sergey Golushko
Vasily Shapeev
Vasily Belyaev
Luka Bryndin
Artem Boltaev
...  

2020
Vol 10 (1)
pp. 53-61
Author(s):  
E. Mysen

Abstract: A network of pointwise available height anomalies, derived from levelling and GPS observations, can be densified by adjusting a gravimetric quasigeoid using least-squares collocation. The resulting type of Corrector Surface Model (CSM) is applied by Norwegian surveyors to convert ellipsoidal heights to normal heights expressed in the official height system NN2000. In this work, the uncertainty related to the use of a CSM to predict differences in height anomaly was sought. As previously, the application of variograms to determine the local statistical properties of the adopted collocation model led to predictions that were consistent with their computed uncertainties. For the purpose of predicting height anomaly differences, the effect of collocation was seen to be moderate in general for the small spatial separations considered (< 10 km). However, the relative impact of collocation could be appreciable, and increasing with distance, near the network. Finally, it was argued that conservative uncertainties of height anomaly differences may be obtained by rescaling the output of a grid interpolation by $\sqrt{\Delta}$, where $\Delta$ is the spatial separation of the two locations for which the difference is sought.
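
The quoted rescaling rule is simple enough to state numerically. In the sketch below, the reference uncertainty at 1 km separation is a made-up value; the point is only that the conservative uncertainty grows with the square root of the separation Δ.

```python
import numpy as np

sigma_1km = 0.003          # assumed std. dev. at 1 km separation, in meters
for delta in (0.5, 2.0, 5.0, 10.0):                # separations in km
    sigma = sigma_1km * np.sqrt(delta)             # sqrt(Delta) rescaling
    print(f"delta = {delta:4.1f} km -> sigma = {sigma * 1000:.1f} mm")
```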


2013
Vol 694-697
pp. 2545-2549
Author(s):
Qian Wen Cheng
Lu Ben Zhang
Hong Hua Chen

A key problem studied by many scholars in the field of surveying and mapping is how to use the geodetic height H measured by GPS to obtain the normal height. Although many commonly used fitting methods have solved many problems, they all treat the unknown parameters as nonrandom variables. Determining the best estimates according to the traditional least-squares principle considers either the trend or the randomness alone, which is theoretically incomplete and has limitations in practice. Therefore, a method is needed that considers not only the trend but also takes the randomness into account. This method is least-squares collocation.
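
A minimal sketch of least-squares collocation for this height-fitting task follows; the planar trend, the Gaussian signal covariance, and all numbers are illustrative assumptions, not taken from the paper. The trend parameters are estimated by generalized least squares, and the random signal is then predicted at new points from the trend residuals.

```python
import numpy as np

rng = np.random.default_rng(3)

def cov(P, Q, c0=4e-4, L=15.0):
    # assumed Gaussian signal covariance over planar distance (km)
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
    return c0 * np.exp(-(d / L) ** 2)

pts = rng.uniform(0, 50, size=(40, 2))   # benchmark coordinates (km)
# synthetic height anomalies: plane trend + correlated signal + noise
l = 0.02 + 1e-4 * pts[:, 0] + rng.multivariate_normal(
        np.zeros(len(pts)), cov(pts, pts) + 1e-6 * np.eye(len(pts)))

A = np.column_stack([np.ones(len(pts)), pts])       # planar trend model
Cbar = cov(pts, pts) + 1e-6 * np.eye(len(pts))      # signal + noise covariance
Ci = np.linalg.solve(Cbar, np.eye(len(pts)))
xi = np.linalg.solve(A.T @ Ci @ A, A.T @ Ci @ l)    # GLS trend parameters

new = np.array([[25.0, 25.0], [40.0, 10.0]])        # prediction points
s_hat = cov(new, pts) @ np.linalg.solve(Cbar, l - A @ xi)  # signal prediction
A_new = np.column_stack([np.ones(len(new)), new])
zeta_hat = A_new @ xi + s_hat                       # trend + predicted signal
print(zeta_hat)
```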

