Node Generation for RBF-FD Methods by QR Factorization

Mathematics ◽  
2021 ◽  
Vol 9 (16) ◽  
pp. 1845
Author(s):  
Tony Liu ◽  
Rodrigo B. Platte

Polyharmonic spline (PHS) radial basis functions (RBFs) have been used in conjunction with polynomials to create RBF finite-difference (RBF-FD) methods. In 2D, these methods are usually implemented with Cartesian nodes, hexagonal nodes, or most commonly, quasi-uniformly distributed nodes generated through fast algorithms. We explore novel strategies for computing the placement of sampling points for RBF-FD methods in both 1D and 2D while investigating the benefits of using these points. The optimality of sampling points is determined by a novel piecewise-defined Lebesgue constant. Points are then sampled by modifying a simple, robust, column-pivoting QR algorithm previously implemented to find sets of near-optimal sampling points for polynomial approximation. Using the newly computed sampling points for these methods preserves accuracy while reducing computational costs by mitigating stencil size restrictions for RBF-FD methods. The novel algorithm can also be used to select boundary points to be used in conjunction with fast algorithms that provide quasi-uniformly distributed nodes.
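The abstract does not spell out the selection procedure, but its core ingredient, column-pivoted QR over a candidate node set, is easy to illustrate. The sketch below is only a generic version of that idea under assumed choices (a monomial basis of total degree 4, a random candidate cloud, and a stencil size of 15); it is not the authors' piecewise Lebesgue-constant algorithm.

```python
# Minimal sketch: selecting k "well-conditioned" nodes from a dense candidate
# set via column-pivoted QR. This is NOT the authors' exact algorithm; it only
# illustrates generic pivot-based point selection. The candidate set, the
# basis (plain monomials), and k are all illustrative assumptions.
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(0)
candidates = rng.uniform(-1.0, 1.0, size=(500, 2))   # dense 2D candidate nodes
k = 15                                                # number of nodes to keep

# Evaluation matrix: rows = monomials up to total degree 4, columns = candidates.
degree = 4
terms = [(i, j) for i in range(degree + 1) for j in range(degree + 1 - i)]
A = np.array([candidates[:, 0]**i * candidates[:, 1]**j for (i, j) in terms])

# Column-pivoted QR: the first k pivots index well-separated, well-conditioned
# candidate points for interpolation/differentiation stencils.
_, _, piv = qr(A, pivoting=True, mode='economic')
selected_nodes = candidates[piv[:k]]
print(selected_nodes)
```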

2017 ◽  
Vol 39 (2) ◽  
pp. C96-C115 ◽  
Author(s):  
Per-Gunnar Martinsson ◽  
Gregorio Quintana-Ortí ◽  
Nathan Heavner ◽  
Robert van de Geijn

2014 ◽  
Vol 23 (05) ◽  
pp. 1450007 ◽  
Author(s):  
Alexandros Panteli ◽  
Manolis Maragoudakis ◽  
Stefanos Gritzalis

This paper presents a privacy-preserving protocol for the computation of a Radial Basis Function (RBF) neural network model between N participants who share horizontally partitioned datasets. The RBF model is used for regression analysis tasks. The novel aspect of the proposed protocol lies in the fact that it assumes a malicious user model and does not use homomorphic cryptographic methods, which are inherently suited only to a semi-trusted user environment. The performance analysis shows that the communication overhead is low enough to warrant its use, while the computational complexity is in most cases identical to that of the centralized computation scenario (e.g., a trusted third party). The accuracy of the output model is only marginally below that of a centralized computation on the union of all datasets.
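The protocol itself is not described in enough detail to reproduce here, so the following sketch only shows the centralized RBF regression baseline that such a protocol is compared against, under assumed choices (Gaussian basis functions, randomly subsampled centres, and least-squares output weights).

```python
# Minimal sketch of the *centralized* RBF regression model that a
# privacy-preserving protocol would have to reproduce from horizontally
# partitioned data. All modelling choices here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))                 # pooled training inputs
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)  # noisy regression target

n_centres, width = 10, 1.0
centres = X[rng.choice(len(X), n_centres, replace=False)]  # RBF centres

def design(X):
    # Gaussian RBF activations for every (sample, centre) pair.
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width**2))

w, *_ = np.linalg.lstsq(design(X), y, rcond=None)      # output-layer weights
y_hat = design(X) @ w
print("train RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```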


2019 ◽  
Author(s):  
Luka Malenica

The main objective of this thesis is to utilize the powerful approximation properties of spline basis functions for the numerical solution of engineering problems that arise in the field of fluid mechanics. Special types of spline functions, the so-called Fup basis functions, are used as representative members of the spline family; however, the techniques developed in this work are quite general with respect to the choice of spline functions. The application of this work is twofold.

The first practical goal is the development of a novel numerical model for groundwater flow in karst aquifers. The concept of isogeometric analysis (IGA) is presented as a unified framework for the multiscale representation of the geometry, material heterogeneity and solution. Moreover, this fundamentally higher-order approach enables the description of all fields as continuous and smooth functions by using a linear combination of spline basis functions. Since classical IGA uses the Galerkin and collocation approaches, a third concept, in the form of control volume isogeometric analysis (CV-IGA), is developed in this thesis and set as the foundation for the karst flow numerical model. A discrete-continuum (hybrid) approach is used, in which three-dimensional laminar matrix flow is coupled with one-dimensional turbulent conduit flow. The model is capable of describing variably saturated conditions in both flow domains. Since realistic verification of karst flow models is an extremely difficult task, a particular contribution of this work is the construction of a specially designed 3D physical model (dimensions: 5.66 x 2.95 x 2.00 m) to verify the developed numerical model under controlled laboratory conditions.

As a second application, this thesis presents the development of a full space-time adaptive collocation algorithm with particular application to advection-dominated problems. Since these problems are usually characterized by numerical instabilities, the novel adaptive algorithm accurately resolves small-scale features while controlling the numerical error and spurious numerical oscillations without the need for any special stabilization technique. The previously developed spatial adaptive strategy dynamically changes the computational grid at each global time step, while the novel adaptive temporal strategy uses different local time steps for different collocation points based on an estimate of the temporal discretization error. Thus, in parts of the domain where temporal changes are demanding, the algorithm uses smaller local time steps, while in other parts larger local time steps can be used without affecting the overall solution accuracy and stability. In contrast to existing local time-stepping methods, the developed method is applicable to implicit discretizations and resolves all temporal scales independently of the spatial scales. The efficiency and accuracy of the full space-time adaptive algorithm are verified on classic 1D and 2D advection-diffusion benchmark test cases.
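Fup basis functions and CV-IGA are not available in standard libraries, so the sketch below substitutes cubic B-splines and plain point collocation on a 1D advection-diffusion boundary value problem, merely to illustrate how a spline basis enters a collocation system; the degree, knot vector, and diffusion coefficient are illustrative assumptions.

```python
# Illustrative sketch only: cubic B-spline collocation for the 1D problem
#   eps * u'' - u' = 0,  u(0) = 0,  u(1) = 1,
# whose solution has a boundary layer near x = 1. This is a stand-in for the
# spline/Fup collocation machinery discussed in the abstract, not the thesis
# method itself.
import numpy as np
from scipy.interpolate import BSpline

k, n, eps = 3, 30, 0.1                      # spline degree, basis size, diffusion
t = np.concatenate(([0.0]*k, np.linspace(0.0, 1.0, n - k + 1), [1.0]*k))
greville = np.array([t[i+1:i+k+1].mean() for i in range(n)])  # collocation pts

def basis_matrix(x, deriv=0):
    # Column j holds the deriv-th derivative of the j-th B-spline at points x.
    cols = []
    for j in range(n):
        c = np.zeros(n); c[j] = 1.0
        spl = BSpline(t, c, k)
        cols.append(spl.derivative(deriv)(x) if deriv else spl(x))
    return np.column_stack(cols)

# Collocate the ODE at the Greville points, then overwrite the first and last
# rows with the Dirichlet boundary conditions.
A = eps * basis_matrix(greville, 2) - basis_matrix(greville, 1)
b = np.zeros(n)
A[0], b[0] = basis_matrix(np.array([0.0]))[0], 0.0     # u(0) = 0
A[-1], b[-1] = basis_matrix(np.array([1.0]))[0], 1.0   # u(1) = 1
coeffs = np.linalg.solve(A, b)

u = BSpline(t, coeffs, k)
print(u(np.array([0.0, 0.5, 0.9, 1.0])))   # boundary layer near x = 1
```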


2006 ◽  
Vol 16 (03) ◽  
pp. 371-379 ◽  
Author(s):  
Gabriel Okša ◽  
Marián Vajteršic

We show experimentally that QR factorization with complete column pivoting, optionally followed by an LQ factorization of the R-factor, can lead to a substantial decrease in the number of outer parallel iteration steps in the parallel block-Jacobi SVD algorithm, where the details depend on the condition number and on the shape of the spectrum, including the multiplicity of singular values. The best results were achieved for well-conditioned matrices with a multiple minimal singular value, for which the number of parallel iteration steps was reduced by two orders of magnitude. However, the gain in speed, as measured by the total parallel execution time, depends decisively on how efficiently the distributed QR and LQ factorizations are implemented on a given parallel architecture. In general, a reduction of the total parallel execution time of up to one order of magnitude was achieved.
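A minimal sketch of the preprocessing step discussed above, assuming SciPy's column-pivoted QR stands in for the complete column pivoting of the paper: QR with column pivoting, followed by an LQ factorization of the R-factor. Both steps are orthogonal transformations, so the singular values of the resulting factor match those of the original matrix, which is what allows the block-Jacobi SVD to be run on the (better concentrated) factor instead.

```python
# Sketch of the QRP + LQ preprocessing; matrix size and conditioning are
# arbitrary illustrative choices, and the Jacobi SVD itself is not shown.
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(2)
A = rng.standard_normal((300, 120))

# Step 1: QR with column pivoting, A @ P = Q1 @ R.
_, R, _ = qr(A, pivoting=True, mode='economic')

# Step 2: LQ factorization of R, obtained as the QR factorization of R^T,
# so that R = L @ Q2^T with L lower triangular.
_, Lt = qr(R.T, mode='economic')
L = Lt.T

# Singular values are invariant under both orthogonal preprocessing steps.
print(np.allclose(np.linalg.svd(A, compute_uv=False),
                  np.linalg.svd(L, compute_uv=False)))
```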


2014 ◽  
Vol 2014 ◽  
pp. 1-17 ◽  
Author(s):  
Guang Pan ◽  
Pengcheng Ye ◽  
Peng Wang ◽  
Zhidong Yang

Metamodels have been widely used in engineering design to facilitate the analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling method. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels are constructed repeatedly through the addition of sampling points, namely, the extremum points of the metamodel and the minimum points of a density function. In this way, increasingly accurate metamodels are obtained. The validity and effectiveness of the proposed sampling method are examined through typical numerical examples.
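A hedged sketch of such a sequential sampling loop: at each iteration the current metamodel's extremum and the point of lowest sample density are added to the design, and the metamodel is rebuilt. The surrogate (a thin-plate RBF interpolant), the test function, and the density proxy (distance to the nearest existing sample) are illustrative stand-ins, not the authors' exact formulation.

```python
# Sequential sampling sketch: add the surrogate's minimum and the sparsest
# location at each iteration, then refit. Everything here is illustrative.
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

f = lambda x: np.sin(3 * x) + 0.5 * x**2              # expensive simulation stand-in
X = np.linspace(-2, 2, 5)[:, None]                     # small initial design
y = f(X[:, 0])

for _ in range(6):
    model = RBFInterpolator(X, y)                      # rebuild the metamodel

    # New point 1: extremum (here: minimum) of the current metamodel.
    res = minimize(lambda x: model(x[None, :])[0], x0=np.array([0.0]),
                   bounds=[(-2, 2)])
    x_ext = res.x

    # New point 2: lowest "sample density" = farthest from existing samples.
    grid = np.linspace(-2, 2, 401)[:, None]
    dists = np.min(np.abs(grid - X[:, 0][None, :]), axis=1)
    x_gap = grid[np.argmax(dists)]

    # Avoid near-duplicate samples, which would make the interpolant singular.
    new_pts = [x_gap[None, :]]
    if np.min(np.abs(X[:, 0] - x_ext[0])) > 1e-3:
        new_pts.append(x_ext[None, :])
    X = np.vstack([X] + new_pts)
    y = np.concatenate([y, f(np.vstack(new_pts)[:, 0])])

print(len(X), "samples; approximate minimiser:", X[np.argmin(y), 0])
```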


Author(s):  
Edoardo Ragusa ◽  
Tommaso Apicella ◽  
Christian Gianoglio ◽  
Rodolfo Zunino ◽  
Paolo Gastaldo

Embedding the ability of sentiment analysis in smart devices is especially challenging because sentiment analysis relies on deep neural networks, in particular convolutional neural networks. The paper presents a novel hardware-friendly detector of image polarity, enhanced with the ability of saliency detection. The approach stems from a hardware-oriented design process, which trades off prediction accuracy and computational resources. The eventual solution combines lightweight deep-learning architectures and post-training quantization. Experimental results on standard benchmarks confirmed that the design strategy can automatically infer the salient parts and the polarity of an image with high accuracy. Saliency-based solutions in the literature prove impractical due to their considerable computational costs; the paper shows that the novel design strategy can be deployed and perform successfully on a variety of commercial smartphones, yielding real-time performance.
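Post-training quantization is one ingredient of the design strategy described above; the sketch below illustrates it with a tiny stand-in CNN and PyTorch's dynamic (weights-only) quantization of the classifier head. The architecture, framework, and quantization mode are all assumptions, not the paper's actual pipeline.

```python
# Minimal post-training quantization sketch. TinyPolarityNet is an invented
# toy model, not one of the architectures studied in the paper.
import torch
import torch.nn as nn

class TinyPolarityNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, n_classes)
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = TinyPolarityNet().eval()

# Post-training dynamic quantization: Linear weights stored/executed in int8.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear},
                                                dtype=torch.qint8)

x = torch.randn(1, 3, 224, 224)                 # dummy input image
with torch.no_grad():
    print(model(x), quantized(x))               # float vs. quantized predictions
```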


2021 ◽  
Vol 61 (SI) ◽  
pp. 148-154
Author(s):  
Karel Segeth

Data measurement and further processing are fundamental activities in all branches of science and technology. Data interpolation has long been an important part of computational mathematics. In this paper, we are concerned with interpolation by polyharmonic splines in an arbitrary dimension. We show the connection of this interpolation with interpolation by radial basis functions and with smooth interpolation by generating functions, which provide means for minimizing the L2 norm of chosen derivatives of the interpolant. This can be useful in 2D and 3D, e.g., in the construction of geographic information systems or in computer-aided geometric design. We prove the properties of the piecewise polyharmonic spline interpolant and present a simple 1D example to illustrate them.
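Polyharmonic-spline interpolation of scattered data is available off the shelf; the sketch below uses SciPy's thin-plate-spline kernel (a polyharmonic spline) on assumed 2D test data. It shows the standard RBF interpolant the paper relates to, not the paper's smooth-interpolation formulation with generating functions.

```python
# Thin-plate-spline (polyharmonic) RBF interpolation of scattered 2D data.
# Kernel choice, polynomial tail degree, and test data are illustrative.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)
pts = rng.uniform(0, 1, size=(100, 2))                  # scattered data sites
vals = np.sin(2 * np.pi * pts[:, 0]) * np.cos(np.pi * pts[:, 1])

# Thin-plate spline kernel r^2 log r with the linear polynomial tail required
# for its conditional positive definiteness.
interp = RBFInterpolator(pts, vals, kernel='thin_plate_spline', degree=1)

query = np.array([[0.25, 0.5], [0.7, 0.3]])
print(interp(query))
```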


2005 ◽  
Vol 53 (3) ◽  
pp. 1154-1162 ◽  
Author(s):  
Wei Bing Lu ◽  
Tie Jun Cui ◽  
Xiao Xing Yin ◽  
Zhi Guo Qian ◽  
Wei Hong
