Multivariate Interpolation
Recently Published Documents

Total documents: 103 (five years: 5)
H-index: 19 (five years: 1)

2021 · Vol 107 · pp. 1-22
Author(s): Erick Rodriguez Bazan, Evelyne Hubert

Author(s): Nonvikan Karl-Augustt ALAHASSA, Alejandro Murua

We have built a Shallow Gibbs Network model as a Random Gibbs Network Forest that reaches the performance of the multilayer feedforward neural network with fewer parameters and fewer backpropagation iterations. To make this possible, we propose a novel optimization framework for our Bayesian Shallow Network, called the Double Backpropagation Scheme (DBS), which can also fit the data perfectly given an appropriate learning rate, and which is convergent and universally applicable to any Bayesian neural network problem. The contribution of this model is broad. First, it integrates all the advantages of the Potts model, a very rich random-partition model, which we have also modified to propose a Complete Shrinkage version based on agglomerative clustering techniques. The model also takes advantage of Gibbs fields, mainly through Markov random fields, for the structure of its weight precision matrix, and ultimately comes in five (5) variant structures: the Full-Gibbs, the Sparse-Gibbs, the Between-layer Sparse Gibbs (B-Sparse Gibbs for short), the Compound Symmetry Gibbs (CS-Gibbs for short), and the Sparse Compound Symmetry Gibbs (Sparse-CS-Gibbs) model. The Full-Gibbs structure mirrors fully connected models, while the other structures show how the model's complexity can be reduced through sparsity and parsimony. All these models have been tested on the Mulan project multivariate regression datasets, and the results arouse interest in these structures, in the sense that the different structures reach different results in terms of Mean Squared Error (MSE) and Relative Root Mean Squared Error (RRMSE). For the Shallow Gibbs Network model, we have found the ideal learning framework: the $(l_1, \boldsymbol{\zeta}, \epsilon_{dbs})$-DBS configuration, which combines the Universal Approximation Theorem and the DBS optimization, coupled with the (dist)-Nearest Neighbor-(h)-Taylor Series-Perfect Multivariate Interpolation (dist-NN-(h)-TS-PMI) model [which in turn combines the search for the nearest neighbor that gives a good train-test association, the Taylor approximation theorem, and finally the multivariate interpolation method]. It indicates that, with an appropriate number $l_1$ of neurons on the hidden layer, an optimal number $\zeta$ of DBS updates, an optimal DBS learning rate $\epsilon_{dbs}$, an optimal distance dist$_{opt}$ for the nearest-neighbor search in the training dataset for each test point $x_i^{\text{test}}$, and an optimal order $h_{opt}$ of the Taylor approximation for the Perfect Multivariate Interpolation (dist-NN-(h)-TS-PMI) model once the DBS has overfitted the training dataset, the training and the test error both converge to zero (0).
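The abstract names but does not spell out the dist-NN-(h)-TS-PMI step. The following is a minimal sketch of that idea for order h = 1, under our own assumptions: for each test point, find its nearest training neighbor, estimate the local gradient from the surrounding training points by least squares, and predict with a first-order Taylor expansion around that neighbor. The function name `predict_nn_taylor` and the neighborhood size `k` are illustrative, not from the paper.

```python
# Hypothetical sketch of the dist-NN-(h)-TS-PMI idea for h = 1,
# not the authors' implementation.
import numpy as np
from scipy.spatial import cKDTree

def predict_nn_taylor(X_train, y_train, X_test, k=8):
    tree = cKDTree(X_train)
    preds = np.empty(len(X_test))
    for i, x in enumerate(X_test):
        # k nearest training points; the closest one is the expansion point.
        _, idx = tree.query(x, k=k)
        x0, y0 = X_train[idx[0]], y_train[idx[0]]
        # Estimate the gradient at x0 by a local linear least-squares fit.
        dX = X_train[idx] - x0
        dy = y_train[idx] - y0
        grad, *_ = np.linalg.lstsq(dX, dy, rcond=None)
        # First-order Taylor step from the nearest neighbor to the test point.
        preds[i] = y0 + grad @ (x - x0)
    return preds
```

Higher orders h would add higher-degree terms of the local fit; the paper's "perfect" interpolation claim concerns the regime where the network has already overfitted the training data.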


2020 · Vol 226 · pp. 02007
Author(s): Galmandakh Chuluunbaatar, Alexander A. Gusev, Ochbadrakh Chuluunbaatar, Vladimir P. Gerdt, Sergue I. Vinitsky, ...

A new algorithm for constructing multivariate interpolation Hermite polynomials in analytical form on a multidimensional hypercube is presented. The polynomials are determined from a specially constructed set of values of the polynomials themselves and their partial derivatives, which guarantees continuity of the derivatives up to a given order across the boundaries of the finite elements. The efficiency of the finite element schemes, algorithms and programs is demonstrated by solving the Helmholtz problem for a cube.
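The paper's analytic construction is not reproduced here; as a minimal illustration of the underlying idea, the sketch below builds a bicubic Hermite interpolant on the unit square from tensor products of the 1D cubic Hermite basis. Matching values and first derivatives at the corners is what gives continuity of derivatives across element boundaries; the paper generalizes this to higher orders and dimensions.

```python
# Minimal illustration (not the paper's algorithm): bicubic Hermite
# interpolation on the unit square via tensor products of the 1D cubic
# Hermite basis.
import numpy as np

def hermite_1d(t):
    """1D cubic Hermite basis: key (p, a) weights derivative order p at node a."""
    return {
        (0, 0): 2*t**3 - 3*t**2 + 1,   # value at node 0
        (0, 1): -2*t**3 + 3*t**2,      # value at node 1
        (1, 0): t**3 - 2*t**2 + t,     # first derivative at node 0
        (1, 1): t**3 - t**2,           # first derivative at node 1
    }

def bicubic_hermite(x, y, data):
    """data[(p, q, a, b)] = d^{p+q} f / dx^p dy^q at corner (a, b)."""
    hx, hy = hermite_1d(x), hermite_1d(y)
    return sum(hx[(p, a)] * hy[(q, b)] * data[(p, q, a, b)]
               for p in (0, 1) for q in (0, 1)
               for a in (0, 1) for b in (0, 1))

# Example: f(x, y) = x*y has fx = y, fy = x, fxy = 1; the interpolant is exact.
data = {(p, q, a, b): [[a*b, a], [b, 1.0]][p][q]
        for p in (0, 1) for q in (0, 1) for a in (0, 1) for b in (0, 1)}
print(bicubic_hermite(0.3, 0.7, data))  # 0.21
```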


2019 · Vol 64 (12) · pp. 1439-1452
Author(s): Shuangshuang Li, Sokchhay Heng, Sokly Siev, Chihiro Yoshimura, Oliver Saavedra, ...

Atmosphere · 2018 · Vol 9 (5) · pp. 194
Author(s): Miao Feng, Weimin Zhang, Xiangru Zhu, Boheng Duan, Mengbin Zhu, ...

Author(s): Thomas C. H. Lux, Layne T. Watson, Tyler H. Chang, Jon Bernard, Bo Li, ...

Author(s): Martin Teigen, Malik M. Ibrahim

The method of using residual curvature during pipeline installation, primarily for the purpose of lateral buckling control, has attracted increasing attention over the past few years [1], [9]. Interest in residual curvature grew in particular after positive experiences with a 26 km long pipeline on Statoil's Skuld project (2012) in the Norwegian Sea [7], and a range of technical papers elaborating on the topic have recently been published [6], [7], [9]. Some of this work has identified particularly novel applications for the residual curvature method, including free-span mitigation to reduce the need for seabed intervention, direct pipeline tie-ins, use with S-lay installation, and even steel catenary risers [10], [11]. However, these applications have so far only been identified and have not yet been proven successful in any published work.

This technical paper focuses on validating the use of residual curvature for lateral buckling control in subsea pipelines installed by reel-lay. The residual curvature method demonstrates high buckling reliability without the use of subsea structures or additional installation equipment, with a controlled buckle response and favourable operational bending moments [1]. The method has been shown to be less sensitive to some design parameters than other lateral buckling control methods [6]. However, published work also shows that high strains develop for short residual curvature lengths, high pipe-seabed friction, and certain levels of residual strain [6].

Previous research has predicted the behaviour of residual curvature as a means of controlling lateral buckling using a deterministic approach [6], [7], [9]. However, performing the lateral buckling design with a probabilistic approach can offer a more realistic design and demonstrate higher reliability. There is a range of research on probabilistic approaches to the lateral buckling design of subsea pipelines, but little published work applies the same approach to residual curvature in particular. For this reason, this paper suggests a method for determining the likelihood of buckling and the associated bending moments via structural reliability analysis (SRA). A numerical model combining finite element (FE) analysis and Monte Carlo simulation is applied. A similar approach has already been presented by others for a different lateral buckling control method; it involves forming a database of finite element solutions followed by multivariate interpolation over the stochastic variables [16]. The multivariate interpolation necessitates a permutation of the cases in the FE result database. To keep the simulation efficient, only a limited number of variables are treated as stochastic, namely those to which the lateral buckling response due to residual curvature has been determined to be sensitive. The variations of the remaining parameters are also accounted for, but in a simpler way.

The suggested SRA is used to assess the reliability of a pipeline that resembles the Skuld pipeline. The proposed SRA validates that residual curvature is a reliable lateral buckling control method irrespective of large variations in design parameters that cannot easily be quantified, such as the target residual strain. The proposed SRA also offers a cost-attractive solution in qualification testing, by potentially relieving the installation contractor of the expensive exercise of performing an additional straightening trial.
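As a rough sketch of the database-plus-interpolation approach described above (not the paper's actual model): precompute FE peak bending moments on a grid over the stochastic variables, interpolate, and Monte Carlo sample to estimate an exceedance probability. The grid, the response surface, the distributions, and the capacity value below are all illustrative placeholders.

```python
# Hypothetical sketch of the SRA workflow; all numbers are placeholders,
# not results from the paper.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

rng = np.random.default_rng(0)

# "FE result database": peak bending moment (kNm) on a grid of
# residual strain (-) x lateral pipe-seabed friction factor (-).
strain = np.linspace(0.0015, 0.0030, 4)
friction = np.linspace(0.3, 0.9, 5)
moment_grid = np.array([[380 + 9e4 * s + 160 * f for f in friction]
                        for s in strain])  # placeholder response surface

# Multivariate interpolation over the stochastic variables.
interp = RegularGridInterpolator((strain, friction), moment_grid)

# Monte Carlo over the stochastic variables (illustrative distributions).
n = 100_000
samples = np.column_stack([
    rng.normal(0.0022, 0.0002, n).clip(strain[0], strain[-1]),
    rng.uniform(0.3, 0.9, n),
])
moments = interp(samples)

capacity = 700.0  # illustrative allowable bending moment, kNm
print("P(exceedance) ≈", (moments > capacity).mean())
```

Interpolating a precomputed FE database makes each Monte Carlo sample essentially free, which is what makes sample counts of this size tractable compared with running one FE analysis per sample.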

