Taylor Approximation
Recently Published Documents


TOTAL DOCUMENTS: 74 (five years: 28)

H-INDEX: 9 (five years: 2)

AppliedMath ◽  
2022 ◽  
Vol 2 (1) ◽  
pp. 39-53
Author(s):  
Jaya P. N. Bishwal

For stationary ergodic diffusions satisfying nonlinear homogeneous Itô stochastic differential equations, this paper obtains Berry–Esseen bounds on the rate of convergence to normality of quasi maximum likelihood estimators based on stochastic Taylor approximations. The bounds hold, under some regularity conditions, when the diffusion is observed at equally spaced, dense time points over a long time interval (the high-frequency regime). The paper shows that estimators based on higher-order stochastic Taylor approximations outperform those based on the basic Euler approximation, in the sense of having smaller asymptotic variance.
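
As a minimal illustration of the gap between the Euler scheme and a higher-order stochastic Taylor scheme (not the paper's estimator or model), the following Python sketch discretizes a simple Itô SDE on a dense equispaced grid with both the Euler and the Milstein step:

```python
import numpy as np

# A small simulation contrasting the Euler scheme with the Milstein scheme,
# the order-1.0 strong stochastic Taylor approximation.  The SDE
# dX = -theta*X dt + sigma*X dW and all parameter values are illustrative
# stand-ins, not the model or estimator from the paper.
rng = np.random.default_rng(0)
theta, sigma, x0 = 1.0, 0.3, 1.0
T, n = 10.0, 10_000                   # long interval, dense equispaced grid
dt = T / n

def drift(x):      return -theta * x
def diff(x):       return sigma * x
def diff_prime(x): return sigma       # derivative of the diffusion coefficient

x_euler = x_milstein = x0
for _ in range(n):
    dW = rng.normal(0.0, np.sqrt(dt))              # shared Brownian increment
    # Euler: keeps only the first-order stochastic Taylor terms
    x_euler = x_euler + drift(x_euler) * dt + diff(x_euler) * dW
    # Milstein: adds the next stochastic Taylor term, improving strong order
    x_milstein = (x_milstein + drift(x_milstein) * dt + diff(x_milstein) * dW
                  + 0.5 * diff(x_milstein) * diff_prime(x_milstein)
                        * (dW**2 - dt))

print(x_euler, x_milstein)
```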


2021 ◽  
Vol 11 (24) ◽  
pp. 11612
Author(s):  
Je-An Kim ◽  
Joon-Ho Lee

This paper analyzes the performance of the cross-eye jamming effect under mechanical defects. Using a numerical analysis-based approach, the proposed performance analysis method comes closer to the unapproximated empirical mean square difference (MSD) than the first-order and second-order Taylor approximation-based methods proposed in previous studies. In addition, the effects of amplitude-ratio perturbation and phase-difference perturbation on performance degradation are quantitatively analyzed. Numerical integration is adopted to derive an analytic expression of the MSD.
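
As a hypothetical stand-in for this three-way comparison, the sketch below computes the MSD of a generic nonlinear response to a Gaussian perturbation empirically, by a first-order Taylor approximation, and by numerical integration; the response function and perturbation scale are invented for illustration, not the cross-eye jamming model:

```python
import numpy as np

# Mean square difference (MSD) of a nonlinear response g under a Gaussian
# perturbation, computed (a) empirically by Monte Carlo, (b) by a
# first-order Taylor approximation, and (c) by Gauss-Hermite numerical
# integration.  g and the perturbation scale are illustrative assumptions.
rng = np.random.default_rng(1)

def g(delta):                       # nonlinear response to a perturbation
    return np.sin(1.0 + delta) / (2.0 + delta)

s = 0.2                             # std. dev. of the perturbation
g0 = g(0.0)
g1 = (g(1e-6) - g(-1e-6)) / 2e-6    # numerical slope g'(0)

# (a) empirical MSD around the unperturbed response
delta = rng.normal(0.0, s, 1_000_000)
msd_empirical = np.mean((g(delta) - g0) ** 2)

# (b) first-order Taylor approximation: MSD ~ g'(0)^2 * Var(delta)
msd_taylor1 = g1 ** 2 * s ** 2

# (c) Gauss-Hermite quadrature of E[(g(delta) - g0)^2]
nodes, weights = np.polynomial.hermite.hermgauss(40)
msd_quad = weights @ (g(np.sqrt(2.0) * s * nodes) - g0) ** 2 / np.sqrt(np.pi)

print(msd_empirical, msd_taylor1, msd_quad)  # quadrature tracks the empirical MSD
```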


2021 ◽  
Vol 13 (4(J)) ◽  
pp. 1-7
Author(s):  
Jung S. You ◽  
Minsoo Jeong

In this paper, we compare the finite-sample performance of various bootstrap methods for diffusion processes. Although diffusion processes are widely used to analyze stocks, bonds, and many other financial derivatives, hypothesis tests on these models are known to suffer heavily from size distortions. While many bootstrap methods are applicable to diffusion models to reduce such size distortions, their finite-sample performance has yet to be investigated. We perform a Monte Carlo simulation comparing the finite-sample properties, and our results show that the strong Taylor approximation method performs best, followed by the Hermite expansion method.
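
A minimal sketch of such a parametric bootstrap, assuming an Ornstein-Uhlenbeck model with constant diffusion (for which the order-1.0 strong Taylor step reduces to Euler), might look as follows; the model, estimator and sample sizes are illustrative, not the paper's Monte Carlo design:

```python
import numpy as np

# Parametric bootstrap for a diffusion: fit the drift of a simple
# Ornstein-Uhlenbeck model, then regenerate bootstrap paths with a strong
# Taylor scheme.  With constant diffusion the Milstein correction vanishes,
# so the order-1.0 strong Taylor transition coincides with Euler here.
rng = np.random.default_rng(2)
theta_true, sigma, dt, n = 0.5, 0.2, 0.01, 2_000

def simulate(theta, x0=0.0):
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):               # strong Taylor (= Euler) transition
        x[i + 1] = x[i] - theta * x[i] * dt + sigma * rng.normal(0.0, np.sqrt(dt))
    return x

def estimate(x):                     # least-squares estimate of the drift theta
    dx = np.diff(x)
    return -np.sum(x[:-1] * dx) / (np.sum(x[:-1] ** 2) * dt)

x = simulate(theta_true)
theta_hat = estimate(x)

# bootstrap distribution of the estimator, e.g. for a size-corrected test
boot = np.array([estimate(simulate(theta_hat)) for _ in range(500)])
print(theta_hat, np.percentile(boot - theta_hat, [2.5, 97.5]))
```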


2021 ◽  
Vol 10 (3) ◽  
pp. 312
Author(s):  
Ibrahim Elkhalil Behmene ◽  
Benabdallah Bachir Bouiadjra ◽  
Mohamed Daoudi ◽  
Abdelkader Homrani

These observations are intended to provide information on the growth of the African catfish (Clarias gariepinus) living in Oued Takhamalte, Illizi, south-eastern Algeria. The basic data are the length frequencies of fish obtained from experimental fisheries carried out in October 2019 in Oued Takhamalte. The ELEFAN I program incorporated in the FiSAT II software was used. The Powell-Wetherall method gives an L∞ of about 53.84 cm and a Z/K of 3.254, with a correlation coefficient R = -0.944. This asymptotic length is greater than both the maximum observed value (Lmax = 50 cm) and the Taylor approximation (Lmax/0.95 = 52.63 cm). The corresponding K value (0.28/year) seems the most suitable for the growth of the species, so we adopted the parameters obtained by the "area of equal responses" sub-program of ELEFAN I (L∞ = 53 cm and K = 0.28/year) for the remainder of our study. A standardized difference test shows a significant difference between the observed slope (b = 2.41) and the theoretical slope (b = 3), which confirms that the length-weight relationship of both sexes of C. gariepinus exhibits negative allometry, meaning that weight grows more slowly than the cube of length.
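
The Taylor approximation quoted above, together with the allometry claim, can be checked with a few lines (the condition factor a below is an illustrative stand-in; only Lmax = 50 cm and b = 2.41 come from the text):

```python
# Quick check of the asymptotic-length figure quoted above.  Taylor's
# approximation takes L_inf ~ Lmax / 0.95.
l_max = 50.0                        # largest observed length (cm)
print(round(l_max / 0.95, 2))       # 52.63 cm, as reported

# Negative allometry: with b = 2.41 < 3, weight grows slower than length cubed.
a, b = 0.01, 2.41                   # 'a' is an assumed condition factor
weight = lambda length: a * length ** b
print(weight(40.0) / weight(20.0))  # ~2**2.41 ~ 5.3, versus 2**3 = 8 for isometry
```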


Mathematics ◽  
2021 ◽  
Vol 9 (17) ◽  
pp. 2018
Author(s):  
Javier Ibáñez ◽  
Jorge Sastre ◽  
Pedro Ruiz ◽  
José M. Alonso ◽  
Emilio Defez

The most popular method for computing the matrix logarithm combines the inverse scaling and squaring method with a Padé approximation, sometimes preceded by a Schur decomposition. In this work, we present a Taylor series algorithm, based on the transformation-free approach to the inverse scaling and squaring technique, that uses recent matrix polynomial formulas to evaluate the Taylor approximation of the matrix logarithm more efficiently than the Paterson–Stockmeyer method. Two MATLAB implementations of this algorithm, based on relative forward and backward error analysis respectively, were developed and compared with different state-of-the-art MATLAB functions. Numerical tests showed that the new implementations are generally more accurate than the previously available codes, with an execution time intermediate among all the codes compared.
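
A minimal sketch of the underlying inverse scaling and squaring idea, using plain term-by-term Taylor summation rather than the paper's efficient matrix polynomial formulas, could look like this (the tolerance and the number of series terms are illustrative choices):

```python
import numpy as np
from scipy.linalg import sqrtm, logm

def logm_taylor(A, terms=30, tol=0.25):
    """log(A) via repeated square roots and the Taylor series of log(I+B).

    Assumes A has eigenvalues with positive real part so that the square
    roots stay real; this is a sketch, not a robust implementation.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    k = 0
    while np.linalg.norm(A - np.eye(n), 1) > tol:
        A = np.real(sqrtm(A))     # inverse scaling: log(A) = 2 log(A^{1/2})
        k += 1
    B = A - np.eye(n)
    L, P = np.zeros_like(B), B.copy()
    for j in range(1, terms + 1): # log(I+B) = B - B^2/2 + B^3/3 - ...
        L += ((-1) ** (j + 1) / j) * P
        P = P @ B
    return (2 ** k) * L           # undo the k square roots

A = np.array([[2.0, 1.0], [0.0, 3.0]])
print(logm_taylor(A))
print(logm(A))                    # scipy's reference value
```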


Author(s):  
Nonvikan Karl-Augustt ALAHASSA ◽  
alejandro Murua

We have built a Shallow Gibbs Network model, as a Random Gibbs Network Forest, that reaches the performance of the multilayer feedforward neural network with fewer parameters and fewer backpropagation iterations. To make this happen, we propose a novel optimization framework for our Bayesian shallow network, called the Double Backpropagation Scheme (DBS), which can also fit the data perfectly given an appropriate learning rate, is convergent, and is universally applicable to any Bayesian neural network problem. The contribution of this model is broad. First, it integrates all the advantages of the Potts model, a very rich random-partition model, which we have also modified to propose a Complete Shrinkage version using agglomerative clustering techniques. The model also takes advantage of Gibbs fields for the structure of its weight precision matrix, mainly through Markov random fields, and ends up with five variant structures: the Full-Gibbs, the Sparse-Gibbs, the Between-layer Sparse Gibbs (B-Sparse Gibbs for short), the Compound Symmetry Gibbs (CS-Gibbs for short), and the Sparse Compound Symmetry Gibbs (Sparse-CS-Gibbs) model. The Full-Gibbs structure mirrors fully connected models, while the other structures show how the model's complexity can be reduced through sparsity and parsimony. All these models were evaluated on the Mulan project multivariate regression datasets, and the results arouse interest in these structures, in the sense that different structures reach different results in terms of Mean Squared Error (MSE) and Relative Root Mean Squared Error (RRMSE). For the Shallow Gibbs Network model, we have found the perfect learning framework: the $(l_1, \boldsymbol{\zeta}, \epsilon_{dbs})$-DBS configuration, which combines the Universal Approximation Theorem with the DBS optimization, coupled with the (dist)-Nearest Neighbor-(h)-Taylor Series-Perfect Multivariate Interpolation (dist-NN-(h)-TS-PMI) model [itself a combination of a nearest-neighbor search for a good train-test association, the Taylor approximation theorem, and the multivariate interpolation method]. It indicates that, with an appropriate number $l_1$ of neurons on the hidden layer, an optimal number $\zeta$ of DBS updates, an optimal DBS learning rate $\epsilon_{dbs}$, an optimal distance $dist_{opt}$ in the search for the nearest neighbor in the training dataset for each test point $x_i^{\text{test}}$, and an optimal order $h_{opt}$ of the Taylor approximation for the Perfect Multivariate Interpolation (dist-NN-(h)-TS-PMI) model once the DBS has overfitted the training dataset, the training and test errors converge to zero.
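
In one dimension, the dist-NN-(h)-TS-PMI construction can be read as nearest-neighbor prediction refined by a truncated Taylor series. The sketch below is a speculative minimal reading under that interpretation; the target function, grid, order h and finite-difference derivative estimates are all illustrative choices, not the authors' model:

```python
import numpy as np
from math import factorial

# For each test point: find its nearest training neighbour, then predict
# with an order-h Taylor expansion around that neighbour, estimating the
# derivatives by finite differences on the training grid.
x_train = np.linspace(0.0, 2.0 * np.pi, 200)
y_train = np.sin(x_train)
h_order = 3

def predict(x_test):
    i = int(np.argmin(np.abs(x_train - x_test)))   # nearest neighbour (dist = |.|)
    deriv = y_train.copy()
    y_pred = 0.0
    for k in range(h_order + 1):                   # Taylor terms of order 0..h
        y_pred += deriv[i] / factorial(k) * (x_test - x_train[i]) ** k
        deriv = np.gradient(deriv, x_train)        # next derivative estimate
    return y_pred

print(predict(1.234), np.sin(1.234))               # close on a smooth target
```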


Author(s):  
S. F. Maassen ◽  
H. Erdle ◽  
S. Pulvermacher ◽  
D. Brands ◽  
T. Böhlke ◽  
...  

The resulting shapes in production processes of metal components are strongly influenced by deformation-induced residual stresses. Dual-phase steels are commonly used in industrial applications such as forged or deep-drawn structural parts, owing to their ability to sustain large plastic deformations while retaining the stiffness required of the product. To influence the resulting shape as well as the component characteristics favorably, it is important to predict the distribution of the phase-specific residual stresses that arise on the microscale of the material. This contribution presents a comparative study in which two approaches to the numerical simulation of residual stresses are applied. On the one hand, a numerically efficient mean-field theory is used to estimate, at the grain level, the total strain, the plastic strains, and the eigenstrains from macroscopic stress, strain, and stiffness data. An alternative ansatz relies on a Taylor approximation for the grain-level strains. Both approaches are applied to the corrosion-resistant duplex steel X2CrNiMoN22-5-3 (1.4462), which consists of a ferritic and an austenitic phase of equal volume fraction. The mean-field and Taylor approximation strategies are implemented for use in three-dimensional solid finite element analyses and in a geometrically exact Euler–Bernoulli beam for the simulation of a four-point bending test. The predicted residual stresses are compared with the phase-specific residual stresses/strains determined experimentally by neutron diffraction over the bending height of the specimen.
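
The Taylor ansatz for the grain-level strains can be illustrated with a toy one-dimensional elastic calculation (the moduli and the strain are assumed values; only the equal phase fractions come from the text):

```python
# Taylor (iso-strain) assumption: each phase is forced to carry the
# macroscopic strain, and the phase-specific micro-stress is the deviation
# of the phase stress from the volume-averaged stress.  The moduli below
# are illustrative, not measured properties of X2CrNiMoN22-5-3.
E_ferrite, E_austenite = 210e9, 195e9    # assumed Young's moduli (Pa)
f_ferrite = f_austenite = 0.5            # equal volume fractions (as stated)
eps_macro = 1e-3                         # macroscopic elastic strain

# eps_phase = eps_macro in every grain under the Taylor assumption
sig_ferrite = E_ferrite * eps_macro
sig_austenite = E_austenite * eps_macro
sig_macro = f_ferrite * sig_ferrite + f_austenite * sig_austenite

# phase-specific deviations from the macroscopic stress (equal and opposite)
print(sig_ferrite - sig_macro, sig_austenite - sig_macro)
```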

