approximation error
Recently Published Documents


TOTAL DOCUMENTS: 647 (last five years: 222)

H-INDEX: 32 (last five years: 6)

2022, Vol 40 (2), pp. 1-24
Author(s): Franco Maria Nardini, Roberto Trani, Rossano Venturini

Modern search services often provide multiple options to rank the search results, e.g., sort "by relevance", "by price", or "by discount" in e-commerce. While the traditional rank by relevance effectively places the relevant results in the top positions of the results list, the rank by attribute may place many marginally relevant results at the head of the list, leading to a poor user experience. In the past, this issue has been addressed by investigating the relevance-aware filtering problem, which asks to select the subset of results maximizing the relevance of the attribute-sorted list. Recently, an exact algorithm was proposed to solve this problem optimally. However, its high computational cost makes it impractical for the Web search scenario, which is characterized by huge result lists and strict time constraints. For this reason, the problem is often solved using efficient yet inaccurate heuristic algorithms. In this article, we first prove performance bounds for the existing heuristics. We then propose two efficient and effective algorithms to solve the relevance-aware filtering problem. First, we propose OPT-Filtering, a novel exact algorithm that is faster than the existing state-of-the-art optimal algorithm. Second, we propose an approximate and even more efficient algorithm, ϵ-Filtering, which, given an allowed approximation error ϵ, finds a (1-ϵ)-optimal filtering, i.e., the relevance of its solution is at least (1-ϵ) times the optimum. We conduct a comprehensive evaluation of the two proposed algorithms against state-of-the-art competitors on two real-world public datasets. Experimental results show that OPT-Filtering achieves a speedup of up to two orders of magnitude over the existing optimal solution, while ϵ-Filtering further improves this result by trading effectiveness for efficiency. In particular, the experiments show that ϵ-Filtering achieves quasi-optimal solutions while being faster than all state-of-the-art competitors in most of the tested configurations.
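As an illustration of the underlying optimization problem, here is a brute-force sketch. The abstract does not spell out the relevance objective, so the use of DCG over the attribute-sorted list, the "price" attribute, and the fixed subset size are all illustrative assumptions, not the paper's formulation:

```python
from itertools import combinations
import math

def dcg(relevances):
    """Discounted cumulative gain of a ranked list of relevance scores."""
    return sum(r / math.log2(i + 2) for i, r in enumerate(relevances))

def brute_force_filtering(results, k):
    """Exhaustively pick the size-k subset whose attribute-sorted
    ordering maximizes DCG. Exponential cost; illustration only."""
    best, best_score = None, float("-inf")
    for subset in combinations(results, k):
        ranked = sorted(subset, key=lambda x: x["price"])  # rank by attribute
        score = dcg([x["rel"] for x in ranked])
        if score > best_score:
            best, best_score = list(ranked), score
    return best, best_score

results = [
    {"price": 5, "rel": 0.1}, {"price": 9, "rel": 0.9},
    {"price": 12, "rel": 0.8}, {"price": 3, "rel": 0.05},
]
best, score = brute_force_filtering(results, 2)
print([r["price"] for r in best])  # the two highly relevant items win
```

Efficient algorithms such as OPT-Filtering and ϵ-Filtering avoid this exponential enumeration, which is exactly what makes them usable on Web-scale result lists.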


Author(s): Tapio Helin, Remo Kretschmann

Abstract: In this paper we study properties of the Laplace approximation of the posterior distribution arising in nonlinear Bayesian inverse problems. Our work is motivated by Schillings et al. (Numer Math 145:915–971, 2020, doi:10.1007/s00211-020-01131-1), where it is shown that in such a setting the Laplace approximation error in Hellinger distance converges to zero in the order of the noise level. Here, we prove novel error estimates for a given noise level that also quantify the effect of the nonlinearity of the forward mapping and the dimension of the problem. In particular, we are interested in settings in which a linear forward mapping is perturbed by a small nonlinear mapping. Our results indicate that in this case, the Laplace approximation error is of the size of the perturbation. The paper provides insight into Bayesian inference in nonlinear inverse problems, where linearization of the forward mapping has suitable approximation properties.
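For intuition, a minimal 1D sketch of the Laplace approximation itself: a toy posterior with a mildly nonlinear forward map is replaced by a Gaussian centred at the MAP point with variance given by the inverse curvature, and the Hellinger distance to the exact posterior is evaluated by quadrature. The forward map, prior, and noise level below are illustrative assumptions, not the paper's setting:

```python
import numpy as np

eps, y_obs, sigma = 0.1, 1.2, 0.3      # small nonlinearity, toy data, noise

def G(u):                               # linear map perturbed nonlinearly
    return u + eps * u**3

def neg_log_post(u):                    # Gaussian prior and noise model
    return 0.5 * ((y_obs - G(u)) / sigma) ** 2 + 0.5 * u**2

def trapz(y, x):                        # simple trapezoidal quadrature
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

grid = np.linspace(-4.0, 4.0, 80001)
phi = neg_log_post(grid)
u_map = grid[np.argmin(phi)]            # MAP point by dense grid search

h = 1e-4                                # curvature at the MAP (finite diff.)
curv = (neg_log_post(u_map + h) - 2 * neg_log_post(u_map)
        + neg_log_post(u_map - h)) / h**2
lap_var = 1.0 / curv                    # Laplace approximation N(u_map, 1/phi'')

post = np.exp(-(phi - phi.min()))       # normalized exact posterior
post /= trapz(post, grid)
lap = np.exp(-0.5 * (grid - u_map) ** 2 / lap_var)
lap /= np.sqrt(2 * np.pi * lap_var)
hell2 = 0.5 * trapz((np.sqrt(post) - np.sqrt(lap)) ** 2, grid)
print(u_map, hell2)
```

Shrinking `eps` toward zero makes the posterior exactly Gaussian and drives the Hellinger distance to zero, mirroring the paper's finding that the error is of the size of the nonlinear perturbation.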


Electronics, 2022, Vol 11 (2), pp. 214
Author(s): Kyuahn Kwon, Jaeyong Chung

Large-scale neural networks have attracted much attention for their impressive results on cognitive tasks such as object detection and image classification. However, the large number of weight parameters in these complex networks can be problematic when the models are deployed to embedded systems. The problem is exacerbated in emerging neuromorphic computers, where each weight parameter is stored within a synapse, the primary computational resource of these bio-inspired machines. We describe an effective way of reducing the parameters with a recursive tensor factorization method. The weight parameters are represented as a tensor, which is decomposed by applying the singular value decomposition recursively; the tensor is then approximated by algorithms that jointly minimize the approximation error and the number of parameters. This process factorizes a given network, yielding a deeper, less dense, weight-shared network with good initial weights, which can be fine-tuned by gradient descent.
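The core step, replacing one dense layer by two thinner ones via a truncated SVD, can be sketched as follows. The layer sizes, rank, and low-rank-plus-noise weights are illustrative; the paper's recursive scheme applies this kind of decomposition repeatedly:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical dense-layer weights: low-rank structure plus small noise.
W = (rng.standard_normal((256, 64)) @ rng.standard_normal((64, 256))
     + 0.01 * rng.standard_normal((256, 256)))

def factorize(W, rank):
    """Replace one dense layer W (m x n) by two thinner layers A (m x r)
    and B (r x n); A @ B is the best rank-r approximation of W in the
    Frobenius norm (Eckart-Young theorem)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]      # absorb singular values into A
    B = Vt[:rank, :]
    return A, B

A, B = factorize(W, rank=64)
err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)   # relative error
params_before, params_after = W.size, A.size + B.size
print(err, params_before, params_after)
```

When the weights have approximate low-rank structure, the parameter count drops (here from 65,536 to 32,768) while the approximation error stays small, and the two factors become consecutive layers of a deeper network ready for fine-tuning.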


2022, Vol 44 (1), pp. A28-A56
Author(s): Maximilien Germain, Huyên Pham, Xavier Warin

2021, pp. 1-39
Author(s): Jochen Schmid

We deal with monotonic regression of multivariate functions [Formula: see text] on a compact rectangular domain [Formula: see text] in [Formula: see text], where monotonicity is understood in a generalized sense: as isotonicity in some coordinate directions and antitonicity in some other coordinate directions. As usual, the monotonic regression of a given function [Formula: see text] is the monotonic function [Formula: see text] that has the smallest (weighted) mean-squared distance from [Formula: see text]. We establish a simple general approach to compute monotonic regression functions: namely, we show that the monotonic regression [Formula: see text] of a given function [Formula: see text] can be approximated arbitrarily well, with simple bounds on the approximation error in both the [Formula: see text]-norm and the [Formula: see text]-norm, by the monotonic regression [Formula: see text] of grid-constant functions [Formula: see text], which can be computed with standard monotonic regression algorithms. We also establish the continuity of the monotonic regression [Formula: see text] of a continuous function [Formula: see text], along with an explicit averaging formula for [Formula: see text]. Finally, we deal with generalized monotonic regression, where the mean-squared distance of standard monotonic regression is replaced by more complex distance measures which arise, for instance, in maximum smoothed likelihood estimation. We show that the solution of such generalized monotonic regression problems is simply given by the standard monotonic regression [Formula: see text].
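In one dimension, monotonic (isotonic) regression is classically computed by the pool-adjacent-violators algorithm; a minimal sketch (the standard 1D algorithm, not the paper's multivariate method) is:

```python
import numpy as np

def isotonic_regression(y, w=None):
    """Pool-adjacent-violators: the nondecreasing sequence minimizing
    the weighted mean-squared distance to y (1D monotonic regression)."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    # Each block stores [mean, weight, length]; merge adjacent blocks
    # while they violate the nondecreasing order.
    blocks = []
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, n2 = blocks.pop()
            m1, w1, n1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(w1 * m1 + w2 * m2) / wt, wt, n1 + n2])
    return np.concatenate([[m] * n for m, _, n in blocks])

print(isotonic_regression([1, 3, 2, 4]))  # -> [1.  2.5 2.5 4. ]
```

The violating pair (3, 2) is pooled into its weighted mean 2.5, which is exactly the averaging behaviour the abstract's explicit averaging formula generalizes.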


2021, Vol 6 (1), pp. 2
Author(s): Khadijeh Sadri, Kamyar Hosseini, Dumitru Baleanu, Soheil Salahshour, Choonkil Park

In the present work, the numerical solution of fractional delay integro-differential equations (FDIDEs) with weakly singular kernels is addressed by designing a Vieta–Fibonacci collocation method. These equations play important roles in scientific fields such as astrophysics, economics, control, biology, and electrodynamics. The fractional derivative involved is in the Caputo sense. Using operational matrices associated with the Vieta–Fibonacci polynomials (VFPs), derived here for the first time, together with the collocation method, the problem under consideration is converted into a system of algebraic equations whose solution yields an approximate solution to the main problem. The existence and uniqueness of the solution of this category of fractional delay singular integro-differential equations (FDSIDEs) are investigated and proved using Krasnoselskii's fixed-point theorem. A new formula for generating the VFPs and their derivatives is given, from which the orthogonality of the derivatives of the VFPs follows easily. An error bound of the residual function is estimated in a Vieta–Fibonacci-weighted Sobolev space, which shows that by properly choosing the number of terms of the series solution, the approximation error tends to zero. Finally, the designed algorithm is tested on four FDIDEs; the results demonstrate the simple implementation and accuracy of the proposed scheme compared to previous methods. Furthermore, the orthogonality of the VFPs leads to sparse operational matrices, which makes the presented method easy to execute.
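For reference, the Vieta–Fibonacci polynomials can be generated by the commonly cited three-term recurrence V_n(x) = x V_{n-1}(x) - V_{n-2}(x) with V_0(x) = 0 and V_1(x) = 1; note that conventions vary and the paper's indexing or normalization may differ:

```python
def vieta_fibonacci(n):
    """Coefficient lists (lowest degree first) of the Vieta-Fibonacci
    polynomials V_0..V_n via V_k(x) = x*V_{k-1}(x) - V_{k-2}(x),
    with V_0(x) = 0 and V_1(x) = 1 (one common convention)."""
    polys = [[0], [1]]
    for _ in range(2, n + 1):
        shifted = [0] + polys[-1]                          # multiply by x
        prev = polys[-2] + [0] * (len(shifted) - len(polys[-2]))
        polys.append([a - b for a, b in zip(shifted, prev)])
    return polys[: n + 1]

for k, p in enumerate(vieta_fibonacci(4)):
    print(k, p)
# V_2(x) = x, V_3(x) = x^2 - 1, V_4(x) = x^3 - 2x
```

In a collocation scheme, truncated expansions in these polynomials turn the integro-differential equation into the algebraic system the abstract describes; their orthogonality (with the appropriate weight) is what keeps the operational matrices sparse.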


Axioms, 2021, Vol 10 (4), pp. 349
Author(s): Joël Chaskalovic

A probabilistic approach is developed for the exact solution u of a deterministic partial differential equation, as well as for its approximation uh(k) by Pk Lagrange finite elements. Two limitations motivated our approach: on the one hand, the inability to determine the exact solution u of a given partial differential equation (which is what motivates approximating it in the first place) and, on the other hand, the uncertainties associated with the numerical approximation uh(k). We therefore fill this knowledge gap by treating the exact solution u and its approximation uh(k) as random variables; by consequence, any function involving u and uh(k) is modeled as a random variable as well. In this paper, we focus our analysis on a variational formulation defined on Wm,p Sobolev spaces and the corresponding a priori estimates of the exact solution u and its approximation uh(k), in order to treat their respective Wm,p-norms as random variables, as well as the Wm,p approximation error with respect to Pk finite elements. This enables us to derive a new probability distribution for evaluating the relative accuracy of two Lagrange finite elements Pk1 and Pk2 (k1 < k2).


2021, Vol 101 (4), pp. 406-421
Author(s): Jan Fojtík, Jiří Procházka, Pavel Zimmermann

Valuation of the insurance portfolio is one of the essential actuarial tasks. Life insurance valuation is usually based on a projection of cash flows for each policy, which demands substantial computation time. Furthermore, modern financial management requires multiple valuations under different scenarios or input parameters. A method based on cluster analysis is presented that reduces computation time while preserving as much accuracy as possible. The basic idea is to replace the original portfolio by a smaller representative portfolio, built from clusters with weights chosen so that the valuation results remain similar to those of the original portfolio. Valuation is then significantly faster but requires initial time for clustering, and the results are only approximate, i.e., they differ from the original results. This difference is studied for different numbers of clusters, and the trade-off between approximation error and calculation time is evaluated.
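A minimal sketch of the representative-portfolio idea: cluster the policies, then value one weighted representative per cluster instead of every policy. The policy features, the stand-in valuation function, and the use of a tiny k-means are all illustrative assumptions, not the paper's exact method:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical portfolio: each policy described by (age, sum assured).
portfolio = np.column_stack([
    rng.uniform(20, 70, 5000),        # age
    rng.uniform(1e4, 5e5, 5000),      # sum assured
])

def kmeans(X, k, iters=25, seed=0):
    """Tiny k-means on standardized features; returns cluster labels."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    r = np.random.default_rng(seed)
    centroids = Xs[r.choice(len(Xs), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(Xs[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = Xs[labels == j].mean(axis=0)
    return labels

def value(policies):
    """Stand-in per-policy valuation; a real model projects cash flows."""
    age, sa = policies[:, 0], policies[:, 1]
    return sa * np.exp(-0.04 * np.clip(65 - age, 0, None))

labels = kmeans(portfolio, k=50)
present = np.unique(labels)           # representatives = cluster means,
reps = np.array([portfolio[labels == j].mean(axis=0) for j in present])
weights = np.array([(labels == j).sum() for j in present])

exact = value(portfolio).sum()        # full run: 5000 valuations
approx = (weights * value(reps)).sum()  # compressed run: <= 50 valuations
rel_err = abs(approx - exact) / exact
print(rel_err)
```

Increasing the number of clusters shrinks the approximation error but raises the number of policy valuations, which is exactly the trade-off the study quantifies.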


Author(s): Haoyang Ye, Stephen F Gull, Sze M Tan, Bojan Nikolic

Abstract: With the development of modern radio interferometers, wide-field continuum surveys have been planned and undertaken, for which accurate wide-field imaging methods are essential. Building on the widely used W-stacking method, we propose a new wide-field imaging algorithm that can synthesize visibility data from a model of the sky brightness via degridding and construct dirty maps from measured visibility data via gridding, with the smallest approximation error yet achieved relative to the exact calculation via the direct Fourier transform. In contrast to the original W-stacking method, the new algorithm performs least-misfit optimal gridding (and degridding) in all three directions and can achieve much higher accuracy than is feasible with the original algorithm. In particular, accuracy at the level of single-precision arithmetic is readily achieved by choosing a least-misfit convolution function of width W = 7 and an image cropping parameter of x0 = 0.25. If only the accuracy of the original W-stacking method is required, the computational cost of both the gridding and FFT steps can be substantially reduced by an appropriate choice of the width and image cropping parameters.
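A much-simplified 1D sketch of the gridding step: each irregularly sampled visibility is spread onto a regular grid with a width-W convolution function, after which an FFT yields the dirty image. A truncated Gaussian stands in for the paper's least-misfit function, and all parameters below are illustrative:

```python
import numpy as np

def grid_visibilities(u, vis, n_grid, du, W=7):
    """Spread irregular visibility samples onto a regular u-grid using a
    width-W kernel (a Gaussian stands in for the least-misfit function)."""
    grid = np.zeros(n_grid, dtype=complex)
    half = W // 2
    sigma = W / 6.0
    for ui, vi in zip(u, vis):
        c = ui / du + n_grid // 2          # continuous grid coordinate
        k0 = int(round(c))
        for k in range(k0 - half, k0 + half + 1):   # W nearest cells
            if 0 <= k < n_grid:
                grid[k] += vi * np.exp(-0.5 * ((k - c) / sigma) ** 2)
    return grid

rng = np.random.default_rng(2)
u = rng.uniform(-10, 10, 100)          # baseline coordinates (wavelengths)
vis = np.exp(2j * np.pi * u * 0.1)     # a single point source at l = 0.1
grid = grid_visibilities(u, vis, n_grid=256, du=0.1)
dirty = np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(grid)))
peak = int(np.argmax(np.abs(dirty)))   # coherent peak offset from centre
print(peak)
```

The real algorithm adds the w-direction (hence gridding in all three directions), a carefully optimized convolution function, and image cropping; those refinements are what push the error down to single-precision levels.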


Author(s): Hongjuan Li, Tianliang Zhang, Ming Tie, Yongfu Wang

Abstract: This paper proposes an adaptive higher-order sliding mode (AHOSM) control method based on an adaptive fuzzy logic system for the steer-by-wire (SbW) system, to achieve tracking control of the front-wheel steering angle. First, an adaptive fuzzy logic system is adopted to estimate the unknown dynamics of the SbW system. Then, the AHOSM controller is constructed to overcome the lumped uncertainties, including unknown external perturbations and the fuzzy logic system approximation error, and has the advantage of attenuating the chattering caused by a discontinuous control signal. Finally, an adaptation scheme is designed for the dynamic gain of the proposed AHOSM controller that requires no a priori knowledge of the bounds of the uncertainties. In contrast to existing controllers applied to the SbW system, this controller achieves better control performance in practical applications. By means of Lyapunov stability analysis, it is proved that the system trajectory converges to an adjustable neighborhood of the origin in finite time. Simulations and vehicle experiments verify the effectiveness of the proposed approach.

