approximation quality
Recently Published Documents

TOTAL DOCUMENTS: 70 (five years: 30)
H-INDEX: 10 (five years: 2)

Author(s):  
Kenny Schlegel ◽  
Peer Neubert ◽  
Peter Protzel

Abstract: Vector Symbolic Architectures (VSAs) combine a high-dimensional vector space with a set of carefully designed operators in order to perform symbolic computations with large numerical vectors. Major goals are the exploitation of their representational power and their ability to deal with fuzziness and ambiguity. Over the past years, several VSA implementations have been proposed. The available implementations differ in the underlying vector space and in the particular implementations of the VSA operators. This paper provides an overview of eleven available VSA implementations and discusses their commonalities and differences in the underlying vector space and operators. We create a taxonomy of available binding operations and show an important ramification for non-self-inverse binding operations using an example from analogical reasoning. A main contribution is the experimental comparison of the available implementations in order to evaluate (1) the capacity of bundles, (2) the approximation quality of non-exact unbinding operations, (3) the influence of combining binding and bundling operations on query-answering performance, and (4) the performance on two example applications: visual place recognition and language recognition. We expect this comparison and systematization to be relevant for the development of VSAs and to support the selection of an appropriate VSA for a particular task. The implementations are available.
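
To illustrate the bundling and (approximately invertible) binding operations that such comparisons evaluate, here is a minimal sketch of a MAP-style (Multiply-Add-Permute) VSA with bipolar vectors; the dimensionality, role/filler names, and the choice of MAP itself are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # illustrative dimensionality, not taken from the paper

def random_hv():
    """Random bipolar hypervector in {-1, +1}^D."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """MAP-style binding: element-wise multiplication (self-inverse)."""
    return a * b

def bundle(*vs):
    """Bundling: element-wise majority vote (sign of the sum)."""
    return np.sign(np.sum(vs, axis=0))

def sim(a, b):
    """Normalized dot product in [-1, 1]."""
    return float(a @ b) / D

# Encode the record {color: red, shape: circle, size: small} and query "color".
color, shape, size, red, circle, small = (random_hv() for _ in range(6))
record = bundle(bind(color, red), bind(shape, circle), bind(size, small))

# With a self-inverse binding, unbinding is just binding again; the result is only
# approximately "red" because the other bundled terms act as noise.
noisy_red = bind(record, color)
print(sim(noisy_red, red), sim(noisy_red, circle))
```

The first similarity is large and the second is near zero, which is the kind of capacity/approximation behavior the paper measures across implementations.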


2021 ◽  
Vol 2021 (4) ◽  
pp. 89-103
Author(s):  
T.H. Smila ◽  
L.L. Pecherytsia

The current level of design and use of new-generation spacecraft calls for maximally automated ballistics support of engineering developments. An integral part of solving this problem is the development of an effective tool for adapting discrete functions of gas-dynamic characteristics to the various problems that arise in the development and use of space complexes. Simplifying the use of bulky information arrays, together with improving the accuracy of approximation of key coefficients, will significantly improve the quality of ballistics support. The aim of this work is to choose an optimum method for approximating a discrete function of two variables representing spacecraft aerodynamic characteristics. Recommendations on this choice were made based on an analysis of the advantages and drawbacks of the basic approximation methods according to two fitting criteria: the maximum error and the root-mean-square deviation. The methods were assessed using the example of the aerodynamic coefficients of a simplified geometrical model of the Sich-2M spacecraft, tabulated as a function of the spacecraft orientation angles relative to the incident flow velocity. Multiparameter numerical studies were conducted for different approximation methods, varying the parameters of the approximation types under consideration and the density of the approximation grid. It was found that increasing the number of nodes in an input array does not always improve the accuracy of approximation; the node arrangement exerts a greater effect on the approximation quality. The most easily implementable method among those considered is step interpolation, whose advantages are simplicity, speed, and unlimited potential for accuracy improvement, while its significant drawbacks are the lack of an analytical description and the dependence of its accuracy on the grid density. Spline functions were shown to have the best approximating properties among the mathematical models considered. A polynomial approximation, or any approximation by a general-form function, provides an analytical description with a single approximating function, but its accuracy is not as high as that provided by splines. It was found that no approximation method is best by all criteria taken together: each method has advantages, but at the same time it has significant drawbacks. An optimum approximation method is chosen according to the features of the problem, the priorities among approximation requirements, the required degree of approximation, and the way the initial data are organized.
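
The comparison described above can be mimicked with standard interpolation routines. The sketch below (not the authors' code) fits a step (nearest-neighbour) interpolant and a bicubic spline to a tabulated coefficient over a grid of orientation angles and scores both by the two fitting criteria named in the abstract; the test function, grid sizes, and use of SciPy are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator, RectBivariateSpline

# Illustrative stand-in for a tabulated aerodynamic coefficient C(alpha, beta);
# the real input would be the discrete table for the spacecraft model.
def coeff(alpha, beta):
    return np.cos(alpha) * np.sin(2.0 * beta)

alpha = np.linspace(0.0, np.pi, 19)          # coarse grid of orientation angles
beta = np.linspace(0.0, np.pi / 2.0, 10)
table = coeff(alpha[:, None], beta[None, :])

step = RegularGridInterpolator((alpha, beta), table, method="nearest")
spline = RectBivariateSpline(alpha, beta, table, kx=3, ky=3)

# Dense evaluation points for the two fitting criteria (max error, RMS deviation).
a, b = np.meshgrid(np.linspace(0.0, np.pi, 181),
                   np.linspace(0.0, np.pi / 2.0, 91), indexing="ij")
exact = coeff(a, b)
pts = np.stack([a.ravel(), b.ravel()], axis=-1)

for name, approx in [("step", step(pts).reshape(a.shape)),
                     ("spline", spline(a[:, 0], b[0, :]))]:
    err = np.abs(approx - exact)
    print(name, "max error:", err.max(), "RMS deviation:", np.sqrt((err ** 2).mean()))
```

Varying the grid density and node placement in such a script reproduces the qualitative finding that node arrangement matters more than the sheer number of nodes.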


Author(s):  
Riley Badenbroek ◽  
Etienne de Klerk

We develop a short-step interior point method to optimize a linear function over a convex body, assuming that one only knows a membership oracle for this body. The approach is based on a sketch of a universal interior point method using the so-called entropic barrier. It is well known that the gradient and Hessian of the entropic barrier can be approximated by sampling from Boltzmann-Gibbs distributions, and the entropic barrier has been shown to be self-concordant. The analysis of our algorithm uses properties of the entropic barrier, mixing times for hit-and-run random walks, approximation quality guarantees for the mean and covariance of a log-concave distribution, and results on inexact Newton-type methods.
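
The connection between sampling and the barrier derivatives rests on standard identities for the entropic barrier (recalled here in illustrative notation, not quoted from the paper):

```latex
% Log-partition function of the Boltzmann-Gibbs family supported on the convex body K:
f(\theta) = \log \int_{K} e^{\langle \theta, x \rangle}\, \mathrm{d}x,
\qquad
p_{\theta}(x) \propto e^{\langle \theta, x \rangle}\,\mathbf{1}_{K}(x).
% The entropic barrier is the Fenchel conjugate f^{*} of f, and
\nabla f(\theta) = \mathbb{E}_{X \sim p_{\theta}}[X],
\qquad
\nabla^{2} f(\theta) = \operatorname{Cov}_{X \sim p_{\theta}}(X),
% so gradient and Hessian can be estimated from hit-and-run samples of p_{\theta},
% which require nothing more than a membership oracle for K.
```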


Author(s):  
Cristina Bazgan ◽  
Stefan Ruzika ◽  
Clemens Thielen ◽  
Daniel Vanderpooten

Abstract: We determine the power of the weighted sum scalarization with respect to the computation of approximations for general multiobjective minimization and maximization problems. Additionally, we introduce a new multi-factor notion of approximation that is specifically tailored to the multiobjective case and its inherent trade-offs between different objectives. For minimization problems, we provide an efficient algorithm that computes an approximation of a multiobjective problem by using an exact or approximate algorithm for its weighted sum scalarization. If an exact algorithm for the weighted sum scalarization is used, this algorithm comes arbitrarily close to the best approximation quality that is obtainable by supported solutions, both with respect to the common notion of approximation and with respect to the new multi-factor notion. Moreover, the algorithm yields the currently best approximation results for several well-known multiobjective minimization problems. For maximization problems, however, we show that a polynomial approximation guarantee can, in general, not be obtained in more than one of the objective functions simultaneously by supported solutions.
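
The two approximation notions contrasted in the abstract can be stated as follows (an illustrative formalization consistent with the abstract, not quoted from the paper):

```latex
% Weighted sum scalarization of a p-objective minimization problem
% with objectives f_1, ..., f_p and weights w in R^p_{>= 0}:
\min_{x \in X} \; \sum_{i=1}^{p} w_i\, f_i(x).
% Classical notion: x is an \alpha-approximation of x^{*} if
f_i(x) \leq \alpha\, f_i(x^{*}) \quad \text{for all } i = 1, \dots, p.
% Multi-factor notion (one factor per objective): x is an
% (\alpha_1, \dots, \alpha_p)-approximation of x^{*} if
f_i(x) \leq \alpha_i\, f_i(x^{*}) \quad \text{for all } i = 1, \dots, p.
```

The multi-factor notion makes explicit that the achievable factor may differ per objective, which is exactly the trade-off the paper's results for supported solutions quantify.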


Author(s):  
Zeineb Abderrahim ◽  
Mohamed Salim Bouhlel

Although the combination of compression and visualization is often mentioned as a promising direction, very few articles address this problem. In this paper, we propose a new approach to multiresolution visualization based on a combination of segmentation and multiresolution mesh compression. To this end, we propose a new segmentation method that exploits the organization of the mesh faces, followed by progressive local compression of mesh regions to enable local refinement of the three-dimensional object. The quantization precision is adapted to each vertex during the encoding/decoding process to optimize the rate-distortion trade-off. Optimizing the geometry of the processed mesh improves the approximation quality and the compression ratio at each level of resolution. The experimental results show that the proposed algorithm is competitive with previous work in terms of the rate-distortion trade-off and gives very satisfactory visual results.
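
The core mechanism of region-adapted quantization precision can be illustrated with a tiny sketch (not the authors' algorithm): a hypothetical segmentation mask selects a region of interest that is quantized with more bits than the rest of the mesh geometry.

```python
import numpy as np

def quantize(coords, n_bits):
    """Uniform scalar quantization of vertex coordinates at a given precision."""
    lo, hi = coords.min(axis=0), coords.max(axis=0)
    step = (hi - lo) / (2 ** n_bits - 1)
    return np.round((coords - lo) / step).astype(np.int64), lo, step

def dequantize(q, lo, step):
    return q * step + lo

rng = np.random.default_rng(0)
verts = rng.random((1000, 3))                 # illustrative vertex positions
region_of_interest = verts[:, 0] > 0.5        # hypothetical segmentation result

# Coarser precision for the background region, finer for the region of interest,
# mimicking quantization precision adapted per region/vertex during coding.
recon = np.empty_like(verts)
for mask, bits in [(~region_of_interest, 8), (region_of_interest, 12)]:
    q, lo, step = quantize(verts[mask], bits)
    recon[mask] = dequantize(q, lo, step)

print("geometry RMSE:", np.sqrt(np.mean((recon - verts) ** 2)))
```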


2021 ◽  
Author(s):  
Ilya Mishev ◽  
Ruslan Rin

Abstract: Combining Perpendicular Bisector (PEBI) grids with the Two-Point Flux Approximation (TPFA) scheme has the potential to model accurately on unstructured grids that conform to the geological and engineering features of real grids. However, with the increased complexity and resolution of the grids, the PEBI conditions will inevitably be violated in some cells, and the approximation properties will be compromised. The objective is to develop accurate and practical grid quality measures that quantify such errors. We critically evaluated the existing grid quality measures and found them lacking predictive power in several areas. The available k-orthogonality measures predict error for flow along the strata even though TPFA provides an accurate approximation there. The false-positive results are not only misleading but can overwhelm further analysis. We developed the so-called "truncation error" grid measure, which is probably the most accurate measure for flow through a plane face and accurately measures the error along the strata. We also quantified the error due to face curvature; curved faces are bound to exist in any real grid. The impact of the quality of the 2-D Delaunay triangulation on the TPFA approximation properties is usually not taken into account. We investigate the impact of small angles, which can considerably increase the condition number of the matrix and eventually cause a loss of accuracy, demonstrated with simple examples, and we provide recommendations based on this analysis. We also show how large angles affect the approximation quality of TPFA. Furthermore, we discuss the impact of permeability changes on the TPFA approximation. Finally, we present simple tools that reservoir engineers can use to incorporate the above-mentioned grid quality measures into a workflow. The grid quality measures discussed so far are static. We also sketch an extension to dynamic measures, that is, how the static measures can be used to detect changes in the flow behavior that potentially lead to increased error. We investigate a comprehensive set of methods, several of them new, to measure the static grid quality of TPFA on PEBI grids, along with a possible extension to dynamic measures. All measures can be easily implemented in production reservoir simulators and examined using the suggested tools in a workflow.
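
For context, the standard TPFA flux and the k-orthogonality condition that the grid quality measures probe can be written as follows (textbook formulas in illustrative notation, not the paper's specific measures):

```latex
% Two-point flux across the face of area A_{ij} between cells i and j:
F_{ij} \approx T_{ij}\,(p_i - p_j),
\qquad
T_{ij} = \left( \frac{1}{\tau_i} + \frac{1}{\tau_j} \right)^{-1},
\qquad
\tau_k = \frac{A_{ij}\, \mathbf{n}_{ij}^{\top} \mathbf{K}_k\, \mathbf{c}_{k}}{\lVert \mathbf{c}_{k} \rVert^{2}},
% where c_k is the vector from the centroid of cell k to the face centroid and
% n_{ij} is the unit face normal.  TPFA is consistent only on K-orthogonal grids,
% i.e. when K_k n_{ij} is parallel to c_k; the angle between these two vectors is
% the usual k-orthogonality error indicator that the paper argues can be misleading
% for flow along the strata.
```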


2021 ◽  
Vol 13 (4) ◽  
pp. 50-70
Author(s):  
Rudolf Vetschera ◽  
Jonatas Araùjo de Almeida

Portfolio decision models have become an important branch of decision analysis. Portfolio problems are inherently complex because of the combinatorial explosion in the number of portfolios that can be constructed even from a small number of items. To efficiently construct a set of portfolios that perform well on multiple criteria, methods that guide the search process are needed. Such methods require the calculation of bounds to estimate the performance of portfolios that can be obtained from a given partial portfolio. The calculation of such bounds is particularly difficult if interactions between items in the portfolio are possible. In this paper, the authors introduce a method to represent such interactions and develop various bounds that can be used in their presence. These methods are then tested in a computational study, which shows that the proposed bounds frequently provide a good approximation of actual outcomes, and which also analyzes specific properties of the problem that influence the approximation quality of the proposed bounds.
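
The role of such bounds can be illustrated with a toy sketch (item values, costs, interaction effects, and the particular optimistic bound are all hypothetical, not the authors' formulation): pairwise interactions are added to the portfolio value, and an upper bound for any completion of a partial portfolio counts every still-affordable item plus every positive interaction that could still be realized.

```python
# Illustrative data: item values, costs, and pairwise interaction effects that are
# added to the portfolio value when both items are selected.
values = {"a": 10, "b": 7, "c": 5}
costs = {"a": 4, "b": 3, "c": 2}
interactions = {("a", "b"): 3, ("b", "c"): -2}   # synergy / cannibalization

def portfolio_value(items):
    """Exact value of a complete portfolio, including interaction terms."""
    base = sum(values[i] for i in items)
    inter = sum(v for (i, j), v in interactions.items() if i in items and j in items)
    return base + inter

def optimistic_bound(partial, remaining, budget_left):
    """Upper bound for any completion of `partial`: add every still-affordable item
    and every positive interaction that could still be realized; negative
    interactions are ignored, so the bound never underestimates the best completion."""
    candidates = [i for i in remaining if costs[i] <= budget_left]
    bound = portfolio_value(partial) + sum(values[i] for i in candidates)
    pool = set(partial) | set(candidates)
    bound += sum(v for (i, j), v in interactions.items()
                 if v > 0 and i in pool and j in pool
                 and not (i in partial and j in partial))
    return bound

print(optimistic_bound(partial=("a",), remaining=("b", "c"), budget_left=5))
```

Tighter bounds of this kind prune more of the search tree, which is why their approximation quality matters for the guided construction of portfolios.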


2021 ◽  
Author(s):  
Diogo Garcia ◽  
Andre Souto ◽  
Gustavo Sandri ◽  
Tomas Borges ◽  
Ricardo Queiroz

Geometry-based point cloud compression (G-PCC) has been rapidly evolving in the context of international standards. Despite the inherent scalability of octree-based geometry description, current G-PCC attribute compression techniques prevent full scalability for compressed point clouds. In this paper, we present a solution that adds scalability to attributes compressed using the region-adaptive hierarchical transform (RAHT), enabling the reconstruction of the point cloud from only a portion of the original bitstream. Without the full geometry information, one cannot compute the weights on which RAHT relies to calculate its coefficients for further levels of detail. To overcome this problem, we propose a linear approximation relating the downsampled point cloud to the truncated inverse RAHT coefficients at the same level, and the parameters of this linear relationship are sent as side information. After truncating the bitstream at a point corresponding to a given octree level, we can then recreate the attributes at that level. Tests were carried out, and the results attest to the good approximation quality of the proposed technique.
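
The weight dependence mentioned above comes from the RAHT merge step, shown below together with a paraphrase of the proposed per-level linear model (the symbols are illustrative; only the transform itself is standard):

```latex
% One RAHT merge step combining two occupied nodes with attribute values a_1, a_2
% and accumulated weights w_1, w_2 (standard RAHT butterfly):
\begin{pmatrix} \ell \\ h \end{pmatrix}
=
\frac{1}{\sqrt{w_1 + w_2}}
\begin{pmatrix}
 \sqrt{w_1} & \sqrt{w_2} \\
 -\sqrt{w_2} & \sqrt{w_1}
\end{pmatrix}
\begin{pmatrix} a_1 \\ a_2 \end{pmatrix},
\qquad
w_{\text{parent}} = w_1 + w_2 .
% The weights come from the full-resolution octree, so a truncated bitstream alone
% cannot invert the transform exactly.  The remedy described in the abstract
% (paraphrased) is a per-level linear model
%   a_level  \approx  \alpha \, \tilde{a}_level + \beta
% between the downsampled cloud and the truncated inverse RAHT reconstruction,
% with (\alpha, \beta) transmitted as side information.
```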


Author(s):  
Gregor Selinka ◽  
Raik Stolletz ◽  
Thomas I. Maindl

Many stochastic systems face a time-dependent demand. Especially in stochastic service systems, for example, in call centers, customers may leave the queue if their waiting time exceeds their personal patience. As discussed in the extant literature, it can be useful to use general distributions to model such customer patience. This paper analyzes the time-dependent performance of a multiserver queue with a nonhomogeneous Poisson arrival process with a time-dependent arrival rate, exponentially distributed processing times, and generally distributed time to abandon. Fast and accurate performance approximations are essential for decision support in such queueing systems, but the extant literature lacks appropriate methods for the setting we consider. To approximate time-dependent performance measures for small- and medium-sized systems, we develop a new stationary backlog-carryover (SBC) approach that allows for the analysis of underloaded and overloaded systems. Abandonments are considered in two steps of the algorithm: (i) in the approximation of the utilization as a reduced arrival stream and (ii) in the approximation of waiting-based performance measures with a stationary model for general abandonments. To improve the approximation quality, we discuss an adjustment to the interval lengths. We present a limit result that indicates convergence of the method for stationary parameters. The numerical study compares the approximation quality of different adjustments to the interval length. The new SBC approach is effective for instances with small numbers of time-dependent servers and gamma-distributed abandonment times with different coefficients of variation, as well as for an empirical distribution of the abandonment times obtained from real-world call center data. A discrete-event simulation benchmark confirms that the SBC algorithm approximates the performance of the queueing system with abandonments very well for different parameter configurations. Summary of Contribution: The paper presents a fast and accurate numerical method to approximate the performance measures of a time-dependent queueing system with generally distributed abandonments. The presented stationary backlog-carryover approach with abandonment combines algorithmic ideas with stationary queueing models for generally distributed abandonment times. The reliability of the method is analyzed for transient systems and numerically studied with real-world data.
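
The backlog-carryover idea can be sketched in a few lines. The sketch below is a heavily simplified stand-in (stationary M/M/c per interval, no abandonment model, Erlang-C waiting probability); the paper's actual algorithm additionally handles generally distributed abandonments and adjusted interval lengths, so everything here beyond the carryover loop itself is an assumption for illustration.

```python
import math

def erlang_c(c, a):
    """Erlang-C probability of waiting for M/M/c with offered load a = lambda/mu."""
    if a >= c:                         # overloaded interval: waiting is certain
        return 1.0
    s = sum(a ** k / math.factorial(k) for k in range(c))
    last = a ** c / math.factorial(c) * c / (c - a)
    return last / (s + last)

def sbc_sweep(lams, mu, c, dt=1.0):
    """Stationary backlog-carryover sweep (sketch only): each interval is treated
    as a stationary queue, and work exceeding the service capacity is carried
    over to the next interval as an additional arrival stream."""
    backlog, results = 0.0, []
    for lam in lams:
        lam_mod = lam + backlog / dt                    # backlog as extra arrivals
        capacity = c * mu
        backlog = max(0.0, (lam_mod - capacity) * dt)   # work not served this interval
        served_rate = min(lam_mod, capacity)
        p_wait = erlang_c(c, served_rate / mu)          # waiting-based measure
        results.append((lam_mod, backlog, p_wait))
    return results

for row in sbc_sweep(lams=[2.0, 6.0, 9.0, 3.0], mu=1.0, c=5):
    print(row)
```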


Entropy ◽  
2021 ◽  
Vol 23 (9) ◽  
pp. 1150
Author(s):  
Pawel Tadeusz Kazibudzki

There are numerous priority deriving methods (PDMs) for pairwise-comparison-based (PCB) problems. They are often examined within the Analytic Hierarchy Process (AHP), which applies the Principal Right Eigenvalue Method (PREV) in the process of prioritizing alternatives. It is known that when decision makers (DMs) are consistent in their preferences when evaluating various decision options, all available PDMs result in the same priority vector (PV). However, when the evaluations of DMs are inconsistent and their preferences concerning alternative solutions to a particular problem are not (cardinally) transitive, the outcomes often differ. This research study examines selected PDMs in relation to their ranking credibility, which is assessed by relevant statistical measures. These measures determine the approximation quality of the selected PDMs. The examined estimates refer to the inconsistency of various Pairwise Comparison Matrices (PCMs), i.e., W = (w_ij) with w_ij > 0 for i, j = 1, ..., n, which are obtained during a pairwise comparison simulation process carried out with Wolfram Mathematica. Thus, theoretical considerations are accompanied by Monte Carlo simulations that apply various scenarios for the PCM perturbation process and are designed for hypothetical three-level AHP frameworks. The examination results show the similarities and discrepancies among the examined PDMs from the perspective of their quality, which enriches the state of knowledge about the examined PCB prioritization methodology and points to prospective opportunities for further research.
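
A single run of such a Monte Carlo experiment can be sketched as follows (a Python stand-in for the Mathematica simulations; the perturbation model, the matrix size, and the quality measures are illustrative assumptions, and only two PDMs are compared here):

```python
import numpy as np

rng = np.random.default_rng(1)

def consistent_pcm(w):
    """Perfectly consistent PCM from a priority vector: a_ij = w_i / w_j."""
    return np.outer(w, 1.0 / w)

def perturb(pcm, sigma=0.3):
    """Multiplicative log-normal perturbation of the upper triangle,
    keeping reciprocal symmetry a_ji = 1 / a_ij."""
    n = pcm.shape[0]
    out = np.ones_like(pcm)
    for i in range(n):
        for j in range(i + 1, n):
            out[i, j] = pcm[i, j] * rng.lognormal(0.0, sigma)
            out[j, i] = 1.0 / out[i, j]
    return out

def prev(pcm):
    """Principal Right Eigenvalue Method: normalized principal eigenvector."""
    vals, vecs = np.linalg.eig(pcm)
    v = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return v / v.sum()

def geometric_mean(pcm):
    """Row geometric mean prioritization."""
    g = np.exp(np.log(pcm).mean(axis=1))
    return g / g.sum()

# One simulation run: true priorities -> perturbed judgments -> estimated priorities.
w_true = rng.dirichlet(np.ones(5))
pcm = perturb(consistent_pcm(w_true))
for name, pdm in [("PREV", prev), ("geometric mean", geometric_mean)]:
    w_hat = pdm(pcm)
    mad = np.mean(np.abs(w_hat - w_true))              # one possible quality measure
    rank_ok = (np.argsort(w_hat) == np.argsort(w_true)).all()
    print(name, "MAD:", round(mad, 4), "ranking preserved:", rank_ok)
```

Repeating such runs over many perturbation scenarios and aggregating the quality measures is the essence of the study's comparison of PDMs.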

