rounding errors
Recently Published Documents

TOTAL DOCUMENTS: 175 (five years: 36)
H-INDEX: 17 (five years: 2)

2021 ◽ Vol 2131 (2) ◽ pp. 022046
Author(s): D A Rudikov, A S Ilinykh

Abstract: The precision with which a metal-cutting machine implements its series of adjustment values is one of the most important indicators of its quality and is strictly regulated by industry standards and by the technical conditions for manufacturing and acceptance. Moreover, the permissible error limit is set depending on the denominator of the series used. An essential feature of the precision of the implemented series is that it is determined not by manufacturing errors of the parts but by shortcomings of the kinematic calculation method used. The established modes largely determine the efficiency of processing on metal-cutting machines. If an underestimated mode is set, productivity is reduced accordingly; an overestimated mode reduces tool life and causes losses due to more frequent regrinding and tool changes. The aim of the work is to create a complex of mathematical models for the design kinematic calculation of the main-movement drive of metal-cutting machines, which makes it possible to reduce the error in the implementation of a series of preferred numbers and to increase machining precision. The article presents a mathematical complex for analyzing the components of the total error, which makes it possible to determine and evaluate with high precision the total error of the drive of a metal-cutting machine from its constituent values: errors of the permanent part, errors of the multiplier part, rounding errors of standard numbers, and errors of the electric motor and belt transmission. The presented complex helps to identify the role of the rounding error of preferred numbers in the formation of the total relative error and makes it possible to reduce it, thereby increasing the precision of the step-adjustable drive. The mathematical complex opens a fundamentally new opportunity to create a scientific basis for developing algorithms and programs that compute engineering tables facilitating the selection of the numbers of teeth for multiplier groups and structures and guaranteeing high precision of the implemented series.
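The role of the rounding error of preferred numbers can be illustrated with a short sketch. The following Python snippet is a minimal illustration, not the authors' mathematical complex: it compares an ideal geometric series of spindle speeds with the nearest standard R20 preferred numbers and reports the relative rounding error of each step. The series denominator, speed range, step count and the use of the R20 series are assumptions made for the example.

```python
import math

# Illustrative sketch: rounding error of a geometric speed series replaced by
# the nearest R20 preferred numbers (parameter values are assumptions).
R20 = [1.00, 1.12, 1.25, 1.40, 1.60, 1.80, 2.00, 2.24, 2.50, 2.80,
       3.15, 3.55, 4.00, 4.50, 5.00, 5.60, 6.30, 7.10, 8.00, 9.00]

def nearest_preferred(value):
    """Round a positive value to the nearest R20 preferred number."""
    exponent = math.floor(math.log10(value))
    mantissa = value / 10 ** exponent
    return min(R20, key=lambda m: abs(m - mantissa)) * 10 ** exponent

phi = 1.26                      # assumed series denominator
n_min, steps = 31.5, 12         # assumed minimum speed (rpm) and number of steps
for k in range(steps):
    ideal = n_min * phi ** k
    standard = nearest_preferred(ideal)
    rel_error = (standard - ideal) / ideal
    print(f"step {k:2d}: ideal {ideal:8.2f} rpm, standard {standard:8.2f} rpm, "
          f"rounding error {rel_error:+.2%}")
```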


2021 ◽ Vol 11 (1)
Author(s): Sheng Zeng, Guohua Geng, Hongjuan Gao, Mingquan Zhou

Abstract: Geometry images parameterise a mesh over a square domain and store the information in a single chart. A one-to-one correspondence between the 2D plane and the 3D model is convenient for processing 3D models. However, in existing geometry images the parameterised vertices are not all located at the intersections of the gridlines. Thus, errors are unavoidable when a 3D mesh is reconstructed from the chart. In this paper, we propose parameterising the surface onto a novel geometry image that preserves topological neighbourhood information at integer coordinate points of a 2D grid and ensures that the shape of the reconstructed 3D mesh is not changed by the supplemented image data. We find a collection of edges that opens the mesh into a simply connected surface with a single boundary. A point distribution with approximately blue-noise spectral characteristics is computed by capacity-constrained Delaunay triangulation without retriangulation. We move the vertices to the constrained grid intersections, adjust the degenerate triangles on the regular grid, and fill the blank parts by performing a local affine transformation between each triangle in the mesh and the image. Unlike other geometry images, the proposed method results in no error in the reconstructed surface model when floating-point data are stored in the image. High reconstruction accuracy is achieved when the xyz positions are stored in a 16-bit format in each image channel, because only rounding errors, and no sampling errors, exist in the topology-preserving geometry images. The method establishes a one-to-one mapping between the 3D surface mesh and the points of the 2D image, no foldovers appear in the 2D triangular mesh, and the topological structure is maintained. This also shows the potential of using 2D image-processing algorithms to process 3D models.
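As a side illustration of the storage claim, the following Python sketch (synthetic vertex data, not the authors' pipeline) quantises xyz coordinates into three 16-bit channels and measures the round-trip error, which is purely a rounding error bounded by half a quantisation step per axis.

```python
import numpy as np

# Minimal sketch: store xyz vertex positions in three 16-bit channels and
# measure the round-trip error, which is pure quantisation/rounding error.
rng = np.random.default_rng(0)
vertices = rng.uniform(-1.0, 1.0, size=(1000, 3))    # hypothetical mesh vertices

lo, hi = vertices.min(axis=0), vertices.max(axis=0)
scale = (2**16 - 1) / (hi - lo)

encoded = np.round((vertices - lo) * scale).astype(np.uint16)   # 16-bit channels
decoded = encoded.astype(np.float64) / scale + lo               # reconstruction

max_err = np.abs(decoded - vertices).max()
print(f"maximum per-coordinate reconstruction error: {max_err:.2e}")
```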


2021 ◽ Vol 5 (4) ◽ pp. 214
Author(s): Aleksandra Tutueva, Denis Butusov

Dynamical degradation is a known problem in the computer simulation of chaotic systems. Data type limitations, sampling, and rounding errors give rise to periodic behavior. In applications of chaotic systems to secure communication and cryptography, such effects can degrade data security and correct operation. In this study, we considered a possible solution to this problem by using semi-explicit integration. The key idea is to perturb the chaotic trajectory by switching between two integrators, which possess close but still different numerical solutions. Compared with the traditional approach based on the perturbation of the bifurcation parameter, this technique does not significantly change the nonlinear properties of the system. We verify the efficiency of the proposed perturbation method through several numerical experiments using the well-known Rössler oscillator.
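A hedged sketch of the switching idea is given below. It is not the authors' exact scheme: it integrates the classic Rössler oscillator with two semi-implicit Euler variants that differ only in the order in which the state variables are updated, and alternates between them at an assumed switching interval.

```python
# Sketch of the perturbation idea: two integrators with close but different
# numerical solutions, switched periodically (parameters are assumptions).
A, B, C = 0.2, 0.2, 5.7   # classic Rössler parameters
H = 0.01                  # integration step

def step_forward(x, y, z):
    """Semi-implicit Euler, updating x -> y -> z (later updates see new values)."""
    x = x + H * (-y - z)
    y = y + H * (x + A * y)
    z = z + H * (B + z * (x - C))
    return x, y, z

def step_backward(x, y, z):
    """The same scheme with the reverse update order z -> y -> x."""
    z = z + H * (B + z * (x - C))
    y = y + H * (x + A * y)
    x = x + H * (-y - z)
    return x, y, z

state = (0.1, 0.0, 0.0)
switch_every = 100        # assumed switching interval, in steps
for n in range(100_000):
    step = step_forward if (n // switch_every) % 2 == 0 else step_backward
    state = step(*state)
print("final state:", state)
```

Because the two variants agree to first order in the step size, the switching acts as a small trajectory perturbation rather than a change of the underlying dynamics.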


2021 ◽ Vol 20 (5s) ◽ pp. 1-23
Author(s): Robert Rabe, Anastasiia Izycheva, Eva Darulova

Efficient numerical programs are required for proper functioning of many systems. Today’s tools offer a variety of optimizations to generate efficient floating-point implementations that are specific to a program’s input domain. However, sound optimizations are of an “all or nothing” fashion with respect to this input domain—if an optimizer cannot improve a program on the specified input domain, it will conclude that no optimization is possible. In general, though, different parts of the input domain exhibit different rounding errors and thus have different optimization potential. We present the first regime inference technique for sound optimizations that automatically infers an effective subdivision of a program’s input domain such that individual sub-domains can be optimized more aggressively. Our algorithm is general; we have instantiated it with mixed-precision tuning and rewriting optimizations to improve performance and accuracy, respectively. Our evaluation on a standard benchmark set shows that with our inferred regimes, we can, on average, improve performance by 65% and accuracy by 54% with respect to whole-domain optimizations.
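The observation that different parts of the input domain exhibit different rounding errors can be reproduced with a small experiment. The sketch below is purely illustrative and unrelated to the paper's tool: it evaluates an ill-conditioned expression in single and double precision on two hypothetical sub-domains and compares the maximum relative error.

```python
import numpy as np

# Illustration: the same expression has very different rounding errors on
# different parts of its input domain, which is what makes per-regime
# precision choices worthwhile.
def f(x):
    return (1.0 - np.cos(x)) / (x * x)      # ill-conditioned near x = 0

def max_rel_error(xs):
    lo = f(xs.astype(np.float32)).astype(np.float64)   # low-precision evaluation
    hi = f(xs.astype(np.float64))                      # reference evaluation
    return np.max(np.abs((lo - hi) / hi))

near_zero = np.linspace(1e-4, 1e-2, 1000)   # hypothetical sub-domain A
away      = np.linspace(0.5, 2.0, 1000)     # hypothetical sub-domain B
print(f"sub-domain [1e-4, 1e-2]: max relative error {max_rel_error(near_zero):.2e}")
print(f"sub-domain [0.5,  2.0 ]: max relative error {max_rel_error(away):.2e}")
# A regime-aware optimiser could keep float32 on sub-domain B and fall back to
# float64 (or a rewritten expression) on sub-domain A.
```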


Author(s): Valerii Zadiraka, Inna Shvidchenko

Introduction. When solving problems of transcomputational complexity, evaluating the rounding error is relevant, since it can dominate the accuracy estimate of the solution. Ways to reduce it are important, as are the reserves for optimizing the solution algorithms with respect to accuracy. In this case, the rounding rules and calculation modes must be taken into account. The article shows how estimates of the rounding error can be used in modern computer technologies for solving problems of computational and applied mathematics, as well as information security.

The purpose of the article is to draw the attention of specialists in computational and applied mathematics to the need to take the rounding error into account when analyzing the quality of an approximate solution. This is important for mathematical modeling, problems using Big Data, digital signal and image processing, cybersecurity, and many others. The article demonstrates specific estimates of the rounding error for a number of problems: estimating the mathematical expectation, computing the discrete Fourier transform, using multi-digit arithmetic, and using rounding-error estimates in algorithms for computer steganography.

The results. Estimates of the rounding error of the algorithms for the above-mentioned classes of problems are given for different rounding rules and different calculation modes. For computer steganography, the use of rounding-error estimates in computer technologies for hidden information transfer is shown.

Conclusions. Taking the rounding error into account is an important factor in assessing the accuracy of approximate solutions of problems of above-average complexity.

Keywords: rounding error, computer technology, discrete Fourier transform, multi-digit arithmetic, computer steganography.
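For the discrete Fourier transform, one of the problem classes mentioned above, a rounding-error estimate can also be obtained empirically. The following sketch is a minimal illustration under the assumption that a double-precision NumPy FFT serves as the reference; the article's own estimates are analytical and cover multi-digit arithmetic as well.

```python
import numpy as np

# Empirical rounding-error estimate for a single-precision DFT, using the
# double-precision result as the reference (an assumption of this sketch).
N = 2**14
rng = np.random.default_rng(1)
signal = rng.standard_normal(N)

spectrum32 = np.fft.fft(signal.astype(np.float32))
spectrum64 = np.fft.fft(signal.astype(np.float64))

rel_error = np.linalg.norm(spectrum32 - spectrum64) / np.linalg.norm(spectrum64)
print(f"N = {N}, observed relative rounding error: {rel_error:.2e}")
print(f"single-precision epsilon for comparison:  {np.finfo(np.float32).eps:.2e}")
# With multi-digit arithmetic (e.g. mpmath) as the reference, the observed
# error shrinks further as the working precision grows.
```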


Author(s): Aleksandr A. Belov, Valentin S. Khokhlachev

In many applied problems, efficient calculation of quadratures with high accuracy is required. Examples are: calculation of special functions of mathematical physics, calculation of Fourier coefficients of a given function, Fourier and Laplace transforms, numerical solution of integral equations, solution of boundary value problems for partial differential equations in integral form, etc. For grid calculation of quadratures, the trapezoidal rule, the midpoint rule, and Simpson's rule are usually used. Commonly, the error of these methods depends quadratically on the grid step, and a large number of steps is required to obtain good accuracy. However, there are cases when the error of the trapezoidal rule depends on the step not quadratically but exponentially. Such cases include the integral of a periodic function over the full period and the integral over the entire real axis of a function that decreases rapidly enough at infinity. If the integrand has poles of the first order in the complex plane, then the Trefethen-Weideman majorant accuracy estimates are valid for such quadratures. In the present paper, new error estimates for exponentially converging quadratures of periodic functions over the full period are constructed. The integrand may have an arbitrary number of poles of integer order in the complex plane. If the grid is sufficiently detailed, i.e., it resolves the profile of the integrand, then the proposed estimates are not majorant but asymptotically sharp. By extrapolation, i.e., by excluding this error from the numerical quadrature, it is possible to calculate integrals of these classes to rounding-error accuracy already on extremely coarse grids containing only 10 steps.
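The exponential convergence for a periodic integrand is easy to observe numerically. The sketch below is an illustration, not the authors' extrapolation method: it applies the plain trapezoidal rule to f(x) = 1/(2 + cos x) over the full period, for which the exact integral 2π/√3 is known, and prints the error on progressively finer (but still coarse) grids.

```python
import math

# Trapezoidal rule for a periodic integrand over the full period: the error
# decreases geometrically with n rather than quadratically.
exact = 2.0 * math.pi / math.sqrt(3.0)    # exact value of the integral

def trapezoid_periodic(n):
    h = 2.0 * math.pi / n
    # For a periodic integrand the end points coincide, so the rule is a plain sum.
    return h * sum(1.0 / (2.0 + math.cos(k * h)) for k in range(n))

for n in (4, 8, 16, 24, 32):
    approx = trapezoid_periodic(n)
    print(f"n = {n:2d}: error = {abs(approx - exact):.2e}")
# The error falls geometrically with n and reaches rounding-error level
# (around 1e-15) by roughly n = 32 for this integrand.
```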


2021 ◽ Vol 9 (3) ◽ pp. 186-195
Author(s): Alexander Zelensky, Tagir Abdullin, Andrei Alepko

In this paper, we consider the problem of constructing an S-shaped acceleration/deceleration curve in real time with linear-spline interpolation, taking into account given constraints on the contour acceleration, jerk, and feed rate. The input data for the S-curve are defined in the lookahead algorithm and the geometric smoothing module. The choice of a particular acceleration/deceleration strategy depends on the segment length, the allowable feed rates at the segment junction, and the given kinematic constraints. Each trajectory segment can have at most seven time intervals, and rounding them produces inaccuracies when forming the velocity profile. Therefore, to compensate for the rounding errors, the bisection method was applied, which made it possible to remove gaps in the velocity contour. The experimental data obtained confirm the correctness of the chosen approach and its suitability for implementation as part of a CNC system for high-speed machining of complex-shaped surfaces. Keywords: feed-rate planning algorithm, S-shaped acceleration/deceleration, trajectory smoothing, numerical control system, real time, frame preview algorithm. Acknowledgements: The research was carried out with the financial support of the Ministry of Science and Higher Education of the Russian Federation within the framework of the state assignment (project no. FSFS-2020-0031).
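The compensation step can be sketched as follows. The Python snippet below is a simplified illustration, not the authors' implementation: it uses a trapezoidal profile instead of the full seven-interval S-curve, rounds the motion time up to whole interpolation cycles (all parameter values are assumptions), and then applies bisection to the cruise feed rate so that the profile covers exactly the segment length and leaves no gap in the velocity contour.

```python
import math

# Simplified rounding compensation: quantise the motion time to whole
# interpolation cycles, then re-fit the cruise rate by bisection so the
# trapezoidal profile covers exactly the segment length.
T_CYCLE = 0.001          # interpolation period, s (assumed)
A_MAX = 2000.0           # contour acceleration limit, mm/s^2 (assumed)
V_MAX = 150.0            # feed-rate limit, mm/s (assumed)

def plan_segment(length):
    # Minimal time of the symmetric trapezoidal profile.
    if length > V_MAX ** 2 / A_MAX:
        t_min = length / V_MAX + V_MAX / A_MAX
    else:
        t_min = 2.0 * math.sqrt(length / A_MAX)
    # Rounding the duration up to whole cycles perturbs the covered distance.
    t_total = math.ceil(t_min / T_CYCLE) * T_CYCLE

    def covered(v):                      # distance of the trapezoid with peak v
        return v * (t_total - v / A_MAX)

    lo, hi = 0.0, min(V_MAX, A_MAX * t_total / 2.0)
    for _ in range(80):                  # bisection: covered(v) is monotone here
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if covered(mid) < length else (lo, mid)
    v = 0.5 * (lo + hi)
    return v, t_total, covered(v) - length

v, t_total, gap = plan_segment(length=25.0)
print(f"cruise rate {v:.4f} mm/s over {t_total:.3f} s, residual gap {gap:.2e} mm")
```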


2021
Author(s): Theresa Bender, Tim Seidler, Philipp Bengel, Ulrich Sax, Dagmar Krefting

Automatic electrocardiogram (ECG) analysis has been one of the earliest use cases for computer-assisted diagnosis (CAD). Most ECG devices provide some level of automatic ECG analysis. In recent years, Deep Learning (DL) has increasingly been used for this task, with the first models claiming to perform better than human physicians. In this manuscript, a pilot study is conducted to evaluate the added value of such a DL model over the existing built-in analysis with respect to clinical relevance. Twenty-nine 12-lead ECGs were analyzed with a published DL model, and the results were compared to the built-in analysis and the clinical diagnosis. We could not reproduce the results on the test data exactly, presumably due to a different runtime environment. However, the errors were on the order of rounding errors and did not affect the final classification. The excellent performance in detecting left bundle branch block and atrial fibrillation reported in the publication could be reproduced. The DL method and the built-in method performed similarly well on the chosen cases with respect to clinical relevance. While the benefit of the DL method for research can be attested and its use in training can be envisioned, evaluating its added value in clinical practice would require a more comprehensive study with additional and more complex cases.
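The reproducibility check described above can be expressed as a small numerical comparison. The sketch below uses synthetic numbers, not the study's data: it verifies that two runs of a model differ only at rounding-error level and that the differences never change the predicted class.

```python
import numpy as np

# Check that two runs of the same model (e.g. in different runtime
# environments) differ only at rounding-error level and agree on the class.
def compare_runs(probs_a, probs_b, tol=1e-6):
    max_diff = np.max(np.abs(probs_a - probs_b))
    same_class = np.array_equal(probs_a.argmax(axis=1), probs_b.argmax(axis=1))
    return max_diff, max_diff < tol and same_class

rng = np.random.default_rng(42)
run_a = rng.random((29, 5))                        # hypothetical class scores
run_b = run_a + rng.normal(0, 1e-7, run_a.shape)   # simulated runtime discrepancy
diff, reproducible = compare_runs(run_a, run_b)
print(f"largest score difference: {diff:.2e}, classification preserved: {reproducible}")
```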


2021 ◽ Vol 21 (2) ◽ pp. 155-173
Author(s): Stefan Ritter

2021 ◽ Vol 11 (12) ◽ pp. 5474
Author(s): Tuomo Poutanen

This article addresses the process of optimally selecting safety factors and characteristic values for the Eurocodes. Five amendments to the present codes are proposed: (1) The load factors are fixed, γG = γQ, by making the characteristic value of the variable load changeable; this simplifies the codes and lessens the calculation work. (2) Currently, the characteristic value of the variable load is the same for all variable loads. This creates excess safety and material waste for variable loads with low variation. The deficiency can be avoided by applying the same amendment as above. (3) Different materials fit the reliability model with different accuracy; this article explains two options for reducing this difficulty. (4) A method to avoid rounding errors in the safety factors is explained. (5) The current safety factors are usually set by minimizing the reliability indexes with respect to the target, and the resulting codes include a considerable number of over-safe and unsafe design cases, with a variability ratio (high reliability to low) of about 1.4. The three proposed code models match the target β50 = 3.2 with high accuracy, with no unsafe design cases, insignificantly over-safe design cases, and variability ratios of 1.07, 1.03 and 1.04.

