arbitrary precision
Recently Published Documents

TOTAL DOCUMENTS: 160 (last five years: 44)
H-INDEX: 17 (last five years: 2)

Author(s): Siddharth Barman, Nidhi Rathi

This work develops algorithmic results for the classic cake-cutting problem, in which a divisible, heterogeneous resource (modeled as a cake) must be partitioned among agents with distinct preferences. We focus on a standard formulation of cake cutting wherein each agent must receive a contiguous piece of the cake. Although multiple hardness results exist in this setup for finding fair/efficient cake divisions, we show that, if the value densities of the agents satisfy the monotone likelihood ratio property (MLRP), then strong algorithmic results hold for various notions of fairness and economic efficiency. Addressing cake-cutting instances with MLRP, we first develop an algorithm that finds cake divisions (with connected pieces) that are envy-free up to an arbitrary precision. The running time of our algorithm is polynomial in the number of agents and the bit complexity of an underlying Lipschitz constant. We obtain similar positive results for maximizing social, egalitarian, and Nash social welfare. Many distribution families satisfy MLRP; in particular, the property holds if all the value densities belong to any one of the following families: Gaussian (with the same variance), linear, Poisson, or exponential distributions, or linear translations of any log-concave function. Hence, through MLRP, the current work obtains novel cake-cutting algorithms for multiple distribution families.
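
For context (a standard definition stated here for illustration, not quoted from the paper), an ordered pair of densities f, g satisfies MLRP when the likelihood ratio g(x)/f(x) is non-decreasing in x. The short Python sketch below checks this numerically for two equal-variance Gaussians, one of the families listed above; the grid, means, and variance are arbitrary choices for illustration.

    import numpy as np

    def gaussian_pdf(x, mean, sigma):
        # Value density of a Gaussian agent with the given mean and shared sigma.
        return np.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    # Two equal-variance Gaussian value densities on a discretised cake [0, 1].
    x = np.linspace(0.0, 1.0, 1001)
    f = gaussian_pdf(x, mean=0.3, sigma=0.2)
    g = gaussian_pdf(x, mean=0.7, sigma=0.2)

    # MLRP for this ordered pair: the likelihood ratio g/f is non-decreasing in x.
    ratio = g / f
    print("MLRP holds on grid:", bool(np.all(np.diff(ratio) >= -1e-12)))

For equal-variance Gaussians the ratio is an exponential of a linear function of x, so the check prints True on any grid.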


2021
Author(s): Rafael A. F. Carniello, Wington L. Vital, Marcos Eduardo Valle

The universal approximation theorem ensures that any continuous real-valued function defined on a compact subset of Euclidean space can be approximated with arbitrary precision by a single-hidden-layer neural network. In this paper, we show that the universal approximation theorem also holds for tessarine-valued neural networks: any continuous tessarine-valued function can be approximated with arbitrary precision by a single-hidden-layer tessarine-valued neural network with split activation functions in the hidden layer. The paper also presents a simple numerical example that confirms the theoretical result and shows the superior performance of a tessarine-valued neural network over a real-valued model for interpolating a vector-valued function.
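
A tessarine has four real components (with respect to the 1, i, j, k basis), and a split activation function, as the term is commonly used in the hypercomplex neural-network literature, applies a real activation to each component separately. The minimal sketch below only illustrates that idea; the component layout and the choice of tanh are assumptions for illustration, not the paper's exact construction.

    import numpy as np

    def split_activation(t, act=np.tanh):
        # t = (a, b, c, d) represents the tessarine a + b*i + c*j + d*k.
        # 'Split' means the nonlinearity acts componentwise and ignores the
        # hypercomplex multiplication structure.
        return tuple(act(component) for component in t)

    # Example: one tessarine-valued pre-activation passed through a split tanh.
    pre_activation = (0.5, -1.2, 3.0, 0.1)
    print(split_activation(pre_activation))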


2021, Vol 11 (1)
Author(s): Fatema Tuz Zohora, M. Ziaur Rahman, Ngoc Hieu Tran, Lei Xin, Baozhen Shan, ...

Abstract: A promising technique for discovering disease biomarkers is to measure the relative protein abundance in multiple biofluid samples through liquid chromatography with tandem mass spectrometry (LC-MS/MS) based quantitative proteomics. The key step involves detecting each peptide feature in the LC-MS map, along with its charge and intensity. Existing heuristic algorithms suffer from inaccurate parameters and human error. As a solution, we propose PointIso, the first point-cloud-based, arbitrary-precision deep learning network to address this problem. It consists of an attention-based scanning step for segmenting the multi-isotopic pattern of 3D peptide features along with their charge, and a sequence classification step for grouping those isotopes into potential peptide features. PointIso achieves 98% detection of high-quality MS/MS-identified peptide features in a benchmark dataset. Next, the model is adapted to handle the additional 'ion mobility' dimension and achieves 4% higher detection than existing algorithms on the human proteome dataset. Besides contributing to proteomics, our novel segmentation technique should serve the general object detection domain as well.
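
As background for the "point cloud" framing (a hedged sketch, not PointIso's actual preprocessing), an LC-MS map can be flattened into a set of (retention time, m/z, intensity) points, which is the kind of unordered input a point-cloud network consumes. The function and field names below are illustrative only.

    def lcms_map_to_point_cloud(scans):
        # Flatten an LC-MS map into (retention_time, mz, intensity) points.
        # `scans` is assumed to be an iterable of (retention_time, peak_list)
        # pairs, where each peak is an (mz, intensity) tuple.
        cloud = []
        for retention_time, peaks in scans:
            for mz, intensity in peaks:
                cloud.append((retention_time, mz, intensity))
        return cloud

    # Toy example with two scans.
    scans = [
        (12.5, [(445.23, 1.2e5), (445.74, 8.9e4)]),
        (12.6, [(445.23, 1.5e5), (445.74, 1.0e5)]),
    ]
    print(len(lcms_map_to_point_cloud(scans)), "points")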


2021
Author(s): James Garland, David Gregg

Abstract: Low-precision floating-point (FP) can be highly effective for convolutional neural network (CNN) inference. Custom low-precision FP can be implemented in field-programmable gate array (FPGA) and application-specific integrated circuit (ASIC) accelerators, but existing microprocessors do not generally support fast, custom-precision FP. We propose hardware-optimized bitslice-parallel floating-point operators (HOBFLOPS), a generator of efficient, custom-precision, emulated bitslice-parallel software (C/C++) FP arithmetic. We generate custom-precision FP routines, optimized using a hardware synthesis design flow, to create circuits. We provide standard cell libraries matching the bitwise operations on the target microprocessor architecture and a code generator to translate the hardware circuits into bitslice software equivalents. We exploit bitslice parallelism to create a novel, very wide (32–512 element) vectorized CNN convolution for inference. On Arm and Intel processors, the multiply-accumulate (MAC) performance in CNN convolution of HOBFLOPS, Flexfloat, and Berkeley's SoftFP is compared. HOBFLOPS outperforms Flexfloat by up to 10× on Intel AVX512. HOBFLOPS offers arbitrary-precision FP with custom range and precision; e.g., HOBFLOPS9 outperforms Flexfloat 9-bit on Arm Neon by 7×. HOBFLOPS allows researchers to prototype different levels of custom FP precision in the arithmetic of software CNN accelerators. Furthermore, HOBFLOPS fast custom-precision FP CNNs may be valuable in cases where memory bandwidth is limited.
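
To make the bitslice-parallel idea concrete (a generic sketch of the technique, not HOBFLOPS's generated code), bitslicing stores bit i of N independent operands in lane i of a machine word and evaluates a hardware-style circuit with bitwise instructions, so every gate processes all N operands at once. The Python sketch below bit-slices a 4-bit ripple-carry adder over 8 parallel lanes; the lane count and width are arbitrary.

    LANES = 8                    # independent additions processed per bitwise op
    MASK = (1 << LANES) - 1

    def bitslice(values, bits):
        # Transpose: slices[b] holds bit b of every value, one value per lane.
        return [sum(((v >> b) & 1) << lane for lane, v in enumerate(values)) & MASK
                for b in range(bits)]

    def unbitslice(slices, count):
        return [sum(((s >> lane) & 1) << b for b, s in enumerate(slices))
                for lane in range(count)]

    def bitsliced_add(a_slices, b_slices):
        # Ripple-carry adder built from AND/XOR/OR, acting on all lanes at once.
        carry, out = 0, []
        for a, b in zip(a_slices, b_slices):
            out.append(a ^ b ^ carry)
            carry = (a & b) | (carry & (a ^ b))
        return out               # truncated to the input width, as in hardware

    a = [3, 7, 1, 15, 0, 9, 4, 5]
    b = [2, 8, 1, 1, 6, 6, 4, 10]
    result = unbitslice(bitsliced_add(bitslice(a, 4), bitslice(b, 4)), LANES)
    print(result)                # elementwise (a + b) mod 16

The same transposition trick is what lets synthesized FP circuits run as wide SIMD bitwise code on ordinary processors.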


Author(s): Petr Tomášek, Karel Horák, Aditya Aradhye, Branislav Bošanský, Krishnendu Chatterjee

We study the two-player zero-sum extension of the partially observable stochastic shortest-path problem, in which one agent has only partial information about the environment. We formulate this problem as a partially observable stochastic game (POSG): given a set of target states and negative rewards for each transition, the player with imperfect information maximizes the expected undiscounted total reward until a target state is reached, while the second player, who has perfect information, aims for the opposite. We base our formalism on POSGs with one-sided observability (OS-POSGs) and make the following contributions: (1) we introduce a novel heuristic search value iteration algorithm that iteratively solves depth-limited variants of the game, (2) we derive a bound on the depth that guarantees arbitrary precision, (3) we propose a novel upper-bound estimation that allows early termination, and (4) we experimentally evaluate the algorithm on a pursuit-evasion game.
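
To convey only the depth-limited, undiscounted total-reward objective (a drastic single-agent, fully observable simplification sketched under assumptions, not the paper's OS-POSG algorithm), the snippet below runs depth-limited value iteration on a tiny stochastic shortest-path instance; the transition model and pessimistic depth-0 bound are invented for illustration.

    from functools import lru_cache

    # States 0 and 1 are ordinary, state 2 is the absorbing target; every
    # transition earns reward -1 and the agent maximises undiscounted total reward.
    TARGET = 2
    TRANSITIONS = {   # state -> action -> {next_state: probability}
        0: {"advance": {1: 0.8, 0: 0.2}, "jump": {TARGET: 0.3, 0: 0.7}},
        1: {"advance": {TARGET: 1.0},    "jump": {TARGET: 0.5, 1: 0.5}},
    }
    STEP_REWARD = -1.0
    PESSIMISTIC = -50.0   # crude lower bound used when the depth budget runs out

    @lru_cache(maxsize=None)
    def depth_limited_value(state, depth):
        # Best expected total reward reachable from `state` within `depth` steps.
        if state == TARGET:
            return 0.0
        if depth == 0:
            return PESSIMISTIC
        return max(
            sum(p * (STEP_REWARD + depth_limited_value(nxt, depth - 1))
                for nxt, p in dist.items())
            for dist in TRANSITIONS[state].values()
        )

    # Deeper limits converge toward the true value, echoing the depth-bound idea.
    for d in (1, 2, 5, 10, 20):
        print(d, round(depth_limited_value(0, d), 4))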


2021, Vol 0 (0)
Author(s): Matthias Kirchhart, Donat Weniger

Abstract: We present simplified formulæ for the analytic integration of the Newton potential of polynomials over boxes in two- and three-dimensional space. These are implemented in an easy-to-use C++ library, also documented here, that allows computations in arbitrary-precision arithmetic. We describe how these results can be combined with fast multipole methods to evaluate the Newton potential of more general, non-polynomial densities.
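
As a loose numerical companion (a sketch under assumptions, not the paper's C++ library or its analytic formulæ), the two-dimensional Newton potential of a density over a box is an integral of the density against the logarithmic kernel log|x - y| (sign and scaling conventions vary). Python's mpmath package can evaluate such an integral in arbitrary precision for simple cases, here over the unit box at an exterior point with a constant and a polynomial density.

    from mpmath import mp, mpf, quad, log, sqrt

    mp.dps = 25   # work with 25 significant decimal digits

    def newton_potential_2d(x, density=lambda y1, y2: 1):
        # Logarithmic Newton potential of `density` over [0,1]^2 at point x.
        integrand = lambda y1, y2: density(y1, y2) * log(
            sqrt((x[0] - y1) ** 2 + (x[1] - y2) ** 2))
        return quad(integrand, [0, 1], [0, 1])

    x = (mpf(3), mpf(2))                                        # exterior point
    print(newton_potential_2d(x))                               # constant density 1
    print(newton_potential_2d(x, lambda y1, y2: y1 * y2 ** 2))  # polynomial density

Numerical quadrature works well only away from the box; the analytic formulæ in the paper are what make evaluation robust everywhere.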


2021, pp. 9-19
Author(s): Nikita Zelenchuk, Ekaterina Pristavka, Aleksandr Maliavko, ...

The implementation of the new multi-paradigm (functional-imperative) programming language El, developed at the Department of Computer Science of Novosibirsk State Technical University, as a compiler requires solving a number of complex problems. The current version of the compiler implements only part of the language's functionality and generates far-from-optimal executable code. In this paper, we consider the problem of efficiently compiling an El program, taking into account the need to implement the language's new high-level data structures (two-sided lists, vectors with special forms of access, and a number of others) and control structures, which make it possible to define cyclic and branching computational processes uniformly, as well as the language's mechanism for explicitly controlling the mutability of variables. We briefly consider the tasks of improving and developing a compiler organized according to the classical multi-platform scheme, in which the front-end (lexical, syntactic, and semantic analyzers) converts the translated program into pseudocode of a single format, and the LLVM compiler infrastructure serves as a back-end that turns this pseudocode into executable code for different platforms. Execution of all operations on elements of high-level data structures (lists, tuples, vectors), as well as on arbitrary-precision numbers, has been moved to the runtime support library and, accordingly, can be deeply optimized. For this structure, we formulate ways of developing and improving the compiler by deeply reworking and optimizing the chain of transformations that the front-end applies to the translated program. At the initial stage, it is planned to implement the new compiler for two platforms: Linux and Windows.


2021, Vol 15
Author(s): James Paul Turner, Thomas Nowotny

Motivated by the challenge of investigating the reproducibility of spiking neural network simulations, we have developed the Arpra library: an open source C library for arbitrary precision range analysis based on the mixed Interval Arithmetic (IA)/Affine Arithmetic (AA) method. Arpra builds on this method by implementing a novel mixed trimmed IA/AA, in which the error terms of AA ranges are minimised using information from IA ranges. Overhead rounding error is minimised by computing intermediate values as extended precision variables using the MPFR library. This optimisation is most useful in cases where the ratio of overhead error to range width is high. Three novel affine term reduction strategies improve memory efficiency by merging affine terms of lesser significance. We also investigate the viability of using mixed trimmed IA/AA and other AA methods for studying reproducibility in unstable spiking neural network simulations.
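
For readers unfamiliar with the underlying representations (a minimal sketch of the general IA/AA idea under assumptions, not Arpra's actual API or its trimming rules), an affine form represents a range as x0 + sum(xi * eps_i) with each eps_i in [-1, 1]; converting it to an interval, and intersecting that with an independently computed interval bound, conveys the flavour of mixing the two methods.

    class AffineForm:
        # Toy affine form: centre + sum(coef_i * eps_i), eps_i in [-1, 1].
        def __init__(self, centre, terms=None):
            self.centre = centre
            self.terms = dict(terms or {})   # noise symbol id -> coefficient

        def __add__(self, other):
            terms = dict(self.terms)
            for sym, coef in other.terms.items():
                terms[sym] = terms.get(sym, 0.0) + coef
            return AffineForm(self.centre + other.centre, terms)

        def to_interval(self):
            radius = sum(abs(c) for c in self.terms.values())
            return (self.centre - radius, self.centre + radius)

    def intersect(iv_a, iv_b):
        # Mixed IA/AA flavour: keep only the part of the AA-derived interval
        # that an independently computed IA bound also allows.
        return (max(iv_a[0], iv_b[0]), min(iv_a[1], iv_b[1]))

    x = AffineForm(1.0, {1: 0.1})             # x in [0.9, 1.1]
    y = AffineForm(2.0, {1: -0.1, 2: 0.05})   # y correlated with x via eps_1
    s = x + y                                 # shared eps_1 terms cancel
    print(s.to_interval())                    # (2.95, 3.05): tighter than plain IA
    print(intersect(s.to_interval(), (2.9, 3.2)))

The cancellation of shared noise terms is what lets AA track correlations that plain interval arithmetic loses; Arpra's trimmed variant and its MPFR-backed extended-precision intermediates go well beyond this toy.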

