Linear time complexity GF(256) RaptorQ implementation on GPU

Author(s):  
Sunwoong Joo
Keyword(s):  


Author(s):  
Nirmal K. Nair ◽  
James H. Oliver

Abstract: An efficient algorithm is presented to determine the blank shape necessary to manufacture a surface by press forming. The technique is independent of material properties and instead uses surface geometry and an area conservation constraint to generate a geometrically feasible blank shape. The algorithm is formulated as an approximate geometric interpretation of the reversal of the forming process. The primary applications for this technique are in preliminary surface design, assessment of manufacturability, and location of binder wrap. Since the algorithm exhibits linear time complexity, it is amenable to implementation as an interactive design aid. The algorithm is applied to two example surfaces and the results are discussed.
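As a minimal sketch of the area-conservation idea only (not the authors' reverse-forming algorithm), one can compute the area of a triangulated formed surface and uniformly rescale a candidate flat blank outline so the two areas match; the mesh layout and helper names below are assumptions for illustration.

```python
import numpy as np

def triangulated_area(vertices, faces):
    """Total area of a triangle mesh: vertices (n, 3), faces (m, 3) index array."""
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    return 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1).sum()

def scale_blank_to_conserve_area(blank_2d, target_area):
    """Uniformly scale a planar blank outline (k, 2) about its centroid so that
    its enclosed area (shoelace formula) equals the formed-surface area."""
    x, y = blank_2d[:, 0], blank_2d[:, 1]
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    s = np.sqrt(target_area / area)
    centroid = blank_2d.mean(axis=0)
    return centroid + s * (blank_2d - centroid)
```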


Author(s):  
Mikhail Krechetov ◽  
Jakub Marecek ◽  
Yury Maximov ◽  
Martin Takac

Low-rank methods for semi-definite programming (SDP) have gained considerable interest recently, especially in machine learning applications. Their analysis often involves determinant-based or Schatten-norm penalties, which are difficult to implement in practice because of their high computational cost. In this paper, we propose Entropy-Penalized Semi-Definite Programming (EP-SDP), which provides a unified framework for a broad class of penalty functions used in practice to promote a low-rank solution. We show that EP-SDP problems admit an efficient numerical algorithm whose gradient computation has (almost) linear time complexity; this makes it useful for many machine learning and optimization problems. We illustrate the practical efficiency of our approach on several combinatorial optimization and machine learning problems.
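As a rough illustration of how an entropy-type penalty promotes low rank in a Burer-Monteiro-style factorization $X = VV^T$ (a sketch under assumptions, not the EP-SDP algorithm itself), one can penalize the entropy of the normalized eigenvalue spectrum of $X$; low entropy means the spectral mass sits on few eigenvalues, i.e. low effective rank.

```python
import numpy as np

def entropy_penalty(V, eps=1e-12):
    """Entropy of the normalized spectrum of X = V V^T.
    The eigenvalues of V V^T are the squared singular values of V."""
    s = np.linalg.svd(V, compute_uv=False) ** 2
    p = s / (s.sum() + eps)
    return -(p * np.log(p + eps)).sum()

def penalized_objective(C, V, mu):
    """<C, V V^T> plus an entropy penalty weighted by mu (minimization form)."""
    return np.trace(C @ V @ V.T) + mu * entropy_penalty(V)
```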


2020 ◽  
Vol 37 (06) ◽  
pp. 2050034
Author(s):  
Ali Reza Sepasian ◽  
Javad Tayyebi

This paper studies two types of reverse 1-center problems under a uniform linear cost function in which edge lengths are allowed to be reduced. In the first type, the aim is to guarantee, at minimum cost, that the objective value is bounded by a prescribed fixed value [Formula: see text]. The aim of the second is to improve the objective value as much as possible within a given budget. An algorithm based on dynamic programming is proposed to solve the first problem in linear time. This algorithm is then applied as a subroutine to design an algorithm that solves the second type in [Formula: see text] time, in which [Formula: see text] is a fixed number depending on the problem parameters. Under the similarity assumption, this algorithm has better complexity than the Nguyen algorithm (2013), which has quadratic time complexity. Some numerical experiments are conducted to validate this fact in practice.
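The relationship between the two problem types can be sketched generically: given a routine for the first type ("reach a target radius at minimum cost"), the budget-constrained type can be answered by searching over the target radius for the smallest value whose minimum cost fits the budget. The sketch below shows only this outer search; min_cost_for_radius is a hypothetical stand-in for the paper's linear-time dynamic program, and the paper's own combination of the two steps need not be a binary search.

```python
def best_radius_within_budget(min_cost_for_radius, budget, lo, hi, tol=1e-6):
    """Binary search over the target radius r: find the smallest r in [lo, hi]
    whose minimum reduction cost (first-type subproblem) does not exceed the budget.
    Assumes min_cost_for_radius(r) is non-increasing in r."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if min_cost_for_radius(mid) <= budget:
            hi = mid          # feasible: try a smaller radius
        else:
            lo = mid          # infeasible: relax the radius
    return hi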


2020 ◽  
Vol 34 (05) ◽  
pp. 8319-8326
Author(s):  
Zuchao Li ◽  
Hai Zhao ◽  
Kevin Parnow

Most syntactic dependency parsing models fall into one of two categories: transition-based and graph-based models. The former enjoy high inference efficiency with linear time complexity, but they rely on stacking or re-ranking partially built parse trees to form a complete parse tree and suffer from slower training due to the need for dynamic oracle training. The latter, graph-based models, may boast better performance but are unfortunately marred by polynomial-time inference. In this paper, we propose a novel parsing order objective, resulting in a novel dependency parsing model capable of both global (sentence-scope) feature extraction, as in graph-based models, and linear time inference, as in transition-based models. The proposed global greedy parser uses only two arc-building actions, left and right arcs, for projective parsing. When equipped with two extra non-projective arc-building actions, the proposed parser also smoothly supports non-projective parsing. Using multiple benchmark treebanks, including the Penn Treebank (PTB), the CoNLL-X treebanks, and the Universal Dependency Treebanks, we evaluate our parser and demonstrate that it achieves good performance with faster training and decoding.
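A toy sketch of arc-by-arc greedy decoding with only left- and right-arc attachments may help fix ideas. This is not the authors' decoder: the scoring function is assumed, projectivity and acyclicity checks are omitted, and this brute-force version is quadratic per step rather than linear time.

```python
def greedy_decode(score, n):
    """Greedy arc-by-arc decoding sketch for a sentence of n words (1..n, 0 = ROOT).
    score(h, d) -> float is an assumed model score for attaching dependent d to head h."""
    heads = {d: None for d in range(1, n + 1)}
    while any(h is None for h in heads.values()):
        best = None
        for d, h_cur in heads.items():
            if h_cur is not None:
                continue
            for h in range(0, n + 1):
                if h == d:
                    continue
                # "right arc" if h < d (head precedes dependent), "left arc" if h > d
                cand = (score(h, d), h, d)
                if best is None or cand > best:
                    best = cand
        _, h, d = best          # attach the single highest-scoring remaining arc
        heads[d] = h
    return heads
```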


Author(s):  
Ercan Canhasi

Text modeling and sentence selection are the fundamental steps of a typical extractive document summarization algorithm. The common text modeling method connects a pair of sentences based on their similarity. Even though it can effectively represent the sentence similarity graph of the given document(s), its big drawback is a large time complexity of $O(n^2)$, where n represents the number of sentences. The quadratic time complexity makes it impractical for large documents. In this paper we propose fast approximation algorithms for text modeling and sentence selection. Our text modeling algorithm reduces the time complexity to near-linear time by rapidly finding the most similar sentences to form the sentence similarity graph. In doing so we utilize Locality-Sensitive Hashing, a fast algorithm for approximate nearest neighbor search. For the sentence selection step we propose a simple memory-access-efficient node ranking method based on the idea of sequentially scanning only the neighborhood arrays. Experimentally, we show that sacrificing a rather small percentage of recall and precision in the quality of the produced summary reduces the quadratic time complexity to sub-linear. We see great potential for the proposed method in text summarization on mobile devices and in big text data summarization for the Internet of Things in the cloud. In our experiments, besides evaluating the presented method on the standard general and query-focused multi-document summarization tasks, we also tested it on a few alternative summarization tasks, including general and query-focused, timeline, and comparative summarization.
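A compact sketch of the two ingredients, assuming precomputed sentence vectors (e.g., TF-IDF rows): random-hyperplane hashing restricts similarity computations to sentences that share a bucket, and a simple ranking is obtained by one sequential scan over each node's neighborhood array. The number of hash bits, the threshold, and the scoring rule below are illustrative choices, not the paper's implementation.

```python
import numpy as np
from collections import defaultdict

def build_similarity_graph(X, n_bits=16, threshold=0.3, seed=0):
    """X: (n_sentences, dim) sentence vectors. Hash each sentence with random
    hyperplanes; only sentences falling in the same bucket become candidate neighbors."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((n_bits, X.shape[1]))
    codes = (X @ planes.T > 0).astype(np.uint8)
    buckets = defaultdict(list)
    for i, code in enumerate(codes):
        buckets[code.tobytes()].append(i)

    norms = np.linalg.norm(X, axis=1) + 1e-12
    neighbors = [[] for _ in range(X.shape[0])]   # per-sentence neighborhood arrays
    for members in buckets.values():
        for a, i in enumerate(members):
            for j in members[a + 1:]:
                sim = float(X[i] @ X[j]) / (norms[i] * norms[j])
                if sim >= threshold:
                    neighbors[i].append((j, sim))
                    neighbors[j].append((i, sim))
    return neighbors

def rank_sentences(neighbors):
    """One sequential scan over the neighborhood arrays: score = sum of edge weights."""
    return sorted(range(len(neighbors)),
                  key=lambda i: sum(w for _, w in neighbors[i]),
                  reverse=True)
```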


2009 ◽  
Vol 2009 ◽  
pp. 1-7 ◽  
Author(s):  
Hanli Zhao ◽  
Xiaogang Jin ◽  
Jianbing Shen ◽  
Shufang Lu

Mouse picking is the most commonly used intuitive operation for interacting with 3D scenes in a variety of 3D graphics applications. High performance for such an operation is necessary in order to provide users with fast responses. This paper proposes a fast and reliable mouse picking algorithm using graphics hardware for 3D triangular scenes. Our approach uses a multi-layer rendering algorithm to perform the picking operation in linear time complexity. The object-space ray-triangle intersection test is implemented in a highly parallelized geometry shader. After applying the hardware-supported occlusion queries, only a small number of objects (or sub-objects) are rendered in subsequent layers, which accelerates picking. Experimental results demonstrate the high performance of our novel approach. Due to its simplicity, our algorithm can be easily integrated into existing real-time rendering systems.
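The object-space ray-triangle test that the paper runs inside a geometry shader can be illustrated on the CPU with the standard Moller-Trumbore algorithm (a reference sketch of the intersection math only, not the GPU multi-layer picking pipeline).

```python
import numpy as np

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore: return the distance t along the ray if it hits triangle
    (v0, v1, v2), else None. All inputs are length-3 numpy arrays."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1 @ p
    if abs(det) < eps:                 # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = origin - v0
    u = (s @ p) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = (direction @ q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = (e2 @ q) * inv_det
    return t if t > eps else None
```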


2011 ◽  
Vol 22 (02) ◽  
pp. 395-409 ◽  
Author(s):  
HOLGER PETERSEN

We investigate the efficiency of simulations of storages by several counters. A simulation of a pushdown store is described which is optimal in the sense that reducing the number of counters of a simulator leads to an increase in time complexity. The lower bound also establishes a tight counter hierarchy in exponential time. Then we turn to simulations of a set of counters by a different number of counters. We improve and generalize a known simulation in polynomial time. Greibach has shown that adding s + 1 counters increases the power of machines working in time $n^s$. Using a new family of languages we show here a tight hierarchy result for machines with the same polynomial time-bound. We also prove hierarchies for machines with a fixed number of counters and with growing polynomial time-bounds. For machines with one counter and an additional "store zero" instruction we establish the equivalence of real-time and linear time. If at least two counters are available, the classes of languages accepted in real-time and linear time can be separated.
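The flavor of such simulations is easiest to see in the classic encoding of a pushdown store as a number: a stack over a k-letter alphabet is held in a counter in base k+1, and push/pop become multiplication and division by k+1, which a counter machine carries out with a scratch counter by repeated increments and decrements. The sketch below mimics only this arithmetic and says nothing about the precise time bounds established in the paper.

```python
def push(store, symbol, k):
    """Push symbol in {1..k} onto a stack encoded in base k+1:
    store -> store*(k+1) + symbol (a counter machine realizes the multiplication
    by transferring 'store' to a scratch counter k+1 times, then adding symbol)."""
    return store * (k + 1) + symbol

def pop(store, k):
    """Pop: top symbol = store mod (k+1), remaining stack = store // (k+1),
    again realizable with a scratch counter via repeated subtraction."""
    return store % (k + 1), store // (k + 1)

# tiny usage example over a 3-letter alphabet: push 2, push 3, then pop twice
s = push(push(0, 2, k=3), 3, k=3)
top, s = pop(s, k=3)          # top == 3
top2, s = pop(s, k=3)         # top2 == 2
```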


1993 ◽  
Vol 24 (3) ◽  
pp. 5-15 ◽  
Author(s):  
Kenneth W. Regan
Keyword(s):  
