Running Time Complexity of Printing an Acyclic Automaton

Author(s):  
Franck Guingne
André Kempe
Florent Nicart
2017
Vol 43 (3)
pp. 465-520
Author(s):  
Kilian Gebhardt
Mark-Jan Nederhof
Heiko Vogler

We explore the concept of hybrid grammars, which formalize and generalize a range of existing frameworks for dealing with discontinuous syntactic structures. Covered are both discontinuous phrase structures and non-projective dependency structures. Technically, hybrid grammars are related to synchronous grammars, where one grammar component generates linear structures and another generates hierarchical structures. By coupling lexical elements of both components together, discontinuous structures result. Several types of hybrid grammars are characterized. We also discuss grammar induction from treebanks. The main advantage over existing frameworks is the ability of hybrid grammars to separate discontinuity of the desired structures from time complexity of parsing. This permits exploration of a large variety of parsing algorithms for discontinuous structures, with different properties. This is confirmed by the reported experimental results, which show a wide variety of running time, accuracy, and frequency of parse failures.


Author(s):  
Subandijo Subandijo

The efficiency, or running time, of an algorithm is usually expressed as time complexity or space complexity, a function of the input size. It is common to estimate this complexity in the asymptotic sense, i.e., to estimate the complexity function for arbitrarily large inputs. A brute-force measurement is the easiest way to gauge the performance of an algorithm, but it is not recommended because it does not sufficiently explain the algorithm's efficiency. Asymptotic estimates are used because different implementations of the same algorithm may differ in efficiency. Big-O notation is used to express these estimates.
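The growth rate that Big-O captures can be seen by counting basic operations rather than measuring wall-clock time; the following minimal Python sketch (function names and data are illustrative, not from the article) contrasts an O(n) linear scan with an O(log n) binary search on sorted input.

```python
# Illustrative sketch: counting basic operations to compare O(n) linear
# search with O(log n) binary search on sorted data.

def linear_search(items, target):
    """Scan every element; worst case grows linearly with len(items)."""
    steps = 0
    for value in items:
        steps += 1
        if value == target:
            break
    return steps

def binary_search(items, target):
    """Halve the sorted range each step; worst case grows logarithmically."""
    steps, lo, hi = 0, 0, len(items) - 1
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            break
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

for n in (1_000, 1_000_000):
    data = list(range(n))
    print(n, linear_search(data, n - 1), binary_search(data, n - 1))
```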


2001
Vol 8 (8)
Author(s):  
Ulrik Frendrup
Jesper Nyholm Jensen

This paper deals with algorithmic checking of open bisimilarity in the pi-calculus. Most bisimulation checking algorithms are based on the partition refinement approach. Unfortunately, the definition of open bisimulation does not permit us to use a partition refinement approach for open bisimulation checking directly, but in the paper 'A Partition Refinement Algorithm for the pi-Calculus' Marco Pistore and Davide Sangiorgi present an iterative method that makes it possible to check for open bisimilarity using partition refinement. We have implemented the algorithm presented by Marco Pistore and Davide Sangiorgi. Furthermore, we have optimized this algorithm and implemented the optimized version. The time complexity of the optimized algorithm is the same as that of the first algorithm, but performance tests have shown that in many cases its running time is shorter. Our implementation of the optimized open bisimulation checking algorithm and a user interface have been integrated in a system called the OBC Workbench. The source code and a manual for it are available from http://www.cs.auc.dk/research/FS/ny/PR-pi/.
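For intuition, partition refinement repeatedly splits blocks of states until states in the same block can no longer be distinguished by their transitions. The sketch below shows the idea for plain strong bisimulation on a finite labelled transition system; it is illustrative only and far simpler than the open-bisimulation machinery of the pi-calculus and the OBC Workbench.

```python
# Minimal partition refinement sketch for strong bisimulation on a finite
# labelled transition system (illustrative, not the pi-calculus setting).

def bisimulation_classes(states, transitions):
    """transitions: dict mapping (state, label) -> set of successor states."""
    labels = {label for (_, label) in transitions}
    blocks = [set(states)]            # start with one block of all states
    changed = True
    while changed:
        changed = False
        block_id = {s: i for i, b in enumerate(blocks) for s in b}
        new_blocks = []
        for block in blocks:
            groups = {}
            for s in block:
                # Signature: for each label, which blocks are reachable.
                sig = tuple(
                    frozenset(block_id[t]
                              for t in transitions.get((s, label), set()))
                    for label in sorted(labels)
                )
                groups.setdefault(sig, set()).add(s)
            if len(groups) > 1:
                changed = True        # the block was split; iterate again
            new_blocks.extend(groups.values())
        blocks = new_blocks
    return blocks

# Example: p and q are bisimilar, r is not.
trans = {("p", "a"): {"p"}, ("q", "a"): {"q"}, ("r", "b"): {"r"}}
print(bisimulation_classes(["p", "q", "r"], trans))
```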


2015
Vol 12 (1)
pp. 45-61
Author(s):  
Chao Zhao
Huiqiang Wang
Junyu Lin
Hongwu Lv
Yushu Zhang

Analyzing attack graphs can provide network security hardening strategies for administrators. To address the high time complexity and costly hardening strategies of previous methods, a method for generating low-cost network security hardening strategies based on attack graphs is proposed. The authors' method assesses the risk of attack paths according to path length and the Common Vulnerability Scoring System (CVSS), limits the search scope with a threshold to reduce time complexity, and lowers the cost of hardening strategies by using a heuristic algorithm. The experimental results show that the authors' method has good scalability and significantly reduces the cost of network security hardening strategies within reasonable running time.
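A rough sketch of the path-risk idea is shown below; the way CVSS scores are combined with path length and the threshold value are assumptions made for illustration, not the authors' exact formulas.

```python
# Illustrative path-risk scoring and threshold pruning for attack paths.
# The scoring formula and threshold are assumptions, not the paper's method.

def path_risk(path, cvss):
    """Combine per-node CVSS scores (0-10) and penalize longer paths."""
    exploitability = 1.0
    for node in path:
        exploitability *= cvss[node] / 10.0   # probability-like per-step score
    return exploitability / len(path)          # longer paths rank lower

def risky_paths(paths, cvss, threshold=0.05):
    """Keep only paths whose risk exceeds the threshold, highest first."""
    scored = [(path_risk(p, cvss), p) for p in paths]
    return sorted((s, p) for s, p in scored if s >= threshold)[::-1]

cvss = {"web": 7.5, "db": 9.8, "mail": 5.0}
paths = [["web", "db"], ["mail", "web", "db"]]
print(risky_paths(paths, cvss))
```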


Data sorting has many advantages and applications in software and web development. Search engines use sorting techniques to sort results before they are presented to the user. The words in a dictionary are kept in sorted order so that a word can be found easily. Many sorting algorithms are used across domains to perform operations and obtain the desired output, but some of them take a long time to sort the data, and this large running time can make the operation impractical. Every sorting algorithm uses a different technique to sort the given data. Stooge sort is a sorting algorithm that sorts the data recursively and takes comparatively more time than many other sorting algorithms. Stooge sort works recursively to sort the data elements, whereas the Optimized Stooge sort does not use a recursive process. In this paper, we propose Optimized Stooge sort to reduce the time complexity of Stooge sort. The running time of Optimized Stooge sort is greatly reduced compared to the Stooge sort algorithm. Existing research focuses on reducing the running time of Stooge sort. Our results show that the Optimized Stooge sort is faster than the Stooge sort algorithm.
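For reference, the classical recursive Stooge sort (running time roughly O(n^2.71)) looks as follows; this is the textbook algorithm, not the paper's Optimized Stooge sort.

```python
# Textbook recursive Stooge sort, shown for reference only.

def stooge_sort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if a[lo] > a[hi]:
        a[lo], a[hi] = a[hi], a[lo]          # ensure endpoints are ordered
    if hi - lo + 1 > 2:
        third = (hi - lo + 1) // 3
        stooge_sort(a, lo, hi - third)       # sort first two thirds
        stooge_sort(a, lo + third, hi)       # sort last two thirds
        stooge_sort(a, lo, hi - third)       # sort first two thirds again
    return a

print(stooge_sort([5, 2, 9, 1, 7, 3]))
```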


2017
Author(s):  
MohammadJavad Rezaei Seraji
Seyed Abolfazl Motahari

SureMap is a versatile, error-tolerant, and highly sensitive read mapper that is able to map "difficult" reads, i.e., those requiring many edit operations to be mapped to the reference genome, with acceptable time complexity. Mapping real datasets reveals that many variants unidentifiable by other mappers can be called using SureMap. Moreover, SureMap has very good running time and accuracy when aligning very long and noisy reads, such as PacBio and Nanopore reads, against a reference genome.
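The edit operations mentioned above are substitutions, insertions, and deletions; a standard dynamic-programming edit-distance sketch (generic, not SureMap's actual alignment routine) illustrates what makes a read "difficult".

```python
# Standard edit-distance computation between a read and a reference segment;
# a generic illustration, not SureMap's mapping algorithm.

def edit_distance(read, reference):
    """Minimum number of substitutions, insertions, and deletions."""
    m, n = len(read), len(reference)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if read[i - 1] == reference[j - 1] else 1
            curr[j] = min(prev[j] + 1,        # deletion
                          curr[j - 1] + 1,    # insertion
                          prev[j - 1] + cost) # match / substitution
        prev = curr
    return prev[n]

print(edit_distance("ACGTTAG", "ACGAAAG"))  # a difficult read needs many edits
```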


Algorithms
2021
Vol 14 (12)
pp. 362
Author(s):  
Priyanka Mukhopadhyay

In this work, we give provable sieving algorithms for the Shortest Vector Problem (SVP) and the Closest Vector Problem (CVP) on lattices in the ℓp norm (1 ≤ p ≤ ∞). The running time we obtain is better than that of existing provable sieving algorithms. We give a new linear sieving procedure that works for all ℓp norms (1 ≤ p ≤ ∞). The main idea is to divide the space into hypercubes such that each vector can be mapped efficiently to a sub-region. We achieve a time complexity of 2^(2.751n + o(n)), which is much less than the 2^(3.849n + o(n)) complexity of the previous best algorithm. We also introduce a mixed sieving procedure, where a point is mapped to a hypercube within a ball and then a quadratic sieve is performed within each hypercube. This improves the running time, especially in the ℓ2 norm, where we achieve a time complexity of 2^(2.25n + o(n)), while the List Sieve Birthday algorithm has a running time of 2^(2.465n + o(n)). We adapt our sieving techniques to approximation algorithms for SVP and CVP in the ℓp norm (1 ≤ p ≤ ∞) and show that our algorithm has a running time of 2^(2.001n + o(n)), while previous algorithms have a time complexity of 2^(3.169n + o(n)).
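A minimal sketch of the bucketing step is shown below: each vector is mapped, in time linear in the dimension, to the integer coordinates of the hypercube cell containing it, so candidate reductions only need to look inside (or near) one cell. The cell size and data are illustrative assumptions, not the paper's parameters.

```python
# Illustrative hypercube bucketing: map each vector to its grid cell.

from collections import defaultdict

def cell_of(vector, cell_size):
    """Map a vector to the integer coordinates of its hypercube cell."""
    return tuple(int(x // cell_size) for x in vector)

def bucket_vectors(vectors, cell_size):
    buckets = defaultdict(list)
    for v in vectors:
        buckets[cell_of(v, cell_size)].append(v)
    return buckets

vectors = [(0.2, 1.7), (0.4, 1.9), (3.1, -2.5)]
for cell, members in bucket_vectors(vectors, cell_size=1.0).items():
    print(cell, members)
```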


2017
Author(s):  
Jiaan Dai
Wei Jiang
Fengchao Yu
Weichuan Yu

Motivation: The cross-linking technique coupled with mass spectrometry (MS) is widely used in the analysis of protein structures and protein-protein interactions. In order to identify cross-linked peptides from MS data, we need to consider all pairwise combinations of peptides, which is computationally prohibitive when the sequence database is large. To alleviate this problem, some heuristic screening strategies are used to reduce the number of peptide pairs during the identification. However, heuristic screening criteria may ignore true findings.
Results: We directly tackle the combination challenge without using any screening strategies. With the additive scoring function and the data structure of a double-ended queue, the proposed algorithm reduces the quadratic time complexity of exhaustive searching down to linear time complexity. We implement the algorithm in a tool named Xolik, and the running time of Xolik is validated using databases with different numbers of proteins. Experiments using synthetic and empirical datasets show that Xolik outperforms existing tools in terms of running time and statistical power.
Availability: Source code and binaries of Xolik are freely available at http://bioinformatics.ust.hk/. Contact: [email protected]
Supplementary information: Supplementary data are available at Bioinformatics online.
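The linear-time behaviour comes from the classic monotonic double-ended queue pattern: as a window slides over mass-sorted candidates, each candidate enters and leaves the deque at most once. The sketch below shows the generic pattern with assumed names and scoring, not Xolik's actual code.

```python
# Generic sliding-window maximum with a deque: for each query mass, find the
# best-scoring candidate whose mass lies within a tolerance window.

from collections import deque

def best_in_window(candidates, queries, tol):
    """candidates/queries: lists of (mass, score), both sorted by mass."""
    results = []
    dq = deque()   # candidate indices; their scores kept in decreasing order
    hi = 0
    for q_mass, _ in queries:
        # Admit candidates whose mass is at most q_mass + tol.
        while hi < len(candidates) and candidates[hi][0] <= q_mass + tol:
            while dq and candidates[dq[-1]][1] <= candidates[hi][1]:
                dq.pop()
            dq.append(hi)
            hi += 1
        # Evict candidates whose mass has fallen below q_mass - tol.
        while dq and candidates[dq[0]][0] < q_mass - tol:
            dq.popleft()
        results.append(candidates[dq[0]] if dq else None)
    return results

cands = [(100.0, 3.2), (100.5, 7.1), (101.2, 5.0)]
print(best_in_window(cands, [(100.4, 0), (101.0, 0)], tol=0.5))
```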


Author(s):  
Heru Ismanto
Retantyo Wardoyo

Developing sustainable activities needs good planning so that programs can be effective and have clear objectives. Therefore, a model that supports this analysis is needed to determine priority areas for better development in the future. This research applies the concept of Klassen Typology to analyze PDRB data in Papua Province. Based on the Klassen typology analysis, the areas of Papua Province fall into 4 (four) quadrants. Twenty-nine regencies were analyzed based on PDRB data to investigate which areas can be designated as priority areas for future development. The methods used in this study are C4.5 and K-Nearest Neighbor (KNN). Time complexity serves as a test standard for an algorithm to obtain efficient execution time when it is implemented in a programming language. Asymptotic analysis using the concept of Big-O is one of the techniques commonly used to test the time complexity of an algorithm. The test results of both methods show that the running time of KNN is more stable than that of C4.5, even though the Big-O analysis gives the same complexity for both.
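The per-query cost of brute-force KNN grows linearly with the number of training samples, which is exactly the kind of behaviour a Big-O analysis summarizes; the following minimal sketch (illustrative data and labels, not the study's implementation) shows why.

```python
# Minimal brute-force K-Nearest Neighbor: each query scans all training
# samples, so the per-query cost is O(n * d). Illustrative data only.

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label); query: feature_vector."""
    dists = []
    for features, label in train:
        d = sum((a - b) ** 2 for a, b in zip(features, query))
        dists.append((d, label))
    nearest = sorted(dists)[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)   # majority vote

train = [((1.0, 2.0), "developed"), ((1.2, 1.8), "developed"),
         ((5.0, 0.5), "lagging"), ((4.8, 0.7), "lagging")]
print(knn_predict(train, (1.1, 1.9), k=3))
```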


Author(s):  
Herman Schubert
Jasper J. van de Gronde
Jos B. T. M. Roerdink

Path openings are morphological operators that are used to preserve long, thin, and curved structures in images. They have the ability to adapt to local image structures, which allows them to detect lines that are not perfectly straight. They are applicable in extracting cracks, roads, and similar structures. Although path openings are very efficient to implement for binary images, the greyscale case is more problematic. This study provides an analysis of the main existing greyscale algorithm, and shows that although its time complexity can be quadratic in the number of pixels, this is optimal in terms of the output (if the full opening transform is created). Also, it is shown that under many circumstances the worst-case running time is much less than quadratic. Finally, a new algorithm is provided, which has the same time complexity but is simpler, faster in practice, and more amenable to parallelization.
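In the binary case, a path opening in one orientation can be computed with two dynamic-programming sweeps that measure, for every pixel, the longest admissible path ending at and starting from it. The sketch below is a minimal illustration for roughly horizontal paths (moves east, north-east, south-east); it is not the greyscale algorithm analysed in the study.

```python
# Minimal binary path opening in one orientation: keep pixels lying on a
# horizontal-ish foreground path of at least `length` pixels.

import numpy as np

def binary_path_opening(img, length):
    rows, cols = img.shape
    forward = np.zeros_like(img, dtype=int)   # longest path ending at pixel
    backward = np.zeros_like(img, dtype=int)  # longest path starting at pixel
    for c in range(cols):
        for r in range(rows):
            if img[r, c]:
                best = 0
                for dr in (-1, 0, 1):
                    rr = r + dr
                    if 0 <= rr < rows and c > 0:
                        best = max(best, forward[rr, c - 1])
                forward[r, c] = best + 1
    for c in range(cols - 1, -1, -1):
        for r in range(rows):
            if img[r, c]:
                best = 0
                for dr in (-1, 0, 1):
                    rr = r + dr
                    if 0 <= rr < rows and c + 1 < cols:
                        best = max(best, backward[rr, c + 1])
                backward[r, c] = best + 1
    # A pixel survives if the longest path through it reaches the threshold.
    return (forward + backward - 1) >= length

img = np.array([[0, 1, 1, 1, 0],
                [1, 0, 0, 0, 1],
                [0, 0, 1, 0, 0]], dtype=bool)
print(binary_path_opening(img, 3).astype(int))
```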

