time complexity
Recently Published Documents

TOTAL DOCUMENTS: 1483 (FIVE YEARS: 448)
H-INDEX: 38 (FIVE YEARS: 6)

2022 ◽  
Vol 17 (1) ◽  
Author(s):  
Luiz Augusto G. Silva ◽  
Luis Antonio B. Kowada ◽  
Noraí Romeu Rocco ◽  
Maria Emília M. T. Walter

Abstract Background: sorting by transpositions (SBT) is a classical problem in genome rearrangements. In 2012, SBT was proven to be NP-hard, and the best approximation algorithm, with a 1.375 ratio, was proposed in 2006 by Elias and Hartman (the EH algorithm). Their algorithm employs simplification, a technique used to transform an input permutation π into a simple permutation π̂, presumably easier to handle. The permutation π̂ is obtained by inserting new symbols into π in such a way that the lower bound on the transposition distance of π is preserved in π̂. The simplification is guaranteed to preserve the lower bound, but not the transposition distance itself. A sequence of operations sorting π̂ can be mimicked to sort π. Results and conclusions: First, using an algebraic approach, we propose a new upper bound for the transposition distance, which holds for all S_n. Next, motivated by a problem identified in the EH algorithm, which causes it, in scenarios involving how the input permutation is simplified, to require one extra transposition above the 1.375-approximation ratio, we propose a new approximation algorithm to solve SBT that ensures the 1.375-approximation ratio for all S_n. We implemented our algorithm and EH's. Regarding the implementation of the EH algorithm, two other issues were identified and fixed. We tested both algorithms against all permutations of size n, 2 ≤ n ≤ 12. The results show that the EH algorithm exceeds the approximation ratio of 1.375 for permutations of size greater than 7. The percentage of computed distances that equal the true transposition distance is also compared with others available in the literature. Finally, we investigate the performance of both implementations on longer permutations of maximum length 500.
From the experiments, we conclude that the maximum and average distances computed by our algorithm are slightly better than those computed by the EH algorithm, and the running times of both algorithms are similar, despite the time complexity of our algorithm being higher.
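As an illustration of the operation this abstract discusses (not the authors' algorithm), the following minimal Python sketch applies a transposition, which exchanges two adjacent blocks of a permutation, and computes the exact transposition distance of tiny permutations by brute-force breadth-first search; all names are illustrative:

```python
from collections import deque
from itertools import combinations

def apply_transposition(perm, i, j, k):
    """Exchange the adjacent blocks perm[i:j] and perm[j:k] (0-based, i < j < k)."""
    return perm[:i] + perm[j:k] + perm[i:j] + perm[k:]

def transposition_distance(perm):
    """Exact distance to the identity via BFS; feasible only for very small n."""
    n = len(perm)
    target = tuple(range(1, n + 1))
    start = tuple(perm)
    dist = {start: 0}
    queue = deque([start])
    while queue:
        p = queue.popleft()
        if p == target:
            return dist[p]
        # Try every choice of cut points 0 <= i < j < k <= n.
        for i, j, k in combinations(range(n + 1), 3):
            nxt = tuple(apply_transposition(list(p), i, j, k))
            if nxt not in dist:
                dist[nxt] = dist[p] + 1
                queue.append(nxt)
    return None
```

The brute-force search is exponential and only feasible for very small permutations; the paper's contribution is precisely approximating this distance efficiently.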


Author(s):  
Mehrnoosh Bazrafkan

The numerous mathematical methods used to solve pattern recognition problems may be grouped into two general approaches: the decision-theoretic approach and the syntactic (structural) approach. In this paper, the syntactic pattern recognition method and formal grammars are first described, and then one of the techniques in syntactic pattern recognition, the top-down tabular parser known as Earley's algorithm, is investigated. Earley's tabular parser is one of the methods of context-free grammar parsing for syntactic pattern recognition. Earley's algorithm is implemented with an array data structure, which is the main problem: searching the array during grammar parsing takes a lot of time and wastes a lot of memory. In order to solve these problems, and most importantly the cubic time complexity, this article introduces a new algorithm that reduces the wasted memory to zero by using a linked-list data structure. In addition, with the changes in the implementation and performance of the algorithm, the cubic time complexity is reduced to O(n·R). Key words: syntactic pattern recognition, tabular parser, context-free grammar, time complexity, linked list data structure.
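For context, a minimal Earley recognizer, the classical set-based formulation that the paper sets out to improve, not the authors' linked-list variant, can be sketched in Python as follows; the grammar encoding and function name are illustrative:

```python
def earley_recognize(grammar, start, tokens):
    """Earley recognizer. grammar maps a nonterminal to a list of right-hand-side
    tuples; any symbol not in grammar is treated as a terminal."""
    chart = [set() for _ in range(len(tokens) + 1)]
    # An item is (lhs, rhs, dot position, origin position).
    for rhs in grammar[start]:
        chart[0].add((start, rhs, 0, 0))
    for i in range(len(tokens) + 1):
        added = True
        while added:  # iterate to a fixpoint over chart[i]
            added = False
            for lhs, rhs, dot, origin in list(chart[i]):
                if dot < len(rhs):
                    sym = rhs[dot]
                    if sym in grammar:  # predict
                        for prod in grammar[sym]:
                            item = (sym, prod, 0, i)
                            if item not in chart[i]:
                                chart[i].add(item)
                                added = True
                    elif i < len(tokens) and tokens[i] == sym:  # scan
                        chart[i + 1].add((lhs, rhs, dot + 1, origin))
                else:  # complete
                    for l2, r2, d2, o2 in list(chart[origin]):
                        if d2 < len(r2) and r2[d2] == lhs:
                            item = (l2, r2, d2 + 1, o2)
                            if item not in chart[i]:
                                chart[i].add(item)
                                added = True
    return any(lhs == start and dot == len(rhs) and origin == 0
               for lhs, rhs, dot, origin in chart[len(tokens)])
```

The chart here is a list of sets; the paper's argument is that replacing the underlying array with linked lists avoids the wasted memory and repeated searching of this classical layout.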


2022 ◽  
Vol 2161 (1) ◽  
pp. 012028
Author(s):  
Karamjeet Kaur ◽  
Sudeshna Chakraborty ◽  
Manoj Kumar Gupta

Abstract In bioinformatics, sequence alignment is a very important task for comparing and finding similarity between biological sequences. The Smith-Waterman algorithm is the most widely used for the alignment process, but it has quadratic time complexity. Because the algorithm is sequential, as the number of biological sequences grows it takes too much time to align them. In this paper, a parallel version of the Smith-Waterman algorithm is proposed and implemented for the architecture of the graphics processing unit using CUDA, in which GPU features are combined with the CPU in such a way that the alignment process is three times faster than the sequential implementation of the Smith-Waterman algorithm, accelerating the performance of sequence alignment on the GPU. This paper describes the parallel implementation of sequence alignment on the GPU; this intra-task parallelization strategy reduces the execution time. The results show significant runtime savings on the GPU.
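The quadratic-time dynamic program that the paper parallelizes can be sketched sequentially as follows; this is a minimal scoring-only version, and the scoring parameters are illustrative defaults, not the paper's:

```python
def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
    """Best local-alignment score of strings a and b via the O(len(a)*len(b))
    Smith-Waterman dynamic program (score only, no traceback)."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Local alignment: a cell never goes below zero.
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best
```

Each cell depends only on its top, left, and diagonal neighbors, which is why cells along an anti-diagonal are independent and can be computed in parallel on a GPU.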


2022 ◽  
pp. 37-59
Author(s):  
Arun Kumar G. Hiremath ◽  
Roopa G. M. ◽  
Naveen Kumar K. R.

Proving ownership of land should preferably be done with a legal document that establishes it decisively. Many authorities retain various documents, any of which could be used to assert a claim on the land. To prevent document falsification, the land administration mechanism ought to be robust, accessible at all times, and quick to carry out its operations. However, such solutions are prone to a slew of issues, including data accuracy, security, and dispute resolution. Using blockchain technology in land administration is a considerably more advanced way to solve the issues that plague current land administration systems (LAS). With the adoption of blockchain, the problem of cooperation among a variety of land records is addressed. The proposed model has integrated units that digitally sign land assets using cryptographic algorithms and store them securely on the blockchain, after which the land assets are verified. The proposed approach eliminates deception and improves administration. The results show that the time taken for registering, signing, and verifying land records in a blockchain-based system is acceptable and the system is relatively secure.
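A toy sketch of the sign-store-verify flow the abstract describes, using an HMAC as a stand-in for a real digital-signature scheme and a simple SHA-256 hash chain as a stand-in for a blockchain; the key, field names, and functions are all hypothetical:

```python
import hashlib
import hmac
import json

SECRET = b"registry-demo-key"  # hypothetical registry key, for this sketch only

def sign_record(record):
    """Attach an HMAC-SHA256 'signature' over the record's canonical JSON form."""
    payload = json.dumps(record, sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return dict(record, signature=sig)

def verify_record(signed):
    """Recompute the HMAC over all non-signature fields and compare."""
    payload = json.dumps({k: v for k, v in signed.items() if k != "signature"},
                         sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signed["signature"], expected)

def append_block(chain, signed_record):
    """Append a block whose hash covers the record and the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"record": signed_record, "prev": prev_hash}, sort_keys=True)
    chain.append({"record": signed_record, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})
    return chain
```

Because each block's hash covers the previous block's hash, altering any stored record invalidates every later block, which is the tamper-evidence property the abstract relies on.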


Author(s):  
Atichart Sinsongsuk ◽  
Thapana Boonchoo ◽  
Wanida Putthividhya

Map matching deals with matching GPS coordinates to corresponding points or segments on a road network map. The work has various applications in both vehicle navigation and tracking domains. The traditional rule-based approach to the map matching problem yields good matching results; however, its performance depends on the underlying algorithm and the mathematical/statistical models employed. For example, HMM map matching has O(N²) time complexity, where N is the number of states in the underlying Hidden Markov Model. Map matching techniques with a large order of time complexity are impractical for providing services, especially within time-sensitive applications, due to their slow responsiveness and the critical amount of computing power required to obtain the results. This paper proposes a novel data-driven approach for projecting a GPS trajectory onto a road network. We constructed a supervised-learning classifier using the Multi-Label Classification (MLC) technique and HMM map matching results. Analytically, our approach yields O(N) time complexity, suggesting that it has better running performance when applied to map matching-based applications in which response time is the major concern. In addition, our experimental results indicate a Jaccard similarity index of 0.30 and an overlap coefficient of 0.70.
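As background, the geometric core of rule-based map matching, projecting a GPS fix onto the nearest road segment, can be sketched as follows; this is a naive nearest-segment matcher for illustration, not the paper's MLC classifier:

```python
def project_to_segment(p, a, b):
    """Project point p onto segment ab; return (projected point, distance)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    # Parameter t in [0, 1] along the segment, clamped at the endpoints.
    t = 0.0 if seg_len2 == 0 else max(0.0, min(1.0,
        ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    qx, qy = ax + t * dx, ay + t * dy
    return (qx, qy), ((px - qx) ** 2 + (py - qy) ** 2) ** 0.5

def match_point(p, segments):
    """Return the index of the road segment closest to GPS fix p."""
    return min(range(len(segments)),
               key=lambda i: project_to_segment(p, *segments[i])[1])
```

A per-point nearest-segment rule like this ignores trajectory continuity, which is exactly what HMM map matching adds, at the quadratic cost the paper's classifier avoids.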


Author(s):  
Marco A. López-Medina ◽  
J. Raymundo Marcial-Romero ◽  
Guillermo De Ita Luna ◽  
José A. Hernández

We present a novel algorithm based on combinatorial operations on lists for computing the number of models of two-conjunctive-form Boolean formulas whose restricted graph is represented by a grid graph G_{m,n}. We show that our algorithm is correct and that its time complexity is O(t · 1.618^(t+2) + t · 1.618^(2t+4)), where t = n · m is the total number of vertices in the graph. For this class of formulas, we show that our proposal improves the asymptotic behavior of the time complexity with respect to the current leading algorithm for counting models of two-conjunctive-form formulas of this kind.
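The base 1.618 in the bound is the golden ratio φ. As a toy illustration of why φ-like growth appears in such model counts (this is not the authors' algorithm), the number of independent sets of a path with n vertices satisfies a Fibonacci recurrence and grows as φ^n:

```python
def count_independent_sets_path(n):
    """Count independent sets in a path graph with n vertices.
    State: (sets where the last vertex is excluded, sets where it is included)."""
    excl, incl = 1, 0  # the empty graph has exactly one (empty) independent set
    for _ in range(n):
        # New vertex excluded: previous vertex may be anything.
        # New vertex included: previous vertex must be excluded.
        excl, incl = excl + incl, excl
    return excl + incl
```

The counts 2, 3, 5, 8, 13, … are Fibonacci numbers, so consecutive ratios converge to φ ≈ 1.618, the same base that appears in the paper's bound.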


Author(s):  
Shesagiri Taminana ◽  
◽  
Lalitha Bhaskari ◽  
Arwa Mashat ◽  
Dragan Pamučar ◽  
...  

With the present-day increasing demand for higher performance, application developers have started considering cloud computing and cloud-based data centres as one of the prime options for hosting applications. A number of parallel research outcomes have shown that, to make a data centre secure, the data centre infrastructure must go through an auditing process. During the auditing process, auditors can access the VMs, applications, and data deployed on the virtual machines. The downside is that the data in the VMs can be highly sensitive, and during audits it is highly complex to grant permits based on the requests, which can increase the total time taken to complete the tasks. Hence, selective and adaptive auditing is the need of current research. However, existing outcomes are criticised for higher time complexity and lower accuracy. Thus, this work proposes a predictive method that analyses the characteristics of the VM applications and the characteristics of the auditors, and finally grants access to the virtual machine by building a predictive regression model. The proposed algorithm demonstrates 50% less time complexity than other parallel research, making the cloud-based application development industry a safer and faster place.
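As an illustrative sketch of the kind of predictive model the abstract mentions (the actual regression model and features are not specified there), a plain-Python logistic classifier trained on hypothetical request features might look like:

```python
import math

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Fit a logistic model grant(1)/deny(0) by stochastic gradient descent.
    X: list of feature vectors, y: list of 0/1 labels."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1 / (1 + math.exp(-z))        # predicted grant probability
            g = p - yi                        # gradient of the log loss w.r.t. z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict_grant(w, b, x):
    """Grant access when the predicted probability reaches 0.5."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 / (1 + math.exp(-z)) >= 0.5
```

Here the two features (e.g. auditor trust level and data-sensitivity clearance, both hypothetical) jointly decide the grant, so the model must learn an AND-like rule.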


Algorithms ◽  
2021 ◽  
Vol 14 (12) ◽  
pp. 362
Author(s):  
Priyanka Mukhopadhyay

In this work, we give provable sieving algorithms for the Shortest Vector Problem (SVP) and the Closest Vector Problem (CVP) on lattices in the ℓp norm (1 ≤ p ≤ ∞). The running time we obtain is better than that of existing provable sieving algorithms. We give a new linear sieving procedure that works for all ℓp norms (1 ≤ p ≤ ∞). The main idea is to divide the space into hypercubes such that each vector can be mapped efficiently to a sub-region. We achieve a time complexity of 2^(2.751n+o(n)), which is much less than the 2^(3.849n+o(n)) complexity of the previous best algorithm. We also introduce a mixed sieving procedure, where a point is mapped to a hypercube within a ball and then a quadratic sieve is performed within each hypercube. This improves the running time, especially in the ℓ2 norm, where we achieve a time complexity of 2^(2.25n+o(n)), while the List Sieve Birthday algorithm has a running time of 2^(2.465n+o(n)). We adapt our sieving techniques to approximation algorithms for SVP and CVP in the ℓp norm (1 ≤ p ≤ ∞) and show that our algorithm has a running time of 2^(2.001n+o(n)), while previous algorithms have a time complexity of 2^(3.169n+o(n)).
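The hypercube-bucketing idea, mapping each vector to a sub-region determined by its coordinates so that vectors in the same cell have short differences, can be sketched as follows; this is an illustrative fragment, not the paper's full sieve:

```python
import math
from collections import defaultdict

def hypercube_bucket(v, cell):
    """Map a vector to its hypercube sub-region by flooring each coordinate."""
    return tuple(math.floor(x / cell) for x in v)

def bucket_by_hypercube(vectors, cell):
    """Group vectors by cell. Two vectors in the same cell differ by less than
    `cell` in every coordinate, so their difference is a short vector candidate."""
    buckets = defaultdict(list)
    for v in vectors:
        buckets[hypercube_bucket(v, cell)].append(v)
    return buckets
```

Bucketing replaces the all-pairs comparison of a quadratic sieve with a single pass that routes each vector to its cell, which is the source of the "linear sieving" speed-up the abstract claims.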

