Parallel Disassembly by Onion Peeling

1997 ◽  
Vol 119 (2) ◽  
pp. 267-274 ◽  
Author(s):  
Shiang-Fong Chen ◽  
J. H. Oliver ◽  
Shuo-Yan Chou ◽  
Lin-Lin Chen

For some assembly structures, parallel disassembly of components is necessary in order to reach a particular internal component. Due to the large number of possible combinations, the parallel disassembly problem is not easily solved in a general form. This paper addresses parallel disassembly via geometric reasoning: it introduces a simplified mating graph, develops a supporting data structure, and presents an algorithm that disassembles a structure by an onion-peeling procedure, from the outside in. The algorithm takes max{O(N^3), O(E)} time to find an efficient sequence to reach a particular component, where N is the number of components and E is the number of mating faces. The disassemblability of various combinations of components is determined by traversing the mating graph of the structure and testing the monotonicity of paths in the graph. Separability testing is incorporated to determine whether the query components can be disassembled and moved to infinity without obstruction.
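The peeling loop the abstract describes can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's algorithm: the mating graph is a plain adjacency dict, and `is_separable` is a hypothetical caller-supplied predicate standing in for the paper's monotone-path and separability tests.

```python
def onion_peel(mating_graph, query, is_separable):
    """Remove separable components layer by layer (outside in) until the
    query component itself becomes separable.  Returns the list of layers
    removed, or None if the query can never be exposed.  Components within
    one layer can come off in parallel."""
    remaining = set(mating_graph)
    layers = []
    while not is_separable(query, remaining):
        # every currently separable component forms the next "onion" layer
        layer = sorted(c for c in remaining
                       if c != query and is_separable(c, remaining))
        if not layer:        # nothing removable, query stays blocked
            return None
        remaining -= set(layer)
        layers.append(layer)
    return layers
```

The predicate is deliberately left abstract: in the paper it is answered by graph traversal and path-monotonicity tests, but any geometric separability oracle fits the same loop.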


2018 ◽  
Vol 27 (14) ◽  
pp. 1850218
Author(s):  
Mustafa Aksu ◽  
Ali Karcı

Our new algorithm and data structure, pyramid search (PS) and the skip ring, were developed from the circular linked list and skip list algorithms and data structures. In a circular linked list, all operations are performed on a single circular list; our new data structure instead consists of circular linked lists arranged in layers and linked in a pyramid. The time complexities of the search, insertion and deletion algorithms are O(lg N) in an N-element skip ring. The skip ring (O(lg N)) is therefore more effective in circumstances where circular linked lists (O(N)) would otherwise be used. In PS, developed in this study, priority is determined by search frequency, so the time complexity of a search is almost O(1) for an N-record data set. In this paper, the search algorithms linear search (LS), binary search (BS) and PS were implemented and the results compared; the results demonstrate that the PS algorithm is superior to the BS algorithm.
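The layered search idea behind the skip ring can be illustrated with an ordinary (non-circular) skip list, whose expected O(lg N) search works the same way: descend from the sparsest layer, moving forward while the next key is smaller. This is a generic sketch, not the paper's skip ring or PS implementation.

```python
import random

class Node:
    def __init__(self, key, level):
        self.key = key
        self.forward = [None] * (level + 1)  # one forward pointer per layer

class SkipList:
    MAX_LEVEL = 8

    def __init__(self):
        self.head = Node(None, self.MAX_LEVEL)
        self.level = 0

    def _random_level(self):
        # promote a node to the next layer with probability 1/2
        lvl = 0
        while random.random() < 0.5 and lvl < self.MAX_LEVEL:
            lvl += 1
        return lvl

    def insert(self, key):
        update = [self.head] * (self.MAX_LEVEL + 1)
        node = self.head
        for i in range(self.level, -1, -1):      # record splice points per layer
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
            update[i] = node
        lvl = self._random_level()
        self.level = max(self.level, lvl)
        new = Node(key, lvl)
        for i in range(lvl + 1):
            new.forward[i] = update[i].forward[i]
            update[i].forward[i] = new

    def search(self, key):
        node = self.head
        for i in range(self.level, -1, -1):      # drop a layer when overshooting
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
        node = node.forward[0]
        return node is not None and node.key == key
```

The skip ring closes each of these layers into a circle; the descend-and-scan search logic is the same.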


2011 ◽  
Vol 03 (01n02) ◽  
pp. 167-186 ◽  
Author(s):  
YING JIANG ◽  
DONG MAO ◽  
YUESHENG XU

Sample entropy is a widely used tool for quantifying the complexity of a biological system. Computing sample entropy directly from its definition is computationally expensive. We propose a fast algorithm based on a k-d tree data structure for computing sample entropy. We prove that the time complexity of the proposed algorithm is [Formula: see text] and its space complexity is O(N log N), where N is the length of the input time series and m is the length of its pattern templates. We present a numerical experiment that demonstrates a significant improvement of the proposed algorithm in computing time.
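For reference, the direct, definition-based computation that the k-d tree method accelerates can be sketched as follows, using one common convention for sample entropy (Chebyshev distance, tolerance r, template counts taken over the first N − m windows). The paper's fast algorithm itself is not shown.

```python
import math

def sample_entropy(series, m, r):
    """Direct O(N^2) sample entropy: -ln(A/B), where B counts pairs of
    length-m templates within Chebyshev tolerance r, and A counts the
    same pairs still within tolerance when extended to length m+1."""
    n = len(series)

    def matches(length):
        total = 0
        for i in range(n - m):
            for j in range(i + 1, n - m):
                if max(abs(series[i + k] - series[j + k])
                       for k in range(length)) <= r:
                    total += 1
        return total

    b = matches(m)       # similar pairs at template length m
    a = matches(m + 1)   # still similar when extended by one sample
    return math.inf if a == 0 or b == 0 else -math.log(a / b)
```

The double loop over template pairs is the quadratic bottleneck; the paper replaces it with range counting in a k-d tree over the m-dimensional template points.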


2004 ◽  
Vol 14 (6) ◽  
pp. 669-680
Author(s):  
PETER LJUNGLÖF

This paper implements a simple and elegant version of bottom-up Kilbury chart parsing (Kilbury, 1985; Wirén, 1992). This is one of the many chart parsing variants, all of which are based on the chart data structure. The chart parsing process uses inference rules to add new edges to the chart, and parsing is complete when no further edges can be added. One novel aspect of this implementation is that it does not rely on a global state for the implementation of the chart. This makes the code clean, elegant and declarative, while retaining the same space and time complexity as the standard imperative implementations.
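A minimal agenda-driven, bottom-up chart parser in this spirit can be sketched as follows. It illustrates the general mechanism (passive edges predict rules bottom-up; active and passive edges combine via the fundamental rule), but it uses a mutable chart, which is precisely the global state Ljunglöf's declarative implementation avoids.

```python
def chart_parse(words, grammar, start_symbol):
    """Recognizer: edges are (start, end, label, needed); an edge with an
    empty `needed` tuple is passive.  `grammar` maps a category to a list
    of right-hand-side tuples (terminals are the words themselves)."""
    chart = set()
    # lexical passive edges, one per input word
    agenda = [(i, i + 1, w, ()) for i, w in enumerate(words)]
    while agenda:
        edge = agenda.pop()
        if edge in chart:
            continue
        chart.add(edge)
        i, j, label, needed = edge
        new = []
        if not needed:
            # Kilbury-style bottom-up prediction from a passive edge
            for lhs, rhss in grammar.items():
                for rhs in rhss:
                    if rhs[0] == label:
                        new.append((i, j, lhs, tuple(rhs[1:])))
            # fundamental rule: extend active edges ending where we start
            for (a, b, l2, n2) in list(chart):
                if b == i and n2 and n2[0] == label:
                    new.append((a, j, l2, n2[1:]))
        else:
            # fundamental rule: consume passive edges starting where we end
            for (a, b, l2, n2) in list(chart):
                if a == j and not n2 and needed[0] == l2:
                    new.append((i, b, label, needed[1:]))
        agenda.extend(new)
    return (0, len(words), start_symbol, ()) in chart
```

Because combination is attempted from both the passive and the active side, the result does not depend on the order in which the agenda is processed.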


2020 ◽  
Author(s):  
Ahsan Sanaullah ◽  
Degui Zhi ◽  
Shaojie Zhang

Durbin's PBWT, a scalable data structure for haplotype matching, has been successfully applied to identical by descent (IBD) segment identification and genotype imputation. Once the PBWT of a haplotype panel is constructed, it supports efficient retrieval of all shared long segments among all individuals (long matches) and efficient query between an external haplotype and the panel. However, the standard PBWT is an array-based static data structure and does not support dynamic updates of the panel. Here, we generalize the static PBWT to a dynamic data structure, d-PBWT, in which the reverse prefix sorting at each position is represented by linked lists. We developed efficient algorithms for insertion and deletion of individual haplotypes, and we verified that d-PBWT can support all algorithms of the PBWT. In doing so, we systematically investigated variants of the set maximal match and long match query algorithms: while all of them have average-case time complexity independent of database size, they differ in their worst-case complexity, in whether they run in time linear in the size of the genome, and in their dependence on additional data structures.
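The reverse prefix sorting that d-PBWT maintains dynamically can be illustrated with the static, array-based construction from Durbin's PBWT: the order at each position is derived from the previous one by a stable partition on the current allele. A minimal sketch for a binary haplotype panel:

```python
def pbwt_columns(haplotypes):
    """Positional prefix arrays a_k of the static PBWT: a_k orders the
    haplotype indices by their length-k prefixes read right to left.
    d-PBWT represents each of these orders with a linked list so that
    haplotypes can be inserted and deleted without rebuilding arrays."""
    m = len(haplotypes)
    n = len(haplotypes[0])
    order = list(range(m))            # a_0: original panel order
    orders = [order[:]]
    for k in range(n):
        zeros, ones = [], []
        for idx in order:             # stable partition on the allele at site k
            (zeros if haplotypes[idx][k] == 0 else ones).append(idx)
        order = zeros + ones
        orders.append(order[:])
    return orders
```

Because the partition is stable, haplotypes sharing a long suffix of the current prefix end up adjacent in the order, which is what makes match retrieval a scan over neighboring entries.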


2012 ◽  
Vol 263-266 ◽  
pp. 1398-1401
Author(s):  
Song Feng Lu ◽  
Hua Zhao

Document retrieval is a basic task of search engines and has attracted considerable attention from the pattern matching community. In this paper, we focus on the dynamic version of this problem, in which text insertion and deletion are allowed. Using a generalized suffix array and other data structures, we propose a new index structure. Our scheme achieves better time complexity than the existing ones, at the cost of slightly more space.


2012 ◽  
Vol 23 (02) ◽  
pp. 357-374 ◽  
Author(s):  
PÉTER BURCSI ◽  
FERDINANDO CICALESE ◽  
GABRIELE FICI ◽  
ZSUZSANNA LIPTÁK

The Parikh vector p(s) of a string s over a finite ordered alphabet Σ = {a1, …, aσ} is defined as the vector of multiplicities of the characters, p(s) = (p1, …, pσ), where pi = |{j | sj = ai}|. A Parikh vector q occurs in s if s has a substring t with p(t) = q. The problem of searching for a query q in a text s of length n can be solved simply and worst-case optimally with a sliding window approach in O(n) time. We present two novel algorithms for the case where the text is fixed and many queries arrive over time. The first algorithm only decides whether a given Parikh vector appears in a binary text. It uses a linear size data structure and decides each query in O(1) time. The preprocessing can be done trivially in Θ(n^2) time. The second algorithm finds all occurrences of a given Parikh vector in a text over an arbitrary alphabet of size σ ≥ 2 and has sub-linear expected time complexity. More precisely, we present two variants of the algorithm, both using an O(n) size data structure, each of which can be constructed in O(n) time. The first solution is very simple and easy to implement, and leads to an expected query time of [Formula: see text], where m = ∑i qi is the length of a string with Parikh vector q. The second uses wavelet trees and improves the expected runtime to [Formula: see text], i.e., by a factor of log m.
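The worst-case-optimal sliding window baseline mentioned in the abstract is easy to state in code. A minimal sketch, with the query given as a `Counter` containing no zero entries:

```python
from collections import Counter

def parikh_occurrences(s, q):
    """Return all start positions i such that s[i:i+m] has Parikh vector q,
    where m = sum of q's counts.  One pass over s: O(n) window updates."""
    m = sum(q.values())
    if m == 0 or m > len(s):
        return []
    window = Counter(s[:m])
    hits = []
    for i in range(len(s) - m + 1):
        if i:  # slide the window: drop s[i-1], add s[i+m-1]
            window[s[i - 1]] -= 1
            if window[s[i - 1]] == 0:
                del window[s[i - 1]]     # keep zero counts out of the Counter
            window[s[i + m - 1]] += 1
        if window == q:
            hits.append(i)
    return hits
```

Any substring with Parikh vector q necessarily has length m, so a single fixed-length window suffices; the paper's contribution is beating this linear scan in expectation when the text is fixed and queried repeatedly.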


2020 ◽  
Vol 8 (5) ◽  
pp. 1272-1276

An algorithm is a clear specification of a sequence of instructions which, when followed, provides a solution to a given problem. How an algorithm is written depends on various parameters that determine its performance in terms of computational efficiency and solution quality. This research paper presents different methodologies for writing data structure algorithms and provides their performance analysis with respect to time complexity and space complexity. For the same problem there are typically several algorithms, written using different approaches. All of these approaches are important and have long been an area of focus, yet the question remains the same: "which to use when?" That question is the main motivation for this research. This research provides a detailed study of how algorithms written using different approaches work, and then compares them on parameters such as time complexity and space complexity to reach a conclusion.

