Optimal Space and Time Complexity Analysis on the Lattice of Cuboids Using Galois Connections for Data Warehousing

Author(s):
Soumya Sen, Nabendu Chaki, Agostino Cortesi
2021, pp. 146808742110397
Author(s):
Haotian Chen, Kun Zhang, Kangyao Deng, Yi Cui

Real-time simulation models play an important role in the development of engine control systems. The mean value model (MVM) meets real-time requirements but has limited accuracy. By contrast, a crank-angle resolved model, such as the filling-and-empty model, can simulate engine performance with high accuracy but cannot meet real-time requirements. In this study, time complexity analysis is used to develop a real-time crank-angle resolved model with high accuracy. Program static analysis, a method from computer science, is used to theoretically determine the computational time for a multicylinder engine filling-and-empty (crank-angle resolved) model. A prediction formula for the engine cycle simulation time is then obtained and verified by a program run test. The influence of the time step, program structure, algorithm and hardware on the cycle simulation time is analyzed systematically. The multicylinder phase-shift method and a fast calculation method for the turbocharger characteristics are used to improve the crank-angle resolved filling-and-empty model so that it meets real-time requirements. The improved model meets the real-time requirement, and its real-time factor is improved by a factor of 3.04. A performance simulation for a high-power medium-speed diesel engine shows that the improved model has a maximum error of 5.76% and a real-time factor of 3.93, which meets the requirements for hardware-in-the-loop (HIL) simulation during control system development.
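The abstract does not reproduce the prediction formula itself. As a minimal sketch of the quantities involved (the function names and the simple proportionality assumption are hypothetical, not the paper's actual formula), cycle simulation time and real-time factor can be modeled as:

```python
# Illustrative sketch only: model the cycle simulation time as
# (steps per cycle) x (per-step cost) x (number of cylinders).
def predict_cycle_time(crank_step_deg, per_step_cost_s, n_cylinders):
    """Estimate wall-clock time to simulate one engine cycle."""
    steps_per_cycle = 720.0 / crank_step_deg  # a four-stroke cycle spans 720 deg
    return steps_per_cycle * per_step_cost_s * n_cylinders

# Real-time factor: physical duration of one cycle divided by the
# wall-clock time needed to simulate it (>1 means faster than real time).
def real_time_factor(engine_speed_rpm, sim_time_per_cycle_s):
    cycle_duration_s = 2 * 60.0 / engine_speed_rpm  # two revolutions per cycle
    return cycle_duration_s / sim_time_per_cycle_s
```

Under this toy model, halving the crank-angle step doubles the predicted simulation time, which is one reason the time step appears in the paper's systematic analysis.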


Generally, classification accuracy is very important to gene processing, gene selection and cancer classification, and it is needed to achieve better cancer treatments and improve medical drug assignment. Time complexity analysis further enhances the practical significance of such applications. To answer the research questions in Chapter 1, several case studies were implemented (see Chapters 4 and 5), each of which was essential to support the methodologies discussed in Chapter 3. The study used a colon-cancer dataset comprising 2000 genes. The best search algorithm, the genetic algorithm (GA), showed high performance with efficient time complexity, while decision trees (DTs) and support vector machines (SVMs) made the best classification contribution with respect to both accuracy and time efficiency. It is, however, difficult to conduct a completely fair comparative study, because the existing algorithms and methods were tested by different authors in ways that reflect the effectiveness and power of their own methods.
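As a hedged illustration of GA-based gene selection (a toy sketch, not the study's actual pipeline: the stand-in fitness function below replaces what would in practice be the cross-validated accuracy of a DT or SVM classifier):

```python
import random

# Toy sketch of genetic-algorithm feature (gene) selection.
# A mask of booleans encodes which genes are selected.
def ga_select(n_genes, fitness, pop_size=20, generations=30, rate=0.1, seed=0):
    rng = random.Random(seed)
    pop = [[rng.random() < 0.5 for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_genes)      # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n_genes):             # bit-flip mutation
                if rng.random() < rate:
                    child[i] = not child[i]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Stand-in fitness: reward selecting genes 0-4, penalize large subsets.
target = set(range(5))
fit = lambda mask: sum(1 for i in target if mask[i]) - 0.1 * sum(mask)
best = ga_select(20, fit)
```

The size penalty in the fitness mirrors the usual goal of selecting a small, informative gene subset rather than the whole array.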


2020, Vol 30 (6), pp. 1239-1255
Author(s):
Merlin Carl

Abstract We consider notions of space complexity for infinite time Turing machines (ITTMs) introduced by Winter [21, 22]. We answer several open questions about these notions, among them whether low space complexity implies low time complexity (it does not) and whether one of the equalities P=PSPACE, P$_{+}=$PSPACE$_{+}$ and P$_{++}=$PSPACE$_{++}$ holds for ITTMs (all three are false). We also show various separation results between space complexity classes for ITTMs. This considerably expands our earlier observations on the topic in Section 7.2.2 of Carl (2019, Ordinal Computability: An Introduction to Infinitary Machines), which appear here as Lemma $6$ up to Corollary $9$.


Algorithms, 2021, Vol 14 (3), pp. 97
Author(s):
Antoine Genitrini, Martin Pépin

In the context of combinatorial sampling, the so-called "unranking method" can be seen as a link between a total order over the objects and an effective way to construct an object of given rank. The most classical order used in this context is the lexicographic order, which corresponds to the familiar word ordering in the dictionary. In this article, we propose a comparative study of four algorithms dedicated to the lexicographic unranking of combinations, including three algorithms that were introduced decades ago. We start the paper by introducing our new algorithm, which uses a new computation strategy based on the classical factorial numeral system (or factoradics). We then present the three other algorithms at a high level. For each case, we analyze its average time complexity within a uniform framework and describe its strengths and weaknesses. For about 20 years, such algorithms have been implemented using big-integer rather than bounded-integer arithmetic, which makes the cost of computing some coefficients higher than previously stated. We propose improvements for all implementations that take this fact into account, and we give a detailed complexity analysis, validated experimentally. We then show that, even though the algorithms are based on different strategies, they all perform very similar computations. Finally, we extend our approach to the unranking of other classical combinatorial objects, such as families counted by multinomial coefficients and k-permutations.
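As an illustration of what lexicographic unranking of combinations means, here is a textbook sketch based on binomial-coefficient counting (not necessarily any of the four algorithms compared in the article):

```python
from math import comb

# Unrank a k-combination of {0, ..., n-1} in lexicographic order:
# skip whole blocks of combinations whose first element is too small,
# subtracting each block's size from the remaining rank.
def unrank_combination(n, k, rank):
    """Return the combination with the given 0-based lexicographic rank."""
    result = []
    x = 0                                     # smallest candidate element
    for i in range(k, 0, -1):                 # i elements still to choose
        # comb(n - x - 1, i - 1) combinations start with element x.
        while comb(n - x - 1, i - 1) <= rank:
            rank -= comb(n - x - 1, i - 1)
            x += 1
        result.append(x)
        x += 1
    return result
```

For example, `unrank_combination(5, 3, 6)` yields `[1, 2, 3]`, the seventh 3-combination of {0, ..., 4} in dictionary order. Each `comb` call here operates on big integers, which is exactly the cost issue the article revisits.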


2012, Vol 23 (07), pp. 1451-1464
Author(s):
Amir M. Ben-Amram, Lars Kristiansen

We investigate the decidability of the feasibility problem for imperative programs with bounded loops. A program is called feasible if all values it computes are polynomially bounded in terms of the input. The feasibility problem is representative of a group of related properties, such as polynomial time complexity. It is well known that such properties are undecidable for a Turing-complete programming language. They may be decidable, however, for languages that are not Turing-complete; yet if these languages are expressive enough, they still pose a challenge for analysis. We are interested in tracing the edge of decidability for the feasibility problem and similar problems. In previous work, we proved that such problems are decidable for a language where loops are bounded but indefinite (that is, a loop may exit before completing the given iteration count). In this paper, we consider definite loops. A second language feature that we vary is the kind of assignment statement. With ordinary assignment, we prove undecidability for a very tiny language fragment. We also prove undecidability with lossy assignment (that is, assignments where the modified variable may receive any value bounded by the given expression, even zero). But we prove decidability with max assignment (that is, assignments where the modified variable never decreases its value).
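The feasibility distinction can be illustrated with two definite bounded-loop programs, transcribed to Python for concreteness (an illustrative sketch; the paper's actual core language is a small imperative language, not Python):

```python
# A definite bounded loop whose values are NOT polynomially bounded:
# the loop runs exactly n times, yet x grows to 2**n.
def doubling(n):
    x = 1
    for _ in range(n):   # definite loop: exactly n iterations
        x = x + x        # after the loop, x == 2**n -- not feasible
    return x

# A definite bounded loop whose values ARE polynomially bounded:
def additive(n):
    x = 0
    for _ in range(n):
        x = x + n        # after the loop, x == n*n -- feasible
    return x
```

Both programs have the same loop structure; only the assignments differ, which is why the kind of assignment statement is the decisive language feature in the paper's decidability results.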


2004, Vol 14 (6), pp. 669-680
Author(s):
Peter Ljunglöf

This paper presents a simple and elegant implementation of bottom-up Kilbury chart parsing (Kilbury, 1985; Wirén, 1992), one of the many chart parsing variants, all of which are based on the chart data structure. The chart parsing process uses inference rules to add new edges to the chart, and parsing is complete when no further edges can be added. One novel aspect of this implementation is that it does not rely on a global state for the chart. This makes the code clean, elegant and declarative, while retaining the same space and time complexity as the standard imperative implementations.
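The run-to-fixed-point idea can be sketched with a minimal agenda-driven chart recognizer (an illustrative sketch with a hypothetical toy grammar; these are not Kilbury's exact inference rules):

```python
# Edges are (start, end, category) triples; binary rules combine two
# adjacent edges into a new one. New edges go on an agenda until no
# further edges can be added -- the fixed point the abstract describes.
def chart_recognize(words, lexicon, rules, start_symbol="S"):
    """lexicon: word -> set of categories; rules: (B, C) -> A."""
    chart = {(i, i + 1, c) for i, w in enumerate(words) for c in lexicon[w]}
    agenda = list(chart)
    while agenda:
        (i, j, b) = agenda.pop()
        # Combine with an adjacent edge to the right: [i,j,B] + [j,k,C] => [i,k,A]
        for (j2, k, c) in list(chart):
            if j2 == j and (b, c) in rules:
                edge = (i, k, rules[(b, c)])
                if edge not in chart:
                    chart.add(edge)
                    agenda.append(edge)
        # Combine with an adjacent edge to the left: [h,i,A'] + [i,j,B] => [h,j,A]
        for (h, i2, a) in list(chart):
            if i2 == i and (a, b) in rules:
                edge = (h, j, rules[(a, b)])
                if edge not in chart:
                    chart.add(edge)
                    agenda.append(edge)
    return (0, len(words), start_symbol) in chart

# Hypothetical toy grammar for demonstration.
lexicon = {"the": {"Det"}, "cat": {"N"}, "sleeps": {"V"}}
rules = {("Det", "N"): "NP", ("NP", "V"): "S"}
```

Here `chart_recognize(["the", "cat", "sleeps"], lexicon, rules)` returns `True`: the closure derives the edges (0, 2, NP) and then (0, 3, S). Note that the chart and agenda are local values threaded through the computation, echoing the paper's point that no global state is needed.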


1996, Vol 11 (2), pp. 115-144
Author(s):
Johann Blieberger, Roland Lieger

2014, Vol 19 (6), pp. 1611-1625
Author(s):
Xin Du, Youcong Ni, Datong Xie, Xin Yao, Peng Ye, ...
