A Review of Methods for Estimating Algorithmic Complexity: Options, Challenges, and New Directions

Entropy ◽  
2020 ◽  
Vol 22 (6) ◽  
pp. 612 ◽  
Author(s):  
Hector Zenil

Some established and also novel techniques for applying algorithmic (Kolmogorov) complexity currently co-exist for the first time; they are reviewed here, ranging from dominant ones such as statistical lossless compression to newer approaches that advance and complement them, while also posing new challenges and exhibiting limitations of their own. Evidence is presented that these different methods complement each other in different regimes and that, despite their many challenges, some of them are better motivated by, and better grounded in, the principles of algorithmic information theory. We explain how different approaches to algorithmic complexity relax different necessary and sufficient conditions in their pursuit of numerical applicability, with some of these approaches entailing greater risks than others in exchange for greater relevance. We conclude with a discussion of directions that may or should be taken into consideration to advance the field and encourage methodological innovation, but more importantly, to contribute to scientific discovery. The paper also serves as a rebuttal of claims made in a previously published mini-review by another author, and offers an alternative account.
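
As a concrete illustration of the compression-based estimators the review discusses, the sketch below (ours, not the author's; standard library only) upper-bounds the algorithmic complexity of a string by the bit length of its lossless compression, the classical computable proxy for K(x):

    import os
    import zlib

    # Upper-bound proxy for K(x): bit length of a lossless (DEFLATE)
    # compression of the input. K(x) itself is uncomputable; compressed
    # length can only overestimate it.
    def compression_complexity(data: bytes) -> int:
        return 8 * len(zlib.compress(data, level=9))

    print(compression_complexity(b"ab" * 500))       # regular: compresses well
    print(compression_complexity(os.urandom(1000)))  # random: barely compresses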

1993 ◽  
Vol 18 (2-4) ◽  
pp. 129-149
Author(s):  
Serge Garlatti

Representation systems based on inheritance networks are founded on the hierarchical structure of knowledge. Such a representation is composed of a set of objects and a set of is-a links between nodes, called an inheritance graph. Objects are generally defined by means of a set of properties, and an inheritance mechanism enables properties to be shared across the hierarchy. It is often difficult, even impossible, to define classes by means of a set of necessary and sufficient conditions. For this reason, exceptions must be allowed, and they induce nonmonotonic reasoning. Many researchers have used default logic to give such systems a formal semantics and to define sound inferences. In this paper, we propose a survey of the different models of nonmonotonic inheritance systems by means of default logic. A comparison between default theories and inheritance mechanisms is made. In conclusion, the ability of default logic to take some inheritance mechanisms into account is discussed.
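
A minimal sketch of the kind of nonmonotonic inheritance at issue (ours, using the classic birds-and-penguins example; the class names are purely illustrative): a property attached to a node is a default that a more specific node may override, so adding knowledge can retract earlier conclusions:

    # Each node names its parent in the is-a hierarchy and may carry
    # default properties; lookup returns the most specific value found.
    class Node:
        def __init__(self, name, parent=None, **props):
            self.name, self.parent, self.props = name, parent, props

        def lookup(self, prop):
            node = self
            while node is not None:          # walk up the is-a links
                if prop in node.props:
                    return node.props[prop]  # most specific default wins
                node = node.parent
            return None

    bird = Node("bird", flies=True)                # default: birds fly
    penguin = Node("penguin", bird, flies=False)   # exception to the default
    tweety = Node("tweety", penguin)

    print(tweety.lookup("flies"))  # False: the exception defeats inheritance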


1998 ◽  
Vol 30 (1) ◽  
pp. 181-196 ◽  
Author(s):  
P. S. Griffin ◽  
R. A. Maller

Let $T_r$ be the first time at which a random walk $S_n$ escapes from the strip $[-r,r]$, and let $|S_{T_r}|-r$ be the overshoot of the boundary of the strip. We investigate the order of magnitude of the overshoot, as $r \to \infty$, by providing necessary and sufficient conditions for the ‘stability’ of $|S_{T_r}|$, by which we mean that $|S_{T_r}|/r$ converges to 1, either in probability (weakly) or almost surely (strongly), as $r \to \infty$. These also turn out to be equivalent to requiring only the boundedness of $|S_{T_r}|/r$, rather than its convergence to 1, either in the weak or strong sense, as $r \to \infty$. The almost sure characterisation turns out to be extremely simple to state and to apply: we have $|S_{T_r}|/r \to 1$ a.s. if and only if $EX^2 < \infty$ and $EX = 0$, or $0 < |EX| \leq E|X| < \infty$. Proving this requires establishing the equivalence of the stability of $S_{T_r}$ with certain dominance properties of the maximum partial sum $S_n^* = \max\{|S_j| : 1 \leq j \leq n\}$ over its maximal increment.
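
A quick Monte Carlo check of the almost-sure criterion (our sketch, not from the paper): with centered, finite-variance steps, the exit ratio $|S_{T_r}|/r$ should settle near 1 as $r$ grows:

    import random

    # Run the walk until it first leaves [-r, r]; return |S_{T_r}| / r.
    def exit_ratio(r, step=lambda: random.gauss(0.0, 1.0)):
        s = 0.0
        while abs(s) <= r:
            s += step()
        return abs(s) / r

    random.seed(0)
    for r in (10.0, 50.0, 200.0):
        avg = sum(exit_ratio(r) for _ in range(100)) / 100
        print(r, round(avg, 4))   # drifts toward 1: EX = 0 and EX^2 finite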


2015 ◽  
Vol 1 ◽  
pp. e23 ◽  
Author(s):  
Hector Zenil ◽  
Fernando Soler-Toscano ◽  
Jean-Paul Delahaye ◽  
Nicolas Gauvrit

We propose a measure based upon the fundamental theoretical concept in algorithmic information theory that provides a natural approach to the problem of evaluating n-dimensional complexity by using an n-dimensional deterministic Turing machine. The technique is interesting because it provides a natural algorithmic process for symmetry breaking, generating complex n-dimensional structures from perfectly symmetric and fully deterministic computational rules, producing a distribution of patterns as described by algorithmic probability. Algorithmic probability also elegantly connects the frequency of occurrence of a pattern with its algorithmic complexity, hence effectively providing estimations of the complexity of the generated patterns. Experiments to validate estimations of algorithmic complexity based on these concepts are presented, showing that the measure is stable in the face of some changes in computational formalism and that results are in agreement with the results obtained using lossless compression algorithms when both methods overlap in their range of applicability. We then use the output frequency of the set of 2-dimensional Turing machines to classify the algorithmic complexity of the space-time evolutions of Elementary Cellular Automata.
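
The frequency-to-complexity step can be sketched compactly. Below is a simplified stand-in for the authors' machine enumeration (ours, illustrative only): run all 256 elementary cellular automaton rules from the same initial row, count how often each final row occurs, and estimate complexity via the coding-theorem relation $K(x) \approx -\log_2 P(x)$, so frequently produced outputs score as simple:

    from collections import Counter
    from math import log2

    # One synchronous update of an elementary CA row (periodic boundary);
    # the new cell is the rule bit indexed by (left, center, right).
    def eca_step(row, rule):
        n = len(row)
        return tuple((rule >> (row[(i-1) % n]*4 + row[i]*2 + row[(i+1) % n])) & 1
                     for i in range(n))

    width, steps = 11, 5
    init = tuple(1 if i == width // 2 else 0 for i in range(width))
    counts = Counter()
    for rule in range(256):          # every elementary CA rule
        row = init
        for _ in range(steps):
            row = eca_step(row, rule)
        counts[row] += 1

    total = sum(counts.values())
    for row, c in counts.most_common(3):
        print("".join(map(str, row)), "K estimate ~", round(-log2(c / total), 2))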


2019 ◽  
Vol 35 (01) ◽  
pp. 1950347
Author(s):  
Nirupam Dutta ◽  
Anirban Dey ◽  
Prasanta K. Panigrahi

In this paper, the necessary and sufficient conditions for the adiabatic evolution of weak-field-seeking states in a time-orbiting-potential (TOP) trap are examined quantitatively for the first time. It has been well accepted for decades that atoms must evolve adiabatically for successful magnetic trapping. We show, on the contrary, that atoms can also be confined beyond the adiabatic limit. For the demonstration, we consider a toy model of a single weak-field-seeking atom in its ground state and calculate its survival probability inside a TOP trap. Our findings open new possibilities for relaxing the restrictions on atom trapping in laboratories.
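
A toy numerical experiment in the same spirit (our sketch, not the authors' model or parameters): a spin-1/2 moment in a bias field of fixed magnitude rotating at frequency omega_rot, integrated with a midpoint propagator; the probability of staying in the instantaneous higher-energy (weak-field-seeking) eigenstate is near 1 when omega_rot is small against the Larmor frequency omega_L, and drops beyond the adiabatic limit:

    import numpy as np

    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])

    def survival_probability(omega_L, omega_rot, steps=20000, t_max=50.0):
        dt = t_max / steps
        def H(t):  # field rotating in the x-y plane, constant magnitude
            phi = omega_rot * t
            return 0.5 * omega_L * (np.cos(phi) * sx + np.sin(phi) * sy)
        _, vecs = np.linalg.eigh(H(0.0))
        psi = vecs[:, 1]                   # start in the upper eigenstate
        for k in range(steps):
            # exact one-step propagator exp(-i H dt) via eigendecomposition
            w, v = np.linalg.eigh(H((k + 0.5) * dt))
            psi = v @ (np.exp(-1j * w * dt) * (v.conj().T @ psi))
        _, vecs = np.linalg.eigh(H(t_max))
        return abs(vecs[:, 1].conj() @ psi) ** 2

    print(survival_probability(omega_L=20.0, omega_rot=1.0))   # adiabatic: ~1
    print(survival_probability(omega_L=1.0, omega_rot=20.0))   # beyond the limit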


10.37236/389 ◽  
2010 ◽  
Vol 17 (1) ◽  
Author(s):  
Po-Yi Huang ◽  
Jun Ma ◽  
Yeong-Nan Yeh

Let $\vec{r}=(r_i)_{i=1}^n$ be a sequence of real numbers of length $n$ with sum $s$. Let $s_0=0$ and $s_i=r_1+\ldots +r_i$ for every $i\in\{1,2,\ldots,n\}$. Fluctuation theory is the name given to that part of probability theory which deals with the fluctuations of the partial sums $s_i$. Define $p(\vec{r})$ to be the number of positive sums $s_i$ among $s_1,\ldots,s_n$ and $m(\vec{r})$ to be the smallest index $i$ with $s_i=\max\limits_{0\leq k\leq n}s_k$. An important problem in fluctuation theory is that of showing that in a random path the number of steps on the positive half-line has the same distribution as the index where the maximum is attained for the first time. In this paper, let $\vec{r}_i=(r_i,\ldots,r_n,r_1,\ldots,r_{i-1})$ be the $i$-th cyclic permutation of $\vec{r}$. For $s>0$, we give the necessary and sufficient conditions for $\{ m(\vec{r}_i)\mid 1\leq i\leq n\}=\{1,2,\ldots,n\}$ and $\{ p(\vec{r}_i)\mid 1\leq i\leq n\}=\{1,2,\ldots,n\}$; for $s\leq 0$, we give the necessary and sufficient conditions for $\{ m(\vec{r}_i)\mid 1\leq i\leq n\}=\{0,1,\ldots,n-1\}$ and $\{ p(\vec{r}_i)\mid 1\leq i\leq n\}=\{0,1,\ldots,n-1\}$. We also give an analogous result for the class of all permutations of $\vec{r}$.
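
The objects in question are easy to compute directly; the sketch below (ours) evaluates $p$ and $m$ over all cyclic shifts of a sample sequence with $s>0$ and checks whether they sweep $\{1,2,\ldots,n\}$, the situation the paper characterizes exactly:

    # s_1, ..., s_n for a sequence r
    def partial_sums(r):
        out, s = [], 0.0
        for x in r:
            s += x
            out.append(s)
        return out

    def p(r):  # number of positive partial sums
        return sum(1 for s in partial_sums(r) if s > 0)

    def m(r):  # smallest i with s_i = max(s_0, ..., s_n), where s_0 = 0
        sums = [0.0] + partial_sums(r)
        return sums.index(max(sums))

    r = [5.0, -0.5, -0.5, -0.5, -0.5]            # total sum s = 3 > 0
    shifts = [r[i:] + r[:i] for i in range(len(r))]
    print(sorted(m(c) for c in shifts))           # [1, 2, 3, 4, 5]
    print(sorted(p(c) for c in shifts))           # [1, 2, 3, 4, 5]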


Author(s):  
J. C. Trinkle ◽  
Stephen Berard ◽  
J. S. Pang

Two new instantaneous-time models for predicting the motion and contact forces of three-dimensional, quasistatic multi-rigid-body systems are developed: one linear and one nonlinear. The nonlinearity results from retaining the usual quadratic friction cone in the model. Discrete-time versions of these models provide the first time-stepping methods for such systems. As a first step toward understanding their usefulness in simulation and manipulation planning, a theorem is given establishing the equivalence between solutions of a time-stepping method for the nonlinear model and global optimal solutions of a related convex optimization problem. In addition, a proposition giving necessary and sufficient conditions for solution uniqueness of the nonlinear time-stepping method is given. Finally, a simple example is discussed to help develop intuition about quasistatic systems and to solidify the reader’s understanding of the theorem and proposition.
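
The quadratic friction cone the abstract refers to admits a closed-form Euclidean projection, the basic primitive a convex-optimization-based time stepper would call repeatedly; a minimal sketch (ours, not the authors' formulation):

    import numpy as np

    # Project f = (ft_x, ft_y, fn) onto the Coulomb cone ||ft|| <= mu * fn,
    # using the standard second-order-cone projection formula.
    def project_friction_cone(f, mu):
        ft, fn = f[:2], f[2]
        t = np.linalg.norm(ft)
        if t <= mu * fn:                  # already a feasible contact force
            return f.copy()
        if mu * t <= -fn:                 # in the polar cone: project to 0
            return np.zeros(3)
        c = (mu * t + fn) / (mu**2 + 1)   # otherwise land on the boundary
        return np.concatenate([(mu * c / t) * ft, [c]])

    f = np.array([3.0, 0.0, 1.0])
    print(project_friction_cone(f, mu=0.5))   # [1, 0, 2]: ||ft|| = mu * fn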


Author(s):  
John Franco ◽  
John Martin

This chapter traces the links between the notion of Satisfiability and the attempts by mathematicians, philosophers, engineers, and scientists over the last 2300 years to develop effective processes for emulating human reasoning and scientific discovery, and for assisting in the development of electronic computers and other electronic components. Satisfiability was present implicitly in the development of ancient logics such as Aristotle’s syllogistic logic, its extensions by the Stoics, and Lull’s diagrammatic logic of the medieval period. From the Renaissance to Boole, algebraic approaches to effective process replaced the logics of the ancients and all but enunciated the meaning of Satisfiability for propositional logic. Clarification of the concept is credited to Tarski in working out necessary and sufficient conditions for “p is true” for any formula p in first-order syntax. At about the same time, the study of effective process increased in importance with the resulting development of the lambda calculus, recursive function theory, and Turing machines, all of which became the foundations of computer science and are linked to the notion of Satisfiability. Shannon provided the link to the computer age, and Davis and Putnam directly linked Satisfiability to automated reasoning via an algorithm which is the backbone of most modern SAT solvers. These events propelled the study of Satisfiability for the next several decades, reaching “epidemic proportions” in the 1990s and 2000s, and the chapter concludes with a brief history of each of the major Satisfiability-related research tracks that developed during that period.
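
The Davis-Putnam procedure mentioned above survives, with the Logemann-Loveland refinement, as the DPLL core of modern solvers; a compact sketch (ours), representing a CNF formula as frozensets of nonzero integers (positive literal = variable, negative = its negation):

    # DPLL: unit propagation, then branch on a literal and recurse.
    def dpll(clauses, assignment=()):
        units = {next(iter(c)) for c in clauses if len(c) == 1}
        while units:
            lit = units.pop()
            assignment += (lit,)
            clauses = [c - {-lit} for c in clauses if lit not in c]
            if any(len(c) == 0 for c in clauses):
                return None                        # empty clause: conflict
            units = {next(iter(c)) for c in clauses if len(c) == 1}
        if not clauses:
            return assignment                      # all clauses satisfied
        lit = next(iter(clauses[0]))
        for choice in (lit, -lit):                 # split on the literal
            result = dpll([c - {-choice} for c in clauses if choice not in c],
                          assignment + (choice,))
            if result is not None:
                return result
        return None                                # both branches failed

    cnf = [frozenset(c) for c in ([1, 2], [-1, 3], [-2, -3], [-3])]
    print(dpll(cnf))   # (-3, -1, 2): a satisfying assignment, or None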


10.14311/652 ◽  
2004 ◽  
Vol 44 (5-6) ◽  
Author(s):  
Z. Dimitrovová

The methodology for determining the upper bounds on the homogenized linear elastic properties of cellular solids, described for the two-dimensional case in Dimitrovová and Faria (1999), is extended to three-dimensional open-cell foams. Besides the upper bounds, the methodology provides necessary and sufficient conditions on optimal media. These conditions are written in terms of generalized internal forces and geometrical parameters. In some cases dependence on internal forces can be replaced by geometrical expressions. In such cases, the optimality of some medium under consideration can be verified directly from the microstructure, without any additional calculation. Some of the bounds derived in this paper are published for the first time, along with a proof of their optimality. 
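
The paper's foam-specific bounds are not reproduced here, but the flavor of "upper bounds on homogenized elastic properties" can be shown with the classical Voigt and Reuss mixture bounds for a two-phase medium (a generic textbook result, not the bounds derived in the paper):

    # Voigt (uniform-strain, upper) and Reuss (uniform-stress, lower)
    # bounds on an effective modulus of a mixture of phases.
    def voigt_reuss_bounds(moduli, fractions):
        assert abs(sum(fractions) - 1.0) < 1e-12
        voigt = sum(v * e for v, e in zip(fractions, moduli))
        reuss = 1.0 / sum(v / e for v, e in zip(fractions, moduli))
        return reuss, voigt

    # A stiff phase diluted by a compliant one; the void limit of a foam
    # (E -> 0) needs separate treatment, since the Reuss bound degenerates.
    print(voigt_reuss_bounds(moduli=[70e9, 1e9], fractions=[0.1, 0.9]))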


2012 ◽  
Vol 22 (5) ◽  
pp. 752-770 ◽  
Author(s):  
Kohtaro Tadaki

The statistical mechanical interpretation of algorithmic information theory (AIT for short) was introduced and developed in our previous papers Tadaki (2008; 2012), where we introduced into AIT the notion of thermodynamic quantities, such as the partition function Z(T), free energy F(T), energy E(T) and statistical mechanical entropy S(T). We then discovered that in the interpretation, the temperature T is equal to the partial randomness of the values of all these thermodynamic quantities, where the notion of partial randomness is a stronger representation of the compression rate by means of program-size complexity. Furthermore, we showed that this situation holds for the temperature itself as a thermodynamic quantity, namely, for each of the thermodynamic quantities above, the computability of its value at temperature T gives a sufficient condition for T ∈ (0, 1) to be a fixed point on partial randomness. In this paper, we develop the statistical mechanical interpretation of AIT further and pursue its formal correspondence to normal statistical mechanics. The thermodynamic quantities in AIT are defined on the basis of the halting set of an optimal prefix-free machine, which is a universal decoding algorithm used to define the notion of program-size complexity. We show that there are infinitely many optimal prefix-free machines that give completely different sufficient conditions for each of the thermodynamic quantities in AIT. We do this by introducing the notion of composition of prefix-free machines into AIT, which corresponds to the notion of the composition of systems in normal statistical mechanics.
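
A toy numeric rendering of these quantities (ours): replacing the halting set of an optimal prefix-free machine with a small finite prefix-free code, taking $Z(T)=\sum_p 2^{-|p|/T}$ as in the papers cited, and deriving F, E, and S from the usual statistical-mechanics relations with base-2 logarithms:

    from math import log2

    lengths = [1, 2, 3, 3]   # |p| over a toy prefix-free code {0,10,110,111}

    def Z(T):                # partition function: sum of 2^(-|p|/T)
        return sum(2.0 ** (-l / T) for l in lengths)

    def F(T):                # free energy: -T log2 Z(T)
        return -T * log2(Z(T))

    def E(T):                # mean program length under Boltzmann weights
        return sum(l * 2.0 ** (-l / T) for l in lengths) / Z(T)

    def S(T):                # entropy from the relation F = E - T S
        return (E(T) - F(T)) / T

    for T in (0.5, 0.9, 1.0):   # Z(1) = 1 here, since the code is complete
        print(T, round(Z(T), 4), round(F(T), 4), round(E(T), 4), round(S(T), 4))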

