minimal description
Recently Published Documents

TOTAL DOCUMENTS: 25 (FIVE YEARS: 8)
H-INDEX: 5 (FIVE YEARS: 1)

2021 ◽  
Author(s):  
Iris van de Pol ◽  
Paul Lodder ◽  
Leendert van Maanen ◽  
Shane Steinert-Threlkeld ◽  
Jakub Szymanik

Despite wide variation among natural languages, there are linguistic properties thought to be universal to all or almost all natural languages. Here, we consider universals at the semantic level, in the domain of quantifiers, which are given by the properties of monotonicity, quantity, and conservativity. We investigate whether these universals might be explained by differences in complexity. We generate a large collection of quantifiers, based on a simple yet expressive grammar, and compute both their complexities and whether they adhere to these universal properties. We find that quantifiers satisfying semantic universals are less complex: they have a shorter minimal description length.
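
As a rough illustration of the pipeline described above (not the authors' actual grammar or complexity measure), the Python sketch below evaluates a handful of hand-written quantifiers over a small universe, uses the length of each quantifier's description string as a crude stand-in for minimal description length, and brute-force checks one of the universals, upward monotonicity in the right argument. The quantifier inventory and helper names are assumptions for the sake of the example.

from itertools import combinations

# Toy inventory: quantifiers as functions of two sets A, B over a small
# universe. The "description" string stands in for a grammar-derived form
# and its length for minimal description length (both are assumptions).
QUANTIFIERS = {
    "every":      ("subset(A,B)",     lambda A, B: A <= B),
    "some":       ("nonempty(A&B)",   lambda A, B: len(A & B) > 0),
    "at_least_3": ("card(A&B)>=3",    lambda A, B: len(A & B) >= 3),
    "even":       ("even(card(A&B))", lambda A, B: len(A & B) % 2 == 0),
}

def powerset(universe):
    items = list(universe)
    return [set(c) for r in range(len(items) + 1) for c in combinations(items, r)]

def is_upward_monotone(q, universe):
    # Brute-force check of right upward monotonicity:
    # Q(A, B) and B <= B2 must imply Q(A, B2).
    subsets = powerset(universe)
    return all(q(A, B2)
               for A in subsets
               for B in subsets if q(A, B)
               for B2 in subsets if B <= B2)

if __name__ == "__main__":
    universe = {0, 1, 2, 3}
    for name, (description, q) in QUANTIFIERS.items():
        print(f"{name:11s} length={len(description):2d} "
              f"upward_monotone={is_upward_monotone(q, universe)}")

This toy setup only illustrates the shape of the computation; the paper's grammar, complexity measure, and full set of universals (monotonicity, quantity, conservativity) are richer than this sketch.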


2021 ◽  
Vol 8 ◽  
Author(s):  
Lennart Dabelow ◽  
Ralf Eichhorn

Active matter systems are driven out of equilibrium by the conversion of energy into directed motion locally, on the level of the individual constituents. In the spirit of a minimal description, active matter is often modeled by so-called active Ornstein-Uhlenbeck particles (AOUPs), an extension of passive Brownian motion in which activity is represented by an additional fluctuating non-equilibrium “force” with simple statistical properties (an Ornstein-Uhlenbeck process). While in passive Brownian motion entropy production along trajectories is well known to relate to irreversibility in terms of the log-ratio of probabilities of observing a certain particle trajectory forward in time versus observing its time-reversed twin trajectory, the connection between these concepts for active matter is less clear. It is therefore of central importance to provide explicit expressions for the irreversibility of active particle trajectories based on measurable quantities alone, such as the particle positions. In this technical note we derive a general expression for the irreversibility of AOUPs in terms of path probability ratios (forward vs. backward path), extending recent results from [PRX 9, 021009 (2019)] by allowing for arbitrary initial particle distributions and states of the active driving.
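
For reference (the notation below is assumed, not quoted from the paper), the AOUP model is commonly written as overdamped Langevin dynamics driven by thermal white noise plus an Ornstein-Uhlenbeck active velocity, with irreversibility quantified by a path-probability log-ratio:

$$\dot{\mathbf{x}}(t) = -\frac{1}{\gamma}\nabla V(\mathbf{x}) + \sqrt{2D}\,\boldsymbol{\xi}(t) + \mathbf{v}(t), \qquad \tau\,\dot{\mathbf{v}}(t) = -\mathbf{v}(t) + \sqrt{2D_{\mathrm a}}\,\boldsymbol{\eta}(t),$$

$$\Delta\Sigma[X] = \ln\frac{p[X]}{\tilde{p}[\tilde{X}]},$$

where $$\boldsymbol{\xi}$$ and $$\boldsymbol{\eta}$$ are independent Gaussian white noises, $$\tau$$ is the persistence time of the active driving, and $$\tilde{X}$$ denotes the time-reversed trajectory.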


2020 ◽  
Vol 12 (12) ◽  
pp. 212
Author(s):  
Piotr Artiemjew ◽  
Lada Rudikova ◽  
Oleg Myslivets

One of the developmental directions of Future Internet technologies is the implementation of artificial intelligence systems that manipulate data and the surrounding world in more complex ways. Rule-based systems, which are highly accessible for human decision-making, play an important role in the family of computational intelligence methods. Decision rules, along with decision trees, are among the simplest forms of presenting complex decision-making processes. Decision support systems, according to the cross-industry standard process for data mining (CRISP-DM) framework, require final embedding of the learned model in a given computer infrastructure, integrated circuits, etc. In this work, we deal with placing the learned rule-based decision-support model in the database environment, specifically in SQL database tables. Our main goal is to store the previously trained model in the database and apply it by means of single queries. We assume that the decision-making rules applied are mutually consistent, and we additionally introduce the Minimal Description Length (MDL) rule. We propose a universal solution for any IF-THEN rule induction algorithm.
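
As an illustration of the general idea (a minimal sketch, not the paper's schema or algorithm), a set of IF-THEN rules can be stored as rows of a database table and applied to a new case with a single query. The table layout, attribute names, single-condition rules, and the rule_length preference below are all assumptions made for the example.

import sqlite3

# Minimal sketch: store IF-THEN rules as table rows and classify a new
# case with a single query (in-memory SQLite for self-containment).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""
    CREATE TABLE rules (
        rule_id     INTEGER PRIMARY KEY,
        attribute   TEXT,     -- IF part: attribute = value
        value       TEXT,
        decision    TEXT,     -- THEN part
        rule_length INTEGER   -- crude preference weight: shorter rules first
    )
""")
cur.executemany(
    "INSERT INTO rules (attribute, value, decision, rule_length) VALUES (?, ?, ?, ?)",
    [("outlook", "sunny", "no_play", 1),
     ("humidity", "high", "no_play", 2),
     ("outlook", "rain", "play", 1)],
)

# A new case described by attribute/value pairs.
case = {"outlook": "rain", "humidity": "normal"}

# Single query: pick the matching rule with the smallest rule_length.
condition = " OR ".join(["(attribute = ? AND value = ?)"] * len(case))
params = [x for pair in case.items() for x in pair]
cur.execute("SELECT decision FROM rules WHERE " + condition +
            " ORDER BY rule_length LIMIT 1", params)
row = cur.fetchone()
print("decision:", row[0] if row else "no matching rule")
conn.close()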


2020 ◽  
pp. 233-272
Author(s):  
Samuel Lebens

This chapter sketches the general shape of Jewish eschatology. After laying out the minimal description that the Messiah is expected to fulfill, this chapter argues that the philosophy of time, properly understood, allows us to hope for a much more radical end of days: one that does not just redeem our future, but one that also redeems our past.


2020 ◽  
Vol 34 (3) ◽  
pp. 291-301
Author(s):  
Franz Baader ◽  
Clément Théron

We investigate the impact that general concept inclusions and role-value maps have on the complexity and decidability of reasoning in the description logic $$\mathcal{FL}_0$$. On the one hand, we give a more direct proof for ExpTime-hardness of subsumption w.r.t. general concept inclusions in $$\mathcal{FL}_0$$. On the other hand, we determine restrictions on role-value maps that ensure decidability of subsumption, but we also show undecidability for the cases where these restrictions are not satisfied.
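
For readers unfamiliar with this logic, $$\mathcal{FL}_0$$ concepts are built from concept names using only conjunction and value restriction; the grammar and axiom shapes below are standard and not specific to this paper:

$$C, D ::= \top \;\mid\; A \;\mid\; C \sqcap D \;\mid\; \forall r.C$$

A general concept inclusion is an axiom of the form $$C \sqsubseteq D$$, and a role-value map is, roughly, a concept of the form $$(r_1 \circ \dots \circ r_m \subseteq s_1 \circ \dots \circ s_n)$$, requiring that every object reachable via the first role chain is also reachable via the second.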


2020 ◽  
Vol 375 (1797) ◽  
pp. 20190358 ◽  
Author(s):  
Daniel Nettle

For decades, parts of the literature on human culture have been gripped by an analogy: culture changes in a way that is substantially isomorphic to genetic evolution. This leads to a number of sub-claims: that design-like properties in cultural traditions should be explained in a parallel way to the design-like features of organisms, namely with reference to selection; that culture is a system of inheritance; and that cultural evolutionary processes can produce adaptation in the genetic sense. The Price equation provides a minimal description of any evolutionary system, and a method for identifying the action of selection. As such, it helps clarify some of these claims about culture conceptually. Looking closely through the lens of the Price equation, the differences between genes and culture come into sharp relief. Culture is only a system of inheritance metaphorically, or as an idealization, and the idealization may lead us to overlook causally important features of how cultural influence works. Design-like properties in cultural systems may owe more to transmission biases than to cultural selection. Where culture enhances genetic fitness, it is ambiguous whether what is doing the work is cultural transmission, or just the genetically evolved properties of the mind. I conclude that there are costs to trying to press culture into a template based on Darwinian evolution, even if one broadens the definition of ‘Darwinian’. This article is part of the theme issue ‘Fifty years of the Price equation’.
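
For reference (standard form, not quoted from the article), the Price equation decomposes the change in the population-average value of a trait $$z$$ over one generation as

$$\bar{w}\,\Delta\bar{z} \;=\; \mathrm{Cov}(w_i, z_i) \;+\; \mathrm{E}\!\left(w_i\,\Delta z_i\right),$$

where $$w_i$$ is the fitness of entity $$i$$; the covariance term isolates the contribution of selection and the expectation term the contribution of transmission, which is why the equation can serve both as a minimal description of an evolutionary system and as a test for whether selection is acting.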


2020 ◽  
Author(s):  
Samuel Planton ◽  
Timo van Kerkoerle ◽  
Leïla Abbih ◽  
Maxime Maheu ◽  
Florent Meyniel ◽  
...  

The capacity to store information in working memory strongly depends upon the ability to recode the information in a compressed form. Here, we tested the theory that human adults encode binary sequences of stimuli in memory using a recursive compression algorithm akin to a “language of thought”, capable of capturing nested patterns of repetitions and alternations. In five experiments, we probed memory for auditory or visual sequences using both subjective and objective measures. We used a sequence violation paradigm in which participants detected occasional violations in an otherwise fixed sequence. Both subjective ratings of complexity and objective sequence violation detection rates were well predicted by complexity, as measured by minimal description length (also known as Kolmogorov complexity) in the binary version of the “language of geometry”, a formal language previously found to account for the human encoding of complex spatial sequences. We contrasted this language model with a model based solely on surprise given the stimulus transition probabilities. While both models accounted for variance in the data, the language model dominated over the transition-probability model for long sequences (with a number of elements far exceeding the limits of working memory). Using model comparison, we show that the minimal description length in a recursive language provides a better fit than a variety of previous sequence-encoding models. The data support the hypothesis that, beyond the extraction of statistical knowledge, human sequence coding relies on an internal compression using language-like nested structures.
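
As a rough illustration of the contrast drawn here (not the authors' "language of geometry" model), the Python sketch below scores a binary sequence in two ways: a crude description length based on run-length coding of repetitions, and a total surprise score from first-order transition probabilities. The encoding scheme, function names, and example sequences are assumptions; the toy only shows the two kinds of measure side by side.

import math
from itertools import groupby

def runlength_description_length(seq):
    # Crude description-length proxy: each run costs 1 bit for its symbol
    # plus a logarithmic cost for its length. Long runs compress well.
    return sum(1 + math.log2(len(list(run)) + 1) for _, run in groupby(seq))

def transition_surprise(seq):
    # Total surprise (-log2 probability) of the sequence under first-order
    # transition probabilities estimated from the sequence itself.
    pairs = list(zip(seq, seq[1:]))
    counts, totals = {}, {}
    for a, b in pairs:
        counts[(a, b)] = counts.get((a, b), 0) + 1
        totals[a] = totals.get(a, 0) + 1
    return sum(-math.log2(counts[(a, b)] / totals[a]) for a, b in pairs)

if __name__ == "__main__":
    regular = "AAAAAAAABBBBBBBB"    # two long runs: highly compressible
    irregular = "ABBABAABBBABABAA"  # same symbols and length, little structure
    for name, s in (("regular", regular), ("irregular", irregular)):
        print(f"{name:9s} run-length MDL: {runlength_description_length(s):5.1f}  "
              f"transition surprise: {transition_surprise(s):5.1f}")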


2020 ◽  
Author(s):  
Fosca Al Roumi ◽  
Sébastien Marti ◽  
Liping Wang ◽  
Marie Amalric ◽  
Stanislas Dehaene

How does the human brain store sequences of spatial locations? The standard view is that each consecutive item occupies a distinct slot in working memory. Here, we formulate and test the alternative hypothesis that the human brain compresses the whole sequence using an abstract, language-like code that captures the numerical and geometrical regularities of the sequence at multiple nested levels. We exposed participants to spatial sequences of fixed length but variable regularity, and asked them to remember the sequence in order to detect deviants, while their brain activity was recorded using magneto-encephalography. Using multivariate decoders, each successive location could be decoded from brain signals, and upcoming locations were anticipated prior to their actual onset. Crucially, sequences with lower complexity, defined as the minimal description length provided by the formal language, and whose memory representation was therefore predicted to be more compressed, led to lower error rates and to increased anticipations. Furthermore, neural codes specific to the numerical and geometrical primitives of the postulated language could be detected, both in isolation and within the sequences. These results suggest that the human brain detects sequence regularities at multiple nested levels and uses them to compress long sequences in working memory.


Soft Matter ◽  
2015 ◽  
Vol 11 (34) ◽  
pp. 6740-6746 ◽  
Author(s):  
Tamoghna Das ◽  
T. Lookman ◽  
M. M. Bandi

A single dimensionless parameter is proposed to characterise the morphology of two-dimensional aggregates by their structural randomness.


2014 ◽  
Vol 25 (07) ◽  
pp. 917-932
Author(s):  
CEZAR CÂMPEANU

Algorithmic Information Theory is based on the notion of descriptional complexity known as Chaitin-Kolmogorov complexity, defined in the 1960s in terms of minimal description length. Blum Static Complexity spaces, defined using the Blum axioms, and Encoded Function spaces, defined using properties of the complexity function, were introduced in 2012 to generalize the concept of descriptional complexity. In formal language theory we also use the concept of descriptional complexity for the number of states, or the number of transitions, of a minimal finite automaton accepting a regular language, and apparently this number has no connection to the general case of descriptional complexity. In this paper we prove that all these definitions of descriptional complexity, including the complexity of operations, can be defined within the framework of Encoded Blum Static Complexity spaces, which extend both Blum Static Complexity spaces and Encoded Function spaces.
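
For concreteness (this is the standard definition, not a result of the paper), the plain Chaitin-Kolmogorov complexity of a string $$x$$ relative to a universal machine $$U$$ is the minimal description length

$$C_U(x) \;=\; \min\{\, |p| \;:\; U(p) = x \,\},$$

i.e., the length of a shortest program $$p$$ that makes $$U$$ output $$x$$; the automata-theoretic counterpart mentioned above is, for instance, the number of states of a minimal deterministic automaton accepting a given regular language.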

