A Computational Theory of Meaning

2020 ◽  
pp. 32-78
Author(s):  
Pieter Adriaans

A computational theory of meaning tries to understand the phenomenon of meaning in terms of computation. Here we give an analysis in the context of Kolmogorov complexity. This theory measures the complexity of a data set in terms of the length of the smallest program that generates the data set on a universal computer. As a natural extension, the set of all programs that produce a data set on a computer can be interpreted as the set of meanings of the data set. We give an analysis of the Kolmogorov structure function and some other attempts to formulate a mathematical theory of meaning in terms of two-part optimal model selection. We show that such theories will always be context dependent: the invariance conditions that make Kolmogorov complexity a valid theory of measurement fail for this more general notion of meaning. One cause is polysemy: one data set (i.e., a string of symbols) can be compressed by different programs that share no mutual information. Another cause is the existence of recursive bijections between ℕ and ℕ² for which the two-part code is always more efficient; these generate vacuous optimal two-part codes. We introduce a formal framework to study such contexts in the form of a theory that generalizes the concept of Turing machines to learning agents that have a memory and have access to each other’s functions in terms of a possible-world semantics. In such a framework, the notions of randomness and informativeness become agent dependent. We show that such a rich framework explains many of the anomalies of the current theory of algorithmic complexity. It also provides perspectives for, among other things, the study of cognitive and social processes. Finally, we sketch some application paradigms of the theory.
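
To make the two-part idea concrete, here is a toy sketch (mine, not the paper's construction): the model class is a finite grid of Bernoulli parameters, and the total description length of a binary string is the bits needed to name the model plus the bits needed to encode the data given the model. Kolmogorov complexity proper is uncomputable; this only illustrates optimal two-part model selection, and the grid size k is an illustrative choice.

```python
import math

def best_two_part_code(data: str, k: int = 8):
    """Toy two-part code: model = Bernoulli(p), p on a grid of 2**k values.
    Returns (model bits, data-given-model bits, total bits) for the best model."""
    n, ones = len(data), data.count("1")
    best = None
    for i in range(1, 2 ** k):            # candidate parameters p = i / 2**k
        p = i / 2 ** k
        model_bits = k                    # cost of naming p on the grid
        data_bits = -(ones * math.log2(p) + (n - ones) * math.log2(1 - p))
        total = model_bits + data_bits
        if best is None or total < best[2]:
            best = (model_bits, data_bits, total)
    return best

# A near-constant string admits a short total code; an alternating string costs
# ~1 bit/symbol here, because the Bernoulli model class cannot exploit the
# ordering -- a small illustration of how "meaning" depends on the model class.
print(best_two_part_code("0" * 63 + "1"))
print(best_two_part_code("01" * 32))
```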

2001 ◽  
Vol 13 (7) ◽  
pp. 1443-1471 ◽  
Author(s):  
Bernhard Schölkopf ◽  
John C. Platt ◽  
John Shawe-Taylor ◽  
Alex J. Smola ◽  
Robert C. Williamson

Suppose you are given some data set drawn from an underlying probability distribution P and you want to estimate a “simple” subset S of input space such that the probability that a test point drawn from P lies outside of S equals some a priori specified value between 0 and 1. We propose a method to approach this problem by trying to estimate a function f that is positive on S and negative on the complement. The functional form of f is given by a kernel expansion in terms of a potentially small subset of the training data; it is regularized by controlling the length of the weight vector in an associated feature space. The expansion coefficients are found by solving a quadratic programming problem, which we do by carrying out sequential optimization over pairs of input patterns. We also provide a theoretical analysis of the statistical performance of our algorithm. The algorithm is a natural extension of the support vector algorithm to the case of unlabeled data.
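
The estimator described here is what has become known as the one-class SVM, and scikit-learn provides an implementation under that name. A minimal usage sketch follows; the synthetic data, the kernel width gamma, and the value of nu (which upper-bounds the fraction of training points left outside S) are illustrative choices.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(loc=0.0, scale=1.0, size=(500, 2))   # samples from P

# nu upper-bounds the fraction of training points outside the estimated
# region S (and lower-bounds the fraction of support vectors).
clf = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.1).fit(X_train)

X_test = np.vstack([rng.normal(size=(10, 2)),              # inliers
                    rng.normal(loc=5.0, size=(10, 2))])    # outliers
print(clf.predict(X_test))   # +1 inside the estimated region S, -1 outside
```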


2015 ◽  
Vol 25 (5) ◽  
pp. 734-766 ◽  
Author(s):  
Fang Zhao ◽  
Joseph Wallis ◽  
Mohini Singh

Purpose – The purpose of this paper is to capture and understand the nature of the relationship between e-government development and the digital economy.

Design/methodology/approach – Drawing on the Technology Acceptance Model and Fountain’s technology enactment theory, a multidimensional research model was developed. The model was tested empirically through an international study of 67 countries using reputable archival data, primarily the UN’s e-government survey and the Economist Intelligence Unit’s digital economy rankings.

Findings – The empirical findings indicate a strong positive reciprocal (two-way) relationship between e-government development and the digital economy. This finding provides empirical evidence to support the general notion of “co-evolution” between technology and organisations. The study also finds that, along with social, economic, political, technological and demographic factors, certain national cultural characteristics have significant effects on the digital economy and e-government development.

Research limitations/implications – Relying on archival global data sets, this study is constrained by the coverage and formulation of the data set indices, the sample size (67 countries), and the impossibility of detecting errors that may occur in the process of data collection. Therefore, caution should be taken when generalising from the findings of this study.

Originality/value – The paper addresses a deficit of empirical research that is supported by sound and established theories to explain short-term dynamics and the long-term impact of the digital economy on public administration. The study contributes to a more accurate and comprehensive understanding of the dynamic relationship between e-government development and the digital economy.


2014 ◽  
Author(s):  
Graham Jones ◽  
Bengt Oxelman

Motivation: The multispecies coalescent model provides a formal framework for the assignment of individual organisms to species, where the species are modeled as the branches of the species tree. None of the approaches available so far has simultaneously co-estimated all the relevant parameters in the model without restricting the parameter space by requiring a guide tree and/or a prior assignment of individuals to clusters or species. Results: We present DISSECT, which explores the full space of possible clusterings of individuals and species tree topologies in a Bayesian framework. It avoids the need for reversible-jump MCMC by using an approximate prior: a modification of the birth-death prior for the species tree that incorporates a spike near zero in the density for node heights. The model has two extra parameters: one controls the degree of approximation, and the second controls the prior distribution on the number of species. It is implemented as part of BEAST and requires only a few changes from a standard *BEAST analysis. The method is evaluated on simulated data and demonstrated on an empirical data set. It is shown to be insensitive to the degree of approximation but quite sensitive to the second parameter, suggesting that large numbers of sequences are needed to draw firm conclusions. Availability: http://code.google.com/p/beast-mcmc/, http://www.indriid.com/dissectinbeast.html Contact: [email protected], www.indriid.com Supplementary information: Supplementary material is available.
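
To make the "spike near zero" idea concrete, here is a toy sketch of such a node-height prior, assuming a simple two-component mixture. The weight w, the collapse width eps, and the exponential stand-in for the birth-death density are illustrative names and choices, not DISSECT's actual parameterization.

```python
import numpy as np

def node_height_density(h, eps=1e-4, w=0.5, birth_rate=1.0):
    """Toy 'spike near zero' prior on a species-tree node height h: a mixture
    of a narrow uniform spike on [0, eps] (height ~ 0 means the two daughter
    clusters collapse into one species) and an exponential(birth_rate) slab
    standing in for the birth-death prior. eps plays the role of the
    approximation parameter, w of the prior weight on collapsing a node."""
    h = np.asarray(h, dtype=float)
    spike = np.where((h >= 0) & (h <= eps), 1.0 / eps, 0.0)
    slab = np.where(h >= 0, birth_rate * np.exp(-birth_rate * h), 0.0)
    return w * spike + (1.0 - w) * slab

# A height inside the spike has enormous density; an ordinary height does not.
print(node_height_density([0.00005, 0.5]))
```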


2020 ◽  
Author(s):  
Michael J. Casey ◽  
Rubén J. Sánchez-García ◽  
Ben D. MacArthur

Single-cell sequencing (sc-Seq) experiments are producing increasingly large data sets. However, large data sets do not necessarily contain large amounts of information. Here, we introduce a formal framework for assessing the amount of information obtained from a sc-Seq experiment, which can be used throughout the sc-Seq analysis pipeline, including for quality control, feature selection and cluster evaluation. We illustrate this framework with some simple examples, including using it to quantify the amount of information in a single-cell sequencing data set that is explained by a proposed clustering, and thereby to determine cluster quality. Our information-theoretic framework provides a formal way to assess the quality of data obtained from sc-Seq experiments and the effectiveness of analyses performed, with wide implications for our understanding of variability in gene expression patterns within heterogeneous cell populations.
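
As a deliberately simplified instance of the kind of quantity such a framework can assess, the sketch below estimates how much of one gene's expression entropy a clustering explains: the mutual information between cluster labels and discretized counts, as a fraction of the total entropy of the counts. The estimator and the Poisson toy data are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def info_explained_by_clustering(counts, labels):
    """I(expression; cluster) / H(expression) for one gene's discrete counts.
    Returns 1 when clusters fully determine expression, 0 when uninformative."""
    counts, labels = np.asarray(counts), np.asarray(labels)
    values = np.unique(counts)
    p_all = np.array([(counts == v).mean() for v in values])
    h_all = entropy(p_all)
    h_cond = 0.0
    for c in np.unique(labels):
        sub = counts[labels == c]
        p = np.array([(sub == v).mean() for v in values])
        h_cond += (len(sub) / len(counts)) * entropy(p)
    return (h_all - h_cond) / h_all if h_all > 0 else 0.0

rng = np.random.default_rng(1)
labels = np.repeat([0, 1], 100)
counts = np.concatenate([rng.poisson(1, 100), rng.poisson(8, 100)])
print(info_explained_by_clustering(counts, labels))  # well above 0: informative split
```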


2015 ◽  
Vol 20 (3) ◽  
pp. 291-310 ◽  
Author(s):  
Pedro Jodra ◽  
Maria Dolores Jimenez-Gamero ◽  
Maria Virtudes Alba-Fernandez

The Muth distribution is a continuous probability distribution introduced in the context of reliability theory. In this paper, some mathematical properties of the model are derived, including analytical expressions for the moment generating function, moments, mode, quantile function and moments of the order statistics. In this regard, the generalized integro-exponential function, the Lambert W function and the golden ratio arise in a natural way. The parameters of the model are estimated by the methods of maximum likelihood, least squares, weighted least squares and moments, which are compared via a Monte Carlo simulation study. A natural extension of the model is considered, as well as an application to a real data set.
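
As an illustration of where the Lambert W function arises, the sketch below inverts the Muth CDF, assuming its standard form F(x) = 1 − exp(αx − (e^{αx} − 1)/α) for 0 < α ≤ 1; solving F(x) = u for x leads to the W₋₁ branch. This is a derivation sketch consistent with the abstract, not code from the paper.

```python
import numpy as np
from scipy.special import lambertw

def muth_quantile(u, alpha):
    """Quantile function of the Muth distribution, 0 < alpha <= 1, assuming
    CDF F(x) = 1 - exp(alpha*x - (exp(alpha*x) - 1)/alpha). Substituting
    t = exp(alpha*x) and solving F(x) = u gives t = -alpha * W_{-1}(arg)."""
    u = np.asarray(u, dtype=float)
    arg = -(1.0 - u) * np.exp(-1.0 / alpha) / alpha
    t = -alpha * np.real(lambertw(arg, k=-1))   # t = exp(alpha * x) >= 1
    return np.log(t) / alpha

# Inverse-transform sampling, with a sanity check that F(Q(u)) == u:
rng = np.random.default_rng(0)
u = rng.uniform(size=5)
x = muth_quantile(u, alpha=0.8)
F = 1 - np.exp(0.8 * x - (np.exp(0.8 * x) - 1) / 0.8)
print(np.allclose(F, u))  # True: the quantile function inverts the CDF
```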


2000 ◽  
Vol 10 (05) ◽  
pp. 1019-1032 ◽  
Author(s):  
M. V. CORRÊA ◽  
L. A. AGUIRRE ◽  
E. M. A. M. MENDES

This paper investigates the application of discrete nonlinear rational models, a natural extension of the well-known polynomial models. Rational models are discussed in the context of two different problems: the reconstruction of chaotic attractors from a time series and the estimation of static nonlinearities from dynamical data. Rational models are obtained via black-box identification techniques which need only a relatively short data set. A simple modified algorithm is proposed to handle the noise, thus providing a solution to one of the greatest obstacles to estimating rational models from real data. The suggested algorithm and related ideas are tested and discussed using Rössler's equations, real data collected from an implementation of Chua's circuit, the logistic map, a sine map with cubic-type nonlinearities, the tent map and a map of a feedback buck switching regulator model.
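
A common route to estimating such models, and a sketch of why noise is the obstacle the paper addresses: multiplying a rational model through by its denominator yields a linear-in-parameters form suitable for least squares, but the regressors then contain the (noisy) output itself, which biases plain least squares. The model structure and data below are illustrative, not the paper's examples.

```python
import numpy as np

# Generate logistic-map data: y(k) = r * y(k-1) * (1 - y(k-1))
r, n = 3.9, 500
y = np.empty(n); y[0] = 0.3
for k in range(1, n):
    y[k] = r * y[k - 1] * (1 - y[k - 1])

# Rational structure y(k) = (t1*y + t2*y**2) / (1 + t3*y), with y = y(k-1).
# Multiplying through by the denominator gives the linear-in-parameters form
#   y(k) = t1*y + t2*y**2 - t3*y(k)*y + e(k),
# so ordinary least squares applies. With noise, the y(k)*y regressor is
# correlated with e(k), which is the bias the paper's modified algorithm
# is designed to handle.
yk, ykm1 = y[1:], y[:-1]
Phi = np.column_stack([ykm1, ykm1**2, -yk * ykm1])
theta, *_ = np.linalg.lstsq(Phi, yk, rcond=None)
print(theta)  # approximately [r, -r, 0] on this noise-free data
```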


1984 ◽  
Vol 49 (4) ◽  
pp. 1284-1300 ◽  
Author(s):  
Peter Schroeder-Heister

One of the main ideas of calculi of natural deduction, as introduced by Jaśkowski and Gentzen, is that assumptions may be discharged in the course of a derivation. As regards sentential logic, this conception will be extended in so far as not only formulas but also rules may serve as assumptions which can be discharged. The resulting calculi and derivations with rules of any finite level are informally introduced in §1, while §§2 and 3 state formal definitions of the concepts involved and basic lemmata. Within this framework, a standard form for introduction and elimination rules for arbitrary n-ary sentential operators is motivated in §4, understood as a contribution to the theory of meaning for logical signs. §5 proves that the set {&, ∨, ⊃, ⋏} of standard intuitionistic connectives is complete, i.e. &, ∨, ⊃, and ⋏ suffice to express each n-ary sentential operator having rules of the standard form given in §4. §6 makes some remarks on related approaches. For an extension of the conception presented here to quantifier logic, see [11].
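
For illustration, a schema in the spirit of this generalization (a standard example in this literature, not a quotation from the paper): the generalized elimination rule for implication discharges a rule, written A ⇒ B, rather than a formula.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Generalized elimination for implication: the discharged assumption is
itself a rule ($A \Rightarrow B$), not a formula:
\[
\frac{A \supset B
      \qquad
      \begin{matrix} [\,A \Rightarrow B\,] \\ \vdots \\ C \end{matrix}}
     {C}
\;(\supset\text{E})
\]
\end{document}
```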


Author(s):  
Samson Abramsky ◽  
Giovanni Carù

We establish a strong link between two apparently unrelated topics: the study of conflicting information in the formal framework of valuation algebras, and the phenomena of non-locality and contextuality. In particular, we show that these peculiar features of quantum theory are mathematically equivalent to a general notion of disagreement between information sources. This result vastly generalizes previously observed connections between contextuality, relational databases, constraint satisfaction problems and logical paradoxes, and gives further proof that contextual behaviour is not a phenomenon limited to quantum physics, but pervades various domains of mathematics and computer science. The connection makes it possible to translate theorems, methods and algorithms from one field to the other, and paves the way for the application of generic inference algorithms to study contextuality. This article is part of the theme issue ‘Contextuality and probability in quantum mechanics and beyond’.
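
The flavour of this equivalence can be seen in miniature with the Popescu-Rohrlich box, a standard strongly contextual model (my illustration; the paper's valuation-algebra machinery is far more general): its four pairwise supports are each locally consistent, yet a brute-force search confirms that no global assignment of outcomes agrees with all of them.

```python
from itertools import product

# Popescu-Rohrlich box: Alice measures a0 or a1, Bob measures b0 or b1,
# outcomes are bits. The supports require x XOR y = 1 in the context
# (a1, b1) and x XOR y = 0 in the other three contexts.
contexts = {
    ("a0", "b0"): lambda x, y: x ^ y == 0,
    ("a0", "b1"): lambda x, y: x ^ y == 0,
    ("a1", "b0"): lambda x, y: x ^ y == 0,
    ("a1", "b1"): lambda x, y: x ^ y == 1,
}

# A global section would be one outcome per measurement (a0, a1, b0, b1)
# consistent with every context's support. Brute-force search:
sections = [
    g for g in product([0, 1], repeat=4)
    if all(ok(g[0] if a == "a0" else g[1], g[2] if b == "b0" else g[3])
           for (a, b), ok in contexts.items())
]
print(sections)  # [] -- no global section exists: strong contextuality
```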


Author(s):  
J. E. Wolff

This chapter introduces the representational theory of measurement as the relevant formal framework for a metaphysics of quantities. After presenting key elements of the representational approach, axioms for different measurement structures are presented and their representation and uniqueness theorems are compared. Particular attention is given to Hölder’s theorem, which in the first instance describes conditions for quantitativeness for additive extensive structures, but which can be generalized to more abstract structures. The last section discusses the relationship between uniqueness, the hierarchy of scales, and the measurement-theoretic notion of meaningfulness. This chapter provides the basis for Chapter 6, which makes use of more abstract results in measurement theory.
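
For reference, one standard modern formulation of the theorem the chapter builds on (my phrasing; axiomatizations vary in detail across presentations):

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
\textbf{H\"older's theorem} (one standard formulation). Let
$\langle A, \preceq, \circ \rangle$ be a structure in which $\preceq$ is a
total order, $\circ$ is associative and monotone
($a \preceq b$ iff $a \circ c \preceq b \circ c$), and the Archimedean
condition holds: for all $a, b$ there is an $n \in \mathbb{N}$ with
$b \preceq na$, where $na$ is the $n$-fold composition of $a$ with itself.
Then there is a map $\phi \colon A \to \mathbb{R}_{>0}$ such that
\[
  a \preceq b \iff \phi(a) \le \phi(b),
  \qquad
  \phi(a \circ b) = \phi(a) + \phi(b),
\]
and $\phi$ is unique up to multiplication by a positive constant; that is,
the resulting scale is a ratio scale.
\end{document}
```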

