Logic Programming, Substitutions and Finite Computability

1985, Vol 13 (186)
Author(s): Gudmund Skovbjerg Frandsen

Apt and van Emden have studied the semantics of logic programming by means of fixed point methods. From a model-theoretic point of view, their formalisation is very nice: least and greatest fixed points correspond to least and greatest Herbrand models, respectively.

Viewed operationally, there is an ugly asymmetry. The least fixed point expresses finite computability, but the greatest fixed point denotes negation by trans-finite failure, i.e. the underlying operator is not, in general, omega-continuous for decreasing chains.

We use the notion of finite computability inherent in Scott domains to build a domain-like construction (the cd-domain) that offers omega-continuity for increasing and decreasing chains alike. On this basis, negation by finite failure is expressed in terms of a fixed point.

The fixed point semantics of Apt and van Emden is very abstract with respect to the concept of substitution, although substitution is fundamental for any implementation. Hence it becomes quite tedious to prove the correctness of a concrete resolution algorithm. The fixed point semantics of this paper offers an intermediate step in this respect. Any commitment to specific resolution strategies is avoided, and the semantics may serve as the basis of sequential and parallel implementations alike. At the same time, the set of substitution data objects is structured by a Scott information-theoretic partial order, namely the cd-domain.
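
To make the least-fixed-point reading concrete (this is an illustration of the standard Apt-van Emden immediate-consequence construction, not code from the paper), the sketch below iterates the operator T_P of a small ground Horn program starting from the empty interpretation; the increasing chain stabilises after finitely many steps at the least Herbrand model.

```python
# Minimal sketch: iterating the immediate-consequence operator T_P of a
# ground (propositional) Horn program until its least fixed point is reached.
# Illustrative only; the paper treats full first-order programs and substitutions.

program = [
    ("p", []),           # fact:  p.
    ("q", ["p"]),        # rule:  q :- p.
    ("r", ["p", "q"]),   # rule:  r :- p, q.
]

def t_p(interpretation: set) -> set:
    """T_P(I): the set of heads whose bodies are true in I."""
    return {head for head, body in program
            if all(atom in interpretation for atom in body)}

model: set = set()
while True:
    nxt = t_p(model)
    if nxt == model:          # fixed point reached after finitely many steps
        break
    model = nxt

print(sorted(model))          # ['p', 'q', 'r'] -- the least Herbrand model
```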

1992, Vol 17 (4), pp. 285-317
Author(s): Johan Van Benthem

Starting from a general dynamic analysis of reasoning and programming, we develop two main dynamic perspectives upon logic programming. First, the standard fixed point semantics for Horn clause programs naturally supports imperative programming styles. Next, we provide axiomatizations for Prolog-type inference engines using calculi of sequents employing modified versions of standard structural rules such as monotonicity or permutation. Finally, we discuss the implications of all this for a broader enterprise of ‘abstract proof theory’.


Author(s): Charles Bouillaguet, Claire Delaplace, Pierre-Alain Fouque

The 3SUM problem is a well-known problem in computer science, and many geometric problems have been reduced to it. We study the 3XOR variant, which is more cryptologically relevant. In this problem, the attacker is given black-box access to three random functions F, G and H, and she has to find three inputs x, y and z such that F(x) ⊕ G(y) ⊕ H(z) = 0. The 3XOR problem is a difficult case of the more general k-list birthday problem. Wagner's celebrated k-list birthday algorithm, and the ones inspired by it, work by querying the functions more than strictly necessary from an information-theoretic point of view. This gives some leeway to target a solution of a specific form, at the expense of processing a huge amount of data. However, handling such a huge amount of data can be very difficult in practice. This is why we first restrict our attention to solving the 3XOR problem for which the total number of queries to F, G and H is minimal. If they are n-bit random functions, it is possible to solve the problem with roughly
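
For intuition only (this is the quadratic baseline, not the algorithm developed in the paper, and all names and sizes below are illustrative), a 3XOR solution can be found by tabulating F(x) ⊕ G(y) and probing the table with each H(z):

```python
# Toy 3XOR instance: find x, y, z with F(x) ^ G(y) ^ H(z) == 0.
# Quadratic-memory baseline; k-list birthday algorithms trade extra queries
# for far less work and memory than this.
import hashlib
from itertools import product

N_BITS = 16                                   # toy output size (real instances: n-bit)
MASK = (1 << N_BITS) - 1

def oracle(label: str, x: int) -> int:
    """Stand-in for a random function: hash (label, x) down to N_BITS bits."""
    digest = hashlib.sha256(f"{label}:{x}".encode()).digest()
    return int.from_bytes(digest[:4], "big") & MASK

domain = range(1 << 10)                       # enough queries for a solution to exist
F = [oracle("F", x) for x in domain]
G = [oracle("G", y) for y in domain]
H = [oracle("H", z) for z in domain]

table = {}                                    # F(x) ^ G(y)  ->  (x, y)
for x, y in product(domain, repeat=2):
    table.setdefault(F[x] ^ G[y], (x, y))

for z in domain:
    if H[z] in table:
        x, y = table[H[z]]
        print(f"F({x}) ^ G({y}) ^ H({z}) = 0")
        break
```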


2009, Vol 2009, pp. 1-14
Author(s): Ping-Feng Chen, R. Grant Steen, Anthony Yezzi, Hamid Krim

We propose a constrained version of Mumford and Shah's (1989) segmentation model, formulated from an information-theoretic point of view, in order to devise a systematic procedure for segmenting brain magnetic resonance imaging (MRI) data for parametric T1-Maps and T1-weighted images, in both 2D and 3D settings. In particular, incorporation of a tuning weight adds a probabilistic flavor to our segmentation method and makes 3-tissue segmentation possible. Moreover, we propose a novel method to jointly segment the T1-Map and calibrate RF Inhomogeneity (JSRIC). This method assumes that the average T1 value of white matter is the same across transverse slices in the central brain region, and JSRIC is able to rectify the flip angles to generate calibrated T1-Maps. In order to generate an accurate T1-Map, the determination of optimal flip angles and the registration of flip-angle images are examined. Our JSRIC method is validated on two human subjects in the 2D T1-Map modality, and our segmentation method is validated on two public databases, BrainWeb and IBSR, of T1-weighted modality in the 3D setting.
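
For reference, the unconstrained Mumford-Shah (1989) energy that the proposed model constrains has the standard form (the information-theoretic constraint and the tuning weight of this paper are not shown here):

$$ E(u, K) \;=\; \mu \int_{\Omega} (u - g)^2 \, dx \;+\; \int_{\Omega \setminus K} \lvert \nabla u \rvert^{2} \, dx \;+\; \nu \, \mathcal{H}^{1}(K), $$

where g is the observed image on the domain Ω, u is a piecewise-smooth approximation, K is the edge set whose length is penalised via the one-dimensional Hausdorff measure \(\mathcal{H}^{1}\), and μ, ν are weights; segmentation corresponds to minimising E over u and K.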


2016, Vol 2016, pp. 1-10
Author(s): Miodrag J. Mihaljević, Aleksandar Kavčić, Kanta Matsuura

An encryption/decryption approach is proposed, dedicated to one-way communication between a computationally powerful transmitter and a receiver with limited computational capabilities. The proposed encryption technique combines traditional stream ciphering with the simulation of a binary channel that degrades the channel input by inserting random bits. A statistical model of the proposed encryption is analyzed from an information-theoretic point of view. In the addressed model, an attacker faces the problem of observing the messages through a channel with random bit insertion. The paper points out a number of security-related implications of the considered channel. These implications are addressed by estimating the mutual information between the channel input and output, and by estimating the number of candidate channel inputs for a given channel output. It is shown that deliberate, secret-key-controlled insertion of random bits into the basic ciphertext provides a security enhancement of the resulting encryption scheme.
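
A minimal sketch of the general mechanism, under stated assumptions (the keystream generator, the insertion pattern, and all parameter choices below are placeholders, not the construction analyzed in the paper): the message is first XORed with a keystream, and a secret-key-controlled pattern then decides after which ciphertext bits a random bit is inserted; the legitimate receiver regenerates the pattern and simply skips the inserted bits.

```python
# Hypothetical sketch: stream ciphering followed by keyed insertion of random bits.
import hashlib
import secrets

def keyed_bits(key: bytes, label: bytes, n: int) -> list[int]:
    """Toy pseudorandom bit generator (placeholder for a real stream cipher)."""
    out, ctr = [], 0
    while len(out) < n:
        block = hashlib.sha256(key + label + ctr.to_bytes(8, "big")).digest()
        out.extend((byte >> i) & 1 for byte in block for i in range(8))
        ctr += 1
    return out[:n]

def encrypt(key: bytes, message: list[int]) -> list[int]:
    n = len(message)
    stream = keyed_bits(key, b"stream", n)        # stream-cipher layer
    pattern = keyed_bits(key, b"insert", n)       # secret positions for insertion
    out = []
    for m, k, p in zip(message, stream, pattern):
        out.append(m ^ k)                         # basic ciphertext bit
        if p:                                     # keyed decision: insert a random bit
            out.append(secrets.randbits(1))
    return out

def decrypt(key: bytes, received: list[int], n: int) -> list[int]:
    stream = keyed_bits(key, b"stream", n)
    pattern = keyed_bits(key, b"insert", n)
    message, pos = [], 0
    for k, p in zip(stream, pattern):
        message.append(received[pos] ^ k)         # ciphertext-carrying position
        pos += 1 + p                              # skip the inserted random bit, if any
    return message

key = secrets.token_bytes(16)
msg = [secrets.randbits(1) for _ in range(32)]
assert decrypt(key, encrypt(key, msg), len(msg)) == msg
```

Without the key, an observer sees a longer bit string in which the positions of the inserted random bits are unknown; that ambiguity is the source of the effects quantified by the mutual-information analysis in the paper.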


Author(s): Christian Bentz, Dimitrios Alikaniotis, Michael Cysouw, Ramon Ferrer-i-Cancho

The choice associated with words is a fundamental property of natural languages. It lies at the heart of quantitative linguistics, computational linguistics, and the language sciences more generally. Information theory gives us tools to measure precisely the average amount of choice associated with words: the word entropy. Here we use three parallel corpora, encompassing ca. 450 million words in 1916 texts and 1259 languages, to tackle some of the major conceptual and practical problems of word entropy estimation: dependence on text size, register, style and estimation method, as well as non-independence of words in co-text. We present three main results: (1) a text size of 50K tokens is sufficient for word entropies to stabilize throughout the text; (2) across languages of the world, word entropies display a unimodal distribution that is skewed to the right, which suggests a trade-off between the learnability and expressivity of words across languages; (3) there is a strong linear relationship between unigram entropies and entropy rates, suggesting that they are inherently linked. We discuss the implications of these results for studying the diversity and evolution of languages from an information-theoretic point of view.
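
As a minimal illustration of the quantity being estimated (a plug-in unigram estimate only; the paper compares several estimators, considers entropy rates, and uses parallel corpora rather than the toy sentence below):

```python
# Plug-in (maximum-likelihood) estimate of unigram word entropy, in bits per word.
import math
from collections import Counter

def unigram_entropy(tokens: list[str]) -> float:
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

text = "the cat sat on the mat and the dog sat on the rug"
tokens = text.split()                 # naive whitespace tokenizer, illustration only
print(f"{unigram_entropy(tokens):.3f} bits/word")
# Note: the plug-in estimator is biased for small samples, which is why
# text size and estimation method matter for cross-linguistic comparison.
```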

