Linear capabilities for fully abstract compilation of separation-logic-verified code

2021 ◽  
Vol 31 ◽  
Author(s):  
THOMAS VAN STRYDONCK ◽  
FRANK PIESSENS ◽  
DOMINIQUE DEVRIESE

Separation logic is a powerful program logic for the static modular verification of imperative programs. However, dynamic checking of separation logic contracts on the boundaries between verified and untrusted modules is hard because it requires one to enforce (among other things) that outcalls from a verified to an untrusted module do not access memory resources currently owned by the verified module. This paper proposes an approach to dynamic contract checking by relying on support for capabilities, a well-studied form of unforgeable memory pointers that enables fine-grained, efficient memory access control. More specifically, we rely on a form of capabilities called linear capabilities for which the hardware enforces that they cannot be copied. We formalize our approach as a fully abstract compiler from a statically verified source language to an unverified target language with support for linear capabilities. The key insight behind our compiler is that memory resources described by spatial separation logic predicates can be represented at run time by linear capabilities. The compiler is separation-logic-proof-directed: it uses the separation logic proof of the source program to determine how memory accesses in the source program should be compiled to linear capability accesses in the target program. The full abstraction property of the compiler essentially guarantees that compiled verified modules can interact with untrusted target language modules as if they were compiled from verified code as well. This article is an extended version of one that was presented at ICFP 2019 (Van Strydonck et al., 2019).
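The central idea, that a memory resource owned under a separation-logic predicate is represented by a capability that cannot be duplicated, can be loosely illustrated with Rust's move semantics, whose affine ownership discipline approximates linearity. This is a hypothetical sketch with invented names; the paper targets hardware-enforced capabilities, not a Rust type:

```rust
// Sketch: `MemCap` is deliberately neither Copy nor Clone, so the type
// system plays the role the paper assigns to hardware: the capability
// cannot be duplicated, only moved.
struct MemCap {
    base: usize,
    len: usize,
}

impl MemCap {
    // Compute the address of a slot inside the owned region, or None
    // if the offset falls outside it.
    fn read(&self, offset: usize) -> Option<usize> {
        if offset < self.len { Some(self.base + offset) } else { None }
    }
}

// An untrusted outcall receives the capability by value and must hand
// it back. While the callee holds the capability, the verified caller
// cannot touch the resource, mirroring ownership transfer in the logic.
fn untrusted_outcall(cap: MemCap) -> MemCap {
    // ... arbitrary untrusted work using `cap` ...
    cap // returning the capability is the only way to restore ownership
}

fn main() {
    let cap = MemCap { base: 0x1000, len: 16 };
    let cap = untrusted_outcall(cap); // ownership round-trips
    assert_eq!(cap.read(4), Some(0x1004));
    println!("capability returned: base = {:#x}", cap.base);
}
```

In the actual scheme the linearity check happens at run time in hardware, so it also constrains modules that were never type-checked; the Rust sketch only conveys the ownership-transfer discipline.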

Author(s):  
Waleed Ammar ◽  
George Mulcaire ◽  
Miguel Ballesteros ◽  
Chris Dyer ◽  
Noah A. Smith

We train one multilingual model for dependency parsing and use it to parse sentences in several languages. The parsing model uses (i) multilingual word clusters and embeddings; (ii) token-level language information; and (iii) language-specific features (fine-grained POS tags). This input representation enables the parser not only to parse effectively in multiple languages, but also to generalize across languages based on linguistic universals and typological similarities, making it more effective to learn from limited annotations. Our parser’s performance compares favorably to strong baselines in a range of data scenarios, including when the target language has a large treebank, a small treebank, or no treebank for training.
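The three-part input representation named in the abstract can be sketched as a simple vector concatenation. All names and dimensions below are illustrative, not taken from the paper:

```rust
// Sketch of the parser's token representation: concatenate
// (i) a shared multilingual word embedding, (ii) a one-hot token-level
// language indicator, and (iii) a one-hot fine-grained POS feature.

const NUM_LANGS: usize = 3; // toy inventory, e.g. en / de / es

fn one_hot(index: usize, size: usize) -> Vec<f32> {
    let mut v = vec![0.0; size];
    v[index] = 1.0;
    v
}

fn token_representation(
    word_embedding: &[f32], // multilingual embedding, shared across languages
    lang_id: usize,         // token-level language information
    pos_tag_id: usize,      // fine-grained POS tag index
    num_pos_tags: usize,
) -> Vec<f32> {
    let mut repr = word_embedding.to_vec();
    repr.extend(one_hot(lang_id, NUM_LANGS));
    repr.extend(one_hot(pos_tag_id, num_pos_tags));
    repr
}

fn main() {
    let emb = [0.1, -0.2, 0.3]; // toy 3-dimensional embedding
    let repr = token_representation(&emb, 1, 4, 10);
    assert_eq!(repr.len(), 3 + NUM_LANGS + 10);
    println!("input dimension = {}", repr.len());
}
```

Because the embedding space is shared across languages, one set of downstream parser parameters can serve every language, which is what lets the model transfer from well-resourced treebanks to languages with little or no annotation.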


2020 ◽  
Author(s):  
Dhanraj Vishwanath

The prevailing model of 3D vision proposes that the visual system recovers a single objective and internally consistent representation of physical 3D space based on a process of ideal-observer probabilistic inference. A significant challenge for this model has been in explaining the contents of our subjective awareness of visual space. Here I argue that integrating phenomenological observations, empirical data, evolutionary logic and neurophysiological evidence leads to the conjecture that the human conscious awareness of visual space is underwritten by multiple, sometimes mutually inconsistent, spatial encodings. By assessing four primary competencies in the conscious awareness of space, three major types of spatial encodings are conjectured. Among the most primitive of these is proposed to support the competency of the conscious awareness of distance at an ambulatory scale (operationally defined as egocentric distance) and is hypothesised to involve temporal archicortex regions. The second is proposed to support the competency of awareness of object layout and 3D shape without scale (operationally, relative depth), likely instantiated in the ventral visual stream of the neocortex. This encoding is hypothesised to have evolved from more primitive encodings that provide a depth-ordered segmentation of the visual field. The third encoding is proposed to support the competency of fine-grained awareness of intra- and inter-object spatial separation in near space (operationally, scaled or absolute depth) and instantiated in the dorsal visual stream. This encoding is conjectured to underlie the phenomenology of object solidity, spatial separation, tangibility and object realness that is often referred to as stereopsis. The combined effect of the first and third competencies (ambulatory distance and near-space scaled spatial separation) is conjectured to contribute to the feeling of spatial immersion and presence.


2021 ◽  
Vol 43 (4) ◽  
pp. 1-134
Author(s):  
Emanuele D’Osualdo ◽  
Julian Sutherland ◽  
Azadeh Farzan ◽  
Philippa Gardner

We present TaDA Live, a concurrent separation logic for reasoning compositionally about the termination of blocking fine-grained concurrent programs. The crucial challenge is how to deal with abstract atomic blocking: that is, abstract atomic operations that have blocking behaviour arising from busy-waiting patterns as found in, for example, fine-grained spin locks. Our fundamental innovation is with the design of abstract specifications that capture this blocking behaviour as liveness assumptions on the environment. We design a logic that can reason about the termination of clients that use such operations without breaking their abstraction boundaries, and the correctness of the implementations of the operations with respect to their abstract specifications. We introduce a novel semantic model using layered subjective obligations to express liveness invariants and a proof system that is sound with respect to the model. The subtlety of our specifications and reasoning is illustrated using several case studies.
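The kind of busy-waiting blocking the logic targets can be made concrete with a minimal spin lock. This is a generic sketch of the pattern, not code from the paper: the `lock` loop terminates only under a liveness assumption on the environment, namely that whichever thread holds the lock eventually releases it.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;

// Minimal fine-grained spin lock: acquisition busy-waits, so a
// client's termination is a liveness property, not a safety property.
struct SpinLock {
    held: AtomicBool,
}

impl SpinLock {
    fn new() -> Self {
        SpinLock { held: AtomicBool::new(false) }
    }

    fn lock(&self) {
        // Blocking arises from busy-waiting: this loop terminates only
        // if some other thread eventually calls `unlock`.
        while self
            .held
            .compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed)
            .is_err()
        {
            std::hint::spin_loop();
        }
    }

    fn unlock(&self) {
        self.held.store(false, Ordering::Release);
    }
}

fn main() {
    let lock = Arc::new(SpinLock::new());
    let l2 = Arc::clone(&lock);
    lock.lock();
    let t = thread::spawn(move || {
        l2.lock(); // blocks until the main thread releases
        l2.unlock();
        "acquired"
    });
    lock.unlock(); // fulfil the environment's liveness obligation
    assert_eq!(t.join().unwrap(), "acquired");
}
```

An abstract specification for `lock` hides the loop but must still record this dependence on the environment, which is exactly what TaDA Live's liveness assumptions are designed to express.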


Interpreting ◽  
1998 ◽  
Vol 3 (2) ◽  
pp. 163-199 ◽  
Author(s):  
Robin Setton

Existing simultaneous interpretation (SI) process models lack an account of intermediate representation compatible with the cognitive and linguistic processes inferred from corpus descriptions or psycholinguistic experimentation. Comparison of source language (SL) and target language (TL) at critical points in synchronised transcripts of German-English and Chinese-English SI shows how interpreters use procedural and intentional clues in the input to overcome typological asymmetries and build a dynamic conceptual and intentional mental model which supports fine-grained incremental comprehension. An Executive, responsible for overall co-ordination and secondary pragmatic processing, compensates at the production stage for the inevitable semantic approximations and re-injects pragmatic guidance in the target language. The methodological and cognitive assumptions for the study are provided by Relevance Theory and a 'weakly interactive' parsing model adapted to simultaneous interpretation.


Author(s):  
Chien-Ting Chen ◽  
Yoshi Shih-Chieh Huang ◽  
Yuan-Ying Chang ◽  
Chiao-Yun Tu ◽  
Chung-Ta King ◽  
...  

Author(s):  
Petar Maksimović ◽  
Sacha-Élie Ayoun ◽  
José Fragoso Santos ◽  
Philippa Gardner

We introduce verification based on separation logic to Gillian, a multi-language platform for the development of symbolic analysis tools which is parametric on the memory model of the target language. Our work develops a methodology for constructing compositional memory models for Gillian, leading to a unified presentation of the JavaScript and C memory models. We verify the JavaScript and C implementations of the AWS Encryption SDK message header deserialisation module, specifically designing common abstractions used for both verification tasks, and find two bugs in the JavaScript and three bugs in the C implementation.
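The idea of being parametric on the memory model can be sketched as an interface that the analysis is written against once, with each target language supplying its own instance. The names and the two toy models below are invented for illustration and are not Gillian's actual API:

```rust
use std::collections::HashMap;

// Interface the analysis depends on; each target language instantiates it.
trait MemoryModel {
    type Addr;
    type Value;
    fn load(&self, a: &Self::Addr) -> Option<Self::Value>;
    fn store(&mut self, a: Self::Addr, v: Self::Value);
}

// A C-like flat memory: integer addresses mapped to bytes.
struct CMemory { cells: HashMap<usize, u8> }
impl MemoryModel for CMemory {
    type Addr = usize;
    type Value = u8;
    fn load(&self, a: &usize) -> Option<u8> { self.cells.get(a).copied() }
    fn store(&mut self, a: usize, v: u8) { self.cells.insert(a, v); }
}

// A JavaScript-like memory: (object id, property name) mapped to values.
struct JsMemory { heap: HashMap<(u64, String), String> }
impl MemoryModel for JsMemory {
    type Addr = (u64, String);
    type Value = String;
    fn load(&self, a: &(u64, String)) -> Option<String> { self.heap.get(a).cloned() }
    fn store(&mut self, a: (u64, String), v: String) { self.heap.insert(a, v); }
}

// One generic analysis step, usable unchanged with either model.
fn round_trip<M: MemoryModel>(m: &mut M, a: M::Addr, v: M::Value) -> Option<M::Value>
where
    M::Addr: Clone,
{
    m.store(a.clone(), v);
    m.load(&a)
}

fn main() {
    let mut c = CMemory { cells: HashMap::new() };
    assert_eq!(round_trip(&mut c, 0x10, 42u8), Some(42));
    let mut js = JsMemory { heap: HashMap::new() };
    assert_eq!(round_trip(&mut js, (1, "x".into()), "v".into()), Some("v".to_string()));
}
```

Compositionality then asks for more than this interface, for example that two disjoint memories can be composed, which is what allows separation-logic frame reasoning over either instance.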


Author(s):  
Carmine Abate ◽  
Roberto Blanco ◽  
Ștefan Ciobâcă ◽  
Adrien Durier ◽  
Deepak Garg ◽  
...  

Compiler correctness is, in its simplest form, defined as the inclusion of the set of traces of the compiled program into the set of traces of the original program, which is equivalent to the preservation of all trace properties. Here traces collect, for instance, the externally observable events of each execution. This definition requires, however, the set of traces of the source and target languages to be exactly the same, which is not the case when the languages are far apart or when observations are fine-grained. To overcome this issue, we study a generalized compiler correctness definition, which uses source and target traces drawn from potentially different sets and connected by an arbitrary relation. We set out to understand what guarantees this generalized compiler correctness definition gives us when instantiated with a non-trivial relation on traces. When this trace relation is not equality, it is no longer possible to preserve the trace properties of the source program unchanged. Instead, we provide a generic characterization of the target trace property ensured by correctly compiling a program that satisfies a given source property, and dually, of the source trace property one is required to show in order to obtain a certain target property for the compiled code. We show that this view on compiler correctness can naturally account for undefined behavior, resource exhaustion, different source and target values, side-channels, and various abstraction mismatches. Finally, we show that the same generalization also applies to many secure compilation definitions, which characterize the protection of a compiled program against linked adversarial code.
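One way to render the generalized definition and its induced property mappings, with notation adapted here rather than copied from the paper: correctness up to a trace relation $\sim$ demands that every target trace be related to some source trace of the same program, and the two characterizations in the abstract then fall out as mappings between property sets.

```latex
% Correct compilation up to a relation ~ between source traces s and
% target traces t (W is a whole program, W-down its compilation):
\mathrm{CC}^{\sim}:\quad
\forall W,\, t.\;\; W{\downarrow} \rightsquigarrow t \;\Rightarrow\;
\exists s.\; s \sim t \,\wedge\, W \rightsquigarrow s

% Target trace property guaranteed when the source satisfies \pi_S:
\tilde{\tau}(\pi_S) \;=\; \{\, t \mid \exists s.\; s \sim t \,\wedge\, s \in \pi_S \,\}

% Source trace property one must establish to obtain a target property \pi_T:
\tilde{\sigma}(\pi_T) \;=\; \{\, s \mid \forall t.\; s \sim t \Rightarrow t \in \pi_T \,\}
```

When $\sim$ is equality the two mappings are identities and the classical trace-inclusion definition is recovered; non-trivial relations then account for mismatches such as undefined behavior or differing value representations.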


2019 ◽  
Vol 12 (1) ◽  
pp. 89-115
Author(s):  
Eitan Grossman

This paper sketches the integration of Greek-origin loan verbs into the valency and transitivity patterns of Coptic (Afroasiatic, Egypt), arguing that transitivities are language-specific descriptive categories and that comparing donor-language transitivity with target-language transitivity reveals fine-grained degrees of loan-verb integration. Based on a comparison of Coptic Transitivity and Greek Transitivity, it is shown that Greek-origin loanwords are only partially integrated into the transitivity patterns of Coptic. Specifically, while Greek-origin loan verbs have the same coding properties as native verbs in terms of the A domain, i.e., Differential Subject Marking (dsm), they differ in important respects in terms of the P domain, i.e., Differential Object Marking (dom) and Differential Object Indexing (doi). A main result of this study is that language contact – specifically, massive lexical borrowing – can induce significant transitivity splits in a language’s lexicon and grammar. Furthermore, the findings of this study cast doubt on the usefulness of an overarching cross-linguistic category of transitivity.


2009 ◽  
Vol 81 ◽  
pp. 75-85
Author(s):  
S. Winkler

The present paper deals with the acquisition of finiteness in German and Dutch child language. More specifically, it discusses the assumption of fundamental similarities in the development of the finiteness category in German and Dutch L1 as postulated by Dimroth et al. (2003). A comparison of German and Dutch child corpus data will show that Dimroth et al.'s assumption can be maintained as far as the overall development of the finiteness category is concerned. At a more fine-grained level, however, German and Dutch children exhibit different linguistic behaviour. This concerns in particular the means for the expression of early finiteness and the status of the auxiliary hebben/haben 'to have'. The observed differences can be explained as the result of target-language-specific properties of the input.

