function calls
Recently Published Documents

TOTAL DOCUMENTS: 154 (FIVE YEARS: 43)
H-INDEX: 14 (FIVE YEARS: 3)

2022 ◽  
Vol 6 (POPL) ◽  
pp. 1-30
Author(s):  
Matthew Kolosick ◽  
Shravan Narayan ◽  
Evan Johnson ◽  
Conrad Watt ◽  
Michael LeMay ◽  
...  

Software sandboxing or software-based fault isolation (SFI) is a lightweight approach to building secure systems out of untrusted components. Mozilla, for example, uses SFI to harden the Firefox browser by sandboxing third-party libraries, and companies like Fastly and Cloudflare use SFI to safely co-locate untrusted tenants on their edge clouds. While there have been significant efforts to optimize and verify SFI enforcement, context switching in SFI systems remains largely unexplored: almost all SFI systems use heavyweight transitions that are not only error-prone but incur significant performance overhead from saving, clearing, and restoring registers when context switching. We identify a set of zero-cost conditions that characterize when sandboxed code has sufficient structure to guarantee security via lightweight zero-cost transitions (simple function calls). We modify the Lucet Wasm compiler and its runtime to use zero-cost transitions, eliminating the undue performance tax on systems that rely on Lucet for sandboxing (e.g., we speed up image and font rendering in Firefox by up to 29.7% and 10%, respectively). To remove the Lucet compiler and its correct implementation of the Wasm specification from the trusted computing base, we (1) develop a static binary verifier, VeriZero, which (in seconds) checks that binaries produced by Lucet satisfy our zero-cost conditions, and (2) prove the soundness of VeriZero by developing a logical relation that captures when a compiled Wasm function is semantically well-behaved with respect to our zero-cost conditions. Finally, we show that our model is useful beyond Wasm by describing a new, purpose-built SFI system, SegmentZero32, that uses x86 segmentation and LLVM with mostly off-the-shelf passes to enforce our zero-cost conditions; our prototype performs on par with the state-of-the-art Native Client SFI system.
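
As a rough illustration of what a zero-cost condition might look like in practice, the sketch below checks one plausible property (callee-saved registers are restored before return) over a toy instruction stream. The condition, the instruction model, and the function name are invented for this note; VeriZero's actual checks operate on real Lucet-produced x86-64 binaries and are considerably richer.

```python
# Hypothetical sketch: checking a simplified "zero-cost condition" over a
# disassembled sandboxed function. The instruction model below is invented
# for illustration only.
CALLEE_SAVED = {"rbx", "rbp", "r12", "r13", "r14", "r15"}

def respects_callee_saved(instructions):
    """Flag writes to callee-saved registers that are never restored,
    one kind of property a zero-cost transition relies on."""
    clobbered = set()
    for op, dst in instructions:          # toy (opcode, destination) pairs
        if op == "mov" and dst in CALLEE_SAVED:
            clobbered.add(dst)
        if op == "pop" and dst in CALLEE_SAVED:
            clobbered.discard(dst)        # restored before return
    return not clobbered

# A function that restores r12 is fine; one that clobbers it is not.
assert respects_callee_saved([("mov", "rax"), ("pop", "r12")])
assert not respects_callee_saved([("mov", "r12")])
```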


Sensors ◽  
2022 ◽  
Vol 22 (1) ◽  
pp. 338
Author(s):  
Matevž Pustišek ◽  
Min Chen ◽  
Andrej Kos ◽  
Anton Kos

Blockchain ecosystems are rapidly maturing and meeting the needs of business environments (e.g., industry, manufacturing, and robotics). The decentralized approaches in industries enable novel business concepts, such as machine autonomy and servitization of manufacturing environments. Introducing distributed ledger technology principles into the machine sharing and servitization economy faces several challenges, and the integration opens new interesting research questions. Our research focuses on data and event models and secure upgradeable smart contract platforms for machine servitization. Our research indicates that with the proposed approaches, we can efficiently separate on- and off-chain data and assure scalability of the DApp without compromising trust. We demonstrate that the secure upgradeable smart contract platform, which was adapted for machine servitization, supports the business workflow and, at the same time, assures common identification and authorization of all the participants in the system, including people, devices, and legal entities. We present a hybrid decentralized application (DApp) for the servitization of 3D printing. The solution can be used for, or easily adapted to, other manufacturing domains. It comprises a modular, upgradeable smart contract platform and off-chain machine, customer, and web management and monitoring interfaces. We pay special attention to the data and event models during the design, which are fundamental for the hybrid data storage and DApp architecture and the responsiveness of off-chain interfaces. The smart contract platform uses a proxy contract to control access to the smart contracts, and role-based access control in function calls for blockchain users. We deploy and evaluate the DApp in a consortium blockchain network for performance and privacy. All the actors in the solution, including the machines, are identified by their blockchain accounts and are peers. Our solution thus facilitates integration with traditional information-communication systems in terms of hybrid architectures and security standards for smart contract design comparable to those in traditional software engineering.
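
The proxy-plus-role-based-access-control pattern described above can be sketched in ordinary code. The sketch below is a hypothetical, off-chain rendering of the idea (class names, roles, and accounts are made up); the paper's platform implements it in smart contracts on a consortium chain.

```python
# Invented illustration of a proxy that gates function calls by caller role
# and supports upgrading the implementation behind a stable entry point.
class AccessProxy:
    def __init__(self, implementation, roles):
        self._impl = implementation        # upgradeable: can be swapped later
        self._roles = roles                # account -> set of role names

    def upgrade(self, caller, new_impl):
        if "admin" not in self._roles.get(caller, set()):
            raise PermissionError("only admin accounts may upgrade")
        self._impl = new_impl

    def call(self, caller, method, *args):
        required = getattr(self._impl, method).required_role
        if required not in self._roles.get(caller, set()):
            raise PermissionError(f"{caller} lacks role {required!r}")
        return getattr(self._impl, method)(*args)

def requires(role):
    def mark(fn):
        fn.required_role = role
        return fn
    return mark

class PrintingService:
    @requires("machine")
    def report_job_done(self, job_id):
        return f"job {job_id} recorded"

proxy = AccessProxy(PrintingService(),
                    {"0xPRINTER": {"machine"}, "0xOWNER": {"admin"}})
print(proxy.call("0xPRINTER", "report_job_done", 42))  # job 42 recorded
```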


2021 ◽  
Vol 28 (4) ◽  
pp. 414-433
Author(s):  
Hans De Nivelle

We present a tableaux procedure that checks logical relations between recursively defined subtypes of recursively defined types and apply this procedure to the problem of resolving ambiguous names in a programming language. This work is part of a project to design a new programming language suitable for efficient implementation of logic. Logical formulas are tree-like structures with many constructors having different arities and argument types. Algorithms that use these structures must perform case analysis on the constructors, and access subtrees whose type and existence depend on the constructor used. In many programming languages, case analysis is handled by matching, but we want to take a different approach, based on recursively defined subtypes. Instead of matching a tree against different constructors, we will classify it by using a set of disjoint subtypes. Subtypes are more general than structural forms based on constructors; we expect that they can be implemented more efficiently, and that they can additionally be used in static type checking. This makes it possible to use recursively defined subtypes as preconditions or postconditions of functions. We define the types and the subtypes (which we will call adjectives), define their semantics, and give a tableaux-based inclusion checker for adjectives. We show how to use this inclusion checker for resolving ambiguous field references in declarations of adjectives. The same procedure can be used for resolving ambiguous function calls.
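
To make the inclusion check concrete, here is a miniature, hypothetical version of it in Python: adjectives are modeled as maps from allowed constructors to the adjectives their arguments must satisfy, and recursion in the definitions is handled by assuming the goal while exploring it, in the spirit of a tableaux procedure. The encoding and names are invented for illustration.

```python
# adjective -> {allowed constructor: [adjectives required of each argument]}
ADJ = {
    "tree":     {"leaf": [], "node": ["tree", "tree"]},
    "empty":    {"leaf": []},
    "nonempty": {"node": ["tree", "tree"]},
}

def included(a, b, assumed=frozenset()):
    """Check a <= b: every tree classified by a is also classified by b."""
    if (a, b) in assumed:                # coinductive hypothesis for recursion
        return True
    assumed = assumed | {(a, b)}
    for ctor, arg_adjs in ADJ[a].items():
        if ctor not in ADJ[b]:
            return False                 # a allows a constructor b forbids
        for sub_a, sub_b in zip(arg_adjs, ADJ[b][ctor]):
            if not included(sub_a, sub_b, assumed):
                return False
    return True

assert included("empty", "tree")
assert included("nonempty", "tree")
assert not included("tree", "empty")
```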


Author(s):  
Yang Gao ◽  
Xia Yang ◽  
Wensheng Guo ◽  
Xiutai Lu

The MILS partition scheduling module ensures complete isolation of data between different domains by enforcing security strategies. Although small in size, it involves complicated data structures and algorithms that make monolithic verification of the scheduling module difficult using traditional verification logic (e.g., separation logic). In this paper, we simplify the verification task by dividing data representation and data operation into different layers and then linking them together by composing a series of abstraction layers. The layered method also supports function calls from higher implementation layers into lower abstraction layers, allowing us to ignore implementation details in the lower implementation layers. Using this methodology, we have verified a realistic MILS partition scheduling module that can schedule operating systems (Ubuntu 14.04, VxWorks 6.8, and RTEMS 11.0) located in different domains. The entire verification has been mechanized in the Coq Proof Assistant.
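
The shape of the layered method can be suggested with a small, non-Coq analogue: a concrete layer (an array of flags), an abstract layer (a set), and an abstraction relation linking them that each operation must preserve. Everything below is an invented toy; the paper carries out such layer linking mechanically in Coq.

```python
# Toy analogue of linking a data-representation layer to an abstraction layer.
MAX_PARTITIONS = 8

class ConcreteSched:                       # "implementation layer"
    def __init__(self):
        self.slots = [0] * MAX_PARTITIONS  # 1 = partition is ready

    def set_ready(self, pid):
        self.slots[pid] = 1

class AbstractSched:                       # "abstraction layer"
    def __init__(self):
        self.ready = set()

    def set_ready(self, pid):
        self.ready.add(pid)

def abstraction_holds(conc, abst):
    """The relation linking layers: slot i is set iff i is in the set."""
    return all((conc.slots[i] == 1) == (i in abst.ready)
               for i in range(MAX_PARTITIONS))

c, a = ConcreteSched(), AbstractSched()
c.set_ready(3); a.set_ready(3)
assert abstraction_holds(c, a)   # the operation preserves the relation
```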


2021 ◽  
Vol 20 (5s) ◽  
pp. 1-25
Author(s):  
Timothy Bourke ◽  
Paul Jeanmaire ◽  
Basile Pesin ◽  
Marc Pouzet

Dataflow languages allow the specification of reactive systems by mutually recursive stream equations, functions, and boolean activation conditions called clocks. Lustre and Scade are dataflow languages for programming embedded systems. Dataflow programs are compiled by a succession of passes. This article focuses on the normalization pass, which rewrites programs into the simpler form required for code generation. Vélus is a compiler from a normalized form of Lustre to CompCert’s Clight language. Its specification in the Coq interactive theorem prover includes an end-to-end correctness proof that the values prescribed by the dataflow semantics of source programs are produced by executions of generated assembly code. We describe how to extend Vélus with a normalization pass and how to allow subsampled node inputs and outputs. We propose semantic definitions for the unrestricted language, divide normalization into three steps to facilitate proofs, adapt the clock type system to handle richer node definitions, and extend the end-to-end correctness theorem to incorporate the new features. The proofs require reasoning about the relation between static clock annotations and the presence and absence of values in the dynamic semantics. The generalization of node inputs requires adding a compiler pass to ensure the initialization of variables passed in function calls.
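
A toy sketch may help convey what a normalization pass does: nested expressions in an equation are pulled out into fresh equations, so that each equation has the simple shape code generation expects. The AST encoding and naming scheme below are invented; Vélus performs this on full Lustre, with proofs, in Coq.

```python
# Toy normalization: flatten nested expressions into simple equations.
import itertools

fresh = (f"t{i}" for i in itertools.count())

def normalize(expr, eqs):
    """Flatten a nested expression; append generated equations to eqs."""
    op, *args = expr
    if op == "var":
        return args[0]
    simple_args = [normalize(a, eqs) for a in args]
    name = next(fresh)
    eqs.append((name, op, simple_args))
    return name

# y = (x + 1) * pre(z) becomes a chain of simple equations
eqs = []
root = normalize(("mul", ("add", ("var", "x"), ("var", "1")),
                         ("pre", ("var", "z"))), eqs)
eqs.append(("y", "copy", [root]))
for e in eqs:
    print(e)   # ('t0', 'add', ['x', '1']), ('t1', 'pre', ['z']), ...
```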


Processes ◽  
2021 ◽  
Vol 9 (11) ◽  
pp. 1901
Author(s):  
Ji-Chang Son ◽  
Kyung-Pyo Yi ◽  
Dong-Kuk Lim

In this paper, an internal division point genetic algorithm (IDP-GA) is proposed to lessen the computational burden of multi-variable, multi-objective optimization problems that rely on finite element analysis, such as the optimal design of electric bicycles. The IDP-GA can consider various objectives with a normalized weighted-sum method and can reduce the number of function calls with a novel crossover strategy and a vector-based pattern search method. The superiority of the proposed algorithm was verified by comparing its performance with a conventional optimization method on two mathematical test functions. Finally, the applicability of the IDP-GA to practical electric machine design was verified by successfully deriving an improved design of an electric bicycle propulsion motor.
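
Reading between the lines of the name, an internal-division-point crossover presumably places offspring at internal division points of the segment between two parent design vectors; the sketch below shows that idea only. The weighting scheme is a guess, and the paper's actual operator and pattern search differ in their details.

```python
# Hypothetical internal-division-point crossover for real-valued designs.
import random

def idp_crossover(p1, p2, ratio=None):
    """Return a child on the segment between parent vectors p1 and p2."""
    m = ratio if ratio is not None else random.random()
    return [m * a + (1 - m) * b for a, b in zip(p1, p2)]

parent_a = [1.0, 4.0]        # e.g., two motor design variables
parent_b = [3.0, 2.0]
print(idp_crossover(parent_a, parent_b, ratio=0.25))  # [2.5, 2.5]
```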


2021 ◽  
Vol 5 (OOPSLA) ◽  
pp. 1-20
Author(s):  
Aviral Goel ◽  
Jan Ječmen ◽  
Sebastián Krynski ◽  
Olivier Flückiger ◽  
Jan Vitek

Function calls in the R language do not evaluate their arguments; these are passed to the callee as suspended computations and evaluated only if needed. After 25 years of experience with the language, there are very few cases where programmers leverage delayed evaluation intentionally, and laziness comes at a price in performance and complexity. This paper explores how to evolve the semantics of a lazy language towards strictness-by-default and laziness-on-demand. To provide a migration path, it is necessary to provide tooling for developers to migrate libraries without introducing errors. This paper reports on a dynamic analysis that infers strictness signatures for functions to capture both intentional and accidental laziness. Over 99% of the inferred signatures were correct when tested against clients of the libraries.
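
The dynamic analysis can be mimicked in a strict language by wrapping each argument in a thunk that records whether it was ever forced; running the function then yields a strictness signature. The sketch below is a hypothetical Python re-creation of the idea, with R's promises played by a hand-rolled Thunk class.

```python
# Toy strictness inference: record which arguments a call actually forces.
class Thunk:
    def __init__(self, compute):
        self.compute, self.forced = compute, False

    def force(self):
        self.forced = True
        return self.compute()

def strictness_signature(fn, *arg_computes):
    thunks = [Thunk(c) for c in arg_computes]
    fn(*thunks)
    return ["strict" if t.forced else "lazy" for t in thunks]

def example(a, b):
    return a.force() + 1     # uses a, never touches b

print(strictness_signature(example, lambda: 41, lambda: 1 / 0))
# ['strict', 'lazy'] -- b's division by zero never runs
```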


2021 ◽  
Vol 5 (OOPSLA) ◽  
pp. 1-30
Author(s):  
Yann Herklotz ◽  
James D. Pollard ◽  
Nadesh Ramanathan ◽  
John Wickerson

High-level synthesis (HLS), which refers to the automatic compilation of software into hardware, is rapidly gaining popularity. In a world increasingly reliant on application-specific hardware accelerators, HLS promises hardware designs of comparable performance and energy efficiency to those coded by hand in a hardware description language such as Verilog, while maintaining the convenience and the rich ecosystem of software development. However, current HLS tools cannot always guarantee that the hardware designs they produce are equivalent to the software they were given, thus undermining any reasoning conducted at the software level. Furthermore, there is mounting evidence that existing HLS tools are quite unreliable, sometimes generating wrong hardware or crashing when given valid inputs. To address this problem, we present the first HLS tool that is mechanically verified to preserve the behaviour of its input software. Our tool, called Vericert, extends the CompCert verified C compiler with a new hardware-oriented intermediate language and a Verilog back end, and has been proven correct in Coq. Vericert supports most C constructs, including all integer operations, function calls, local arrays, structs, unions, and general control-flow statements. An evaluation on the PolyBench/C benchmark suite indicates that Vericert generates hardware that is around an order of magnitude slower (only around 2× slower in the absence of division) and about the same size as hardware generated by an existing, optimising (but unverified) HLS tool.
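
The core HLS idea, turning each statement of a straight-line program into a state of a finite-state machine in the generated netlist, can be suggested with a drastically simplified sketch. The IR and the Verilog-like output below are invented for illustration; Vericert handles most of C, inside Coq, with a correctness proof.

```python
# Toy HLS: one FSM state per statement of a tiny three-address program.
program = [("x", "a + b"), ("y", "x * 2")]

def to_verilog_fsm(stmts):
    cases = "\n".join(
        f"      {i}: begin {dst} <= {rhs}; state <= {i + 1}; end"
        for i, (dst, rhs) in enumerate(stmts))
    return (
        "always @(posedge clk) begin\n"
        "    case (state)\n"
        f"{cases}\n"
        "    endcase\n"
        "end")

print(to_verilog_fsm(program))
```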


2021 ◽  
Author(s):  
Robert Gove

Recently, the number of observed malware samples has rapidly increased, expanding the workload for malware analysts. Most of these samples are not truly unique, but are related through shared attributes. Identifying these attributes can enable analysts to reuse analysis and reduce their workload. Visualizing malware attributes as sets could enable analysts to better understand the similarities and differences between malware. However, existing set visualizations have difficulty displaying hundreds of sets with thousands of elements, and are not designed to compare different types of elements between sets, such as the imported DLLs and callback domains across malware samples. Such analysis might help analysts, for example, to understand whether a group of malware samples is behaviorally different or merely changing where it sends data. To support comparisons between malware samples’ attributes we developed the Similarity Evidence Explorer for Malware (SEEM), a scalable visualization tool for simultaneously comparing a large corpus of malware across multiple sets of attributes (such as the sets of printable strings and function calls). SEEM’s novel design breaks down malware attributes into sets of meaningful categories to compare across malware samples, and further incorporates set comparison overviews and dynamic filtering to allow SEEM to scale to hundreds of malware samples while still allowing analysts to compare thousands of attributes between samples. We demonstrate how to use SEEM by analyzing a malware sample from the Mandiant APT1 New York Times intrusion dataset. Furthermore, we describe a user study with five cyber security researchers who used SEEM to rapidly and successfully gain insight into malware after only 15 minutes of training.
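
The kind of per-category set comparison SEEM visualizes can be sketched in a few lines: similarity is computed separately for each attribute category, so behavioral changes can be told apart from mere infrastructure churn. The sample data and category names below are made up.

```python
# Per-category Jaccard similarity between two malware samples' attribute sets.
def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 1.0

sample1 = {"dlls": {"kernel32", "ws2_32"}, "domains": {"evil.example"}}
sample2 = {"dlls": {"kernel32", "ws2_32"}, "domains": {"new.example"}}

for category in sample1:
    print(category, jaccard(sample1[category], sample2[category]))
# dlls 1.0 -> same behavior; domains 0.0 -> new callback infrastructure
```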


PLoS Genetics ◽  
2021 ◽  
Vol 17 (9) ◽  
pp. e1009774
Author(s):  
Payel Ganguly ◽  
Landiso Madonsela ◽  
Jesse T. Chao ◽  
Christopher J. R. Loewen ◽  
Timothy P. O’Connor ◽  
...  

Gene variant discovery is becoming routine, but it remains difficult to usefully interpret the functional consequence or disease relevance of most variants. To fill this interpretation gap, experimental assays of variant function are becoming commonplace. Yet, it remains challenging to make these assays reproducible, scalable to high numbers of variants, and capable of assessing defined gene-disease mechanisms for clinical interpretation aligned to the ClinGen Sequence Variant Interpretation (SVI) Working Group guidelines for ‘well-established assays’. Drosophila melanogaster offers great potential as an assay platform, but was untested for high numbers of human variants adherent to these guidelines. Here, we wished to test the utility of Drosophila as a platform for scalable well-established assays. We took a genetic interaction approach to test the function of ~100 human PTEN variants in cancer-relevant suppression of PI3K/AKT signaling in cellular growth and proliferation. We validated the assay using biochemically characterized PTEN mutants as well as 23 known pathogenic and benign PTEN variants, all of which the assay correctly assigned into predicted functional categories. Additionally, function calls for these variants correlated very well with our recently published data from a human cell line. Finally, using these pathogenic and benign variants to calibrate the assay, we could set readout thresholds for clinical interpretation of the pathogenicity of 70 other PTEN variants. Overall, we demonstrate that Drosophila offers a powerful assay platform for clinical variant interpretation that can be used, in conjunction with other well-established assays, to increase confidence in the accurate assessment of variant function and pathogenicity.
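
The calibration step can be sketched schematically: readouts of known benign and pathogenic variants set score thresholds, and other variants are binned by where their readout falls. All numbers and variant names in the sketch below are fabricated; the paper's thresholds come from its actual assay data.

```python
# Schematic threshold calibration from known-benign and known-pathogenic controls.
benign_scores = [0.92, 0.88, 0.95]       # assay readouts of benign controls
pathogenic_scores = [0.10, 0.22, 0.18]   # readouts of pathogenic controls

lo = max(pathogenic_scores)              # at or below: consistent with pathogenic
hi = min(benign_scores)                  # at or above: consistent with benign

def classify(score):
    if score <= lo:
        return "reduced function"
    if score >= hi:
        return "functional"
    return "intermediate"

for variant, score in [("VAR_A", 0.15), ("VAR_B", 0.90), ("VAR_C", 0.55)]:
    print(variant, classify(score))
```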

