target property
Recently Published Documents


TOTAL DOCUMENTS

28
(FIVE YEARS 9)

H-INDEX

5
(FIVE YEARS 0)

2021 ◽  
Vol 43 (4) ◽  
pp. 1-48
Author(s):  
Carmine Abate ◽  
Roberto Blanco ◽  
Ştefan Ciobâcă ◽  
Adrien Durier ◽  
Deepak Garg ◽  
...  

Compiler correctness, in its simplest form, is defined as the inclusion of the set of traces of the compiled program in the set of traces of the original program. This is equivalent to the preservation of all trace properties. Here, traces collect, for instance, the externally observable events of each execution. However, this definition requires the set of traces of the source and target languages to be the same, which is not the case when the languages are far apart or when observations are fine-grained. To overcome this issue, we study a generalized compiler correctness definition, which uses source and target traces drawn from potentially different sets and connected by an arbitrary relation. We set out to understand what guarantees this generalized compiler correctness definition gives us when instantiated with a non-trivial relation on traces. When this trace relation is not equality, it is no longer possible to preserve the trace properties of the source program unchanged. Instead, we provide a generic characterization of the target trace property ensured by correctly compiling a program that satisfies a given source property, and dually, of the source trace property one is required to show to obtain a certain target property for the compiled code. We show that this view on compiler correctness can naturally account for undefined behavior, resource exhaustion, different source and target values, side channels, and various abstraction mismatches. Finally, we show that the same generalization also applies to many definitions of secure compilation, which characterize the protection of a compiled program linked against adversarial code.
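The generalized correctness definition described above can be made concrete on finite sets. The sketch below is our own illustration (not the paper's formalism): given an arbitrary relation between source and target traces, it computes the strongest target property guaranteed by a source property (the existential image of the relation) and, dually, the source obligation one must prove to obtain a given target property (the universal pre-image).

```python
# Toy finite-set illustration of trace-relating compiler correctness.
# rel is a set of (source_trace, target_trace) pairs; properties are sets of traces.

def target_guarantee(rel, pi_s):
    """Target traces related to at least one source trace satisfying pi_s."""
    return {t for (s, t) in rel if s in pi_s}

def source_obligation(rel, pi_t):
    """Source traces whose every related target trace satisfies pi_t."""
    sources = {s for (s, _) in rel}
    return {s for s in sources
            if all(t in pi_t for (s2, t) in rel if s2 == s)}

# One source trace may relate to several target traces, e.g. when a single
# source event is refined into finer-grained target observations.
rel = {("s1", "t1"), ("s1", "t2"), ("s2", "t3")}
print(target_guarantee(rel, {"s1"}))         # the set {'t1', 't2'}
print(source_obligation(rel, {"t1", "t2"}))  # the set {'s1'}
```

When the relation is equality, both maps collapse to the identity, recovering ordinary trace-property preservation.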


2021 ◽  
Author(s):  
Angshuman Deka ◽  
Anand Balu Nellippallil ◽  
John Hall

Abstract Additive manufacturing (AM) can produce complex geometrical shapes and multi-material parts that are not possible with conventional manufacturing processes. The properties of multi-material AM parts are often unknown. For multi-material parts made using Fused Deposition Modeling (FDM), these properties are driven by the filament. Acquiring the properties of the products or the filament requires experiments that can be expensive and time-consuming. Thus, there is a need for simulation-based design tools that can determine the multi-material properties of the filament by exploring the complex process-structure-property (p-s-p) relationship. In this paper, we present a Goal-Oriented Inverse Design (GoID) method to produce feedstock filament for the FDM process with specific property goals. Using this method, the designer connects structure and property in the p-s-p relationship by identifying satisficing material compositions for specific property goals. The filament properties considered are percentage elongation, tensile strength, and Young's modulus. The problem is formulated using a generic decision-based design framework, the Concept Exploration Framework. The solution space is explored for satisficing solutions using the compromise Decision Support Problem (cDSP). The forward information flow is first established to generate the necessary mathematical relationships between composition and the property goals. Next, the target property goals of the filament are set. The cDSP is then used for solution space exploration to identify satisficing material compositions for the target property goals. While the results are interesting, the focus of our work is to demonstrate and refine the goal-oriented inverse design method for the AM domain.
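The goal-oriented flow described above (forward models first, then target goals, then a deviation-minimizing search) can be sketched in miniature. Everything below is hypothetical: the linear forward models, their coefficients, and the property targets are made up for illustration and are not the paper's data; the objective mimics a cDSP-style sum of normalized deviations from goals.

```python
# Sketch of goal-oriented inverse design with assumed linear forward models.
# x is the fraction of component A in a two-component filament blend (rest is B).

MODELS = {
    "elongation_pct": lambda x: 40.0 * x + 5.0,    # hypothetical coefficients
    "tensile_MPa":    lambda x: -20.0 * x + 50.0,
    "youngs_GPa":     lambda x: -1.5 * x + 2.5,
}
GOALS = {"elongation_pct": 25.0, "tensile_MPa": 42.0, "youngs_GPa": 1.8}

def deviation(x):
    # Sum of normalized deviations from the property goals,
    # in the spirit of cDSP deviation variables.
    return sum(abs(MODELS[k](x) - g) / g for k, g in GOALS.items())

# Exhaustive search over the composition space for a satisficing solution.
best = min((i / 1000 for i in range(1001)), key=deviation)
print(f"satisficing fraction of A: {best:.3f}, total deviation {deviation(best):.3f}")
```

A real study would fit the forward models from experiments or simulation and weight the deviation terms by goal priority; the structure of the search is the point here.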


2021 ◽  
Author(s):  
Andrew Falkowski ◽  
Steven Kauwe ◽  
Taylor Sparks

Traditional, data-driven materials discovery involves screening chemical systems with machine learning algorithms and selecting candidates that excel in a target property. The number of screening candidates grows combinatorially as the fractional resolution of compositions and the number of included elements increase, and the computational infeasibility and the probability of overlooking a successful candidate grow likewise. Our approach shifts the optimization focus from model parameters to the fractions of each element in a composition. Using a pretrained network, CrabNet, and a custom loss function governing a vector of element fractions, compositions can be optimized so that a predicted property is maximized or minimized. Single- and multi-property optimization examples are presented that highlight the capabilities and robustness of this approach to inverse design.
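The idea of optimizing element fractions against a fixed, pretrained predictor can be sketched as follows. This is not CrabNet: the "model" here is a made-up fraction-weighted sum of per-element contributions, and the gradient is taken numerically. What the sketch does show is the parameterization trick implied above: fractions are produced by a softmax over free logits, so they stay positive and sum to one throughout the optimization.

```python
import math

# Toy surrogate standing in for a pretrained property predictor.
ELEMENTS = ["Fe", "Co", "Ni"]
CONTRIB = {"Fe": 1.0, "Co": 2.5, "Ni": 1.8}  # hypothetical per-element contributions

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def predict(z):
    # Predicted property: fraction-weighted sum of element contributions.
    return sum(f * CONTRIB[el] for f, el in zip(softmax(z), ELEMENTS))

# Gradient ascent on the logits to maximize the predicted property.
z = [0.0, 0.0, 0.0]
lr, eps = 0.5, 1e-5
for _ in range(200):
    grad = []
    for i in range(len(z)):
        zp = z[:]
        zp[i] += eps
        grad.append((predict(zp) - predict(z)) / eps)
    z = [v + lr * g for v, g in zip(z, grad)]

fractions = softmax(z)
print({el: round(f, 3) for el, f in zip(ELEMENTS, fractions)})
```

With a differentiable network in place of the toy surrogate, the numerical gradient would be replaced by backpropagation through the model into the fraction vector, with the network weights frozen.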


Molecules ◽  
2021 ◽  
Vol 26 (11) ◽  
pp. 3237
Author(s):  
Artem A. Mitrofanov ◽  
Petr I. Matveev ◽  
Kristina V. Yakubova ◽  
Alexandru Korotcov ◽  
Boris Sattarov ◽  
...  

Modern structure–property models are widely used in chemistry; however, in many cases they remain a kind of “black box” with no clear path from molecular structure to target property. Here we present an example of using deep learning not only to build a model but also to determine the key structural fragments of ligands that influence metal complexation. For a series of chemically similar lanthanide ions, we collected data on complex stability, built models predicting stability constants, and decoded the models to obtain the key fragments responsible for complexation efficiency. The results correlate well with experiment, as well as with modern theories of complexation. The main influence on the constants was found to be the mutual location of the binding centers.
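One simple way to "decode" a model into fragment contributions is occlusion: remove a fragment from the input representation and measure the drop in the prediction. The sketch below illustrates that idea only; the fragment names, weights, and linear model are all hypothetical and are not the authors' network or data.

```python
# Occlusion-style attribution over a binary fragment fingerprint
# (all fragments and weights are made up for illustration).

FRAGMENTS = ["phosphine_oxide", "amide", "pyridine_N", "ether_O"]
WEIGHTS = [1.8, 0.9, 1.2, 0.2]  # hypothetical learned weights
BIAS = 3.0

def predict_log_beta(fp):
    """Predicted stability constant (log beta) from a fragment fingerprint."""
    return BIAS + sum(w * x for w, x in zip(WEIGHTS, fp))

ligand = [1, 1, 1, 0]
base = predict_log_beta(ligand)
for i, frag in enumerate(FRAGMENTS):
    if ligand[i]:
        occluded = ligand[:]
        occluded[i] = 0
        print(f"{frag}: contribution {base - predict_log_beta(occluded):+.2f}")
```

For a deep model the same occlusion loop applies unchanged; only `predict_log_beta` would call the trained network instead of a linear formula.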


Author(s):  
Joana Teixeira

The present study investigates the effects of explicit grammar teaching on the acquisition of a core syntactic property (the ungrammaticality of free inversion) and a syntax-discourse property (the unacceptability of locative inversion with informationally heavy verbs) by advanced and upper-intermediate Portuguese learners of English. The study followed a pre-test/post-test design. Its results reveal that, at the upper-intermediate level, explicit teaching had no effect on learners’ performance, regardless of the type of property. At the advanced level, in contrast, the teaching intervention resulted in gains in all cases. However, these gains were maintained beyond the immediate teaching period only when the target property was strictly syntactic. These findings indicate that the effectiveness of instruction depends on learners’ stage of development and on the type of target property. The pedagogical implications of these findings are discussed in detail.


2020 ◽  
Author(s):  
Belinda Xie ◽  
Danielle Navarro ◽  
Brett Hayes

The extent to which we generalize a novel property from a sample of familiar instances to novel instances depends on the sample composition. Previous property induction experiments have only used samples consisting of novel types (unique entities). Because real-world evidence samples often contain redundant tokens (repetitions of the same entity), we studied the effects on property induction of adding types and tokens to an observed sample. In Experiments 1-3, we presented participants with a sample of birds or flowers known to have a novel property and probed whether this property generalized to novel items varying in similarity to the initial sample. Increasing the number of novel types (e.g., new birds with the target property) in a sample produced tightening, promoting property generalization to highly similar stimuli but decreasing generalization to less similar stimuli. On the other hand, increasing the number of tokens (e.g., repeated presentations of the same bird with the target property) had little effect on generalization. Experiment 4 showed that repeated tokens are encoded and can benefit recognition, but are subsequently given little weight when inferring property generalization. We modified an existing Bayesian model of induction (Navarro, Dry & Lee, 2012) to account for both the information added by new types and the discounting of information conveyed by tokens.
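The type/token asymmetry described above can be reproduced by a minimal size-principle model. This sketch is our own illustration, not the authors' full model: hypotheses are intervals on a one-dimensional stimulus line, the likelihood counts only unique types, and generalization to a probe is the posterior probability that the probe falls in the true interval. Repeated tokens collapse to a single type and so leave the answer unchanged, while adding new types tightens generalization.

```python
# Minimal Bayesian size-principle sketch: hypotheses are intervals [a, b]
# on a grid; likelihood is (1/size)^n over n unique types (tokens discounted).

def generalization(sample, probe, lo=0.0, hi=10.0, step=0.5):
    types = set(sample)  # tokens collapse to types
    num = den = 0.0
    a = lo
    while a <= hi:
        b = a
        while b <= hi:
            if b > a and all(a <= x <= b for x in types):
                like = (1.0 / (b - a)) ** len(types)  # size principle
                den += like
                if a <= probe <= b:
                    num += like
            b += step
        a += step
    return num / den

one_type = [3.0]
three_types = [3.0, 3.2, 3.4]
three_tokens = [3.0, 3.0, 3.0]
print(generalization(one_type, 6.0))
print(generalization(three_types, 6.0))   # tighter: less spread to a distant probe
print(generalization(three_tokens, 6.0))  # identical to the one-type case
```

Capturing the recognition benefit of tokens found in Experiment 4 would require an additional memory component; the discounting shown here covers only the generalization side.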


Author(s):  
Carmine Abate ◽  
Roberto Blanco ◽  
Ștefan Ciobâcă ◽  
Adrien Durier ◽  
Deepak Garg ◽  
...  

Abstract Compiler correctness is, in its simplest form, defined as the inclusion of the set of traces of the compiled program in the set of traces of the original program, which is equivalent to the preservation of all trace properties. Here traces collect, for instance, the externally observable events of each execution. This definition requires, however, the set of traces of the source and target languages to be exactly the same, which is not the case when the languages are far apart or when observations are fine-grained. To overcome this issue, we study a generalized compiler correctness definition, which uses source and target traces drawn from potentially different sets and connected by an arbitrary relation. We set out to understand what guarantees this generalized compiler correctness definition gives us when instantiated with a non-trivial relation on traces. When this trace relation is not equality, it is no longer possible to preserve the trace properties of the source program unchanged. Instead, we provide a generic characterization of the target trace property ensured by correctly compiling a program that satisfies a given source property, and dually, of the source trace property one is required to show in order to obtain a certain target property for the compiled code. We show that this view on compiler correctness can naturally account for undefined behavior, resource exhaustion, different source and target values, side channels, and various abstraction mismatches. Finally, we show that the same generalization also applies to many secure compilation definitions, which characterize the protection of a compiled program against linked adversarial code.


Author(s):  
Hong Hu ◽  
Chenxiong Qian ◽  
Carter Yagemann ◽  
Simon Pak Ho Chung ◽  
William R. Harris ◽  
...  
