A High-Level Implementation of a System for Automated Reasoning with Default Rules (System Description)

Author(s):  
Christoph Beierle ◽  
Gabriele Kern-Isberner ◽  
Nicole Koch


2015 ◽  
Vol 16 (2) ◽  
pp. 189-235 ◽  
Author(s):  
DANIELA INCLEZAN ◽  
MICHAEL GELFOND

Abstract: The paper introduces a new modular action language, $\mathcal{ALM}$, and illustrates the methodology of its use. It is based on the approach of Gelfond and Lifschitz (1993, Journal of Logic Programming 17, 2–4, 301–321; 1998, Electronic Transactions on AI 3, 16, 193–210) in which a high-level action language is used as a front end for a logic programming system description. The resulting logic programming representation is used to perform various computational tasks. The methodology based on existing action languages works well for small and even medium-sized systems, but is not meant to deal with larger systems that require structuring of knowledge. $\mathcal{ALM}$ is meant to remedy this problem. Structuring of knowledge in $\mathcal{ALM}$ is supported by the concepts of module (a formal description of a specific piece of knowledge packaged as a unit), module hierarchy, and library, and by the division of a system description of $\mathcal{ALM}$ into two parts: theory and structure. A theory consists of one or more modules with a common theme, possibly organized into a module hierarchy based on a dependency relation. It contains declarations of sorts, attributes, and properties of the domain, together with axioms describing them. Structures are used to describe the domain's objects. These features, together with the means for defining classes of a domain as special cases of previously defined ones, facilitate the stepwise development, testing, and readability of a knowledge base, as well as the creation of knowledge representation libraries.


2019 ◽  
Vol 59 (5) ◽  
pp. 518-526
Author(s):  
Michael Vetter

Finding potential security weaknesses in any complex IT system is an important and often challenging task, best started in the early stages of the development process. We present a method that transforms this task for FPGA designs into a reinforcement learning (RL) problem. This paper introduces a method to generate a Markov Decision Process-based RL model from a formal, high-level system description (formulated in a domain-specific language) of the system under review, together with different, quantified assumptions about the system’s security. Probabilistic transitions and the reward function can be used to model the varying resilience of different elements against attacks and the capabilities of an attacker. This information is then used to determine a plausible data exfiltration strategy. An example with multiple scenarios illustrates the workflow. A discussion of supplementary techniques like hierarchical learning and deep neural networks concludes this paper.
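The MDP formulation described in this abstract can be illustrated with a toy model. All state names, transition probabilities, and rewards below are invented for illustration; in the paper they would be derived from the DSL system description and the quantified security assumptions. A minimal value-iteration sketch that recovers a plausible attack strategy:

```python
# Toy attacker MDP (all states, probabilities and rewards are invented;
# the paper generates these from a formal system model plus assumptions).
STATES = ["outside", "gateway", "fpga_core", "exfiltrated"]
ACTIONS = {
    "outside":     ["attack_gateway"],
    "gateway":     ["attack_core", "retreat"],
    "fpga_core":   ["exfiltrate"],
    "exfiltrated": [],                      # terminal state
}
# P[(state, action)] -> list of (next_state, probability);
# low success probabilities model high resilience of an element.
P = {
    ("outside", "attack_gateway"): [("gateway", 0.4), ("outside", 0.6)],
    ("gateway", "attack_core"):    [("fpga_core", 0.2), ("gateway", 0.8)],
    ("gateway", "retreat"):        [("outside", 1.0)],
    ("fpga_core", "exfiltrate"):   [("exfiltrated", 0.9), ("fpga_core", 0.1)],
}
R = {("fpga_core", "exfiltrate"): 10.0}     # reward for successful exfiltration
GAMMA = 0.95

def value_iteration(theta=1e-6):
    """Compute state values and the greedy policy for the toy MDP."""
    V = {s: 0.0 for s in STATES}
    while True:
        delta = 0.0
        for s in STATES:
            if not ACTIONS[s]:
                continue
            best = max(
                R.get((s, a), 0.0)
                + GAMMA * sum(p * V[s2] for s2, p in P[(s, a)])
                for a in ACTIONS[s]
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < theta:
            break
    # The greedy policy is the "plausible data exfiltration strategy".
    policy = {
        s: max(ACTIONS[s],
               key=lambda a: R.get((s, a), 0.0)
               + GAMMA * sum(p * V[s2] for s2, p in P[(s, a)]))
        for s in STATES if ACTIONS[s]
    }
    return V, policy
```

In this toy instance the computed policy attacks through the gateway rather than retreating, because the discounted value of eventually reaching the exfiltration reward outweighs the low per-step success probabilities.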


Author(s):  
Grant Passmore ◽  
Simon Cruanes ◽  
Denis Ignatovich ◽  
Dave Aitken ◽  
Matt Bray ◽  
...  

1977 ◽  
Vol 6 (65) ◽  
Author(s):  
Bent Bruun Kristensen ◽  
Ole Lehrmann Madsen ◽  
Kristen Nygaard

<p>The report describes ongoing work within the Joint Language Project (JLP). Research workers from Aarhus and Aalborg Universities, Denmark, and the Norwegian Computing Center, Oslo, Norway, participate in the project. The aim of the JLP is to consider new tools in programming through the development of a systems programming language BETA and a high-level programming language GAMMA, both related to the system description language DELTA.</p><p>The present state of the ideas for BETA is presented. This report is also referred to as DELTA Project Working Note No. 3.</p>


1982 ◽  
Vol 11 (150) ◽  
Author(s):  
Kurt Jensen ◽  
Morten Kyng

<p>This paper presents the Epsilon language and defines its formal syntax and semantics. Epsilon is a language for the description of systems which contain concurrent components, some of these being EDP equipment or in other ways representing highly structured information handling. The actions consist of continuous changes described by equations, of communication between the components, and of normal algorithmic actions.</p><p>Epsilon may be used for the description of computer systems together with their environments, e.g. production equipment and human operators. Parts of such a description may serve as the system specification from which computer programs are developed. Epsilon is not itself an implementable language.</p><p>This paper defines the semantics of Epsilon by means of a model based on high-level Petri nets, i.e. a model founded on the notion of concurrency. The model also uses denotational semantics and equation systems.</p>
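The Petri-net foundation mentioned in this abstract can be sketched with a minimal place/transition net. The places, tokens, and transitions below are invented for illustration; Epsilon's actual semantics uses high-level nets, where tokens carry structured data values. The core firing rule, however, is the same multiset arithmetic:

```python
from collections import Counter

# Invented example net: a component consumes a queued message and
# becomes busy, then finishes and returns to idle.
marking = Counter({"component_idle": 1, "message_queued": 1})

# transition name -> (tokens consumed, tokens produced)
TRANSITIONS = {
    "receive": (Counter({"component_idle": 1, "message_queued": 1}),
                Counter({"component_busy": 1})),
    "finish":  (Counter({"component_busy": 1}),
                Counter({"component_idle": 1})),
}

def enabled(m, t):
    """A transition is enabled if every input place holds enough tokens."""
    pre, _ = TRANSITIONS[t]
    return all(m[p] >= n for p, n in pre.items())

def fire(m, t):
    """Fire t: remove its input tokens, add its output tokens."""
    assert enabled(m, t), f"{t} not enabled"
    pre, post = TRANSITIONS[t]
    return (m - pre) + post
```

Concurrency arises naturally in this representation: two transitions whose input tokens are disjoint in the current marking may fire independently, which is why the paper grounds Epsilon's concurrent components in this model.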


Author(s):  
David P. Bazett-Jones ◽  
Mark L. Brown

A multisubunit RNA polymerase enzyme is ultimately responsible for transcription initiation and elongation of RNA, but recognition of the proper start site by the enzyme is regulated by general, temporal and gene-specific trans-factors interacting at promoter and enhancer DNA sequences. To understand the molecular mechanisms which precisely regulate the transcription initiation event, it is crucial to elucidate the structure of the transcription factor/DNA complexes involved. Electron spectroscopic imaging (ESI) provides the opportunity to visualize individual DNA molecules. Enhancement of DNA contrast with ESI is accomplished by imaging with electrons that have interacted with inner shell electrons of phosphorus in the DNA backbone. Phosphorus detection at this intermediately high level of resolution (≈1 nm) permits selective imaging of the DNA, to determine whether the protein factors compact, bend or wrap the DNA. Simultaneously, mass analysis and phosphorus content can be measured quantitatively, using adjacent DNA or tobacco mosaic virus (TMV) as mass and phosphorus standards. These two parameters provide stoichiometric information relating the ratios of protein:DNA content.
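The stoichiometric reasoning at the end of this abstract — splitting a measured complex mass into DNA and protein fractions via the phosphorus signal — can be sketched with a small arithmetic example. The input numbers below are invented, and the calculation assumes only two textbook facts: an average mass of roughly 650 Da per DNA base pair, and two phosphorus atoms per base pair (one per strand). The actual method calibrates both mass and phosphorus against adjacent DNA or TMV standards.

```python
# Illustrative arithmetic only: the measured values are invented; in the
# method, mass and phosphorus are calibrated against DNA or TMV standards.
AVG_BP_MASS_DA = 650.0   # approximate average mass of one DNA base pair (Da)
P_ATOMS_PER_BP = 2       # one phosphate per strand per base pair

def protein_dna_ratio(total_mass_da, phosphorus_atoms):
    """Split a measured complex mass into DNA and protein parts,
    using the phosphorus count as a DNA-specific measure."""
    base_pairs = phosphorus_atoms / P_ATOMS_PER_BP
    dna_mass = base_pairs * AVG_BP_MASS_DA
    protein_mass = total_mass_da - dna_mass
    return protein_mass / dna_mass
```

For example, a hypothetical complex of 200 kDa total mass containing 200 phosphorus atoms would correspond to 100 bp of DNA (65 kDa), leaving 135 kDa of protein.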


Author(s):  
J. S. Wall

The forte of the scanning transmission electron microscope (STEM) is high-resolution imaging with high contrast on thin specimens, as demonstrated by visualization of single heavy atoms. Of equal importance for biology is the efficient utilization of all available signals, permitting low-dose imaging of unstained single molecules such as DNA. Our work at Brookhaven has concentrated on: 1) design and construction of instruments optimized for a narrow range of biological applications and 2) use of such instruments in a very active user/collaborator program. Therefore our program is highly interactive, with a strong emphasis on producing results which are interpretable with a high level of confidence. The major challenge we face at the moment is specimen preparation. The resolution of the STEM is better than 2.5 Å, but measurements of resolution vs. dose level off at a resolution of 20 Å at a dose of 10 el/Å² on a well-behaved biological specimen such as TMV (tobacco mosaic virus). To track down this problem we are examining all aspects of specimen preparation: purification of biological material, deposition on the thin film substrate, washing, fast freezing and freeze drying. As we attempt to improve our equipment and technique, we use image analysis of TMV internal controls included in all STEM samples as a monitor sensitive enough to detect even a few percent improvement. For delicate specimens, carbon films can be very harsh, leading to disruption of the sample. Therefore we are developing conducting polymer films as alternative substrates, as described elsewhere in these Proceedings. For specimen preparation studies, we have identified (from our user/collaborator program) a variety of “canary” specimens, each uniquely sensitive to one particular aspect of sample preparation, so we can attempt to separate the variables involved.


2020 ◽  
Vol 29 (4) ◽  
pp. 738-761
Author(s):  
Tess K. Koerner ◽  
Melissa A. Papesh ◽  
Frederick J. Gallun

Purpose: A questionnaire survey was conducted to collect information from clinical audiologists about rehabilitation options for adult patients who report significant auditory difficulties despite having normal or near-normal hearing sensitivity. This work aimed to provide more information about what audiologists are currently doing in the clinic to manage auditory difficulties in this patient population and their views on the efficacy of recommended rehabilitation methods. Method: A questionnaire survey containing multiple-choice and open-ended questions was developed and disseminated online. Invitations to participate were delivered via e-mail listservs and through business cards provided at annual audiology conferences. All responses were anonymous at the time of data collection. Results: Responses were collected from 209 participants. The majority of participants reported seeing at least one normal-hearing patient per month who reported significant communication difficulties. However, few respondents indicated that their location had specific protocols for the treatment of these patients. Counseling was reported as the most frequent rehabilitation method, but results revealed that audiologists across various work settings are also successfully starting to fit patients with mild-gain hearing aids. Responses indicated that patient compliance with computer-based auditory training methods was regarded as low, with patients generally preferring device-based rehabilitation options. Conclusions: Results from this questionnaire survey strongly suggest that audiologists frequently see normal-hearing patients who report auditory difficulties, but that few clinicians are equipped with established protocols for diagnosis and management. While many feel that mild-gain hearing aids provide considerable benefit for these patients, very little research has been conducted to date to support the use of hearing aids or other rehabilitation options for this unique patient population. This study reveals the critical need for additional research to establish evidence-based practice guidelines that will empower clinicians to provide a high level of clinical care and effective rehabilitation strategies to these patients.

