Making legacy Fortran code type safe through automated program transformation

Author(s):  
Wim Vanderbauwhede

Abstract: Fortran is still widely used in scientific computing, and a very large corpus of legacy as well as new code is written in FORTRAN 77. In general this code is not type safe, so that incorrect programs can compile without errors. In this paper, we present a formal approach to ensure type safety of legacy Fortran code through automated program transformation. The objective of this work is to reduce programming errors by guaranteeing type safety. We present the first rigorous analysis of the type safety of FORTRAN 77 and the novel program transformation and type checking algorithms required to convert FORTRAN 77 subroutines and functions into pure, side-effect-free subroutines and functions in Fortran 90. We have implemented these algorithms in a source-to-source compiler which type checks and automatically transforms the legacy code. We show that the resulting code is type safe and that the pure, side-effect-free and referentially transparent subroutines can readily be offloaded to accelerators.
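To make the problem concrete, here is a minimal sketch (in Python, not the paper's actual tool) of the kind of cross-unit interface check such a type checker must perform. A FORTRAN 77 compiler does not verify argument types across compilation units, so a call passing an integer where a real array is expected compiles silently; all names and type labels below are invented for illustration.

```python
# Hypothetical sketch of a call-site consistency check for FORTRAN 77-style
# code, where the language itself enforces no cross-unit argument checking.

SUBROUTINE_SIGNATURES = {
    # subroutine name -> declared dummy-argument types (invented labels)
    "saxpy": ("real", "real*array", "real*array"),
}

def check_call(name, actual_types):
    """Return a list of mismatches between a call site and the declared signature."""
    expected = SUBROUTINE_SIGNATURES[name]
    errors = []
    if len(actual_types) != len(expected):
        errors.append(f"{name}: arity {len(actual_types)} != {len(expected)}")
    for i, (act, exp) in enumerate(zip(actual_types, expected), start=1):
        if act != exp:
            errors.append(f"{name}: argument {i} is {act}, expected {exp}")
    return errors

# A FORTRAN 77 compiler would accept passing an integer where a real array
# is expected; the checker flags the mismatch at argument 2.
print(check_call("saxpy", ("real", "integer", "real*array")))
```

A real source-to-source compiler does far more (inferring implicit types, rewriting declarations, enforcing intent), but the essential step is exactly this: checking every call site against a declared interface the language never enforced.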

1993, Vol 61 (4), pp. 383-383
Author(s):
Michael Metcalf
John Reid
Larry Nyhoff
Sanford Leestma
Lawrence Ruby

2021
Author(s):
Susan Adele Welsh

<p>Kappa opioid peptide receptors (KOPrs) are a class of opioid receptors with analgesic and anti-addictive properties. Non-addictive analgesics would be beneficial because morphine, one of the most commonly prescribed opioids for chronic pain, activates the brain reward system and can lead to addiction. Although medical research is progressing rapidly, there is still no treatment for psychostimulant abuse. KOPr agonists show promise in this regard but display undesirable side effects and could negatively affect memory. Salvinorin A (Sal A), a structurally unusual KOPr agonist, has a reduced side-effect profile compared to more traditional KOPr agonists such as U50,488. The effect of Sal A and U50,488 on memory is controversial, as both have been shown to induce memory impairments and also to improve them. Sal A also has a poor pharmacokinetic profile with a short duration of action. Structural analogues of Sal A have improved pharmacokinetic and side-effect profiles compared to Sal A yet retain the analgesic and anti-addiction properties. This thesis investigates whether the Sal A analogues Ethynyl Sal A (Ethy Sal A), Mesyl Salvinorin B (Mesyl Sal B), and Bromo Salvinorin A (Bromo Sal A) produce a memory impairment. Male Sprague-Dawley rats were evaluated in the novel object recognition (NOR) task to determine whether the novel Sal A analogues impair long-term recognition memory. The degree of novelty was also investigated on a cellular basis by quantifying c-Fos immunoreactive neurons within the perirhinal cortex, an area of the brain shown to respond to novelty. Acute administration of Sal A (0.3 and 1 mg/kg) and the novel analogues Ethy Sal A (0.3 and 1 mg/kg), Mesyl Sal B (0.3 and 1 mg/kg), and Bromo Sal A (1 mg/kg) showed no significant differences compared to vehicle when tested in the NOR task. The prototypical KOPr agonist U50,488 (10 mg/kg) produced a significant decrease in recognition index (RI) compared to vehicle when tested in the same task as the novel analogues. Correlating the recognition indices from U50,488 in the NOR with c-Fos counts in the perirhinal cortex showed a strong positive correlation, with an increase in RI relating to an increase in c-Fos activation. U50,488 (10 mg/kg) showed a non-significant trend compared to vehicle in the number of c-Fos immunoreactive cells within the perirhinal cortex. Neither Sal A nor the novel analogues affected NOR, suggesting no impairment of long-term recognition memory. The lack of this side effect, among others, demonstrates that the development of potent KOPr agonists with reduced side-effect profiles is feasible. These novel analogues show improvement over the traditional KOPr agonists.</p>


1990, Vol 19 (305)
Author(s):
Jens Palsberg
Michael I. Schwartzbach

We introduce substitution polymorphism as a new basis for typed object-oriented languages. While avoiding subtypes and polymorphic types, this mechanism enables optimal static type-checking of generic classes and polymorphic functions. The novel idea is to view a class as having a family of implicit subclasses, each of which is obtained through a substitution of types. This allows instantiation of a generic class to be merely subclassing and resolves the problems in the <em> Eiffel</em> type system reported by Cook. All subclasses, including the implicit ones, can reuse the code implementing their superclass.
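The idea of a generic class as a family of implicit subclasses can be sketched in a few lines. The following Python sketch is purely illustrative (the paper's setting is static type checking; Python can only check dynamically, and all names here are invented): a type substitution produces a cached "implicit subclass" that reuses the generic class's code, so instantiating the generic is merely subclassing.

```python
# Illustrative sketch of substitution polymorphism: a generic class is viewed
# as a family of implicit subclasses, each obtained by substituting a concrete
# type for a type placeholder. All code in the generic class is reused as-is.

_cache = {}

class Stack:
    """Generic class with a type placeholder ELEM."""
    ELEM = object  # placeholder, replaced in each implicit subclass

    def __init__(self):
        self._items = []

    def push(self, x):
        # Static in the paper's type system; simulated dynamically here.
        if not isinstance(x, self.ELEM):
            raise TypeError(f"expected {self.ELEM.__name__}")
        self._items.append(x)

    def pop(self):
        return self._items.pop()

def substitute(generic, elem_type):
    """Obtain the implicit subclass generic[ELEM := elem_type]."""
    key = (generic, elem_type)
    if key not in _cache:
        _cache[key] = type(f"{generic.__name__}_{elem_type.__name__}",
                           (generic,), {"ELEM": elem_type})
    return _cache[key]

IntStack = substitute(Stack, int)   # instantiation is merely subclassing
s = IntStack()
s.push(3)                            # s.push("x") would be a type error
```

The point of the mechanism in the paper is that this family of subclasses is implicit and checked statically, avoiding both subtyping and polymorphic types; the sketch only shows the structural idea.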


2021, pp. 188-230
Author(s):
Virginia Hill
Alexandru Mardale

Chapter 7 adopts a cartographic representation of nominal phrases that provides the basis on which a formal analysis is developed for Romanian DOM. The gist is that the triggers for DOM operate within the nominal domain in Romanian (as in Sardinian and unlike Spanish), which accounts for the insensitivity of Romanian verbs to marked versus unmarked direct objects in the derivation of verb argument structures. Any additional processing of the DOM-ed DP on the verb spine responds to side-effect requirements for feature checking (e.g., the secondary licensing in Irimia 2019). This contrasts with Spanish DOM, where the main trigger for DOM is merged on the verb spine and acts as a probe for a certain type of DP (i.e., those with an extra layer at the left periphery).


Author(s):  
Murray Gell-Mann
Seth Lloyd

It would take a great many different concepts—or quantities—to capture all of our notions of what is meant by complexity (or its opposite, simplicity). However, the notion that corresponds most closely to what we mean by complexity in ordinary conversation and in most scientific discourse is "effective complexity." In nontechnical language, we can define the effective complexity (EC) of an entity as the length of a highly compressed description of its regularities [6, 7, 8]. For a more technical definition, we need a formal approach both to the notion of minimum description length and to the distinction between regularities and those features that are treated as random or incidental. We can illustrate with a number of examples how EC corresponds to our intuitive notion of complexity. We may call a novel complex if it has a great many different characters, scenes, subplots, and so on, so that the regularities of the novel require a long description. The United States tax code is complex, since it is very long and each rule in it is a regularity. Neckties may be simple, like those with regimental stripes, or complex, like some of those designed by Jerry Garcia. From time to time, an author presents a supposedly new measure of complexity (such as the "self-dissimilarity" of Wolpert and Macready [17]) without recognizing that when carefully defined it is just a special case of effective complexity. Like some other concepts sometimes identified with complexity, the EC of an entity is context-dependent, even subjective to a considerable extent. It depends on the coarse graining (level of detail) at which the entity is described, the language used to describe it, the previous knowledge and understanding that are assumed, and, of course, the nature of the distinction made between regularity and randomness. Like other proposed "measures of complexity," EC is most useful when comparing two entities, at least one of which has a large value of the quantity in question.
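A crude computational illustration of one ingredient of this definition, the "length of a highly compressed description", can be given with an ordinary compressor. This is only a proxy and deliberately coarser than effective complexity: plain compression measures the description length of the whole entity, regularities and incidental detail together, whereas EC concerns the regularities alone. Still, it shows why a highly regular string has a short description.

```python
import random
import zlib

# Crude proxy: compressed length as description length. This conflates
# regularity with random detail (unlike effective complexity proper), but
# it illustrates that a string generated by one short rule compresses far
# more than a string of incidental coin flips over the same alphabet.

def compressed_len(s: str) -> int:
    """Length in bytes of a maximally compressed encoding of s."""
    return len(zlib.compress(s.encode(), level=9))

regular = "ab" * 500                    # one short rule generates all 1000 chars

random.seed(0)                          # fixed seed so the example is reproducible
incidental = "".join(random.choice("ab") for _ in range(1000))

print(compressed_len(regular), "<", compressed_len(incidental))
```

The gap between the two lengths is the kind of distinction, between a short rule and incompressible incident, that a formal definition of EC has to make precise via minimum description length.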


2012, Vol 22 (1), pp. 31-105
Author(s):
Gavin M. Bierman
Andrew D. Gordon
Cătălin Hriţcu
David Langworthy

Abstract: We study a first-order functional language with the novel combination of the ideas of refinement type (the subset of a type to satisfy a Boolean expression) and type-test (a Boolean expression testing whether a value belongs to a type). Our core calculus can express a rich variety of typing idioms; for example, intersection, union, negation, singleton, nullable, variant, and algebraic types are all derivable. We formulate a semantics in which expressions denote terms, and types are interpreted as first-order logic formulas. Subtyping is defined as valid implication between the semantics of types. The formulas are interpreted in a specific model that we axiomatize using standard first-order theories. On this basis, we present a novel type-checking algorithm able to eliminate many dynamic tests and to detect many errors statically. The key idea is to rely on a Satisfiability Modulo Theories (SMT) solver to compute subtyping efficiently. Moreover, using an SMT solver allows us to show the uniqueness of normal forms for non-deterministic expressions, provide precise counterexamples when type-checking fails, detect empty types, and compute instances of types statically and at run-time.
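The central definition, subtyping as valid implication between type semantics, can be sketched in a toy form. The sketch below substitutes exhaustive checking over a small finite integer domain for the paper's SMT solver (which decides the implication for real, over unbounded theories); refinement types are modeled simply as predicates, and all type names are invented for illustration.

```python
# Toy illustration: T1 <: T2 iff the implication [[T1]](x) => [[T2]](x) is
# valid. Here validity is checked exhaustively over a finite domain rather
# than discharged by an SMT solver as in the paper.

DOMAIN = range(-100, 101)

def refine(pred):
    """Refinement type {x : int | pred(x)}, modeled as its predicate."""
    return pred

pos      = refine(lambda x: x > 0)
nonneg   = refine(lambda x: x >= 0)
even_pos = refine(lambda x: x > 0 and x % 2 == 0)

def subtype(t1, t2):
    """T1 <: T2 iff for all x in DOMAIN, t1(x) implies t2(x)."""
    return all((not t1(x)) or t2(x) for x in DOMAIN)

assert subtype(pos, nonneg)        # {x | x > 0} <: {x | x >= 0}
assert not subtype(nonneg, pos)    # x = 0 is a counterexample
assert subtype(even_pos, pos)      # intersection-like refinements nest
```

An SMT solver replaces the finite enumeration with a decision procedure, which is what makes the approach scale to unbounded domains and lets failed checks come back with precise counterexamples.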


Viruses, 2021, Vol 13 (10), pp. 2037
Author(s):
Ayaka Washizaki
Megumi Murata
Yohei Seki
Masayuki Kikumori
Yinpui Tang
...

The presence of latent human immunodeficiency virus (HIV) reservoirs is a major obstacle to a cure. The “shock and kill” therapy is based on the concept that latent reservoirs in HIV carriers on antiretroviral therapy are reactivated by latency-reversing agents (LRAs) and then eliminated through HIV-associated cell death or killing by virus-specific cytotoxic T lymphocytes. Protein kinase C (PKC) activators are considered robust LRAs as they efficiently reactivate HIV in latently infected cells. However, various adverse events hamper intervention trials of PKC activators as LRAs. We found in this study that a novel PKC activator, 10-Methyl-aplog-1 (10MA-1), combined with an inhibitor of bromodomain and extra-terminal domain motifs, JQ1, strongly and synergistically reactivated latent HIV. Notably, higher concentrations of 10MA-1 alone induced the predominant side effect, i.e., global T cell activation as defined by CD25 expression and pro-inflammatory cytokine production in primary CD4+ T lymphocytes; however, JQ1 efficiently suppressed this 10MA-1-induced side effect in a dose-dependent manner. Considering the reasonable accessibility and availability of 10MA-1, since its chemical synthesis requires fewer steps than that of bryostatin 1 or prostratin, our results suggest that the combination of 10MA-1 with JQ1 may be a promising pair of LRAs for the clinical application of the “shock and kill” therapy.


2011
Author(s):
Mariano Méndez

The motivation for this work comes from a Global Climate Model (GCM) software package that was in great need of being updated. The software was implemented by scientists in the '80s as a result of meteorological research. Written in Fortran 77, the program has been used as an input to make climate predictions for the Southern Hemisphere. An execution producing a complete numerical data set takes several days. The software was programmed using a sequential processing paradigm. Nowadays, with multicore processors so widespread, the time an execution takes to produce a complete, useful data set could be drastically reduced using this technology. As a first step towards this reengineering goal we must be able to understand the source code. An essential characteristic of old Fortran code is that, over successive versions, it becomes unreadable and hard to comprehend, and sometimes "ejects" the reader from the source code. We cannot modify, update or improve unreadable source code. Therefore, as a first step towards parallelizing this code we must update it and make it readable and easy to understand. The GCM has a very complex internal structure. The program is divided into about 300 .f (Fortran 77) files, each of which generally implements a single Fortran subroutine. Less than 10% of the files are used for common blocks and constants. Approximately 25% of the lines in the source code are comments. The total number of Fortran source code lines is 58,000. A detailed examination of the source code brings to light that [74]:

1. About 230 routines are called/used at run time. Most of the runtime is spent in routines located at depths 5 to 7 in the dynamic call graph from the main routine.
2. The routine with most of the runtime (the top routine from now on) requires more than 9% of the total program runtime and is called about 315,000 times.
3. The top 10 routines (the 10 routines at the top of the flat profile) require about 50% of the total runtime. Two of them are related to intrinsic Fortran functions.

Our first approach was to use a scripting language and find-and-replace tools to upgrade the source code, but this kind of code manipulation does not guarantee preservation of software behavior. Our goal then became to develop an automated tool that transforms legacy software into something more understandable, comprehensible and readable, applying refactoring as the main technique. At the same time, a catalog of transformations applicable to Fortran code is needed as a guide for programmers through this process.
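The flat-profile analysis behind these observations can be sketched as follows. The routine names and timings below are invented for illustration (a real profile would come from gprof or a similar tool); they are chosen only so the top routine and top-10 shares resemble the figures reported above.

```python
# Hypothetical sketch of flat-profile aggregation: rank routines by self
# time and compute the share of total runtime taken by the top routines.
# All numbers are invented; "other" stands in for the remaining ~220 routines.

flat_profile = {            # routine name -> self time in seconds (invented)
    "radiat": 94.0, "convec": 61.0, "advect": 55.0, "diffus": 48.0,
    "cloudp": 44.0, "surflx": 40.0, "vertmx": 37.0, "gwdrag": 33.0,
    "ozoner": 30.0, "filter": 28.0, "other": 530.0,
}

total = sum(flat_profile.values())
ranked = sorted(((t, name) for name, t in flat_profile.items()
                 if name != "other"), reverse=True)

top_time, top_name = ranked[0]
print(f"top routine {top_name}: {100 * top_time / total:.1f}% of runtime")

top10_share = sum(t for t, _ in ranked[:10]) / total
print(f"top 10 routines: {100 * top10_share:.1f}% of runtime")
```

Ranking like this is what directs the refactoring and parallelization effort: a handful of routines deep in the call graph dominate the runtime, so those are the ones worth transforming first.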

