Flow Analysis of Lambda Expressions

1981 ◽  
Vol 10 (128) ◽  
Author(s):  
Neil D. Jones

We describe a method to analyze the data and control flow during mechanical evaluation of lambda expressions. The method produces a finite approximate description of the set of all states entered by a call-by-value lambda-calculus interpreter; a similar approach can easily be seen to work for call-by-name. A proof is given that the approximation is "safe", i.e., that it includes descriptions of every intermediate lambda expression which occurs in the evaluation.

From a programming-languages point of view, the method extends previously developed interprocedural analysis methods to include both local and global variables, call-by-name or call-by-value parameter transmission, and the use of procedures both as arguments to other procedures and as the results returned by them.
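
As a rough illustration of the kind of finite flow approximation the abstract describes, the sketch below implements a monovariant ("0-CFA-style") analysis for the pure call-by-value lambda calculus: it computes, for each expression and variable, the set of lambda abstractions that may flow there, iterating to a fixed point. The term representation and constraint formulation are our own assumptions for illustration; Jones's paper works with approximate descriptions of interpreter states rather than this later style.

```python
# A minimal sketch of a finite flow approximation for the pure
# call-by-value lambda calculus, in the later "0-CFA" style rather
# than the paper's state-description formulation. Variables are
# assumed to have distinct names (monovariance merges all bindings
# of a name anyway).

from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    param: str
    body: object

@dataclass(frozen=True)
class App:
    fn: object
    arg: object

def analyze(root):
    """Fixed-point iteration: flows[e] (for a term e) and flows[x]
    (for a variable name x) hold the lambdas that may reach them."""
    flows = {}

    def get(key):
        return flows.setdefault(key, set())

    def add(key, values):
        before = len(get(key))
        flows[key] |= values
        return len(flows[key]) != before  # did anything change?

    def visit(t):
        changed = False
        if isinstance(t, Var):
            changed |= add(t, get(t.name))
        elif isinstance(t, Lam):
            changed |= add(t, {t})          # a lambda evaluates to itself
            changed |= visit(t.body)
        else:  # App
            changed |= visit(t.fn)
            changed |= visit(t.arg)
            for f in set(get(t.fn)):        # every callee that may flow here
                changed |= add(f.param, get(t.arg))  # CBV: argument -> parameter
                changed |= add(t, get(f.body))       # body result -> call result
        return changed

    while visit(root):
        pass
    return flows

# (λx. x) (λy. y): the application's result set is {λy. y}
prog = App(Lam('x', Var('x')), Lam('y', Var('y')))
print(analyze(prog)[prog])  # {Lam(param='y', body=Var(name='y'))}
```

The result is "safe" in the abstract's sense: every lambda that can actually reach a program point during evaluation appears in that point's computed set, though the sets may over-approximate.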

Author(s):  
Bing Qiao ◽  
Hongji Yang ◽  
Alan O’Callaghan

When developing a software system, there are a number of principles, paradigms, and tools available to choose from. For a specific platform or programming language, a standard way can usually be found to achieve the ultimate system; for example, a combination of an incremental development process, object-oriented analysis and design, and a well-supported CASE (Computer-Aided Software Engineering) tool. Regardless of the technology to be adopted, the final outcome of the software development is always a working software system. However, when it comes to software reengineering, there is rather less consensus on either approaches or outcomes. Shall we use black-box or white-box reverse engineering for program understanding? Shall we produce data and control flow graphs, or some kind of formal specification as the output of analysis? Each of these techniques has its pros and cons in tackling various software reengineering problems, and none of them on its own suffices for a whole reengineering project. A proper integration of various techniques, each capable of solving a specific issue, could be an effective way to unravel a complicated software system. This kind of integration has to be done from an architectural point of view. One of the most exciting outcomes of recent efforts on software architecture is the Object Management Group's (OMG) Model-Driven Architecture (MDA). MDA provides a unified framework for developing middleware-based modern distributed systems, and also a definite goal for software reengineering. This chapter presents a unified software reengineering methodology based on Model-Driven Architecture, which consists of a framework, a process, and related techniques.
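
To make the MDA notion of model transformation concrete, here is a small sketch in which a platform-independent entity model is mechanically mapped to a platform-specific artifact (SQL DDL in this instance). The model classes and the type mapping are illustrative assumptions, not constructs from the chapter's methodology.

```python
# A minimal sketch of MDA-style model-to-model transformation: a
# platform-independent model (PIM) of entities is mapped to a
# platform-specific artifact (PSM), here SQL DDL. All names and the
# type mapping below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Attribute:
    name: str
    type: str   # platform-independent type: 'string', 'int', 'date'

@dataclass
class Entity:
    name: str
    attributes: list

# the platform binding chosen for this sketch
SQL_TYPES = {'string': 'VARCHAR(255)', 'int': 'INTEGER', 'date': 'DATE'}

def pim_to_sql(entity: Entity) -> str:
    """Map each PIM entity to a CREATE TABLE statement."""
    cols = ',\n  '.join(f'{a.name} {SQL_TYPES[a.type]}'
                        for a in entity.attributes)
    return f'CREATE TABLE {entity.name} (\n  {cols}\n);'

order = Entity('orders', [Attribute('id', 'int'),
                          Attribute('placed_on', 'date')])
print(pim_to_sql(order))
```

The same PIM could be bound to a different platform by swapping the mapping table and generator, which is the separation of concerns MDA is built around.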


2003 ◽  
Vol 12 (01) ◽  
pp. 1-36 ◽  
Author(s):  
MARIE JOSÉ BLIN ◽  
JACQUES WAINER ◽  
CLAUDIA BAUZER MEDEIROS

This paper presents a new formalism for workflow process definition, which combines research in programming languages and in database systems. The formalism is based on creating a library of workflow building blocks, which can be progressively combined and nested to construct complex workflows. Workflows are specified declaratively, using a simple high-level language that allows the dynamic definition of exception handling and events, as well as the dynamic overriding of workflow definitions. This ensures a high degree of flexibility in data and control flow specification, as well as in the reuse of workflow specifications to construct other workflows. The resulting workflow execution environment is well suited to supporting cooperative work.
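
One way to picture the building-block idea is the embedded sketch below, where blocks compose and nest, and an exception handler can be wrapped around any block after the fact. The block names and the Python embedding are our assumptions for illustration; the paper's language is declarative and considerably richer (events, dynamic overriding).

```python
# A minimal sketch of nestable workflow building blocks with
# dynamically attachable exception handling. Task, Sequence, and
# OnError are illustrative names, not the paper's constructs.

class Task:
    """A leaf block: a named action over the workflow data."""
    def __init__(self, name, action):
        self.name, self.action = name, action
    def run(self, data):
        return self.action(data)

class Sequence:
    """Run sub-blocks in order, threading data through them."""
    def __init__(self, *blocks):
        self.blocks = blocks
    def run(self, data):
        for b in self.blocks:
            data = b.run(data)
        return data

class OnError:
    """Wrap any block with an exception handler, without editing it."""
    def __init__(self, block, handler):
        self.block, self.handler = block, handler
    def run(self, data):
        try:
            return self.block.run(data)
        except Exception as e:
            return self.handler.run({'error': str(e), 'input': data})

# blocks compose and nest, so existing workflows are reusable as parts
validate = Task('validate', lambda d: {**d, 'ok': True})
approve  = Task('approve',  lambda d: {**d, 'approved': d['ok']})
fallback = Task('escalate', lambda d: {**d, 'escalated': True})

workflow = OnError(Sequence(validate, approve), fallback)
print(workflow.run({'request': 42}))
```

Because `Sequence` and `OnError` accept any block, including other sequences, complex workflows are built by progressive combination, which is the reuse property the abstract emphasizes.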


2014 ◽  
Vol 16 (1) ◽  
pp. 77-84
Author(s):  
Erika Asnina ◽  
Begoña Cristina Pelayo García-Bustelo

The paper discusses a perspective on the integration of two mathematical formalisms: Colored Petri Nets (CPNs) and the Topological Functioning Model (TFM). The roots of CPNs are in modeling system functionality. The TFM joins principles of system theory and algebraic topology, and formally bridges the solution domain with the problem domain; it is a basis for further automated construction of software design models. The paper discusses how control and data flows in the TFM can be checked with the CPN formalism. The research result is a definition of mappings from TFMs to CPNs.
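
The flavor of such a mapping can be sketched as follows: each TFM functional feature becomes a CPN transition, and each cause-and-effect relation between features becomes a place with arcs connecting the two transitions. The data structures below are deliberately simplified assumptions for illustration; the paper's actual mappings are richer.

```python
# A simplified sketch of mapping a Topological Functioning Model to a
# (colorless) Petri-net skeleton: features -> transitions,
# cause-effect relations -> places with connecting arcs.

from dataclasses import dataclass, field

@dataclass
class TFM:
    features: list        # functional feature names
    cause_effect: list    # (cause_feature, effect_feature) pairs

@dataclass
class CPN:
    places: set = field(default_factory=set)
    transitions: set = field(default_factory=set)
    arcs: set = field(default_factory=set)   # (source, target) pairs

def tfm_to_cpn(tfm: TFM) -> CPN:
    net = CPN()
    net.transitions = set(tfm.features)
    for cause, effect in tfm.cause_effect:
        place = f'p_{cause}_{effect}'        # one place per relation
        net.places.add(place)
        net.arcs.add((cause, place))         # output arc of the cause
        net.arcs.add((place, effect))        # input arc of the effect
    return net

tfm = TFM(features=['receive_order', 'check_stock', 'ship'],
          cause_effect=[('receive_order', 'check_stock'),
                        ('check_stock', 'ship')])
print(tfm_to_cpn(tfm))
```

Once a TFM has been rendered as a net like this, standard Petri-net analyses (token games, reachability) become available for checking the model's control and data flows.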


Author(s):  
THOMAS GILRAY ◽  
MICHAEL D. ADAMS ◽  
MATTHEW MIGHT

In higher-order settings, control-flow analysis aims to model the propagation of both data and control by finitely approximating program behaviors across all possible executions. The polyvariance of an analysis describes the number of distinct abstract representations, or variants, for each syntactic entity (e.g., functions, variables, or intermediate expressions). Monovariance, one of the most basic forms of polyvariance, maintains only a single abstract representation for each variable or expression. Other polyvariant strategies allow a greater number of distinct abstractions and increase analysis complexity with the aim of increasing analysis precision. For example, k-call sensitivity distinguishes flows by the most recent k call sites, k-object sensitivity by a history of allocation points, and argument sensitivity by a tuple of dynamic argument types. From this perspective, even a concrete operational semantics may be thought of as an unboundedly polyvariant analysis. In this paper, we develop a unified methodology that fully captures this design space. It is easily tunable and guarantees soundness regardless of how it is tuned. We accomplish this by extending the method of abstracting abstract machines, a systematic approach to abstract interpretation of operational abstract-machine semantics. Our approach permits arbitrary instrumentation of the underlying analysis and arbitrary tuning of an abstract-allocation function. We show that the design space of abstract allocators both unifies and generalizes existing notions of polyvariance. Simple changes to the behavior of this function recapitulate classic styles of analysis and yield novel combinations and variants.
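
The abstract's central knob, the abstract-allocation function, can be pictured with the toy allocators below: each returns the abstract address for a new binding, and that choice alone fixes the analysis's polyvariance. The state representation and function names are invented for illustration; in the paper the allocator is a parameter of an abstracted abstract machine, and soundness holds for any choice.

```python
# Toy abstract allocators illustrating how one tunable function
# selects a style of polyvariance. The abstract machine that would
# consume these addresses is elided; `state` is an assumed dictionary
# standing in for the machine's configuration.

def alloc_monovariant(var, state):
    # 0-CFA style: one address per variable, so all bindings of
    # `var` are merged into a single abstract value
    return (var,)

def alloc_kcfa(k):
    # k-call-site sensitivity: distinguish bindings by the most
    # recent k call sites recorded in the state
    def alloc(var, state):
        return (var,) + tuple(state['call_history'][-k:])
    return alloc

def alloc_argument_sensitive(var, state):
    # argument sensitivity: distinguish bindings by the tuple of
    # dynamic argument types at the current call
    return (var,) + tuple(type(a).__name__ for a in state['args'])

# the same machine, instantiated three ways:
state = {'call_history': ['c1', 'c2', 'c3'], 'args': [1, 'x']}
print(alloc_monovariant('v', state))         # ('v',)
print(alloc_kcfa(2)('v', state))             # ('v', 'c2', 'c3')
print(alloc_argument_sensitive('v', state))  # ('v', 'int', 'str')
```

Coarser allocators merge more bindings (cheaper, less precise); finer ones split them (costlier, more precise). An allocator that never reuses an address corresponds to the unboundedly polyvariant, concrete end of the spectrum the abstract mentions.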


Author(s):  
Oren Ish-Shalom ◽  
Shachar Itzhaky ◽  
Noam Rinetzky ◽  
Sharon Shoham

Determining upper bounds on the time complexity of a program is a fundamental problem with a variety of applications, such as performance debugging, resource certification, and compile-time optimizations. Automated techniques for cost analysis excel at bounding the resource complexity of programs that use integer values and linear arithmetic. Unfortunately, they fall short when execution traces become more involved, especially when data dependencies may affect the termination conditions of loops. In such cases, state-of-the-art analyzers have been shown to produce loose bounds, or even no bound at all.

We propose a novel technique that generalizes the common notion of recurrence relations based on ranking functions. Existing methods usually unfold one loop iteration and examine the resulting relations between variables; these relations assist in establishing a recurrence that bounds the number of loop iterations. We propose a different approach, where we derive recurrences by comparing whole traces with whole traces of a lower rank, avoiding the need to analyze the complexity of intermediate states. We offer a set of global properties, defined with respect to whole traces, that facilitate such a comparison, and show that these properties can be checked efficiently using a handful of local conditions. To this end, we adapt state squeezers, an induction mechanism previously used for verifying safety properties. We demonstrate that this technique encompasses the reasoning power of bounded unfolding, and more. We present some seemingly innocuous, yet intricate, examples that previous tools based on cost relations and control-flow analysis fail to solve, but that our squeezer-powered approach succeeds on.
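
For intuition about the class of loops at issue, consider the toy program below (our own example, not one of the paper's benchmarks): the iteration count depends on data written by earlier iterations, so unfolding a single iteration reveals little, whereas relating a whole trace to the whole trace of a smaller input, as the abstract proposes, captures the recurrence directly.

```python
# A loop whose termination condition is data-dependent: each element
# of "size" x spawns two elements of size x // 2, so the iteration
# count is linear in the total mass of the input, not in its length.
# This program is illustrative only.

def drain(queue):
    steps = 0
    while queue:
        x = queue.pop()
        steps += 1
        if x > 1:
            # work created by earlier iterations drives later ones
            queue.append(x // 2)
            queue.append(x // 2)
    return steps

print(drain([4]))  # 7 iterations: one 4, two 2s, four 1s
```

A single unfolding relates one iteration to the next but says nothing about how much work the appended elements will cause; comparing the trace for `[4]` with the traces for `[2]` (a lower rank) yields the recurrence `steps(x) = 1 + 2 * steps(x // 2)` at a glance.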

