A REUSE-ORIENTED WORKFLOW DEFINITION LANGUAGE

2003 ◽  
Vol 12 (01) ◽  
pp. 1-36 ◽  
Author(s):  
MARIE JOSÉ BLIN ◽  
JACQUES WAINER ◽  
CLAUDIA BAUZER MEDEIROS

This paper presents a new formalism for workflow process definition, combining research in programming languages and in database systems. The formalism is based on creating a library of workflow building blocks, which can be progressively combined and nested to construct complex workflows. Workflows are specified declaratively, using a simple high-level language that allows the dynamic definition of exception handling and events, as well as the dynamic overriding of workflow definitions. This ensures a high degree of flexibility in data and control flow specification, as well as in the reuse of workflow specifications to construct other workflows. The resulting workflow execution environment is well suited to supporting cooperative work.
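The library-of-building-blocks idea can be illustrated with a minimal sketch. All class and method names below are invented for illustration; the paper's actual language is declarative, not Python.

```python
# Minimal sketch of reusable, nestable workflow building blocks with
# overridable exception handlers. Names are illustrative, not from the paper.

class Block:
    """A workflow building block: a named step with an optional exception handler."""
    def __init__(self, name, action, on_error=None):
        self.name = name
        self.action = action
        self.on_error = on_error  # can be overridden after construction

    def run(self, data):
        try:
            return self.action(data)
        except Exception as exc:
            if self.on_error:
                return self.on_error(data, exc)
            raise

class Sequence(Block):
    """Blocks compose: a Sequence is itself a Block, so workflows nest."""
    def __init__(self, name, blocks, on_error=None):
        super().__init__(name, self._run_all, on_error)
        self.blocks = blocks

    def _run_all(self, data):
        for block in self.blocks:
            data = block.run(data)
        return data

# A small "library" of blocks, progressively combined into a larger workflow.
library = {
    "validate": Block("validate", lambda d: {**d, "valid": True}),
    "archive": Block("archive", lambda d: {**d, "archived": True}),
}
review = Sequence("review", [library["validate"], library["archive"]])
print(review.run({"doc": "report"}))  # → {'doc': 'report', 'valid': True, 'archived': True}
```

Because `Sequence` is itself a `Block`, a finished workflow can be placed back in the library and reused as a building block of a larger one, which is the reuse property the abstract emphasizes.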

2021 ◽  
Vol 43 (1) ◽  
pp. 1-46
Author(s):  
David Sanan ◽  
Yongwang Zhao ◽  
Shang-Wei Lin ◽  
Liu Yang

To make the verification of large and complex concurrent systems feasible and scalable, compositional techniques are necessary even at the highest abstraction layers. At the lowest software abstraction layers, such as the implementation or the machine code, the high level of detail makes the direct verification of properties very difficult and expensive. It is therefore essential to use techniques that simplify verification at these layers. One technique to tackle this challenge is top-down verification, where properties verified on the top layers (representing abstract specifications of a system) are propagated down, by means of simulation, to the lowest layers (which implement the top layers). Needless to say, simulation of concurrent systems implies a greater level of complexity, so compositional techniques for checking simulation between layers are also desirable when seeking both feasibility and scalability of refinement verification. In this article, we present CSim 2, a compositional rely-guarantee-based framework for the top-down verification of complex concurrent systems in the Isabelle/HOL theorem prover. CSim 2 uses CSimpl, a highly expressive language designed for the specification of concurrent programs. Thanks to this expressiveness, CSimpl is able to model many of the features found in real-world programming languages, such as exceptions, assertions, and procedures. CSim 2 provides a framework for the verification of rely-guarantee properties, enabling compositional reasoning on CSimpl specifications. Focusing on top-down verification, CSim 2 also provides a simulation-based framework for the preservation of CSimpl rely-guarantee properties from specifications to implementations.
Using the simulation framework, properties proven on the top layers (abstract specifications) are compositionally propagated down to the lowest layers (source or machine code) in each concurrent component of the system. Finally, we show the usability of CSim 2 through a case study on two CSimpl specifications of an ARINC 653 communication service, in which we prove a complex property on one specification and use CSim 2 to preserve that property on the lower abstraction layers.
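The compositional side condition at the heart of rely-guarantee reasoning can be sketched abstractly. This is the standard textbook compatibility check, not CSim 2's actual Isabelle/HOL formalization: a parallel composition can be verified component-by-component only if each component's guarantee is tolerated by (contained in) every other component's rely.

```python
# Abstract sketch of the standard rely-guarantee compatibility check.
# Finite sets of transition labels stand in for rely/guarantee relations;
# this mirrors textbook rely-guarantee reasoning, not CSim 2 itself.

def compatible(components):
    """components: dict name -> {'rely': set, 'guar': set}.
    True iff every component's guarantee is within all others' relies."""
    for i, ci in components.items():
        for j, cj in components.items():
            if i != j and not cj["guar"] <= ci["rely"]:
                return False
    return True

system = {
    "A": {"rely": {"t1", "t2"}, "guar": {"t3"}},
    "B": {"rely": {"t3"}, "guar": {"t1"}},
}
print(compatible(system))  # → True
```

When the check succeeds, each component can be verified in isolation against its own rely/guarantee pair, which is what makes the approach scale to large systems.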


Author(s):  
Muhammad Shumail Naveed ◽  
Muhammad Sarim ◽  
Kamran Ahsan

Programming is the core of computer science, and because of this importance special care is taken in designing the curriculum of programming courses. Substantial work has been done on the definition of programming courses, yet introductory programming courses still face high attrition, low retention, and a lack of motivation. This paper introduces a tiny pre-programming language called LPL (Learners Programming Language) as a ZPL (Zeroth Programming Language) to familiarize novice students with the elementary concepts of introductory programming before their first imperative programming course. The overall objective and design philosophy of LPL rest on the hypothesis that a soft introduction via a simple, paradigm-specific textual programming language can increase the motivation of novice students, reduce the inherent complexity and difficulty of the first programming course, and eventually improve the retention rate and reduce dropout and failure levels. LPL also generates equivalent high-level programs from the user's source program, which helps students understand the syntax of introductory programming languages. To overcome the inherent complexity of the unusual and rigid syntax of introductory programming languages, LPL provides elementary programming concepts in the form of algorithmic, plain-natural-language computational statements. The initial results obtained after the introduction of LPL are very encouraging with regard to motivating novice students and improving retention.
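The idea of translating plain algorithmic statements into the syntax of a conventional language can be sketched as follows. Both the statement forms and the C-like target syntax here are invented for illustration; the abstract does not specify LPL's actual grammar.

```python
# Hypothetical sketch of generating equivalent high-level code from
# plain-language computational statements, in the spirit of LPL.
# The two statement forms below are invented, not LPL's real grammar.

def translate(line):
    """Map one plain-language statement to a C-like statement."""
    words = line.strip().split()
    if words[0] == "set":            # e.g. "set x to 5"
        return f"int {words[1]} = {words[3]};"
    if words[0] == "show":           # e.g. "show x"
        return f'printf("%d\\n", {words[1]});'
    raise ValueError(f"unknown statement: {line}")

program = ["set x to 5", "show x"]
for statement in program:
    print(translate(statement))
```

Showing the generated statement next to the plain-language one is the pedagogical mechanism the abstract describes: students see conventional syntax only after they already understand the computation it expresses.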


Author(s):  
Sofia K. Georgiadis ◽  
Andrew Comba

The concept of operations for NYCT systems is changing as a result of Automatic Train Supervision (ATS), Communications-Based Train Control (CBTC), and Solid State Interlocking (SSI) deployment. Train dispatchers are dealing with a higher degree of automation with ATS systems; similarly, train operators are adjusting to a split between automated and manual processes with CBTC systems. The emerging CBTC and SSI systems are becoming based on Information Technology (IT) infrastructure and digital control. While CBTC is increasing the overall safety of the signaling system, it is also increasing system complexity, especially from an analysis point of view. These issues are addressed at NYCT through the implementation of DoDAF, the U.S. Department of Defense Architecture Framework, an Enterprise Architecture framework. This paper discusses VSI’s application of DoDAF with a focus on the safety certification mission. It begins with an overview of DoDAF, followed by a description of Views and Product-models, the building blocks of DoDAF. Each section presents a high-level description of a View, along with exemplary Product-model descriptions, one or two per View. In addition, two system capability requirements, Safe Train Separation and Control Speed to Restriction Limits, are examined and mapped throughout the model.


2020 ◽  
Vol 245 ◽  
pp. 05004
Author(s):  
Rosen Matev ◽  
Niklas Nolte ◽  
Alex Pearce

For Run 3 of the Large Hadron Collider, the final stage of the LHCb experiment’s high-level trigger must process 100 GB/s of input data. This corresponds to an input rate of 1 MHz and is an order of magnitude larger than in Run 2. The trigger is responsible for selecting all physics signals that form part of the experiment’s broad research programme, and as such defines thousands of analysis-specific selections that together comprise tens of thousands of algorithm instances. The configuration of such a system needs to be extremely flexible to handle the large number of different studies it must accommodate. However, it must also be robust and easy to understand, allowing analysts to implement and understand their own selections with minimal possibility of error. A Python-based system for configuring the data and control flow of the Gaudi-based trigger application is presented. It is designed to be user-friendly by using functions for modularity and by removing the indirection layers employed previously in Run 2. Robustness is achieved by reducing global state and instead building the data-flow graph in a functional manner, whilst keeping the full call stack configurable.
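A functional style of building a data-flow graph, as described above, might look like the following outline. The function names and the graph representation are assumptions for illustration, not the actual Gaudi/LHCb configuration API.

```python
# Sketch of functional data-flow configuration: each node declares its
# inputs explicitly, so the graph is derived from those declarations
# rather than from mutable global state. Names are illustrative only,
# not the real LHCb trigger configuration API.

def make_node(name, inputs=()):
    """A pure constructor: a node is just its name plus declared inputs."""
    return {"name": name, "inputs": list(inputs)}

def make_line(name, *stages):
    """A trigger line bundles the nodes it depends on."""
    return {"line": name, "nodes": list(stages)}

tracks = make_node("TrackReconstruction")
muons = make_node("MuonID", inputs=[tracks])
line = make_line("Hlt2DiMuon", tracks, muons)

# The data-flow graph is recoverable from the declared inputs alone.
print([n["name"] for n in line["nodes"]])  # → ['TrackReconstruction', 'MuonID']
```

Because every dependency is an explicit argument, two analysts defining selections in separate modules cannot silently interfere with each other through shared global configuration, which is the robustness argument the abstract makes.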


2020 ◽  
Author(s):  
Andrei Boutyline ◽  
Laura Soter

Cultural schemas are a central cognitive mechanism through which culture affects action. In this manuscript, we develop a theoretical model of cultural schemas that is better able to support empirical work, including inferential, sensitizing, and operational uses. We propose a multilevel framework centered on a high-level definition of cultural schemas that is sufficiently broad to capture its major sociological uses but still sufficiently narrow to identify a set of cognitive phenomena with key functional properties in common: cultural schemas are socially shared heuristic representations deployable in automatic cognition. We use this conception to elaborate the main theoretical properties of cultural schemas, and to provide clear criteria that distinguish them from other cultural or cognitive elements. We then propose a series of concrete tests empirical scholarship can use to determine if these properties apply. We also demonstrate how this approach can identify potentially faulty theoretical inferences present in existing work. Then, moving to a lower level of analysis, we elaborate how cultural schemas can be algorithmically conceptualized in terms of their building blocks. This leads us to recommend improvements to methods for measuring cultural schemas. We conclude by proposing fruitful sensitizing questions for future scholarship.


2017 ◽  
Vol 73 (3) ◽  
Author(s):  
Detlev L. Tönsing

The transition from pre-human to human has long been associated with tool use and construction. The implicit self-definition of humans in this is that of planned control over the life-world. This is reflected in Hannah Arendt's work on homo faber and in Max Frisch's novel of that name. However, this definition has become problematic in a number of ways: planned tool use has been observed outside the human species, and the focus on control of the environment has become suspect because of the environmental crisis. The burial practices of Homo naledi indicate high-level self-awareness and social communication, with little tool use being evident. This article asks whether this might be an occasion to redefine our conception of what it means to be human, away from the focus on mastery and control and towards including trust, also religious trust, as the true mark of humanity.


2012 ◽  
Vol 20 (4) ◽  
pp. 359-377 ◽  
Author(s):  
Mikołaj Baranowski ◽  
Adam Belloum ◽  
Marian Bubak ◽  
Maciej Malawski

For programming and executing complex applications on grid infrastructures, scientific workflows have been proposed as a convenient high-level alternative to solutions based on general-purpose programming languages, APIs, and scripts. GridSpace is a collaborative programming and execution environment based on a scripting approach; it extends the Ruby language with a high-level API for invoking operations on remote resources. In this paper we describe a tool that converts GridSpace application source code into a workflow representation which, in turn, may be used for scheduling, provenance, or visualization. We describe how we addressed the issues of analyzing Ruby source code, resolving variable and method dependencies, and building the workflow representation. The solutions to these problems were developed and evaluated by testing them on complex grid application workflows such as CyberShake, Epigenomics, and Montage. The evaluation is enriched by representing typical workflow control-flow patterns.
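Deriving a workflow representation from script-level variable dependencies, as the tool does for GridSpace/Ruby code, can be sketched on a toy straight-line assignment language. The parsing below is deliberately simplistic and hypothetical; real Ruby analysis is far more involved.

```python
# Toy sketch of deriving a workflow DAG from variable dependencies in
# straight-line assignment code, in the spirit of the GridSpace tool.
# Real Ruby source analysis is much harder; this is illustrative only.
import re

def dependencies(lines):
    """Map each assigned variable to the earlier variables its expression uses."""
    defined, edges = set(), {}
    for line in lines:
        target, expr = (part.strip() for part in line.split("=", 1))
        # Identifiers in the expression that name earlier results are edges.
        uses = {v for v in re.findall(r"[a-z_]\w*", expr) if v in defined}
        edges[target] = sorted(uses)
        defined.add(target)
    return edges

script = ["a = load()", "b = filter(a)", "c = merge(a, b)"]
print(dependencies(script))  # → {'a': [], 'b': ['a'], 'c': ['a', 'b']}
```

The resulting edge map is exactly the data needed for the downstream uses the abstract lists: a scheduler can run `b` only after `a`, and a visualizer can draw the DAG directly from the edges.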


Author(s):  
BERNHARD WESTFECHTEL

Due to the increasing complexity of hardware and software systems, configuration management has been receiving more and more attention in nearly all engineering domains (e.g. electrical, mechanical, and software engineering). This observation has driven us to develop a domain-independent and adaptable configuration management model (called CoMa) for managing systems of engineering design documents. The CoMa model integrates composition hierarchies, dependencies, and versions into a coherent framework based on a sparse set of essential configuration management concepts. In order to give a clear and comprehensible specification, the CoMa model is defined in a high-level, multi-paradigm specification language (PROGRES) which combines concepts from various disciplines (database systems, knowledge-based systems, graph rewriting systems, programming languages). Finally, we also present an implementation which conforms to the formal specification and provides graphical, structure-oriented tools offering a set of sophisticated commands and operating in a heterogeneous environment.
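The three dimensions the CoMa model integrates could be represented roughly as follows. The class and field names are an illustrative reading of the abstract, not CoMa's actual PROGRES specification.

```python
# Rough sketch of the three dimensions CoMa integrates for engineering
# design documents: composition hierarchies, dependencies, and versions.
# Class and field names are illustrative, not from the PROGRES spec.
from dataclasses import dataclass, field

@dataclass
class Document:
    name: str
    versions: list = field(default_factory=lambda: [1])
    parts: list = field(default_factory=list)        # composition hierarchy
    depends_on: list = field(default_factory=list)   # inter-document dependencies

    def new_version(self):
        """Append and return the next version number."""
        self.versions.append(self.versions[-1] + 1)
        return self.versions[-1]

design = Document("design-spec")
module = Document("module-A", depends_on=[design])
system = Document("system", parts=[design, module])
print(module.new_version())  # → 2
```

Keeping all three relations on one `Document` object mirrors the abstract's point that composition, dependencies, and versions must live in a single coherent framework rather than in separate tools.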


2021 ◽  
pp. 000312242110245
Author(s):  
Andrei Boutyline ◽  
Laura K. Soter

Cultural schemas are a central cognitive mechanism through which culture affects action. In this article, we develop a theoretical model of cultural schemas that is better able to support empirical work, including inferential, sensitizing, and operational uses. We propose a multilevel framework centered on a high-level definition of cultural schemas that is sufficiently broad to capture its major sociological applications but still sufficiently narrow to identify a set of cognitive phenomena with key functional properties in common: cultural schemas are socially shared representations deployable in automatic cognition. We use this conception to elaborate the main theoretical properties of cultural schemas, and to provide clear criteria that distinguish them from other cultural or cognitive elements. We then propose a series of concrete tests empirical scholarship can use to determine if these properties apply. We also demonstrate how this approach can identify potentially faulty theoretical inferences present in existing work. Moving to a lower level of analysis, we elaborate how cultural schemas can be algorithmically conceptualized in terms of their building blocks. This leads us to recommend improvements to methods for measuring cultural schemas. We conclude by outlining questions for a broader research program.

