A SysML and CLEAN Based Methodology for RISC Processor Micro-Architecture Design

Author(s):  
Zakaria Lakhdara ◽  
Salah Merniz

Nowadays, processor micro-architectures are becoming increasingly complex. Consequently, designers need powerful abstraction and structuring mechanisms, as well as design methodologies that automatically and formally derive low-level concrete designs from high-level abstract ones. In this context, this paper proposes a methodology for RISC processor micro-architecture design. The proposed methodology mainly uses SysML to model both the instruction-set architecture (ISA) and micro-architecture (MA) levels, and the functional language CLEAN to describe them. Functional specifications in CLEAN are generated automatically from the ISA and MA models. These specifications, which are executable and formally verifiable, are used for simulation and verification. The proposed approach is validated by a case study consisting of the design of the micro-architecture of the MIPS processor. The study shows how the ISA and MA levels can easily be modeled and how CLEAN specifications describing them can be generated; it also illustrates, through multiple cases, how the generated specifications are used to simulate the MA. The simulation results demonstrate the effectiveness of the proposed modeling and code generation techniques.
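The key idea of an executable ISA-level specification can be sketched in a few lines. The following is an illustrative Python analogue (the paper generates CLEAN, and all instruction and register names here are invented): each instruction is a pure function from machine state to machine state, so the specification can be simulated directly.

```python
# Illustrative sketch (not the authors' generated CLEAN code): an ISA-level
# specification as a pure function over machine state, in the style of an
# executable functional specification for a MIPS-like processor.

def step(state, instr):
    """Execute one instruction purely: return a new (registers, pc) state."""
    regs, pc = state
    op = instr[0]
    if op == "add":                      # add rd, rs, rt
        _, rd, rs, rt = instr
        regs = {**regs, rd: regs[rs] + regs[rt]}
    elif op == "addi":                   # addi rt, rs, imm
        _, rt, rs, imm = instr
        regs = {**regs, rt: regs[rs] + imm}
    return (regs, pc + 4)

# Simulate a tiny program against the specification.
state = ({"r1": 2, "r2": 3, "r3": 0}, 0)
state = step(state, ("add", "r3", "r1", "r2"))
state = step(state, ("addi", "r3", "r3", 10))
```

Because `step` is pure, simulating the MA level against the ISA level reduces to comparing the states produced by two such functions.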

Author(s):  
O'Neil Davion Delpratt ◽  
Michael Kay

This paper analyzes the performance benefits achievable by adding a code generation phase to an XSLT or XQuery engine. This is not done in isolation, but in comparison with the benefits delivered by high-level query rewriting. The two techniques are complementary and independent, but they compete for resources in the development team, so it is useful to understand their relative importance. We use the Saxon XSLT/XQuery processor as a case study, in which we can now translate the logic of queries into Java bytecode. We provide an experimental evaluation of the performance of Saxon with this feature added, compared to the existing Saxon product. Saxon's Enterprise Edition already delivers a performance benefit over the open-source product through its join optimizer and other features. What can we learn from these optimizations to achieve further performance gains through direct bytecode generation?
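To see why high-level rewriting and code generation are independent levers, consider the kind of transformation a join optimizer performs. A hypothetical sketch (in Python, not Saxon's Java implementation; all data and names invented): a nested-loop join is rewritten into a hash join, an algorithmic improvement that no amount of bytecode generation for the original plan could match.

```python
# Hypothetical illustration of a high-level join rewrite (not Saxon code):
# a nested-loop join, O(n*m), versus the hash join an optimizer
# rewrites it into, O(n + m).

def nested_loop_join(orders, customers):
    return [(o, c) for o in orders for c in customers if o["cust"] == c["id"]]

def hash_join(orders, customers):
    index = {c["id"]: c for c in customers}          # build phase, O(m)
    return [(o, index[o["cust"]]) for o in orders    # probe phase, O(n)
            if o["cust"] in index]

customers = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]
orders = [{"cust": 2, "amt": 5}, {"cust": 1, "amt": 7}, {"cust": 9, "amt": 1}]
assert nested_loop_join(orders, customers) == hash_join(orders, customers)
```

Bytecode generation, by contrast, speeds up whichever plan the rewriter produced by removing interpretive dispatch, which is why the two techniques compose.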


Author(s):  
Fang Deng ◽  
Xinan Liu ◽  
Zhihong Peng ◽  
Jie Chen

With the development of low-level data fusion technology, threat assessment, a component of high-level data fusion, is being recognized by an increasing number of people. However, no established method exists for assessing threats involving diverse kinds of targets and attacks. Hence, this paper proposes a threat assessment method to solve this problem. The method comprises three assessment stages: information classification, reorganization, and summary. With this three-stage assessment model, various threats involving multiple classes of targets and attacks can be comprehensively assessed. A case study with specific algorithms and scenarios demonstrates the validity and rationality of the method.
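The three-stage structure can be sketched as a pipeline. This is a toy Python illustration only (the targets, attack types, severities, and the max-severity summary rule are all invented for the example, not taken from the paper):

```python
# Hypothetical sketch of a three-stage assessment pipeline:
# classify reports, reorganize them per target, then summarize a score.

reports = [
    {"target": "radar", "attack": "jamming", "severity": 0.6},
    {"target": "radar", "attack": "missile", "severity": 0.9},
    {"target": "convoy", "attack": "missile", "severity": 0.4},
]

# Stage 1: information classification -- group by (target, attack) class.
classified = {}
for r in reports:
    classified.setdefault((r["target"], r["attack"]), []).append(r)

# Stage 2: reorganization -- collect all threats facing each target.
by_target = {}
for (target, attack), rs in classified.items():
    by_target.setdefault(target, []).extend(rs)

# Stage 3: summary -- one aggregate threat score per target
# (here simply the maximum severity, an assumed placeholder rule).
summary = {t: max(r["severity"] for r in rs) for t, rs in by_target.items()}
```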


Author(s):  
Y. S. Kim ◽  
M. K. Kim ◽  
S. W. Lee ◽  
C. S. Lee ◽  
C. H. Lee ◽  
...  

Interior design of a space differs from product design in the following respects: the space must accommodate multiple users at the same time and afford appropriate interactions between the humans and the objects within it. This paper presents a case study of the interior design of a conference room based on the affordance concept. We analyzed all user tasks in a conference room based on human activities, divided into human-object and human-human interactions. A function decomposition of every object in the conference room was conducted. High-level functions such as “configure the space” are used to satisfy given conditions such as the number of occupants and the type of conference. The Function-Task Interaction (FTI) method was enhanced to analyze the interactions between functions and user tasks. Many low-level affordances were extracted, and high-level affordances such as enter/exit-ability, prepare-ability, present-ability, discuss-ability, and conclude-ability were obtained by grouping low-level affordances in the enhanced FTI matrix. In addition, a benchmarking simulation was conducted for several existing conference rooms; the results confirmed that the extracted affordances can serve as a checklist and provide useful guidance in the interior design process.
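The grouping step of an FTI matrix can be pictured concretely. The following is a minimal sketch under assumed data (the function and task names are invented; only the task names echo the affordance labels from the abstract): rows are object functions, columns are user tasks, and each task's column of interacting functions yields one high-level affordance.

```python
# Hypothetical sketch of a Function-Task Interaction (FTI) matrix:
# fti[i][j] = 1 if function i supports task j. Grouping the functions
# in a task's column yields that task's high-level affordance.

functions = ["open door", "support laptop", "display slides"]
tasks = ["enter/exit", "prepare", "present"]
fti = [
    [1, 0, 0],
    [0, 1, 1],
    [0, 0, 1],
]

high_level = {
    task + "-ability": [functions[i] for i in range(len(functions))
                        if fti[i][j] == 1]
    for j, task in enumerate(tasks)
}
```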


2016 ◽  
Vol 2016 ◽  
pp. 1-9 ◽  
Author(s):  
Lerato Shikwambana ◽  
Venkataraman Sivakumar

The Council for Scientific and Industrial Research (CSIR) transportable Light Detection And Ranging (LIDAR) system was used to collect data over Durban (29.9°S, 30.9°E) during 20–23 November 2012. Aerosol measurements have been carried out over Durban in the past; however, no cloud measurements using LIDAR had ever been performed, so this study further motivates the continuation of LIDAR-based atmospheric research over Durban. Low-level clouds were observed on 20–22 November 2012 and high-level clouds on 23 November 2012. The low-level clouds could be classified as stratocumulus, whereas the high-level clouds could be classified as cirrus. The low-level cloud layers showed high extinction coefficient values, ranging between 0.0009 and 0.0044 m⁻¹, whereas the high-level clouds showed low extinction coefficients, ranging between 0.000001 and 0.000002 m⁻¹. Optical depth showed high variability on 20 and 21 November 2012, indicating a change in the composition and/or thickness of the cloud. On 22 and 23 November 2012, similar optical depth values were observed. The Cloud-Aerosol LIDAR and Infrared Pathfinder Satellite Observations (CALIPSO) instrument revealed high-level clouds that the CSIR LIDAR could not detect. Nevertheless, the two instruments complement each other well in describing cloudy conditions.
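The optical depth quoted above is the vertical integral of the extinction coefficient over the cloud layer, τ = ∫α(z)dz. A back-of-the-envelope sketch (the 500 m layer thickness and the sample profile are assumed for illustration, not taken from the study; only the 0.0009–0.0044 m⁻¹ range comes from the abstract):

```python
# Illustrative calculation: cloud optical depth as the trapezoidal
# integral of the extinction coefficient alpha(z) [m^-1] over height.

def optical_depth(alphas, dz):
    """Trapezoidal integration of alpha(z) sampled every dz metres."""
    return sum((a + b) / 2 * dz for a, b in zip(alphas, alphas[1:]))

# An assumed 500 m thick low-level cloud sampled every 100 m, with
# extinction values inside the reported 0.0009-0.0044 m^-1 range:
alphas = [0.0009, 0.0020, 0.0044, 0.0030, 0.0012, 0.0009]
tau = optical_depth(alphas, 100.0)
```

With these assumed numbers the layer has τ on the order of 1, consistent with an optically thick stratocumulus, while a cirrus layer with extinction near 10⁻⁶ m⁻¹ would be orders of magnitude thinner optically.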


1999 ◽  
Vol 6 (45) ◽  
Author(s):  
Torben Amtoft

We report on a case study in the application of partial evaluation, initiated by the desire to speed up a constraint-based algorithm for control-flow analysis. We designed and implemented a dedicated partial evaluator, able to specialize the analysis wrt. a given constraint graph and thus remove the interpretive overhead, and measured it on Feeley's Scheme benchmarks. Even though the gain turned out to be rather limited, our investigation yielded valuable feedback in that it provided a better understanding of the analysis, leading us to (re)invent an incremental version. We believe this phenomenon to be a quite frequent spinoff of using partial evaluation, since the removal of interpretive overhead makes the flow of control more explicit and hence pinpoints sources of inefficiency. Finally, we observed that partial evaluation in our case yields such regular, low-level specialized programs that it begs for run-time code generation.
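The essence of specializing an interpreter with respect to a fixed constraint graph can be sketched in miniature. This is a Python toy (the authors' partial evaluator targets a Scheme control-flow analysis; the constraint language here is invented): the static constraint graph is compiled away into residual straight-line code, removing the interpretive dispatch.

```python
# Illustrative sketch of partial evaluation: a general interpreter over
# a "constraint graph" versus the residual program obtained by
# specializing it wrt. a fixed graph.

constraints = [("le", "x", "y"), ("le", "y", "z")]   # the static input

def interpret(constraints, env):
    """General interpreter: dispatches on each constraint at run time."""
    return all(env[a] <= env[b] for op, a, b in constraints if op == "le")

def specialize(constraints):
    """Partial evaluator: emit residual code with the dispatch removed."""
    body = " and ".join(f'env["{a}"] <= env["{b}"]'
                        for op, a, b in constraints if op == "le")
    src = f"def check(env):\n    return {body or True}\n"
    namespace = {}
    exec(src, namespace)          # run-time code generation of the residuum
    return namespace["check"]

check = specialize(constraints)
```

The residual `check` is exactly the kind of regular, low-level program the abstract notes: so mechanical that generating it at run time is the natural next step.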


2022 ◽  
Vol 6 (POPL) ◽  
pp. 1-28
Author(s):  
Amanda Liu ◽  
Gilbert Louis Bernstein ◽  
Adam Chlipala ◽  
Jonathan Ragan-Kelley

We present a lightweight Coq framework for optimizing tensor kernels written in a pure, functional array language. Optimizations rely on user scheduling using series of verified, semantics-preserving rewrites. Unusually for compilation targeting imperative code with arrays and nested loops, all rewrites are source-to-source within a purely functional language. Our language comprises a set of core constructs for expressing high-level computation detail and a set of what we call reshape operators, which can be derived from core constructs but trigger low-level decisions about storage patterns and ordering. We demonstrate that not only is this system capable of deriving the optimizations of existing state-of-the-art languages like Halide and generating comparably performant code, it is also able to schedule a family of useful program transformations beyond what is reachable in Halide.
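A classic instance of such a semantics-preserving, source-to-source rewrite on functional array code is map fusion. The sketch below is in Python rather than the paper's Coq-verified framework, with an invented miniature expression representation; it shows the shape of a rewrite that eliminates an intermediate array without leaving the functional language.

```python
# Illustrative source-to-source rewrite on a functional array expression:
# fuse ("map", f, ("map", g, x)) into ("map", f∘g, x), removing the
# intermediate array while preserving semantics.

def fuse(expr):
    if expr[0] == "map" and isinstance(expr[2], tuple) and expr[2][0] == "map":
        f, (_, g, x) = expr[1], expr[2]
        return ("map", lambda v: f(g(v)), x)
    return expr

def evaluate(expr):
    if expr[0] == "map":
        return [expr[1](v) for v in evaluate(expr[2])]
    return expr[1]                                   # ("lit", [values])

prog = ("map", lambda v: v + 1, ("map", lambda v: v * 2, ("lit", [1, 2, 3])))
assert evaluate(prog) == evaluate(fuse(prog)) == [3, 5, 7]
```

In the paper's setting, each such rewrite is applied under user scheduling and carries a machine-checked proof that the two sides denote the same function.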


Author(s):  
Alessandra Bagnato ◽  
Imran Quadri ◽  
Etienne Brosse ◽  
Andrey Sadovykh ◽  
Leandro Soares Indrusiak ◽  
...  

This chapter presents the EU-funded MADES FP7 project that aims to develop an effective model-driven methodology to improve the current practices in the development of real-time embedded systems for avionics and surveillance industries. MADES developed an effective SysML/MARTE language subset, and a set of new tools and technologies that support high-level design specifications, validation, simulation, and automatic code generation, while integrating aspects such as component re-use. This chapter illustrates the MADES methodology by means of a car collision avoidance system case study; it presents the underlying MADES language, the design phases, and the set of tools supporting on one hand model verification and validation and, on the other hand, automatic code generation, which enables the implementation on execution platforms such as state-of-the-art FPGAs.


eLife ◽  
2019 ◽  
Vol 8 ◽  
Author(s):  
Marcel Stimberg ◽  
Romain Brette ◽  
Dan FM Goodman

Brian 2 allows scientists to simply and efficiently simulate spiking neural network models. These models can feature novel dynamical equations, interactions with the environment, and experimental protocols. To preserve high performance when new models are defined, most simulators offer two options: low-level programming or description languages. The first option requires expertise, is prone to errors, and is problematic for reproducibility. The second option cannot describe all aspects of a computational experiment, such as the potentially complex logic of a stimulation protocol. Brian addresses these issues using runtime code generation: scientists write models as simple, concise high-level descriptions, and Brian transforms them into efficient low-level code that runs interleaved with their own code. We illustrate this with several challenging examples: a plastic model of the pyloric network, a closed-loop sensorimotor model, a programmatic exploration of a neuron model, and an auditory model with real-time input.
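The runtime-code-generation idea can be sketched in miniature. This is a greatly simplified Python illustration, not Brian 2's actual machinery (Brian generates optimized C++ or Cython from equation strings): a model written as a high-level equation string is turned into a compiled update function at run time.

```python
# Illustrative sketch of runtime code generation (much simplified relative
# to Brian 2): generate an Euler-step update function from a high-level
# equation string such as "dv/dt = -v / tau".

def build_updater(equation, dt):
    rhs = equation.split("=", 1)[1].strip()          # e.g. "-v / tau"
    src = f"def update(v, tau):\n    return v + ({rhs}) * {dt}\n"
    namespace = {}
    exec(src, namespace)                             # compile at run time
    return namespace["update"]

update = build_updater("dv/dt = -v / tau", dt=0.001)

v = 1.0
for _ in range(1000):        # simulate 1 s of leaky decay with tau = 0.1 s
    v = update(v, 0.1)       # v decays toward 0, roughly like exp(-t/tau)
```

Because the generated function is ordinary host-language code, it can run interleaved with arbitrary user logic, which is what makes complex stimulation protocols expressible.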


Robotica ◽  
2004 ◽  
Vol 22 (2) ◽  
pp. 141-154 ◽  
Author(s):  
Sanghoon Yeo ◽  
Jinwook Kim ◽  
Sung Hee Lee ◽  
F. C. Park ◽  
Wooram Park ◽  
...  

We describe the design and implementation of RSTATION, an object-oriented, modular robot simulator with hierarchical analysis capabilities. Modularity is achieved via design encapsulation, which enables grouping a set of interconnected components into a single component and dividing the robot system recursively into sets of subordinate modules. Through careful construction of the data types and classes, RSTATION allows hierarchical simulation of the kinematics and dynamics at three levels: considering only the main links (high level), using simplified models that include the dynamic properties of transmission elements (intermediate level), and taking into account the detailed kinematics and dynamics of transmission elements (low level). Submodules can be set to different resolutions during a single simulation. The data types and classes also exploit a recent set of coordinate-invariant robot analysis algorithms based on modern screw theory. Central to the low-level dynamic analysis capability is an algorithm for systematically extracting the constraint equations of general gearing systems. The various features of RSTATION are illustrated with a detailed case study of a commercial industrial robot.


2017 ◽  
Vol 5 (1) ◽  
pp. 1-16
Author(s):  
Emna Kallel ◽  
Yassine Aoudni ◽  
Mohamed Abid

The complexity of embedded systems design continues to grow, owing to the increasing number of components and distinct functionalities incorporated into a single system. To deal with this situation, the abstraction level of designs is continually raised, and techniques to accelerate the code production process have appeared. In this context, automatic code generation is an attractive technique for embedded systems projects. This work presents an automatic VHDL code generation method based on the OpenMP parallel programming specification. In order to synthesize C loops into hardware, the authors apply OpenMP directives, which specify portable implementations of shared-memory parallel programs. A case study applying the approach to an embedded implementation of the DCT algorithm is presented to demonstrate the feasibility of the proposed approach.
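For reference, the computation at the heart of the DCT case study is a regular loop nest, exactly the shape such a flow can annotate with an OpenMP work-sharing directive and map to parallel hardware. A minimal Python reference model of the 1-D DCT-II (for illustration only; the paper's flow starts from C):

```python
# Reference model of the 1-D DCT-II: a doubly nested loop whose outer
# iterations are independent -- the pattern an OpenMP "parallel for"
# exposes and a VHDL generator can unroll into parallel hardware.

import math

def dct_ii(x):
    """Naive O(N^2) DCT-II (unnormalized)."""
    n = len(x)
    return [sum(x[i] * math.cos(math.pi / n * (i + 0.5) * k)
                for i in range(n))
            for k in range(n)]

# Sanity check: a constant signal has only a DC component.
out = dct_ii([1.0] * 8)
```

Each output coefficient depends only on the input vector, so the outer `k` loop carries no dependencies, which is what makes the directive-based parallelization legal.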

