Eliciting contextual requirements at design time: A case study

Author(s):  
Alessia Knauss ◽  
Daniela Damian ◽  
Kurt Schneider

2019 ◽  
Vol 63 (5) ◽  
pp. 709-731
Author(s):  
Wallace Manzano ◽  
Valdemar Vicente Graciano Neto ◽  
Elisa Yumi Nakagawa

Abstract: Systems-of-Systems (SoS) combine heterogeneous, independent systems to offer complex functionalities for highly dynamic smart applications. Besides having a dynamic architecture that changes continuously at runtime, an SoS should be reliable, operating without interruption and without failures that could cause accidents or losses. SoS architectural design should facilitate the prediction of the impact of architectural changes and of potential failures arising from SoS behavior. However, existing approaches do not support such evaluation; hence, these systems have usually been built without a proper evaluation of their architecture. This article presents Dynamic-SoS, an approach to predict, at design time, the runtime behavior of the SoS architecture in order to evaluate whether the SoS can sustain its operation. The main contributions of this approach are: (i) a characterization of dynamic architectural changes via a set of well-defined operators; (ii) a strategy to automatically include a reconfiguration controller for SoS simulation; and (iii) a means to evaluate the architectural configurations that an SoS could assume at runtime, assessing their impact on the viability of the SoS operation. Results of our case study reveal that Dynamic-SoS is a promising approach that could contribute to the quality of an SoS by enabling prior assessment of its dynamic architecture.
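The abstract does not give the operators' concrete form; the following is a minimal Python sketch of how dynamic architectural changes might be characterized as well-defined operators over an architectural configuration, with a viability check a reconfiguration controller could replay at design time. All names (Architecture, add_system, viable, the UAV example) are hypothetical, not Dynamic-SoS's actual API.

```python
# Hypothetical sketch: architectural change operators for an SoS simulation.
# Names and the viability criterion are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Architecture:
    systems: set = field(default_factory=set)
    links: set = field(default_factory=set)   # frozenset pairs of system names

def add_system(arch, name):
    arch.systems.add(name)

def remove_system(arch, name):
    # Removing a constituent also drops its links (constituents are independent).
    arch.systems.discard(name)
    arch.links = {l for l in arch.links if name not in l}

def add_link(arch, a, b):
    if a in arch.systems and b in arch.systems:
        arch.links.add(frozenset((a, b)))

def viable(arch, mission_critical):
    # A configuration counts as viable here only if all mission-critical
    # constituents are still present (a deliberately simple criterion).
    return mission_critical <= arch.systems

# A "reconfiguration controller" replays operator sequences at design time
# to evaluate configurations the SoS could assume at runtime.
arch = Architecture({"uav1", "uav2", "base"})
add_link(arch, "uav1", "base")
remove_system(arch, "uav2")          # simulated runtime departure
print(viable(arch, {"base"}))        # True: operation can be sustained
```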



2016 ◽  
Vol 26 (01) ◽  
pp. 1750015 ◽  
Author(s):  
İsmail Koyuncu ◽  
İbrahim Şahin ◽  
Clay Gloster ◽  
Namık Kemal Sarıtekin

Artificial neural networks (ANNs) are implemented in hardware when software implementations are inadequate in terms of performance. Implementing an ANN in hardware without design automation tools is a time-consuming process; it can, however, be automated using pre-designed neurons. Thus, in this work, several artificial neural cells were designed and implemented to form a library of neurons for the rapid realization of ANNs on FPGA-based embedded systems. The library contains a total of 60 different neurons: two-, four-, and six-input, biased and non-biased, each with 10 different activation functions. The neurons are highly pipelined and were designed to be connected to each other like Lego pieces. Chip statistics showed that, depending on the neuron type, about 25 selected neurons can fit into the smallest Virtex-6 chip, and an ANN formed from the neurons can be clocked at up to 576.89 MHz. An ANN-based Rössler system was constructed to show the effectiveness of using the neurons in the rapid realization of ANNs on embedded systems. Our experiments showed that, using these neurons, ANNs can rapidly be implemented in hardware and design time can be significantly reduced.
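As a rough behavioral illustration (in Python, not HDL) of the library idea, the sketch below models fixed-fan-in neurons with selectable activation functions and chains them "like Lego pieces" into layers. The class and activation names are assumptions for illustration; the actual library consists of pipelined FPGA circuits.

```python
# Behavioral sketch (not HDL) of composing pre-designed neurons into an ANN.
import math

ACTIVATIONS = {
    "sigmoid": lambda x: 1.0 / (1.0 + math.exp(-x)),
    "tanh": math.tanh,
    "relu": lambda x: max(0.0, x),
}

class Neuron:
    """A fixed-fan-in neuron, mirroring the 2-, 4-, and 6-input variants."""
    def __init__(self, weights, bias=0.0, activation="sigmoid"):
        self.weights, self.bias = weights, bias
        self.f = ACTIVATIONS[activation]

    def __call__(self, inputs):
        assert len(inputs) == len(self.weights), "fan-in must match the variant"
        return self.f(sum(w * x for w, x in zip(self.weights, inputs)) + self.bias)

# Neurons connect like Lego pieces: each layer's outputs feed the next layer.
layer1 = [Neuron([0.5, -0.3], bias=0.1), Neuron([0.2, 0.8], bias=-0.2)]
layer2 = [Neuron([1.0, -1.0], activation="tanh")]

x = [0.7, 0.1]
h = [n(x) for n in layer1]
print([n(h) for n in layer2])
```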





2009 ◽  
Vol 18 (03n04) ◽  
pp. 423-479 ◽  
Author(s):  
MARCO STUIT ◽  
NICK B. SZIRBIK

This paper presents the process-oriented aspects of a formal and visual agent-based business process modeling language. The language is of use for (networks of) organizations that elect or envisage multi-agent systems for the support of collaborative business processes. The paper argues that the design of a collaborative business process should start with a proper understanding of the work practice of the agents in the business domain under consideration. The language introduces a novel diagram to represent the wide range of (cross-enterprise) business interactions as a hierarchy of role-based interactions (including their ordering relations) in a tree structure. The behaviors owned by the agents playing the roles in the tree are specified in separate process diagrams. A collaborative business process studied in the context of a case study at a Dutch gas transport company is used to exemplify the modeling approach. Explicit (agent-based) process models can and should be verified using formal methods. In the business process community, design-time verification of a process design is considered vital in order to ensure the correctness and termination of a collaborative business process. The proposed modeling approach is enhanced with a design-time verification method. The direction taken in this research is to combine the interaction tree and the associated agent behaviors into a verifiable hierarchical colored Petri net in order to take advantage of its well-defined (execution) semantics and proven (computerized) verification techniques. The verification method presented in this paper consists of three steps: (1) the translation of the agent-based process design to a hierarchical colored Petri net, (2) the identification of process design errors, and (3) the correction and rollback of process design errors to the agent-based model. The translation technique has been implemented in a software tool that outputs the hierarchical colored Petri net in a format that can be loaded in the widely used CPN Tools software package. Verification results are discussed for the case study model.
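To make step (1) of the verification method concrete, here is a heavily simplified Python sketch: an interaction tree is flattened into a plain (uncolored) sequential Petri net, and termination is checked by playing the token game. The real approach produces hierarchical colored Petri nets for CPN Tools with richer ordering relations; the tree encoding and function names here are illustrative assumptions.

```python
# Illustrative sketch of step (1): flattening an interaction tree into a
# plain sequential Petri net chain and checking termination by firing
# transitions until the token reaches the sink place.

def tree_to_net(node):
    """Depth-first flatten: each leaf interaction becomes one transition."""
    name, children = node
    return [name] if not children else [t for c in children for t in tree_to_net(c)]

def terminates(transitions, max_steps=1000):
    # Chain net: place p0 -> t1 -> p1 -> t2 -> ...; one token starts in p0.
    marking, steps = 0, 0
    while marking < len(transitions) and steps < max_steps:
        marking += 1          # fire the only enabled transition
        steps += 1
    return marking == len(transitions)

# A toy interaction tree: order handling decomposes into two sub-interactions.
tree = ("HandleOrder", [("CheckStock", []), ("ShipGoods", [])])
print(tree_to_net(tree))               # ['CheckStock', 'ShipGoods']
print(terminates(tree_to_net(tree)))   # True: the token reaches the sink place
```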



2020 ◽  
Vol 17 (1) ◽  
pp. 293-313
Author(s):  
Saoussen Cheikhrouhou ◽  
Slim Kallel ◽  
Ikbel Guidara ◽  
Zakaria Maamar

Despite the prevalence of cloud and edge computing, ensuring the satisfaction of time-constrained business processes remains challenging. Indeed, some cloud/edge-based resources might not be available when needed, delaying the execution of these processes' tasks and/or the transfer of these processes' data. This paper presents an approach for specifying, verifying, and deploying time-constrained business processes in a mono-cloud, multi-edge context. First, the specification and verification of processes happen at design time and run time to ensure that these processes' tasks and data are continuously placed in a way that mitigates the violation of time constraints. This mitigation might require moving tasks and/or data from one host to another to reduce time latency, for example; a host could be a cloud, an edge, or either. Finally, the deployment of processes using a real case study allowed us to confirm the benefits of the early specification and verification of these processes in mitigating time-constraint violations.
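A minimal Python sketch of the kind of design-time check implied here: estimate a task's end-to-end time on each candidate host in a mono-cloud, multi-edge topology and relocate it when a deadline would be violated. The hosts, latency/speed numbers, and function names are made-up assumptions, not the paper's actual model.

```python
# Hypothetical sketch: design-time placement check for a time-constrained task.

HOSTS = {
    # host: (network latency to data in s, relative compute speed)
    "cloud":  (0.120, 4.0),
    "edge-1": (0.005, 1.0),
    "edge-2": (0.015, 1.5),
}

def estimated_time(host, work, data_transfers):
    latency, speed = HOSTS[host]
    return work / speed + data_transfers * latency

def pick_host(work, data_transfers, deadline):
    """Return a host meeting the deadline, preferring the fastest estimate.
    Mimics 'moving tasks and/or data from one host to another' to cut latency."""
    best = min(HOSTS, key=lambda h: estimated_time(h, work, data_transfers))
    t = estimated_time(best, work, data_transfers)
    return (best, t) if t <= deadline else (None, t)

host, t = pick_host(work=2.0, data_transfers=10, deadline=1.5)
print(host, round(t, 3))   # a violation detected at design time returns None
```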



2021 ◽  
Vol 20 (5s) ◽  
pp. 1-26
Author(s):  
Guilherme Korol ◽  
Michael Guilherme Jordan ◽  
Mateus Beck Rutzig ◽  
Antonio Carlos Schneider Beck

FPGAs, thanks to their energy efficiency, reconfigurability, and easily tunable HLS designs, have been used to accelerate a growing number of machine learning applications, especially CNN-based ones. As a representative example, IoT Edge applications, which require low-latency processing of resource-hungry CNNs, offload inferences from resource-limited IoT end nodes to Edge servers featuring FPGAs. However, the ever-increasing number of end nodes pressures these FPGA-based servers with new performance and adaptability challenges. While some works have exploited CNN optimizations to alleviate the computation and memory burdens of inference, others have exploited HLS to tune accelerators for statically defined optimization goals. However, these works have neither tackled CNN and HLS optimizations together nor provided any adaptability at runtime, where the workload's characteristics are unpredictable. In this context, we propose a hybrid two-step approach that, first, creates new optimization opportunities at design time through the automatic training of CNN model variants (obtained via pruning) and the automatic generation of versions of convolutional accelerators (obtained during HLS synthesis); and, second, synergistically exploits these CNN and HLS optimization opportunities to deliver a fully dynamic Multi-FPGA system that adapts its resources in a fully automatic or user-configurable manner. We implement this two-step approach as the AdaServ Framework and show, through a smart video surveillance Edge application as a case study, that it adapts to the always-changing Edge conditions: AdaServ processes at least 3.37× more inferences (using the automatic approach) and is at least 6.68× more energy-efficient (user-configurable approach) than the original convolutional accelerators and CNN models (VGG-16 and AlexNet). We also show that AdaServ achieves better results than solutions that dynamically change only the CNN model or the HLS version, highlighting the importance of exploiting both, and that it always outperforms the best statically chosen CNN model and HLS version, showing the need for dynamic adaptability.
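The second step can be pictured as a lookup over design-time-generated options: each (pruned CNN variant, HLS accelerator version) pair has a profiled latency and energy cost, and the runtime picks the cheapest pair meeting the current demand. The Python sketch below illustrates this selection; the profile numbers, variant names, and policy are assumptions, not AdaServ's actual mechanism.

```python
# Hypothetical sketch of runtime adaptation over design-time-generated options.
# (cnn_variant, hls_version): (latency_ms_per_inference, energy_mj) -- made-up.
PROFILES = {
    ("vgg16-full",   "hls-throughput"): (18.0, 90.0),
    ("vgg16-pruned", "hls-throughput"): (9.0,  55.0),
    ("vgg16-pruned", "hls-lowpower"):   (14.0, 30.0),
}

def adapt(required_latency_ms):
    """Exploit both CNN and HLS knobs together, as neither alone suffices."""
    feasible = {k: v for k, v in PROFILES.items() if v[0] <= required_latency_ms}
    if not feasible:
        return None  # shed load or queue: no configuration meets the demand
    return min(feasible, key=lambda k: feasible[k][1])  # lowest energy wins

print(adapt(required_latency_ms=15.0))  # ('vgg16-pruned', 'hls-lowpower')
print(adapt(required_latency_ms=10.0))  # ('vgg16-pruned', 'hls-throughput')
```

Under light load, the policy drifts toward the low-power pair; under heavy load, toward the high-throughput one, which is the intuition behind exploring both knobs rather than fixing either statically.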



2014 ◽  
Vol 2014 ◽  
pp. 1-21
Author(s):  
Naveed Imran ◽  
Ronald F. DeMara

Distance-Ranked Fault Identification (DRFI) is a dynamic reconfiguration technique that employs runtime inputs to conduct online functional testing of fielded FPGA logic and interconnect resources without test vectors. At design time, a diverse set of functionally identical bitstream configurations is created that utilize alternate hardware resources in the FPGA fabric. An ordering is imposed on the configuration pool, updated according to PageRank indexing precedence. Configurations that utilize permanently damaged resources, and hence manifest discrepant outputs, receive a lower rank and are thus less preferred for instantiation on the FPGA. Results indicate accurate identification of fault-free configurations in a pool of pregenerated bitstreams with a low number of reconfigurations and input evaluations. For MCNC benchmark circuits, the observed reduction in input evaluations is up to 75% when comparing the DRFI technique to unguided evaluation. The DRFI diagnosis method is shown to isolate all 14 healthy configurations from a pool of 100 pregenerated configurations, thereby offering 100% isolation accuracy provided that fault-free configurations exist in the design pool. When complete recovery is not feasible, graceful degradation may be realized, as demonstrated by the PSNR improvement of images processed in a video encoder case study.
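One simplified reading of the PageRank-based ordering, sketched in Python below: when two configurations agree on a runtime input, each casts a "vote" for the other, so configurations producing discrepant outputs accumulate little rank and sink to the bottom of the pool. DRFI's exact indexing scheme may differ; the vote graph and numbers here are illustrative assumptions.

```python
# Illustrative sketch: ranking a configuration pool with PageRank so that
# configurations using damaged resources (discrepant outputs) rank lowest.

def pagerank(votes, n, damping=0.85, iters=50):
    rank = [1.0 / n] * n
    for _ in range(iters):
        new = [(1 - damping) / n] * n
        for voter, targets in votes.items():
            if targets:
                share = damping * rank[voter] / len(targets)
                for t in targets:
                    new[t] += share
        rank = new
    return rank

# 4 configurations; config 3 disagreed with every other one on observed inputs,
# so nobody votes for it.
votes = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: []}
ranks = pagerank(votes, 4)
order = sorted(range(4), key=lambda c: -ranks[c])
print(order)   # faulty config 3 ranks last, so it is least preferred
```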



2019 ◽  
Vol 28 (2) ◽  
pp. 505-534 ◽  
Author(s):  
Darius Sas ◽  
Paris Avgeriou

Abstract: The embedded systems domain has grown exponentially over the past years. The industry is forced by the market to rapidly improve and release new products to beat the competition. Frenetic development rhythms thus shape this domain and give rise to several new challenges for software design and development, one of which is dealing with trade-offs between run-time and design-time quality attributes. Our objective is to study the practices, processes, and tools concerning the management of run-time and design-time quality attributes, as well as the trade-offs among them, from the perspective of embedded systems software engineers. We conducted an exploratory case study with two qualitative data collection steps, namely interviews and a focus group, involving six companies from the embedded systems domain with a total of twenty participants. The interviewed subjects showed a preference for run-time over design-time qualities. Trade-offs between design-time and run-time qualities are very common, but they are often implicit, due to the lack of adequate monitoring tools and practices. Practitioners prefer to deal with trade-offs in the most lightweight way possible, applying ad-hoc practices to avoid any incurred overhead. Finally, practitioners elaborated on how they envision ideal tool support for dealing with trade-offs. Although it is notoriously difficult to deal with trade-offs, constantly monitoring the quality attributes of interest with automated tools is key to making explicit and prudent trade-offs and mitigating the risk of incurring technical debt.





Author(s):  
Milan Mišovič ◽  
Oldřich Faldík

Creating target software as a component system has been a strong requirement throughout the last 20 years of software development. Architectural components are self-contained units that present not only partial and overall system behavior but also cooperate with one another on the basis of their interfaces. Among other benefits, components allow flexible modification of the processes whose behavior underlies component behavior, without disrupting the life of the component system. On the other hand, a component system makes it possible, at design time, to create numerous new connections between components and thus to produce modified system behaviors. All of this enables company management to perform, at design time, the required behavioral changes of processes in accordance with the demands of changing production and markets.

Software development, generally referred to as the SDP (Software Development Process), follows two directions. The first, called CBD (Component-Based Development), is dedicated to the development of component-based systems (CBS); the second targets the development of software under the influence of SOA (Service-Oriented Architecture). Each direction has its own development methodologies. The subject of this paper is only the first direction and the development of component-based systems within object-oriented methodologies. The requirement of today is to carry out the development of component-based systems within established object-oriented methodologies as a dominant style. In some of the known methodologies, however, this development is not completely transparent and is not even recognized as dominant. In some cases, it is corrected by special meta-integration models that embed component system development into an object methodology.

This paper presents a case study applied to the process-management fragment of the human resources (HR) domain in a small manufacturing enterprise, which confirms the success of the meta-model implementation mentioned in the contribution (Mišovič, Faldík, 2013).


