On the Use of i* for Architecting Hybrid Systems: A Method and an Evaluation Report

Author(s):  
Juan Pablo Carvallo ◽  
Xavier Franch


10.29007/7hvk ◽  
2018 ◽  
Author(s):  
Taylor T. Johnson

This report presents the results of the repeatability evaluation for a friendly competition on the formal verification of continuous and hybrid systems. The friendly competition took place as part of the workshop Applied Verification for Continuous and Hybrid Systems (ARCH) in 2017. In its first edition, thirteen tools were applied to solve benchmark problems across the six competition categories; of these, ten tools were evaluated and passed the repeatability evaluation. The repeatability results represent a snapshot of the current landscape of tools and the types of benchmarks for which they are particularly suited and for which others may repeat their analyses. Due to the diversity of problems in the verification of continuous and hybrid systems, and following standard practice in repeatability evaluations, tools are judged on a pass/fail basis for repeatability. These results probably provide the most complete assessment of tools for the safety verification of continuous and hybrid systems to date.


10.29007/n9t3 ◽  
2018 ◽  
Author(s):  
Taylor T. Johnson

This report presents the results of the repeatability evaluation for the 2nd International Competition on Verifying Continuous and Hybrid Systems (ARCH-COMP'18). The competition took place as part of the workshop Applied Verification for Continuous and Hybrid Systems (ARCH) in 2018. In its second edition, twenty-five tools submitted artifacts for the repeatability evaluation and were applied to solve benchmark problems across seven competition categories. The repeatability results represent a snapshot of the current landscape of tools and the types of benchmarks for which they are particularly suited and for which others may repeat their analyses. Due to the diversity of problems in the verification of continuous and hybrid systems, and following standard practice in repeatability evaluations, tools are judged on a pass/fail basis for repeatability.


10.29007/wbl3 ◽  
2019 ◽  
Author(s):  
Taylor T. Johnson

This report presents the results of the repeatability evaluation for the 3rd International Competition on Verifying Continuous and Hybrid Systems (ARCH-COMP'19). The competition took place as part of the workshop Applied Verification for Continuous and Hybrid Systems (ARCH) in 2019, affiliated with the Cyber-Physical Systems and Internet of Things Week (CPS-IoT Week'19). In its third edition, twenty-five tools submitted artifacts through a Git repository for the repeatability evaluation and were applied to solve benchmark problems across eight competition categories. The majority of participants adhered to new requirements for this year's repeatability evaluation, namely to submit scripts that automatically install and execute their tools in containerized virtual environments (specifically, Dockerfiles to be executed within Docker). The repeatability results represent a snapshot of the current landscape of tools and the types of benchmarks for which they are particularly suited and for which others may repeat their analyses. Due to the diversity of problems in the verification of continuous and hybrid systems, and following standard practice in repeatability evaluations, tools are judged on a pass/fail basis for repeatability.
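
The Dockerfile-based submission process described in this abstract lends itself to a simple automated harness. The following Python sketch is purely illustrative and is not taken from the actual ARCH-COMP evaluation scripts: it shows how a submitted artifact directory containing a Dockerfile might be built and executed, with a clean container exit treated as a repeatable run. The image tag, directory layout, and pass/fail convention are all assumptions.

import subprocess
import sys

def evaluate_artifact(artifact_dir: str, tag: str = "arch-repeatability") -> bool:
    """Build the submitted Dockerfile and run the tool once, hypothetically
    treating a zero exit code as a repeatable result."""
    # Build an image from the Dockerfile at the root of the artifact directory.
    build = subprocess.run(
        ["docker", "build", "-t", tag, artifact_dir],
        capture_output=True, text=True,
    )
    if build.returncode != 0:
        print(build.stderr, file=sys.stderr)
        return False
    # Run the container; the submitted entrypoint re-executes the benchmarks.
    run = subprocess.run(["docker", "run", "--rm", tag])
    return run.returncode == 0

if __name__ == "__main__":
    print("pass" if evaluate_artifact(sys.argv[1]) else "fail")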


10.29007/8dp4 ◽  
2020 ◽  
Author(s):  
Taylor T Johnson

This report presents the results of the repeatability evaluation for the 4th International Competition on Verifying Continuous and Hybrid Systems (ARCH-COMP'20). The competition took place as part of the workshop Applied Verification for Continuous and Hybrid Systems (ARCH) in 2020, affiliated with the IFAC World Congress. In its fourth edition, twenty-eight tools submitted artifacts through a Git repository for the repeatability evaluation and were applied to solve benchmark problems across seven competition categories. The majority of participants adhered to the requirements for this year's repeatability evaluation, namely to submit scripts that automatically install and execute their tools in containerized virtual environments (specifically, Dockerfiles to be executed within Docker), and several categories used performance evaluation information from a common execution platform. The repeatability results represent a snapshot of the current landscape of tools and the types of benchmarks for which they are particularly suited and for which others may repeat their analyses. Due to the diversity of problems in the verification of continuous and hybrid systems, and following standard practice in repeatability evaluations, tools are judged on a pass/fail basis for repeatability.
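
Because ARCH-COMP'20 additionally gathered performance information from a common execution platform, per-benchmark timings must be collected the same way for every tool so that they are comparable. A minimal sketch of such a measurement loop follows; the container invocations, tool image names, and benchmark names are hypothetical stand-ins, not the competition's actual commands.

import subprocess
import time

def time_benchmark(cmd: list[str]) -> tuple[bool, float]:
    """Run one benchmark command, returning (passed, wall-clock seconds)."""
    start = time.perf_counter()
    result = subprocess.run(cmd)
    return result.returncode == 0, time.perf_counter() - start

if __name__ == "__main__":
    # Hypothetical invocations; real categories and tool images differ.
    benchmarks = {
        "bouncing_ball": ["docker", "run", "--rm", "tool-a", "bouncing_ball"],
        "spacecraft": ["docker", "run", "--rm", "tool-a", "spacecraft"],
    }
    for name, cmd in benchmarks.items():
        passed, elapsed = time_benchmark(cmd)
        print(f"{name}: {'pass' if passed else 'fail'} in {elapsed:.1f}s")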


1979 ◽  
Author(s):  
Walter Batchelor

Author(s):  
Paul J. Turnbull ◽  
Tim McSweeney ◽  
Russell Webster ◽  
Mark Edmunds ◽  
Mike Hough
