verification algorithms
Recently Published Documents


TOTAL DOCUMENTS

63
(FIVE YEARS 10)

H-INDEX

13
(FIVE YEARS 1)

Processes ◽  
2021 ◽  
Vol 9 (3) ◽  
pp. 482
Author(s):  
Majid Ghaniee Zarch ◽  
Vicenç Puig ◽  
Javad Poshtan ◽  
Mahdi Aliyari Shoorehdeli

The development of efficient methods for process performance verification has drawn considerable attention in the research community. Viability theory is a mathematical tool for identifying the trajectories of a dynamical system that remain in a constraint set. In this paper, viability theory is investigated for this purpose in the case of nonlinear processes that can be represented in Linear Parameter Varying (LPV) form. In particular, verification algorithms based on the use of invariance and viability kernels and capture basins are proposed. The main difficulty in applying this theory is the computation of these sets, so a Lagrangian method is used to approximate them. Because of their simplicity and computational efficiency, zonotopes are adopted for set representation. Two new sets, called Safe Work Area (SWA) and Required Performance (RP), are defined, and an algorithm is proposed to use these concepts for verification. Finally, two application examples based on well-known case studies, a two-tank system and a pH neutralization plant, are provided to show the effectiveness of the proposed method.
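The abstract does not include an implementation, but the set operations it relies on can be illustrated with a minimal sketch. Below is a hypothetical Python example of a zonotope representation and one set-propagation step for an LPV system with vertex matrices; the matrices, bounds, and helper names are assumptions for illustration, not the authors' algorithm. The same zonotope primitives underpin the backward recursions used to approximate viability and invariance kernels.

```python
import numpy as np

class Zonotope:
    """Zonotope Z = {c + G @ b : b in [-1, 1]^m}, with center c and generator matrix G."""
    def __init__(self, center, generators):
        self.c = np.asarray(center, dtype=float)
        self.G = np.asarray(generators, dtype=float)

    def linear_map(self, A):
        # Image of the zonotope under x -> A @ x
        return Zonotope(A @ self.c, A @ self.G)

    def minkowski_sum(self, other):
        # Minkowski sum of two zonotopes: concatenate generator matrices
        return Zonotope(self.c + other.c, np.hstack([self.G, other.G]))

    def interval_hull(self):
        # Tight axis-aligned bounding box of the zonotope
        r = np.sum(np.abs(self.G), axis=1)
        return self.c - r, self.c + r

def one_step_reach(Z, A_vertices, U):
    """One reachability step for an LPV system x+ = A(theta) x + u,
    over-approximated by enclosing the images at the parameter-polytope vertices."""
    images = [Z.linear_map(A).minkowski_sum(U) for A in A_vertices]
    lows, highs = zip(*(img.interval_hull() for img in images))
    lo, hi = np.min(lows, axis=0), np.max(highs, axis=0)
    return Zonotope((lo + hi) / 2, np.diag((hi - lo) / 2))

# Illustrative LPV system with two parameter vertices (values are assumptions)
A_vertices = [np.array([[0.9, 0.1], [0.0, 0.8]]),
              np.array([[0.95, 0.2], [0.05, 0.7]])]
X0 = Zonotope([1.0, 0.0], 0.1 * np.eye(2))   # initial set
U = Zonotope([0.0, 0.0], 0.05 * np.eye(2))   # bounded input set
print(one_step_reach(X0, A_vertices, U).interval_hull())
```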


2021 ◽  
Vol 162 (2) ◽  
pp. 61-68
Author(s):  
Erika Sinka Lászlóné Adamik ◽  
Péter Hári ◽  
Anikó Póth ◽  
Ágnes Zorándi ◽  
Anna Bradák ◽  
...  

Introduction: The Hungarian Myocardial Infarction Registry contains data on 145 292 treatments related to 122 351 infarction events in 111 788 patients. Method: The recorded data are continuously monitored by the operators, and the quality assurance methods used to ensure the completeness and adequacy of the database are presented. In the online IT system, 119 automatic verification algorithms are run during data entry. Data that cannot be handled by the automatic verification algorithms are checked by five part-time controllers with healthcare qualifications and two full-time employees. The control methods have been continuously improved during the operation of the registry; since 2018, the data sheets already checked by the controllers have also been subject to a post-check. During the post-check, 2.4% of the already-checked data sheets required further correction. Results: The post-check has made the controllers' work more effective, as the number of data sheets found to be incorrect during the post-check is decreasing. The authors also examined the proportion of evaluable answers to the questions on the data sheet. The rate of evaluable responses exceeded 90% in most cases; however, the time of symptom onset was given in only 39% of the data sheets, and the answer on smoking habits was adequate in 59% of cases.
Discussion: The authors point out that the continuous reconciliation of the databases of the National Health Fund Management Centre and the Hungarian Myocardial Infarction Registry contributes to ensuring the completeness of registration and enables long-term follow-up of the patients' condition. After participation in the program became mandatory on 1 January 2014, two-thirds (67%) of patients treated with a financed diagnosis of myocardial infarction appeared in the registry database in the first year; this proportion exceeded 90% in 2017–2019 (91.7%, 93.6% and 91.3%). Conclusion: The authors conclude that the completeness and adequacy of the data must be continuously checked during the operation of a disease registry. The over-90% completeness of the registry database enables continuous monitoring of the quality parameters of the care system. Orv Hetil. 2021; 162(2): 61–68.
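The abstract does not describe the individual checks, but the kind of automatic verification algorithm it refers to can be illustrated with a small, hypothetical sketch: field-level validation rules applied at data entry. The field names, rules, and thresholds below are assumptions for illustration only, not the registry's actual rules.

```python
from datetime import date

# Hypothetical field-level validation rules, in the spirit of the automatic
# checks a registry might run at data entry (illustrative only).
def validate_record(rec: dict) -> list[str]:
    errors = []
    if not (18 <= rec.get("age", -1) <= 110):
        errors.append("age out of plausible range")
    if rec.get("admission_date", date.max) > date.today():
        errors.append("admission date lies in the future")
    if rec.get("symptom_onset") and rec.get("admission_date"):
        if rec["symptom_onset"] > rec["admission_date"]:
            errors.append("symptom onset after admission")
    if rec.get("smoking") not in {"never", "former", "current", None}:
        errors.append("smoking status not an allowed value")
    return errors

record = {"age": 64, "admission_date": date(2020, 5, 3),
          "symptom_onset": date(2020, 5, 2), "smoking": "former"}
print(validate_record(record) or "record passes the automatic checks")
```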


Author(s):  
David Shriver ◽  
Sebastian Elbaum ◽  
Matthew B. Dwyer

Abstract Despite the large number of sophisticated deep neural network (DNN) verification algorithms, DNN verifier developers, users, and researchers still face several challenges. First, verifier developers must contend with the rapidly changing DNN field to support new DNN operations and property types. Second, verifier users have the burden of selecting a verifier input format to specify their problem. Due to the many input formats, this decision can greatly restrict the verifiers that a user may run. Finally, researchers face difficulties in re-using benchmarks to evaluate and compare verifiers, due to the large number of input formats required to run different verifiers. Existing benchmarks are rarely in formats supported by verifiers other than the one for which the benchmark was introduced. In this work, we present DNNV, a framework for reducing the burden on DNN verifier researchers, developers, and users. DNNV standardizes input and output formats, includes a simple yet expressive DSL for specifying DNN properties, and provides powerful simplification and reduction operations to facilitate the application, development, and comparison of DNN verifiers. We show how DNNV increases the support of verifiers for existing benchmarks from 30% to 74%.
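The abstract does not reproduce DNNV's property DSL, so the sketch below only illustrates the general idea of a DNN property, here a local robustness condition checked naively by sampling. The toy network, bounds, and helper names are assumptions for illustration, not the DNNV API; a verifier would prove the universally quantified property, whereas sampling can only search for counterexamples.

```python
import numpy as np

# A toy "network": a fixed affine function standing in for a trained DNN.
def network(x: np.ndarray) -> np.ndarray:
    W = np.array([[1.0, -2.0], [0.5, 1.5]])
    b = np.array([0.1, -0.3])
    return W @ x + b

def robust_at(x0: np.ndarray, eps: float, n_samples: int = 10_000) -> bool:
    """Falsification-style check of a local robustness property:
    for all x with ||x - x0||_inf <= eps, argmax network(x) == argmax network(x0)."""
    label = int(np.argmax(network(x0)))
    rng = np.random.default_rng(0)
    for _ in range(n_samples):
        x = x0 + rng.uniform(-eps, eps, size=x0.shape)
        if int(np.argmax(network(x))) != label:
            return False  # counterexample found
    return True  # no counterexample found (not a proof)

x0 = np.array([0.2, -0.4])
print(robust_at(x0, eps=0.05))
```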


Heliyon ◽  
2020 ◽  
Vol 6 (12) ◽  
pp. e05808
Author(s):  
Anastasia N. Katsaounidou ◽  
Antonios Gardikiotis ◽  
Nikolaos Tsipas ◽  
Charalampos A. Dimoulas

Author(s):  
Dirk Beyer ◽  
Philipp Wendler

Abstract Verification algorithms are among the most resource-intensive computation tasks. Saving energy is important both for the environment and for reducing cost in data centers. Yet researchers still compare the efficiency of algorithms in terms of CPU time (or even wall time). Perhaps one reason for this is that measuring the energy consumption of computational processes is not as convenient as measuring the consumed time, and tool support has been insufficient. To close this gap, we contribute CPU Energy Meter, a small tool that reads the energy values that Intel CPUs track inside the chip. To make energy measurements as easy as possible, we integrated CPU Energy Meter into BenchExec, a benchmarking tool that is already used by many researchers and competitions in the domain of formal methods. As evidence of its usefulness, we explored the energy consumption of some state-of-the-art verifiers and report several interesting insights, for example, that energy consumption is not necessarily correlated with CPU time.
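CPU Energy Meter itself reads the Intel RAPL counters; the sketch below is not that implementation, but a minimal way to sample the same package-level counter through the Linux powercap sysfs interface, assuming a machine that exposes /sys/class/powercap/intel-rapl:0/energy_uj and that the files are readable.

```python
import time
from pathlib import Path

# RAPL package-domain energy counter exposed by the Linux powercap framework
# (microjoules; wraps around at max_energy_range_uj).
RAPL_ENERGY = Path("/sys/class/powercap/intel-rapl:0/energy_uj")
RAPL_RANGE = Path("/sys/class/powercap/intel-rapl:0/max_energy_range_uj")

def read_uj(path: Path) -> int:
    return int(path.read_text().strip())

def measure_joules(workload, *args):
    """Return (result, joules) for running workload(*args) on this machine,
    accounting for a single wrap-around of the energy counter."""
    wrap = read_uj(RAPL_RANGE)
    before = read_uj(RAPL_ENERGY)
    result = workload(*args)
    after = read_uj(RAPL_ENERGY)
    delta = after - before if after >= before else after + wrap - before
    return result, delta / 1e6  # microjoules -> joules

def busy_loop(n):
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    _, joules = measure_joules(busy_loop, 10_000_000)
    print(f"approx. package energy: {joules:.2f} J")
```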


2020 ◽  
Vol 2 (4) ◽  
pp. 177-187
Author(s):  
Weipeng Cao ◽  
Zhongwu Xie ◽  
Xiaofei Zhou ◽  
Zhiwu Xu ◽  
Cong Zhou ◽  
...  

Robotica ◽  
2019 ◽  
Vol 38 (3) ◽  
pp. 512-530
Author(s):  
Kala Rahul

Summary Mission planning is a complex motion planning problem specified using Temporal Logic, consisting of Boolean and temporal operators, and typically solved by model verification algorithms with exponential complexity. The paper proposes a co-evolutionary optimization approach, building an iterative solution to the problem. The language for mission specification is generic enough to represent everyday missions, while specific enough to allow the design of heuristics. The mission is broken into components that cooperate with each other. The experiments confirm that the robot outperforms search, evolutionary, and model verification techniques. The results are demonstrated using a Pioneer LX robot.
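The paper's specification language and planner are not given in the abstract; the sketch below only illustrates the flavour of decomposing a temporal-logic-style mission ("eventually visit A and B, then reach G") into per-site components and scoring candidate plans. The sites, costs, and the exhaustive search standing in for the co-evolutionary optimizer are assumptions for illustration.

```python
from itertools import permutations

# Hypothetical mission sites on a grid; the goal G must be reached last.
SITES = {"A": (1, 4), "B": (5, 2), "G": (8, 8)}
START = (0, 0)

def dist(p, q):
    # Manhattan distance as a stand-in for path cost between waypoints
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def plan_cost(order):
    """Cost of visiting the sites in the given order; G last encodes 'then reach G'."""
    if order[-1] != "G":
        return float("inf")  # violates the temporal part of the mission
    pts = [START] + [SITES[s] for s in order]
    return sum(dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1))

# Exhaustive search over visit orders stands in for the paper's optimizer.
best = min(permutations(SITES), key=plan_cost)
print(best, plan_cost(best))
```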


2019 ◽  
Vol 43 (1) ◽  
pp. 72-88 ◽  
Author(s):  
Olga Papadopoulou ◽  
Markos Zampoglou ◽  
Symeon Papadopoulos ◽  
Ioannis Kompatsiaris

Purpose: As user-generated content (UGC) is entering the news cycle alongside content captured by news professionals, it is important to detect misleading content as early as possible and avoid disseminating it. The purpose of this paper is to present an annotated dataset of 380 user-generated videos (UGVs), 200 debunked and 180 verified, along with 5,195 near-duplicate reposted versions of them, and a set of automatic verification experiments intended to serve as a baseline for future comparisons.
Design/methodology/approach: The dataset was formed using a systematic process combining text search and near-duplicate video retrieval, followed by manual annotation using a set of journalism-inspired guidelines. Following the formation of the dataset, the automatic verification step was carried out using machine learning over a set of well-established features.
Findings: Analysis of the dataset shows distinctive patterns in the spread of verified vs debunked videos, and the application of state-of-the-art machine learning models shows that the dataset poses a particularly challenging problem for automatic methods.
Research limitations/implications: Practical limitations constrained the current collection to three platforms: YouTube, Facebook and Twitter. Furthermore, there exists a wealth of information that can be drawn from the dataset analysis, which goes beyond the constraints of a single paper. Extension to other platforms and further analysis will be the object of subsequent research.
Practical implications: The dataset analysis indicates directions for future automatic video verification algorithms, and the dataset itself provides a challenging benchmark.
Social implications: Having a carefully collected and labelled dataset of debunked and verified videos is an important resource both for developing effective disinformation-countering tools and for supporting media literacy activities.
Originality/value: Besides its importance as a unique benchmark for research in automatic verification, the analysis also allows a glimpse into the dissemination patterns of UGC, and possible telltale differences between fake and real content.
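The abstract mentions machine learning over well-established features but does not list them; the sketch below is only a generic supervised baseline of the kind such an experiment might use, with made-up feature values standing in for the paper's actual per-video features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder data standing in for per-video features (e.g. sharing statistics,
# comment counts, text-based cues); values and feature count are illustrative only.
rng = np.random.default_rng(0)
n_videos = 380                        # 200 debunked + 180 verified, as in the paper
X = rng.normal(size=(n_videos, 12))   # 12 hypothetical numeric features per video
y = np.array([0] * 200 + [1] * 180)   # 0 = debunked, 1 = verified

# A simple supervised baseline: random forest evaluated with 5-fold cross-validation.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
print("mean F1 over folds:", scores.mean())
```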

