Advances in Automatic Software Verification: SV-COMP 2020

Author(s):  
Dirk Beyer

Abstract This report describes the 2020 Competition on Software Verification (SV-COMP), the 9th edition of a series of comparative evaluations of fully automatic software verifiers for C and Java programs. The competition provides a snapshot of the current state of the art in the area, and has a strong focus on replicability of its results. The competition was based on 11 052 verification tasks for C programs and 416 verification tasks for Java programs. Each verification task consisted of a program and a property (reachability, memory safety, overflows, termination). SV-COMP 2020 had 28 participating verification systems from 11 countries.

Author(s):  
Dirk Beyer

Abstract SV-COMP 2021 is the 10th edition of the Competition on Software Verification (SV-COMP), which is an annual comparative evaluation of fully automatic software verifiers for C and Java programs. The competition provides a snapshot of the current state of the art in the area, and has a strong focus on reproducibility of its results. The competition was based on 15 201 verification tasks for C programs and 473 verification tasks for Java programs. Each verification task consisted of a program and a property (reachability, memory safety, overflows, termination). SV-COMP 2021 had 30 participating verification systems from 27 teams from 11 countries.


Author(s):  
Dirk Beyer

Abstract This report describes Test-Comp 2021, the 3rd edition of the Competition on Software Testing. The competition is a series of annual comparative evaluations of fully automatic software test generators for C programs. The competition has a strong focus on reproducibility of its results and its main goal is to provide an overview of the current state of the art in the area of automatic test-generation. The competition was based on 3 173 test-generation tasks for C programs. Each test-generation task consisted of a program and a test specification (error coverage, branch coverage). Test-Comp 2021 had 11 participating test generators from 6 countries.


DYNA, 2020, Vol 87 (213), pp. 9-16
Author(s):
Franklin Alexander Sepulveda Sepulveda
Dagoberto Porras-Plata
Milton Sarria-Paja

Current state-of-the-art speaker verification (SV) systems are known to be strongly affected by unexpected variability present during testing, such as environmental noise or changes in vocal effort. In this work, we analyze and evaluate articulatory information about the tongue's movement as a means to improve the performance of speaker verification systems. We use a Spanish database that, besides the speech signals, also includes articulatory information acquired with an ultrasound system. Two groups of features are proposed to represent the articulatory information, and the obtained performance is compared to an SV system trained only with acoustic information. Our results show that the proposed features contain highly discriminative information related to speaker identity; furthermore, these features can be used to complement and improve existing systems by combining such information with cepstral coefficients at the feature level.
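For illustration, a minimal feature-level fusion sketch follows. It is an assumption of how such a combination could look in practice, not the authors' code: per-frame cepstral and articulatory feature matrices (placeholder arrays here) are concatenated frame by frame and normalised before being passed to a speaker-verification back end.

```python
# Illustrative sketch (not the authors' code): feature-level fusion of
# cepstral and articulatory features for a speaker-verification front end.
# mfcc_feats and artic_feats are hypothetical per-frame feature matrices
# (n_frames x dim) assumed to be extracted elsewhere for the same utterance.
import numpy as np

def fuse_features(mfcc_feats: np.ndarray, artic_feats: np.ndarray) -> np.ndarray:
    """Concatenate acoustic and articulatory features frame by frame."""
    n = min(len(mfcc_feats), len(artic_feats))  # align frame counts
    fused = np.hstack([mfcc_feats[:n], artic_feats[:n]])
    # Per-utterance mean/variance normalisation, a common front-end step.
    return (fused - fused.mean(axis=0)) / (fused.std(axis=0) + 1e-8)

# Example with random placeholders standing in for real extracted features.
rng = np.random.default_rng(0)
fused = fuse_features(rng.normal(size=(300, 20)), rng.normal(size=(300, 12)))
print(fused.shape)  # (300, 32): frames x (cepstral + articulatory) dims
```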


2017, Vol 17 (3), pp. 311-352
Author(s):
JAMES CHENEY
ALBERTO MOMIGLIANO

Abstract The problem of mechanically formalizing and proving metatheoretic properties of programming language calculi, type systems, operational semantics, and related formal systems has received considerable attention recently. However, the dual problem of searching for errors in such formalizations has attracted comparatively little attention. In this article, we present αCheck, a bounded model checker for metatheoretic properties of formal systems specified using nominal logic. In contrast to the current state of the art for metatheory verification, our approach is fully automatic, does not require expertise in theorem proving on the part of the user, and produces counterexamples in the case that a flaw is detected. We present two implementations of this technique, one based on negation-as-failure and one based on negation elimination, along with experimental results showing that these techniques are fast enough to be used interactively to debug systems as they are developed.
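As a rough illustration of the bounded counterexample search idea (a toy sketch only, not αCheck itself, which works over nominal-logic specifications), one can enumerate terms of a tiny language up to a depth bound and report the first term that violates a metatheoretic property; below, the typing rule for addition is deliberately broken so the search finds a well-typed term that gets stuck.

```python
# Toy sketch of bounded counterexample search for a metatheoretic property.
# It only illustrates the general idea behind checkers such as aCheck; the
# real tool works over nominal-logic specifications, not this ad-hoc model.
from itertools import product

def exprs(depth):
    """Enumerate all expressions of a tiny language up to a depth bound."""
    if depth == 0:
        yield from (0, 1, True, False)
    else:
        yield from exprs(depth - 1)
        for a, b in product(list(exprs(depth - 1)), repeat=2):
            yield ("add", a, b)

def typeof(e):
    """Deliberately flawed typing: 'add' never checks its operands."""
    if isinstance(e, bool):
        return "bool"
    if isinstance(e, int):
        return "int"
    return "int"  # ("add", a, b) -- the missing premise is the bug

def evaluate(e):
    """Evaluator that raises TypeError when evaluation gets stuck."""
    if not isinstance(e, tuple):
        return e
    va, vb = evaluate(e[1]), evaluate(e[2])
    if isinstance(va, bool) or isinstance(vb, bool):
        raise TypeError("add applied to a boolean")
    return va + vb

def find_counterexample(bound):
    """Search for a well-typed term whose evaluation gets stuck."""
    for e in exprs(bound):
        if typeof(e) is None:
            continue  # ill-typed terms need not evaluate
        try:
            evaluate(e)
        except TypeError:
            return e  # counterexample: the typing rules are unsound
    return None

print(find_counterexample(1))  # e.g. ('add', 0, True)
```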


2021, pp. 20210469
Author(s):
Peter Meidahl Petersen
N George Mikhaeel
Umberto Ricardi
Jessica L Brady

This status article describes current state-of-the-art radiotherapy for lymphomas and new emerging techniques. Current state-of-the-art radiotherapy is sophisticated, individualised, CT-based, intensity-modulated treatment, using PET/CT to define the target. The concept of involved site radiotherapy should be used, delineating the target using the same principles as for solid tumours. The optimal treatment delivery includes motion management and online treatment verification systems, which reduce intra- and interfractional anatomical variation. Emerging radiotherapy techniques in lymphomas include adaptive radiotherapy in MR- and CT-based treatment systems and proton therapy. Next-generation linear accelerators can deliver adaptive treatment and allow relatively quick online adaptation to daily variations in anatomy. The computer systems use machine learning to facilitate rapid automatic contouring of the target and organs-at-risk. Moreover, emerging MR-based planning and treatment facilities allow target definition directly from MR scans and allow intra-fractional tracking of structures recognisable on MR. Proton facilities are now being widely implemented. The benefits of proton therapy are due to the physical properties of protons, which in many cases allow sparing of normal tissue. The variety of techniques in modern radiotherapy means that the radiation oncologist must be able to choose the right technique for each patient. The choice is mainly based on experience and standard protocols, but new systems are emerging that calculate patient-specific risks for a given treatment plan and that integrate clinical and risk factors into the planning process itself.


2020, Vol 34 (09), pp. 13576-13582
Author(s):
Dusica Marijan
Arnaud Gotlieb

Machine learning has become prevalent across a wide variety of applications. Unfortunately, machine learning has also been shown to be susceptible to deception, leading to errors and even fatal failures. This circumstance calls into question the widespread use of machine learning, especially in safety-critical applications, unless we are able to assure its correctness and trustworthiness properties. Software verification and testing are established techniques for assuring such properties, for example by detecting errors. However, software testing challenges for machine learning are vast and profuse, yet critical to address. This summary talk discusses the current state of the art of software testing for machine learning. More specifically, it discusses six key challenge areas for software testing of machine learning systems, examines current approaches to these challenges and highlights their limitations. The paper provides a research agenda with elaborated directions for making progress toward advancing the state of the art in testing of machine learning.


Sensors, 2021, Vol 21 (11), pp. 3704
Author(s):
Wejdan L. Alyoubi
Maysoon F. Abulkhair
Wafaa M. Shalash

Diabetic retinopathy (DR) is a disease resulting from diabetes complications, causing non-reversible damage to retinal blood vessels. DR is a leading cause of blindness if not detected early. The currently available DR treatments are limited to stopping or delaying the deterioration of sight, highlighting the importance of regular scanning using high-efficiency computer-based systems to diagnose cases early. The current work presents fully automatic diagnosis systems that outperform manual techniques, avoiding misdiagnosis and reducing time, effort and cost. The proposed system classifies DR images into five stages (no-DR, mild, moderate, severe and proliferative DR) and localizes the affected lesions on the retina surface. The system comprises two deep learning-based models. The first model (CNN512) uses the whole image as input to a CNN to classify it into one of the five DR stages. It achieved an accuracy of 88.6% and 84.1% on the DDR and the APTOS Kaggle 2019 public datasets, respectively, compared to state-of-the-art results. The second model uses an adapted YOLOv3 model to detect and localize the DR lesions, achieving a 0.216 mAP in lesion localization on the DDR dataset, which improves on the current state-of-the-art results. Finally, the two structures, CNN512 and YOLOv3, were fused to classify DR images and localize DR lesions, obtaining an accuracy of 89%, a sensitivity of 89% and a specificity of 97.3%, exceeding the current state-of-the-art results.
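A rough sketch of how the two fused components could be combined at inference time follows. It is an assumption for illustration only, not the paper's code; `classify_stage` and `detect_lesions` are hypothetical wrappers standing in for the trained CNN512 and YOLOv3 models described in the abstract.

```python
# Minimal inference sketch (assumed structure, not the paper's code): a
# whole-image DR-stage classifier and a lesion detector are run on the same
# fundus image and their outputs are merged into one report.
from dataclasses import dataclass
from typing import Callable, List, Tuple

DR_STAGES = ["no-DR", "mild", "moderate", "severe", "proliferative"]

@dataclass
class Lesion:
    label: str                       # e.g. haemorrhage, exudate (assumed labels)
    box: Tuple[int, int, int, int]   # x, y, width, height in pixels
    score: float                     # detector confidence

def fused_diagnosis(image,
                    classify_stage: Callable[[object], int],
                    detect_lesions: Callable[[object], List[Lesion]],
                    min_score: float = 0.5) -> dict:
    """Combine stage classification with localized lesions for one image."""
    stage_idx = classify_stage(image)                 # 0..4 DR stage index
    lesions = [l for l in detect_lesions(image) if l.score >= min_score]
    return {"stage": DR_STAGES[stage_idx], "lesions": lesions}
```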


1995, Vol 38 (5), pp. 1126-1142
Author(s):
Jeffrey W. Gilger

This paper is an introduction to behavioral genetics for researchers and practitioners in language development and disorders. The specific aims are to illustrate some essential concepts and to show how behavioral genetic research can be applied to the language sciences. Past genetic research on language-related traits has tended to focus on simple etiology (i.e., the heritability or familiality of language skills). The current state of the art, however, suggests that great promise lies in addressing more complex questions through behavioral genetic paradigms. In terms of future goals it is suggested that: (a) more behavioral genetic work of all types should be done, including replications and expansions of preliminary studies already in print; (b) work should focus on fine-grained, theory-based phenotypes with research designs that can address complex questions in language development; and (c) work in this area should utilize a variety of samples and methods (e.g., twin and family samples, heritability and segregation analyses, linkage and association tests, etc.).


1976, Vol 21 (7), pp. 497-498
Author(s):
STANLEY GRAND

2020, Vol 17 (6), pp. 847-856
Author(s):
Shengbing Ren
Xiang Zhang

The problem of synthesizing adequate inductive invariants lies at the heart of automated software verification. State-of-the-art machine learning algorithms for synthesizing invariants have gradually shown excellent performance. However, synthesizing disjunctive invariants remains a difficult task. In this paper, we propose k++ Support Vector Machine (k++SVM), a method integrating k-means++ and SVM to synthesize conjunctive and disjunctive invariants. First, given a program, we execute it to collect program states. Next, k++SVM adopts k-means++ to cluster the positive samples and then applies SVM to distinguish each positive-sample cluster from all negative samples, yielding the candidate invariants. Finally, a set of theories founded on Hoare logic is adopted to check whether the candidate invariants are true invariants. If the candidate invariants fail the check, we sample more states and repeat the algorithm. The experimental results show that k++SVM is compatible with the algorithms for Intersection Of Half-space (IOH) and more efficient than the Interproc tool. Furthermore, our method can synthesize conjunctive and disjunctive invariants automatically.
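A minimal sketch of the clustering-then-separation step described above, assuming positive (reachable) and negative (bad) program states have already been collected as numeric vectors; scikit-learn stands in for the paper's actual implementation, and the final Hoare-logic check of the candidates is omitted.

```python
# Sketch of the k-means++ clustering plus SVM separation step (not the
# authors' implementation). Each linear predicate separates one cluster of
# positive states from all negative states; their disjunction forms the
# candidate invariant.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def candidate_invariants(pos_states, neg_states, n_clusters=2):
    """Return one linear predicate (w, b), meaning w.x + b >= 0, per cluster."""
    clusters = KMeans(n_clusters=n_clusters, init="k-means++",
                      n_init=10, random_state=0).fit_predict(pos_states)
    predicates = []
    for c in range(n_clusters):
        pos_c = pos_states[clusters == c]
        X = np.vstack([pos_c, neg_states])
        y = np.concatenate([np.ones(len(pos_c)), np.zeros(len(neg_states))])
        svm = LinearSVC(C=1e3).fit(X, y)   # large C: prefer exact separation
        predicates.append((svm.coef_[0], svm.intercept_[0]))
    return predicates

# Toy usage: two disjoint groups of reachable states vs. one group of bad states.
pos = np.array([[0.0, 0.0], [0.5, 0.2], [10.0, 10.0], [10.5, 9.8]])
neg = np.array([[5.0, 5.0], [5.2, 4.8]])
for w, b in candidate_invariants(pos, neg):
    print(f"{w[0]:+.2f}*x {w[1]:+.2f}*y {b:+.2f} >= 0")
```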

