Introduction to Neural Network Verification

2021 · Vol. 7 (1–2) · pp. 1–157
Author(s): Aws Albarghouthi



Author(s): Hoang-Dung Tran, Diego Manzanas Lopez, Xiaodong Yang, Patrick Musau, Luan Viet Nguyen, ...






Author(s): Yizhak Yisrael Elboher, Justin Gottschlich, Guy Katz


Author(s): Pengfei Yang, Renjue Li, Jianlin Li, Cheng-Chao Huang, Jingyi Wang, ...

Abstract: We propose a spurious-region-guided refinement approach for robustness verification of deep neural networks. Our method starts by applying the DeepPoly abstract domain to analyze the network. If the robustness property cannot be verified, the result is inconclusive. Because of the over-approximation, the computed region in the abstraction may be spurious, in the sense that it contains no true counterexample. Our goal is to identify such spurious regions and use them to guide the abstraction refinement. The core idea is to use the constraints obtained from the abstraction to infer new bounds for the neurons, which is achieved with linear programming. With the new bounds, we iteratively re-apply DeepPoly, aiming to eliminate spurious regions. We have implemented our approach in a prototype tool, DeepSRGR. Experimental results show that a large number of regions can be identified as spurious and, as a result, the precision of DeepPoly can be significantly improved. As a side contribution, we show that our approach can also verify quantitative robustness properties.
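The core refinement step of the abstract — using the abstraction's linear constraints plus the negated property to infer tighter neuron bounds via linear programming — can be sketched as follows. This is our own minimal illustration, not the DeepSRGR implementation: the function name, the toy constraint system, and the use of `scipy.optimize.linprog` are all our assumptions.

```python
# Hedged sketch of LP-based bound tightening: collect linear constraints
# (here only the negated property over the inputs; a real tool would also add
# the abstraction's per-layer constraints), then minimize/maximize each
# variable to obtain tighter bounds. Not the DeepSRGR code.
from scipy.optimize import linprog

def refine_bounds(A_ub, b_ub, bounds):
    """Tighten each variable's box bounds subject to A_ub @ x <= b_ub."""
    n = len(bounds)
    new_bounds = []
    for i in range(n):
        c = [0.0] * n
        c[i] = 1.0
        lo = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)            # min x_i
        hi = linprog([-v for v in c], A_ub=A_ub, b_ub=b_ub, bounds=bounds)  # max x_i
        if not (lo.success and hi.success):
            # Infeasible system: the spurious region is empty, so the
            # counterexample candidate is ruled out entirely.
            return None
        new_bounds.append((lo.fun, -hi.fun))
    return new_bounds

# Toy example: inputs x1, x2 in [0, 1]; the negated property asserts
# x1 + x2 >= 1.5, written as -x1 - x2 <= -1.5. The LP infers x1, x2 >= 0.5,
# so a re-run of the abstract analysis can start from tighter input bounds.
tightened = refine_bounds(A_ub=[[-1.0, -1.0]], b_ub=[-1.5],
                          bounds=[(0.0, 1.0), (0.0, 1.0)])
```

In the paper's setting, the iteration alternates this tightening with re-running DeepPoly until the spurious region is eliminated or no further progress is made.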



Author(s): David Shriver, Sebastian Elbaum, Matthew B. Dwyer

Abstract: Despite the large number of sophisticated deep neural network (DNN) verification algorithms, DNN verifier developers, users, and researchers still face several challenges. First, verifier developers must contend with the rapidly changing DNN field to support new DNN operations and property types. Second, verifier users bear the burden of selecting an input format in which to specify their problem; because there are many such formats, this choice can greatly restrict which verifiers a user can run. Finally, researchers face difficulties in reusing benchmarks to evaluate and compare verifiers, owing to the many input formats required to run different verifiers: existing benchmarks are rarely in formats supported by verifiers other than the one for which the benchmark was introduced. In this work, we present DNNV, a framework that reduces the burden on DNN verifier researchers, developers, and users. DNNV standardizes input and output formats, includes a simple yet expressive DSL for specifying DNN properties, and provides powerful simplification and reduction operations that facilitate the application, development, and comparison of DNN verifiers. We show that DNNV increases verifier support for existing benchmarks from 30% to 74%.
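The "simple yet expressive DSL" the abstract mentions can be pictured as a small Python-embedded language that captures a property as an AST, which the framework can then translate into each verifier's native input format. The classes and helper names below are our own minimal sketch for illustration, not DNNV's actual DNNP API.

```python
# Illustrative sketch of an embedded property DSL: expressions are built as
# immutable AST nodes, so a framework can walk the tree and emit whatever
# format a given verifier expects. All names here are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Expr:
    op: str
    args: tuple

    def __and__(self, other: "Expr") -> "Expr":
        return Expr("and", (self, other))

def var(name: str) -> Expr:
    return Expr("var", (name,))

def le(a, b) -> Expr:
    return Expr("le", (a, b))

def implies(premise: Expr, conclusion: Expr) -> Expr:
    return Expr("implies", (premise, conclusion))

def forall(v: Expr, body: Expr) -> Expr:
    return Expr("forall", (v, body))

# Local robustness around a fixed input x0: every x within eps of x0
# (per coordinate) must keep the network's decision margin non-positive.
# The strings "x - x0", "eps", "margin(N(x))" are symbolic placeholders.
x = var("x")
in_ball = le("x - x0", "eps") & le("-eps", "x - x0")
prop = forall(x, implies(in_ball, le("margin(N(x))", "0")))
```

Representing properties as data rather than verifier-specific text is what makes the reduction and translation steps described in the abstract possible.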


