Continuous Verification of Network Security Compliance

Author(s):  
Claas Lorenz ◽  
Vera Clemens ◽  
Max Schrötter ◽  
Bettina Schnor

Continuous verification of network security compliance is an accepted need. In particular, the analysis of stateful packet filters plays a central role for network security in practice. But the few existing tools that support the analysis of stateful packet filters are based on generally applicable formal methods such as Satisfiability Modulo Theories (SMT) or theorem provers and show runtimes in the order of minutes to hours, making them unsuitable for continuous compliance verification.

In this work, we address these challenges and present the concept of state shell interweaving to transform a stateful firewall rule set into a stateless rule set. This allows us to reuse any fast domain-specific engine from the field of data plane verification, leveraging smart, very fast, and domain-specialized data structures and algorithms, including Header Space Analysis (HSA). First, we introduce the formal language FPL, which enables a high-level, human-understandable specification of the desired state of network security. Second, we demonstrate the instantiation of a compliance process using a verification framework that analyzes the configuration of complex networks and devices - including stateful firewalls - for compliance with FPL policies. Our evaluation results show the scalability of the presented approach for the well-known Internet2 and Stanford benchmarks as well as for large firewall rule sets, where it outscales state-of-the-art tools by a factor of over 41.
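The abstract does not detail the transformation itself, but the general idea of unrolling a stateful rule set into a stateless one can be illustrated with a minimal, hypothetical sketch: each stateful allow rule is interwoven with an explicit reverse-direction rule that stands in for the connection-tracking state. The rule representation and names below are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: unroll stateful firewall rules into stateless ones.
# Rule fields and the reversal logic are illustrative assumptions only.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Rule:
    src: str          # source address/prefix
    dst: str          # destination address/prefix
    proto: str        # e.g. "tcp"
    action: str       # "allow" or "deny"
    stateful: bool    # True if replies are implicitly allowed

def interweave(rules):
    """Return a stateless rule set over-approximating the stateful one."""
    stateless = []
    for r in rules:
        stateless.append(replace(r, stateful=False))
        if r.stateful and r.action == "allow":
            # Emit an explicit reverse "shell" rule standing in for the
            # connection-tracking state of the original stateful rule.
            stateless.append(Rule(src=r.dst, dst=r.src, proto=r.proto,
                                  action="allow", stateful=False))
    return stateless

rules = [Rule("10.0.0.0/24", "0.0.0.0/0", "tcp", "allow", stateful=True)]
for r in interweave(rules):
    print(r)
```

A purely stateless rule set of this shape is what makes fast data plane verification engines such as HSA-based ones applicable in the first place.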

2021 ◽  
Author(s):  
Claas Lorenz ◽  
Vera Clemens ◽  
Max Schrötter ◽  
Bettina Schnor

Continuous verification of network security compliance is an accepted need. In particular, the analysis of stateful packet filters plays a central role for network security in practice. But the few existing tools that support the analysis of stateful packet filters show runtimes in the order of minutes to hours, making them unsuitable for continuous compliance verification.

In this work, we address these challenges and present a solution based on the application of formal methods. First, we introduce the formal language FPL, which enables a high-level, human-understandable specification of the desired state of network security. Second, we demonstrate the instantiation of a compliance process using a verification framework that analyzes the configuration of complex networks and devices - including stateful firewalls - for compliance with FPL policies. Our evaluation results show the scalability of the presented approach for the well-known Internet2 and Stanford benchmarks as well as for large firewall rule sets, where it outscales state-of-the-art tools by a factor of over 41.


2018 ◽  
Vol 2018 ◽  
pp. 1-13 ◽  
Author(s):  
Junchang Wang ◽  
Shaojin Cheng ◽  
Xiong Fu

High-level programming is one of the critical building blocks for the effective use of software-defined networking (SDN). Existing solutions, however, either (1) cannot utilize state-of-the-art switches with flow table pipelining, a key technique to prevent flow rule set explosion, or (2) force programmers to manually organize and manage hardware flow table pipelines, which is time-consuming and error-prone. This paper presents a high-level SDN programming framework to address these issues. The framework can automatically (1) generate rule sets for heterogeneous switches with different flow table pipelining designs and (2) update installed rules when the network state changes. As a result, the framework not only generates efficient rule sets for switches but also provides programmers a centralized, intuitive, and hence easy-to-use programming API. Experiments show that the framework generates compact rule sets that are 29–116 times smaller than those generated by other open-source SDN controllers. In addition, the framework recovers from network link failures 5 times faster than other controllers.
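The abstract does not specify the compilation scheme, but the benefit of flow table pipelining can be illustrated with a small, hypothetical sketch: matching two independent header fields in a single table requires the cross product of rules, whereas two pipelined tables keep the rule count additive. The rule representation and field names are illustrative assumptions.

```python
# Hypothetical illustration of why pipelining prevents rule set explosion:
# one combined table needs |SRCS| * |DSTS| rules, two pipelined tables
# need only |SRCS| + |DSTS|. Rule representations are assumptions.
from itertools import product

SRCS = [f"10.0.{i}.0/24" for i in range(100)]
DSTS = [f"10.1.{j}.0/24" for j in range(100)]

# Single-table compilation: cross product of both match dimensions.
single_table = [{"match": {"src": s, "dst": d}, "action": "forward"}
                for s, d in product(SRCS, DSTS)]

# Pipelined compilation: table 0 matches src and jumps to table 1,
# table 1 matches dst and forwards.
table0 = [{"match": {"src": s}, "action": "goto_table:1"} for s in SRCS]
table1 = [{"match": {"dst": d}, "action": "forward"} for d in DSTS]

print(len(single_table))          # 10000 rules
print(len(table0) + len(table1))  # 200 rules
```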


Author(s):  
Lichao Xu ◽  
Szu-Yun Lin ◽  
Andrew W. Hlynka ◽  
Hao Lu ◽  
Vineet R. Kamat ◽  
...  

Abstract: There has been a strong need for simulation environments capable of modeling the deep interdependencies between complex systems encountered during natural hazards, such as the interactions and coupled effects between civil infrastructure system response, human behavior, and social policies, for improved community resilience. Coupling such complex components into an integrated simulation requires continuous data exchange between the different simulators running the separate models throughout the simulation. This can be implemented by means of distributed simulation platforms or data passing tools. To provide a systematic reference for simulation tool choice and to facilitate the development of compatible distributed simulators for studying deep interdependencies in the context of natural hazards, this article focuses on generic tools suitable for integrating simulators from different fields rather than on platforms used mainly within specific fields. With this aim, the article provides a comprehensive review of the most commonly used generic distributed simulation platforms (Distributed Interactive Simulation (DIS), High Level Architecture (HLA), Test and Training Enabling Architecture (TENA), and Data Distribution Service (DDS)) and data passing tools (Robot Operating System (ROS) and Lightweight Communications and Marshalling (LCM)) and compares their advantages and disadvantages. Three specific limitations of existing platforms are identified from the perspective of natural hazard simulation. To mitigate these limitations, two platform design recommendations are provided, namely message exchange wrappers and hybrid communication, to help improve data passing capabilities in existing solutions and to provide guidance for the design of a new domain-specific distributed simulation framework.
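As a rough illustration of the two recommendations, the sketch below hides heterogeneous transports behind one publish/subscribe wrapper and routes individual topics to different transports (hybrid communication). The adapter classes and method names are hypothetical; real ROS or LCM bindings would replace the in-memory stand-ins.

```python
# Minimal sketch of a message-exchange wrapper: simulators talk to one
# Bus interface, while concrete adapters bridge to specific middleware.
# The adapters here are in-memory stand-ins, not real ROS/LCM bindings.
from abc import ABC, abstractmethod
from collections import defaultdict

class Transport(ABC):
    @abstractmethod
    def publish(self, topic: str, message: dict): ...
    @abstractmethod
    def subscribe(self, topic: str, callback): ...

class InMemoryTransport(Transport):
    """Stand-in for a real middleware adapter (e.g., ROS or LCM)."""
    def __init__(self):
        self._subs = defaultdict(list)
    def publish(self, topic, message):
        for cb in self._subs[topic]:
            cb(message)
    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

class Bus:
    """Wrapper routing each topic to the transport configured for it
    (hybrid communication: fast transports for streams, reliable ones
    for the rest)."""
    def __init__(self, default, routes=None):
        self.default = default
        self.routes = routes or {}
    def _transport(self, topic):
        return self.routes.get(topic, self.default)
    def publish(self, topic, message):
        self._transport(topic).publish(topic, message)
    def subscribe(self, topic, callback):
        self._transport(topic).subscribe(topic, callback)

fast, reliable = InMemoryTransport(), InMemoryTransport()
bus = Bus(default=reliable, routes={"sensor/stream": fast})
bus.subscribe("sensor/stream", lambda m: print("received", m))
bus.publish("sensor/stream", {"node": 17, "dx_m": 0.012})
```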


2015 ◽  
Vol 35 (36) ◽  
pp. 12412-12424 ◽  
Author(s):  
A. Stigliani ◽  
K. S. Weiner ◽  
K. Grill-Spector

2021 ◽  
Vol 43 (1) ◽  
pp. 1-46
Author(s):  
David Sanan ◽  
Yongwang Zhao ◽  
Shang-Wei Lin ◽  
Liu Yang

To make the verification of large and complex concurrent systems feasible and scalable, compositional techniques are necessary even at the highest abstraction layers. When focusing on the lowest software abstraction layers, such as the implementation or the machine code, the high level of detail of those layers makes the direct verification of properties very difficult and expensive. It is therefore essential to use techniques that simplify verification on these layers. One technique to tackle this challenge is top-down verification, where properties verified on the top layers (representing abstract specifications of a system) are propagated down to the lowest layers (implementations of the top layers) by means of simulation. Needless to say, simulation of concurrent systems implies a greater level of complexity, and compositional techniques for checking simulation between layers are also desirable when seeking both feasibility and scalability of the refinement verification. In this article, we present CSim2, a compositional, rely-guarantee-based framework for the top-down verification of complex concurrent systems in the Isabelle/HOL theorem prover. CSim2 uses CSimpl, a language with a high degree of expressiveness designed for the specification of concurrent programs. Thanks to its expressiveness, CSimpl is able to model many of the features found in real-world programming languages, such as exceptions, assertions, and procedures. CSim2 provides a framework for the verification of rely-guarantee properties to reason compositionally about CSimpl specifications. Focusing on top-down verification, CSim2 provides a simulation-based framework for the preservation of CSimpl rely-guarantee properties from specifications to implementations. Using the simulation framework, properties proven on the top layers (abstract specifications) are compositionally propagated down to the lowest layers (source or machine code) in each concurrent component of the system. Finally, we show the usability of CSim2 through a case study over two CSimpl specifications of an ARINC 653 communication service. In this case study, we prove a complex property on a specification, and we use CSim2 to preserve the property on lower abstraction layers.
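For readers unfamiliar with rely-guarantee reasoning, the textbook-style parallel composition rule below conveys the compositional idea the abstract refers to. It is the classical Jones-style formulation, not necessarily CSim2's exact definitions.

```latex
% Classical rely-guarantee parallel composition rule: each component
% tolerates the other's guarantee as part of its own rely condition.
\[
\frac{\;\vdash C_1 \,\mathbf{sat}\, (P,\; R \lor G_2,\; G_1,\; Q_1)
      \qquad
      \vdash C_2 \,\mathbf{sat}\, (P,\; R \lor G_1,\; G_2,\; Q_2)\;}
     {\;\vdash C_1 \parallel C_2 \,\mathbf{sat}\,
        (P,\; R,\; G_1 \lor G_2,\; Q_1 \land Q_2)\;}
\]
```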


2021 ◽  
Vol 30 (6) ◽  
pp. 526-534
Author(s):  
Evelina Fedorenko ◽  
Cory Shain

Understanding language requires applying cognitive operations (e.g., memory retrieval, prediction, structure building) that are relevant across many cognitive domains to specialized knowledge structures (e.g., a particular language’s lexicon and syntax). Are these computations carried out by domain-general circuits or by circuits that store domain-specific representations? Recent work has characterized the roles in language comprehension of the language network, which is selective for high-level language processing, and the multiple-demand (MD) network, which has been implicated in executive functions and linked to fluid intelligence and thus is a prime candidate for implementing computations that support information processing across domains. The language network responds robustly to diverse aspects of comprehension, but the MD network shows no sensitivity to linguistic variables. We therefore argue that the MD network does not play a core role in language comprehension and that past findings suggesting the contrary are likely due to methodological artifacts. Although future studies may reveal some aspects of language comprehension that require the MD network, evidence to date suggests that those will not be related to core linguistic processes such as lexical access or composition. The finding that the circuits that store linguistic knowledge carry out computations on those representations aligns with general arguments against the separation of memory and computation in the mind and brain.


Author(s):  
Maja Radović ◽  
Nenad Petrović ◽  
Milorad Tošić

The requirements of state-of-the-art curricula and teaching processes in medical education have brought both new assessment methods and improvements to existing ones. Recently, several promising methods have emerged, among them the Comprehensive Integrative Puzzle (CIP), which shows great potential. However, the construction of such questions requires a high effort from a team of experts and is time-consuming. Furthermore, although English is accepted as an international language, for educational purposes there is also a need to represent data and knowledge in native languages. In this paper, we present an approach for the automatic generation of CIP assessment questions that uses ontologies for knowledge representation. In this way, it is possible to provide multilingual support in the teaching and learning process, because the same ontological concept can be mapped to corresponding language expressions in different languages. The proposed approach shows promising results, indicated by a dramatic speed-up of CIP question construction compared to manual methods. The presented results strongly indicate that adopting ontologies for knowledge representation may enable scalability in multilingual domain-specific education regardless of the language used. The high level of automation in the assessment process, demonstrated with the CIP method in medical education as one of the most challenging domains, promises high potential for new innovative teaching methodologies in other educational domains as well.
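The abstract does not detail the generation pipeline, but the core idea of driving multilingual question generation from language-independent ontology concepts can be sketched as follows; the tiny ontology, namespace, and question format are illustrative assumptions.

```python
# Sketch: multilingual CIP-style question stems from an ontology.
# Concepts are language-independent; labels carry language tags, so the
# same triple yields questions in any language. Data is illustrative.
from rdflib import Graph, Namespace, Literal, RDFS, URIRef

EX = Namespace("http://example.org/med#")
g = Graph()

# One concept, labels in two languages, one "hasFinding" relation.
g.add((EX.MyocardialInfarction, RDFS.label,
       Literal("myocardial infarction", lang="en")))
g.add((EX.MyocardialInfarction, RDFS.label,
       Literal("infarkt miokarda", lang="sr")))
g.add((EX.ChestPain, RDFS.label, Literal("chest pain", lang="en")))
g.add((EX.ChestPain, RDFS.label, Literal("bol u grudima", lang="sr")))
g.add((EX.MyocardialInfarction, EX.hasFinding, EX.ChestPain))

def label(concept: URIRef, lang: str) -> str:
    """Pick the concept's label in the requested language."""
    for _, _, lit in g.triples((concept, RDFS.label, None)):
        if lit.language == lang:
            return str(lit)
    return str(concept)

def matching_items(lang: str):
    """Yield (diagnosis, finding) pairs for a CIP-style matching grid."""
    for dx, _, finding in g.triples((None, EX.hasFinding, None)):
        yield label(dx, lang), label(finding, lang)

for dx, finding in matching_items("sr"):
    print(f"Match the diagnosis '{dx}' with the finding '{finding}'.")
```

Because the relation is stated once between concepts rather than between strings, switching the target language only swaps which labels are rendered.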


2021 ◽  
Author(s):  
Nicholas M Blauch ◽  
Marlene Behrmann ◽  
David Plaut

Inferotemporal cortex (IT) in humans and other primates is topographically organized, with multiple domain-selective areas and other general patterns of functional organization. What factors underlie this organization, and what can this neural arrangement tell us about the mechanisms of high-level vision? Here, we present an account of topographic organization involving a computational model with two components: 1) a feature-extracting encoder model of early visual processes, followed by 2) a model of high-level hierarchical visual processing in IT subject to specific biological constraints. In particular, minimizing the wiring cost on spatially organized feedforward and lateral connections within IT, combined with constraining the feedforward processing to be strictly excitatory, results in a hierarchical, topographic organization. This organization replicates a number of key properties of primate IT cortex, including the presence of domain-selective spatial clusters preferentially involved in the representation of faces, objects, and scenes; within-domain topographic organization such as animacy and indoor/outdoor distinctions; and generic spatial organization whereby the response correlation of pairs of units falls off with their distance. The model supports a view in which both domain-specific and domain-general topographic organization arise in the visual system from an optimization process that maximizes behavioral performance while minimizing wiring costs.
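The abstract names two constraints: a wiring cost on spatially organized connections and strictly excitatory feedforward weights. A minimal sketch of how such constraints typically enter a model objective is shown below; the grid layout, distance measure, and coefficients are illustrative assumptions, not the authors' implementation.

```python
# Sketch: wiring-cost penalty plus an excitatory (nonnegative) weight
# constraint, the two biological constraints named in the abstract.
# Unit layout and coefficients are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 64                                   # units on an 8x8 cortical sheet
coords = np.stack(np.meshgrid(np.arange(8), np.arange(8)), -1).reshape(n, 2)
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)

W = rng.normal(size=(n, n))              # lateral weights within the sheet

def wiring_cost(W, dist, lam=1e-3):
    """Penalize strong weights between spatially distant units."""
    return lam * np.sum(np.abs(W) * dist)

def project_excitatory(W_ff):
    """Enforce strictly excitatory feedforward weights after each update."""
    return np.clip(W_ff, 0.0, None)

W_ff = project_excitatory(rng.normal(size=(n, n)))
task_loss = 0.0                          # stand-in for a task objective
total_loss = task_loss + wiring_cost(W, dist)
print(f"total loss with wiring penalty: {total_loss:.3f}")
```

Under such an objective, strong connections are cheapest between nearby units, which is what pushes functionally similar units to cluster spatially.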


10.29007/prxp ◽  
2018 ◽  
Author(s):  
Jan Olaf Blech ◽  
Thanh-Hung Nguyen ◽  
Michael Perin

In this paper we present ongoing work addressing the problem of automatically generating realistic and guaranteed-correct invariants. Since invariant generation mechanisms are error-prone, after the computation of invariants by a verification tool we formally prove, using a higher-order theorem prover and automated techniques, that the generated invariants are indeed invariants of the considered systems. We consider invariants for BIP models. BIP (behavior, interaction, priority) is a language for specifying asynchronous component-based systems. Proving that an invariant holds often requires an induction on possible system execution traces. For this reason, apart from generating invariants that precisely capture a system's behavior, inductiveness of invariants is an important goal. We establish a notion of robust BIP models. These can be constructed automatically from our original non-robust BIP models and over-approximate their behavior. We argue that invariants of robust BIP models capture the behavior of systems in a more natural way than invariants of the corresponding non-robust BIP models. Robust BIP models take into account imprecision due to values delivered by sensors. Invariants of robust BIP models tend to be inductive and are also invariants of the original non-robust BIP model. Therefore they may be used by our verification tools, and it is easy to show their correctness in a higher-order theorem prover. The presented work is developed to verify, after the fact, the results of a deadlock-checking tool for embedded systems. In this way, we gain confidence in the provided analysis results.
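The inductiveness goal mentioned above corresponds to the standard proof obligations for transition systems, stated here in the usual generic form (not specific to BIP):

```latex
% An invariant Inv is inductive for a transition system (Init, T) when
% it holds initially and is preserved by every transition step.
\[
\mathit{Init}(s) \;\Rightarrow\; \mathit{Inv}(s)
\qquad\text{and}\qquad
\mathit{Inv}(s) \wedge T(s, s') \;\Rightarrow\; \mathit{Inv}(s'),
\]
\[
\text{which together entail } \mathit{Inv}(s)
\text{ for every reachable state } s.
\]
```

An inductive invariant can be discharged by a single induction step in a theorem prover, whereas a non-inductive one requires reasoning over whole execution traces, which is why inductiveness matters here.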

