A New Approach for Detecting and Resolving Anomalies in Security Policy of the External Firewall Module of the Floodlight SDN Controller

2018 ◽  
Vol 25 (3) ◽  
pp. 251-256
Author(s):  
Sergey V. Morzhov ◽  
Mikhail A. Nikitinskiy

In this paper, the authors analyze the developed PreFirewall network application for the Floodlight software defined network (SDN) controller. This application filters rules that are added to the firewall module of the Floodlight SDN controller in order to prevent anomalies among them. The rule filtering method is based on determining whether adding a new rule would cause anomalies with already added ones. If an anomaly is detected while adding a new rule, the PreFirewall application should be able to resolve it and must report the detection of the anomaly. The developed network application PreFirewall passed a number of tests. Stress testing showed that the time to add a new rule when using PreFirewall grows substantially with the number of previously processed rules. Analysis of the network application PreFirewall showed that when adding a rule (the most frequent operation), in the worst case the rule must be compared with all existing rules, which are stored as a two-dimensional array. Thus, the operation of adding a new rule is the most time-consuming and has the greatest impact on the performance of the network application, which leads to an increase in response time. A possible way of solving this problem is to choose a data structure for storing the rules in which the operation of adding a new rule would be inexpensive. After analyzing the structure of the policy rules for the Floodlight SDN controller, the authors note that a tree is the most suitable data structure for their storage. It optimizes the memory used for storing the rules and, more importantly, it makes it possible to achieve constant complexity for the operation of adding a new rule and, consequently, to solve the performance problem of the network application PreFirewall. The article is published in the authors’ wording.
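A minimal sketch of the kind of tree-based rule storage described above, assuming rules are keyed by a fixed sequence of match fields so that inserting a rule follows a single constant-length path instead of scanning every existing rule; the field names and the anomaly check are illustrative placeholders, not the authors' PreFirewall implementation:

```python
# Illustrative sketch: firewall rules stored in a fixed-depth tree keyed by
# match fields, so that inserting a rule touches only one path of constant
# length instead of scanning a two-dimensional array of existing rules.
# Field names and the conflict check are simplified placeholders.

class RuleTree:
    FIELDS = ("src_ip", "dst_ip", "protocol", "dst_port")  # fixed tree depth

    def __init__(self):
        self.root = {}

    def add_rule(self, rule, action):
        """Insert a rule; constant work per level, so O(len(FIELDS)) overall."""
        node = self.root
        for field in self.FIELDS:
            key = rule.get(field, "*")        # '*' stands for a wildcard value
            node = node.setdefault(key, {})
        if "action" in node and node["action"] != action:
            # An existing rule with the same match but a different action:
            # report the anomaly and keep the earlier rule.
            return False
        node["action"] = action
        return True


fw = RuleTree()
print(fw.add_rule({"src_ip": "10.0.0.1", "dst_port": 80}, "ALLOW"))  # True
print(fw.add_rule({"src_ip": "10.0.0.1", "dst_port": 80}, "DENY"))   # False (anomaly)
```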

Author(s):  
M. John Plodinec

Abstract Over the last decade, communities have become increasingly aware of the risks they face. They are threatened by natural disasters, which may be exacerbated by climate change and the movement of land masses. Growing globalization has made a pandemic due to the rapid spread of highly infectious diseases ever more likely. Societal discord breeds its own threats, not the least of which is the spread of radical ideologies giving rise to terrorism. The accelerating rate of technological change has bred its own social and economic risks. This widening spectrum of risk poses a difficult question for every community: how resilient will it be to the extreme events it faces? In this paper, we present a new approach to answering that question. It is based on the stress testing of financial institutions required by regulators in the United States and elsewhere. It generalizes stress testing by expanding the concept of "capital" beyond finance to include the other "capitals" (e.g., human, social) possessed by a community. Through use of this approach, communities can determine which investments of their capitals are most likely to improve their resilience. We provide an example of using the approach and discuss its potential benefits.


Leonardo ◽  
2008 ◽  
Vol 41 (4) ◽  
pp. 418-419 ◽  
Author(s):  
Caitlin Jones ◽  
Lizzie Muller

This paper describes a new approach to documenting media art that seeks to place the artist's intentions and the audience's experience in dialogue. It explicitly highlights the productive tension between the ideal, conceptual existence of the work and its actual manifestation through different iterations and exhibitions in the real world. The paper describes how the approach was developed collaboratively during the production of a documentary collection for the artwork Giver of Names, by David Rokeby. It outlines the key features of the approach, including the artist's interview, audience interviews, and the data structure.


1996 ◽  
Vol 3 (37) ◽  
Author(s):  
Gerth Stølting Brodal ◽  
Chris Okasaki

Brodal recently introduced the first implementation of imperative priority queues to support findMin, insert, and meld in O(1) worst-case time, and deleteMin in O(log n) worst-case time. These bounds are asymptotically optimal among all comparison-based priority queues. In this paper, we adapt Brodal's data structure to a purely functional setting. In doing so, we both simplify the data structure and clarify its relationship to the binomial queues of Vuillemin, which support all four operations in O(log n) time. Specifically, we derive our implementation from binomial queues in three steps: first, we reduce the running time of insert to O(1) by eliminating the possibility of cascading links; second, we reduce the running time of findMin to O(1) by adding a global root to hold the minimum element; and finally, we reduce the running time of meld to O(1) by allowing priority queues to contain other priority queues. Each of these steps is expressed using ML-style functors. The last transformation, known as data-structural bootstrapping, is an interesting application of higher-order functors and recursive structures.
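A minimal sketch of the second step described above (adding a global root so that findMin takes O(1) time), written in Python with heapq standing in for the underlying binomial queue; the paper expresses this transformation with ML-style functors, so this only illustrates the idea, not the paper's structure:

```python
import heapq

# Sketch of one transformation from the paper: wrap an underlying priority
# queue with a "global root" that always holds the minimum element, so that
# findMin takes O(1) time. Here heapq stands in for the binomial queue.

class RootedPQ:
    def __init__(self):
        self.root = None   # current minimum, or None if empty
        self.rest = []     # remaining elements, kept in the inner heap

    def insert(self, x):
        if self.root is None:
            self.root = x
        elif x < self.root:
            heapq.heappush(self.rest, self.root)
            self.root = x
        else:
            heapq.heappush(self.rest, x)

    def find_min(self):
        return self.root                    # O(1): no search needed

    def delete_min(self):
        m = self.root
        self.root = heapq.heappop(self.rest) if self.rest else None
        return m


q = RootedPQ()
for v in [5, 2, 9, 1]:
    q.insert(v)
print(q.find_min())    # 1
print(q.delete_min())  # 1
print(q.find_min())    # 2
```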


2021 ◽  
Vol 2021 (3-4) ◽  
pp. 25-30
Author(s):  
Kirill Tkachenko

The article proposes a new approach for adjusting the parameters of computing nodes that are part of a data processing system, based on analytical modeling of a queuing system with subsequent estimation of the probabilities of hypotheses about the computing node's state. Methods of analytical modeling of queuing systems and mathematical statistics are used. The result of the study is a mathematical model for assessing the information situation of a computing node, which differs from the previously published system models. Estimating the conditional probabilities of hypotheses about whether a computing node is processing data adequately makes it possible to decide whether the node's parameters need to be adjusted. This adjustment improves the efficiency of handling tasks on the computing node of the data processing system. Implementing the proposed model for adjusting the parameters of a computing node increases both the efficiency of processing applications on the node and, in general, the efficiency of its operation. Applying the approach to all computing nodes of the data processing system increases the dependability of the system as a whole.
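A minimal sketch of the kind of estimation the abstract describes, assuming an M/M/1 queuing model and Bayes' rule for updating the probabilities of two hypotheses about the node's state; the model choice, rates, and priors are illustrative assumptions, not taken from the article:

```python
import math

# Illustrative sketch: weigh two hypotheses about a computing node's state
# ("adequate" vs "overloaded") by how well an M/M/1 queuing model under each
# hypothesis explains an observed mean response time, then update the
# hypothesis probabilities with Bayes' rule. Rates and priors are made up.

def mm1_mean_response(arrival_rate, service_rate):
    """Mean time in an M/M/1 system: 1 / (mu - lambda), requires mu > lambda."""
    return 1.0 / (service_rate - arrival_rate)

def likelihood(observed, predicted, sigma=0.2):
    """Gaussian likelihood of the observation around the model's prediction."""
    return math.exp(-((observed - predicted) ** 2) / (2 * sigma ** 2))

hypotheses = {
    "adequate":   {"prior": 0.7, "service_rate": 12.0},  # requests per second
    "overloaded": {"prior": 0.3, "service_rate": 6.0},
}
arrival_rate = 5.0          # observed arrival rate, requests per second
observed_response = 0.9     # observed mean response time, seconds

posteriors = {}
for name, h in hypotheses.items():
    predicted = mm1_mean_response(arrival_rate, h["service_rate"])
    posteriors[name] = h["prior"] * likelihood(observed_response, predicted)

total = sum(posteriors.values())
for name in posteriors:
    posteriors[name] /= total

print(posteriors)  # if "overloaded" dominates, adjust the node's parameters
```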


2005 ◽  
Vol 128 (4) ◽  
pp. 874-883 ◽  
Author(s):  
Mian Li ◽  
Shapour Azarm ◽  
Art Boyars

We present a deterministic, non-gradient-based approach that uses robustness measures in multi-objective optimization problems where uncontrollable parameter variations cause variation in the objective and constraint values. The approach is applicable to cases that have discontinuous objective and constraint functions with respect to uncontrollable parameters, and can be used for objective robust optimization, feasibility robust optimization, or both together. In our approach, the known parameter tolerance region maps into sensitivity regions in the objective and constraint spaces. The robustness measures are indices calculated, using an optimizer, from the sizes of the acceptable objective and constraint variation regions and from worst-case estimates of the sensitivity regions’ sizes, resulting in an outer-inner structure. Two examples provide comparisons of the new approach with a similar published approach that is applicable only to continuous functions. Both approaches work well with continuous functions. For discontinuous functions the new approach gives solutions near the nominal Pareto front; the earlier approach does not.
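A minimal sketch of the outer-inner structure described above: an inner worst-case search over the parameter tolerance region nested inside an outer robustness check of a candidate design; the objective function, tolerances, and acceptance threshold are invented for illustration and are not the paper's test problems:

```python
# Illustrative outer-inner structure: for a candidate design x, the inner
# search sweeps the parameter tolerance region for the worst-case objective
# value; the outer check accepts x only if the worst-case deviation from the
# nominal objective stays within the acceptable variation. All functions and
# numbers here are invented placeholders.

def objective(x, p):
    # A deliberately discontinuous objective in the uncontrollable parameter p.
    penalty = 5.0 if p > 0.5 else 0.0
    return (x - 2.0) ** 2 + p * x + penalty

def worst_case(x, p_nominal, tol, samples=50):
    """Inner problem: worst objective over the tolerance region around p_nominal."""
    grid = [p_nominal - tol + 2 * tol * i / (samples - 1) for i in range(samples)]
    return max(objective(x, p) for p in grid)

def is_objective_robust(x, p_nominal, tol, acceptable_variation):
    """Outer check: worst-case deviation from the nominal objective is acceptable."""
    nominal = objective(x, p_nominal)
    return worst_case(x, p_nominal, tol) - nominal <= acceptable_variation

for x in [1.0, 2.0, 3.0]:
    print(x, is_objective_robust(x, p_nominal=0.3, tol=0.3, acceptable_variation=1.0))
```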


2005 ◽  
Vol 98 (6) ◽  
pp. 2298-2303 ◽  
Author(s):  
Michele R. Norton ◽  
Richard P. Sloan ◽  
Emilia Bagiella

Fourier-based approaches to the analysis of variability of R-R intervals or blood pressure typically compute power in a given frequency band (e.g., 0.01–0.07 Hz) by aggregating the power at each constituent frequency within that band. This paper describes a new approach to the analysis of these data. We propose to partition the blood pressure variability spectrum into narrower components by computing power in 0.01-Hz-wide bands. Therefore, instead of a single measure of variability in a specific frequency interval, we obtain several measurements. The approach generates a more complex data structure that requires careful accounting of the nested repeated measures. We briefly describe a statistical methodology based on generalized estimating equations that suitably handles this more complex data structure. To illustrate the methods, we consider systolic blood pressure data collected during psychological and orthostatic challenge. We compare the results with those obtained using conventional methods of computing blood pressure variability, and we show that our approach yields more efficient results and more powerful statistical tests. We conclude that this approach may allow a more thorough analysis of cardiovascular parameters that are measured under different experimental conditions, such as blood pressure or heart rate variability.
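A minimal sketch of computing power in 0.01-Hz-wide bands from an evenly resampled blood pressure series, using a simple periodogram; the synthetic signal, sampling rate, and band edges are illustrative assumptions, not the paper's data or exact spectral estimator:

```python
import numpy as np

# Illustrative sketch: instead of one aggregate power value over 0.01-0.07 Hz,
# compute power in each 0.01-Hz-wide sub-band of an evenly spaced systolic
# blood pressure series. The signal below is synthetic; in practice the
# beat-to-beat series would first be interpolated onto an even time grid.

fs = 4.0                                   # samples per second after resampling
t = np.arange(0, 300, 1 / fs)              # 5 minutes of data
signal = 120 + 3 * np.sin(2 * np.pi * 0.03 * t) + np.random.normal(0, 1, t.size)

x = signal - signal.mean()
freqs = np.fft.rfftfreq(x.size, d=1 / fs)
psd = (np.abs(np.fft.rfft(x)) ** 2) / (fs * x.size)   # simple periodogram estimate

band_edges = np.arange(0.01, 0.08, 0.01)   # 0.01-0.02, ..., 0.06-0.07 Hz
df = freqs[1] - freqs[0]
for lo, hi in zip(band_edges[:-1], band_edges[1:]):
    mask = (freqs >= lo) & (freqs < hi)
    power = psd[mask].sum() * df           # integrate the PSD over the sub-band
    print(f"{lo:.2f}-{hi:.2f} Hz: {power:.3f}")
```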


Algorithms ◽  
2020 ◽  
Vol 13 (8) ◽  
pp. 183
Author(s):  
Canh V. Pham ◽  
Dung K. T. Ha ◽  
Quang C. Vu ◽  
Anh N. Su ◽  
Huan X. Hoang

The Influence Maximization (IM) problem, which finds a set of k nodes (called a seed set) in a social network to initiate the influence spread so that the number of influenced nodes after the propagation process is maximized, is an important problem in information propagation and social network analysis. However, previous studies ignored the constraint of priority, which led to inefficient seed collections. In some real situations, companies or organizations often prioritize influencing potential users during their influence diffusion campaigns. Taking a new approach to these existing works, in this paper we propose a new problem called Influence Maximization with Priority (IMP), which finds a seed set of k nodes in a social network that maximizes the number of influenced nodes, subject to the constraint that the influence spread to a specific set of nodes U (called the priority set) is at least a given threshold T. We show that the problem is NP-hard under the well-known IC model. To find a solution, we propose two efficient algorithms, called Integrated Greedy (IG) and Integrated Greedy Sampling (IGS), with provable theoretical guarantees. IG provides a (1 − (1 − 1/k)^t)-approximation solution, where t is an outcome of the algorithm and t ≥ 1. The worst-case approximation ratio is obtained when t = 1 and is equal to 1/k. In addition, IGS is an efficient randomized approximation algorithm based on a sampling method that provides a (1 − (1 − 1/k)^t − ε)-approximation solution with probability at least 1 − δ, where ε > 0 and δ ∈ (0, 1) are input parameters of the problem. We conduct extensive experiments on various real networks to compare our IGS algorithm to the state-of-the-art algorithms for the IM problem. The results indicate that our algorithm provides better solutions in terms of influence on the priority sets, achieving approximately two to ten times the threshold T, while its running time, memory usage, and influence spread are also competitive with the other algorithms.
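A minimal sketch in the spirit of priority-aware greedy seed selection: spread under the Independent Cascade model is estimated by Monte Carlo simulation, seeds are added by largest marginal gain, and the spread into the priority set U is checked against the threshold T; the graph, propagation probability, and acceptance rule are simplified placeholders, not the authors' IG or IGS algorithms:

```python
import random

# Illustrative sketch: Monte Carlo estimation of spread under the Independent
# Cascade (IC) model, greedy seed selection by marginal gain, and a check of
# how much influence reaches the priority set. All parameters are made up.

def simulate_ic(graph, seeds, priority, prob=0.1, runs=200):
    """Average total activations and average activations inside the priority set."""
    total, total_priority = 0, 0
    for _ in range(runs):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in graph.get(u, []):
                    if v not in active and random.random() < prob:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
        total_priority += len(active & priority)
    return total / runs, total_priority / runs

def greedy_with_priority(graph, k, priority, threshold):
    """Greedily add the seed with the largest marginal gain in total spread,
    then check whether the estimated priority-set spread reaches the threshold."""
    seeds = set()
    for _ in range(k):
        base, _ = simulate_ic(graph, seeds, priority) if seeds else (0.0, 0.0)
        best, best_gain = None, float("-inf")
        for v in graph:
            if v in seeds:
                continue
            spread, _ = simulate_ic(graph, seeds | {v}, priority)
            if spread - base > best_gain:
                best, best_gain = v, spread - base
        seeds.add(best)
    spread, priority_spread = simulate_ic(graph, seeds, priority)
    return seeds, spread, priority_spread >= threshold

graph = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}
print(greedy_with_priority(graph, k=2, priority={4, 5}, threshold=0.3))
```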

