Framework for Authentication 802.1X Security Protocol of WNAS as RFC Access Management Device Associated with RFC Authentication Management Technique

2021 ◽  
Author(s):  
Fathima T ◽  
Vennila S M

IEEE 802 is used in LAN networks that expose or provide sensitive data to complex applications or services. These are protocols for accessing, managing, and controlling access to network-based services and applications in general. Port-based network access control governs network access and prevents transmission and reception by anonymous or unauthorized parties, which would otherwise lead to network interruption, service theft, and data loss. This paper introduces a new approach to determine whether a data packet transferred to a management device in a wired network is an authenticated packet. The data packets are sent to the SDN from the RAR, the information associated with each packet is shared at a limited rate for access management, and the packets are received by the RFC, which detects whether an arriving data packet is accepted or restricted. The rate of authentication-start packets is limited to manage the number of terminals entering later authentication, avoiding the avalanche effect of wireless authentication, which can cause faults when many terminals enter later authentication at the same time.
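The avalanche-avoidance idea above amounts to rate-limiting authentication-start packets. A minimal sketch in Python, assuming a simple token-bucket policy (the class name and parameters are illustrative, not from the paper):

```python
import time

class AuthStartRateLimiter:
    """Token-bucket limiter for authentication-start (EAPOL-Start)
    packets: terminals re-authenticating in a burst are deferred
    rather than all admitted at once, avoiding the avalanche effect."""

    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec       # tokens replenished per second
        self.capacity = burst          # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self, now=None):
        """Return True if an authentication-start packet may proceed."""
        now = time.monotonic() if now is None else now
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False   # packet restricted; the terminal retries later
```

A restricted terminal simply retries after a back-off, so the burst is spread over time instead of hitting the authenticator all at once.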

2021 ◽  
Vol 11 (5) ◽  
pp. 529-535
Author(s):  
Jihane El Mokhtari ◽  
Anas Abou El Kalam ◽  
Siham Benhaddou ◽  
Jean-Philippe Leroy

This article is devoted to the coupling of access and inference controls in security policies. The coupling of these two mechanisms is necessary to strengthen the protection of the privacy of users of complex systems. Although the PrivOrBAC access control model covers several privacy protection requirements, the risk of inferring sensitive data may remain: the accumulation of several pieces of data to which access is authorized can create an inference. This work proposes an inference control mechanism implemented through multidimensional analysis. This analysis takes into account several elements, such as the history of access to the data that may create an inference and their influence on the inference. The idea is that this mechanism delivers metrics that reflect the level of risk. These measures are considered in the access control rules and participate in the refusal or authorization decision, with or without obligation. This is how the coupling of access and inference controls is applied. The coupling is implemented via multidimensional OLAP databases queried by the Policy Information Point, the XACML gateway to the various external data sources, which routes the inference measurements to the decision point.
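The step from risk metric to access decision could be sketched as follows; the influence weights, thresholds, and decision labels are illustrative assumptions, standing in for the OLAP-backed multidimensional analysis the article describes:

```python
def inference_risk(access_history, influence):
    """Aggregate the influence of already-accessed attributes on a
    sensitive attribute. A stand-in for the multidimensional analysis;
    attribute names and weights are invented for the example."""
    return sum(influence.get(attr, 0.0) for attr in access_history)

def access_decision(risk, deny_at=0.8, obligate_at=0.5):
    """Map the risk metric to an XACML-style decision: permit,
    permit with obligation, or deny."""
    if risk >= deny_at:
        return "Deny"
    if risk >= obligate_at:
        return "Permit-with-Obligation"
    return "Permit"
```

Each authorized access raises the accumulated risk, so a request that would be harmless in isolation can be denied once earlier accesses make an inference likely.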


Author(s):  
C. Warren Axelrod

Managing digital identities and computer and network access rights is difficult at the best of times. But today’s rapidly changing organizational structures and technology dependencies make for even greater challenges. In this chapter, we review the various stages in the identity and access management (IAM) lifecycle from the particular perspective of organizations undergoing substantial change from mergers and acquisitions, business expansions and contractions, as well as internal structural and technological changes. We also look at the impact on IAM of incidents originating from outside organizations, such as natural disasters (earthquakes, hurricanes, volcanic eruptions, etc.) and manmade catastrophes (terrorist bombings, major oil spills, etc.). We address the question of how one might prepare for and respond to such events by managing and controlling identification and authorization in fast-moving, difficult-to-control situations.


Author(s):  
Giorgos Kostopoulos ◽  
Nicolas Sklavos ◽  
Odysseas Koufopavlou

Wireless communications are becoming ubiquitous in homes, offices, and enterprises with the popular IEEE 802.11 wireless local area network (LAN) technology and the up-and-coming IEEE 802.16 wireless metropolitan area networks (MAN) technology. The wireless nature of communications defined in these standards makes it possible for an attacker to snoop on confidential communications or modify them to gain access to home or enterprise networks much more easily than with wired networks. Wireless devices generally try to reduce computation overhead to conserve power and communication overhead to conserve spectrum and battery power. Due to these considerations, the original security designs in wireless LANs and MANs used smaller keys, weak message integrity protocols, weak or one-way authentication protocols, and so forth. As wireless networks became popular, the security threats were also highlighted to caution users. A security protocol redesign followed first in wireless LANs and then in wireless MANs. This chapter discusses the security threats and requirements in wireless LANs and wireless MANs, with a discussion on what the original designs missed and how they were corrected in the new protocols. It highlights the features of the current wireless LAN and MAN security protocols and explains the caveats and discusses open issues. Our aim is to provide the reader with a single source of information on security threats and requirements, authentication technologies, security encapsulation, and key management protocols relevant to wireless LANs and MANs.


2012 ◽  
Vol 3 (1) ◽  
pp. 55-73 ◽  
Author(s):  
Ismail Ali ◽  
Sandro Moiron ◽  
Martin Fleury ◽  
Mohammed Ghanbari

This paper examines the impact of data partitioning form on wireless network access control and proposes a selective dropping scheme based on dropping the partition carrying intra-coded macroblocks. Data partitioning is an error resiliency technique that allows unequal error protection for transmission over ‘lossy’ channels. Including a per-picture, cyclic intra-refresh macroblock line guards against temporal error propagation. The authors show that when congestion occurs, it is possible to gain up to 2 dB in video quality over assigning a stream to a single IEEE 802.11e access category. The scheme is consistently advantageous in indoor and outdoor wireless scenarios over other ways of assigning the partitioned data packets to different access categories. This counter-intuitive scheme for access control purposes reverses the priority usually given to partition-B data packets over that of partition-C.
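The selective dropping described above, which reverses the usual priority of partition B over partition C, can be sketched as a load-shedding rule; the packet representation and threshold handling are assumptions, not the authors' implementation:

```python
def shed_load(packets, max_len):
    """Under congestion, drop partition-B packets (intra-coded
    macroblocks) first, then partition-C, until the buffer fits
    max_len. Partition A (headers, motion vectors) is always kept,
    and the cyclic intra-refresh line limits error propagation
    despite the lost intra data."""
    for victim in ("B", "C"):
        if len(packets) <= max_len:
            break
        packets = [p for p in packets if p["partition"] != victim]
    return packets
```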


Author(s):  
Radu-Dinel Miruta ◽  
Cosmin Stanuica ◽  
Eugen Borcoci

Content-aware (CA) packet classification and processing at the network level is a new approach that leads to a significant increase in the delivery quality of multimedia traffic on the Internet. This paper presents a solution for a new multi-dimensional packet classifier for an edge router, based on new content-related fields embedded in the data packets. The technique is applicable to content-aware networks. The classification algorithm uses three new packet fields named Virtual Content Aware Network (VCAN), Service Type (STYPE), and U (unicast/multicast), which are part of the Content Awareness Transport Information (CATI) header. A CATI header is inserted into the transmitted data packets at the Service/Content Provider server side, in accordance with the media service definition, and enables content-awareness features at a new overlay Content Aware Network layer. The functionality of the CATI header within the classification process is then analyzed. Two possibilities are considered: the adaptation of the Lucent bit-vector algorithm and of tuple space search, respectively, to the proposed multi-field classifier. The results are very promising and prove that the theoretical model of inserting new packet fields for content-aware classification can be implemented and can work in a real-time classifier.
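A first-match classifier over the three CATI fields might look like the following sketch; the rule format and actions are invented for illustration, and a real edge-router classifier would use the bit-vector or tuple-space-search algorithms mentioned above rather than a linear scan:

```python
WILDCARD = "*"

class CatiClassifier:
    """Multi-field classifier keyed on the three CATI header fields
    (VCAN, STYPE, U). Rules are checked in order; first match wins.
    A linear-scan sketch, not the paper's optimized algorithms."""

    def __init__(self):
        self.rules = []   # (vcan, stype, u, action)

    def add_rule(self, vcan, stype, u, action):
        self.rules.append((vcan, stype, u, action))

    def classify(self, pkt):
        for vcan, stype, u, action in self.rules:
            if all(r in (WILDCARD, pkt[k]) for r, k in
                   ((vcan, "vcan"), (stype, "stype"), (u, "u"))):
                return action
        return "default"
```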


2011 ◽  
Vol 33 (1) ◽  
pp. 24-34 ◽  
Author(s):  
William M. Fitzgerald ◽  
Simon N. Foley

2016 ◽  
Vol 25 (07) ◽  
pp. 1650067 ◽  
Author(s):  
Álvaro Díaz ◽  
Javier González-Bayon ◽  
Pablo Sánchez

Sensor nodes are low-power and low-cost devices that require a long autonomous lifetime. Therefore, the nodes have to use the available power carefully and avoid expensive computations or radio transmissions. In addition, as some wireless sensor networks (WSNs) process sensitive data, selecting a security protocol is vital. Cryptographic methods used in WSNs should fulfill the constraints of sensor nodes and should be evaluated for their security and power consumption. WSN engineers use several metrics to obtain estimations prior to network deployment. These metrics usually relate to power and execution-time estimation. Security, however, is a feature that cannot be estimated: it is either “active” or “inactive”, with no possibility of intermediate security levels. This lack of flexibility is a disadvantage in real deployments, where different operation modes with different security and power specifications are often needed. This paper proposes adding a new security estimation metric to a previously proposed framework for WSN simulation and embedded software (SW) performance analysis. This metric, called the Security Estimation Metric (SEM), provides information about the security of the encryption used in WSN transmissions. Results show that the metric improves flexibility, granularity, and execution time compared to other cryptographic tests.
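The idea of graded rather than binary security can be sketched as a lookup of per-profile security levels and energy costs; all profile names and numbers below are invented for the example and are not the paper's SEM values:

```python
# Illustrative security/energy profiles: each cipher configuration is
# assigned a security level and an estimated per-packet energy cost,
# so intermediate levels can be compared instead of a binary
# "security on/off" flag. All figures are invented for the example.
PROFILES = {
    "none":       {"level": 0, "energy_uJ": 0.0},
    "aes128-ctr": {"level": 2, "energy_uJ": 1.8},
    "aes256-ccm": {"level": 3, "energy_uJ": 3.1},
}

def sem(profile, packets):
    """Return the security level and total estimated transmission
    energy for sending `packets` packets under `profile`."""
    p = PROFILES[profile]
    return {"security_level": p["level"],
            "energy_uJ": p["energy_uJ"] * packets}
```

An engineer could then trade security level against battery budget per operation mode before deployment.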


2016 ◽  
Vol 67 (1) ◽  
pp. 191-203
Author(s):  
Markus Stefan Wamser ◽  
Stefan Rass ◽  
Peter Schartner

Evaluating arbitrary functions on encrypted data is one of the holy grails of cryptography, with Fully Homomorphic Encryption (FHE) probably being the most prominent and powerful example. FHE in its current state is, however, not efficient enough for practical applications. On the other hand, simple homomorphic and somewhat homomorphic approaches are not powerful enough to support arbitrary computations. We propose a new approach towards a practicable system for evaluating functions on encrypted data. Our approach allows chaining an arbitrary number of computations, which makes it more powerful than existing efficient schemes. As with basic FHE, we do not encrypt or in any way hide the function that is evaluated on the encrypted data. It is, however, sufficient that the function description is known only to the evaluator. This situation arises in practice in Software as a Service (SaaS) scenarios, where an evaluator provides a function known only to him and the user wants to protect his data. Another application might be the analysis of sensitive data, such as medical records. In this paper we restrict ourselves to functions with only one input parameter, which allow arbitrary transformations on encrypted data.
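As a toy illustration of the workflow only (not the scheme proposed in the paper), an additively homomorphic shift modulo N shows how single-input computations can be chained on data the evaluator cannot read; a real system would use a scheme such as Paillier encryption:

```python
# Toy additively homomorphic "encryption": c = (x + key) mod N.
# The evaluator can apply x -> x + a without knowing x or key, and
# such single-input steps can be chained arbitrarily, mirroring the
# chaining property discussed above. Not secure; illustration only.
N = 2**16

def enc(x, key):
    return (x + key) % N

def dec(c, key):
    return (c - key) % N

def eval_add(c, a):
    """Evaluator's function x -> x + a, applied on the ciphertext."""
    return (c + a) % N

key = 12345
c = enc(7, key)        # user encrypts the input
c = eval_add(c, 10)    # first computation by the evaluator
c = eval_add(c, 20)    # chained second computation
```

Only the evaluator needs to know which additions were applied; the user decrypts the final result with the key.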


Author(s):  
Akash Kumar Bhoi ◽  
Baidyanath Panda

One of the most important and challenging goals of current and future communication networks is the fast transmission of high-quality images from sender to receiver with the least error, where limited bandwidth is the prime problem. Here we discuss a new approach to compression and decompression with perfect accuracy for suitable transmission and reception. This technology is also helpful in the server-client models used in industry, where a large number of clients work with a single server; the process can be implemented to minimize the load during transmission of a volumetric image or video.
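Since "perfect accuracy" implies a lossless codec, the compress-and-decompress pipeline can be sketched with a stand-in lossless compressor; zlib here is an assumption for illustration, not the method the authors propose:

```python
import zlib

def compress_image_bytes(raw):
    """Losslessly compress raw image bytes before transmission.
    zlib stands in for the unspecified lossless method; 'perfect
    accuracy' requires an exact round trip."""
    return zlib.compress(raw, level=9)

def decompress_image_bytes(blob):
    """Recover the exact original bytes on the receiver side."""
    return zlib.decompress(blob)
```

In a server-client setting, the server compresses once and each client decompresses locally, reducing per-client transmission load.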

