Basic Model of Multicast Authentication Based on Batch Signature (MABS)

Author(s):  
Hilda C.P ◽  
Liaqat Ali Khan ◽  
M. Grace Vennice

Traditional multicast authentication schemes manage the heterogeneity of receivers by letting the sender choose a block size, divide the multicast stream into blocks, associate each block with a signature, and spread the effect of that signature across all packets in the block. This dependence among packets makes the schemes vulnerable to packet loss, which is common over the Internet and in wireless communication. We therefore propose a novel multicast authentication protocol called MABS (Multicast Authentication Based on Batch Signature), comprising two schemes, MABS-B and MABS-E. MABS-B eliminates the correlation among packets and is thus resilient to packet loss; thanks to an efficient cryptographic primitive called batch signature, it also keeps latency, computation, and communication overhead low. MABS-E combines the basic scheme with a packet filtering mechanism to mitigate the impact of DoS attacks while preserving the perfect resilience to packet loss.
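The following is a minimal sketch of the per-packet signing idea only, not the MABS construction itself: every packet carries its own signature, so losing any packet never invalidates the rest, and the receiver verifies whatever subset arrives. A real batch-signature primitive would verify the whole set at roughly the cost of one verification; here the batch check simply loops, which is functionally equivalent but slower.

```python
# Illustrative sketch only (not the paper's scheme): independent per-packet
# signatures plus a batched verification interface at the receiver.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

sk = ed25519.Ed25519PrivateKey.generate()
pk = sk.public_key()

def sign_packet(payload: bytes) -> tuple[bytes, bytes]:
    """Sender side: every packet carries its own signature."""
    return payload, sk.sign(payload)

def batch_verify(packets: list[tuple[bytes, bytes]]) -> bool:
    """Receiver side: verify whatever subset of packets arrived."""
    try:
        for payload, sig in packets:
            pk.verify(sig, payload)
        return True
    except InvalidSignature:
        return False

received = [sign_packet(f"packet {i}".encode()) for i in range(8)]
del received[3]               # simulate packet loss; the rest still verify
print(batch_verify(received))  # True
```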

2014 ◽  
Vol 643 ◽  
pp. 124-129
Author(s):  
Jing Lian Huang ◽  
Zhuo Wang ◽  
Juan Li

Using the derivative of Boolean functions and the e-derivative we define ourselves as research tools, we discuss the relationships among a variety of cryptographic properties of weight-symmetric H Boolean functions within the weight range in which H Boolean functions exist. We also study the algebraic immunity and correlation immunity of weight-symmetric H Boolean functions and of balanced H Boolean functions. We show that weight-symmetric H Boolean functions must have the same algebraic immunity, correlation immunity, propagation degree, and nonlinearity. In addition, we determine that there exist several kinds of H Boolean functions that are resilient, have algebraic immunity, or have optimal algebraic immunity. These results not only provide a theoretical basis for cutting nearly half of the workload when studying the cryptographic properties of H Boolean functions, but also provide a new method for studying the secure cryptographic properties of Boolean functions. Such research is important in the design of cryptographic primitives.
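For reference, the standard derivative of a Boolean function in direction a is D_a f(x) = f(x) XOR f(x XOR a); the e-derivative is the authors' own refinement and is not reproduced here. The sketch below computes the standard derivative and the Hamming weight from a truth table.

```python
# Standard derivative of a Boolean function, D_a f(x) = f(x) ^ f(x ^ a),
# computed from a truth table of length 2**n.
def derivative(truth_table: list[int], a: int) -> list[int]:
    return [truth_table[x] ^ truth_table[x ^ a] for x in range(len(truth_table))]

def weight(truth_table: list[int]) -> int:
    """Hamming weight: number of inputs mapped to 1."""
    return sum(truth_table)

# Example: f(x1, x2, x3) = x1*x2 XOR x3, indexed by x = (x1 x2 x3) in binary.
f = [(((x >> 2) & 1) & ((x >> 1) & 1)) ^ (x & 1) for x in range(8)]
print(weight(f), derivative(f, a=0b001))
```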


Sensors ◽  
2019 ◽  
Vol 19 (12) ◽  
pp. 2804 ◽  
Author(s):  
Han ◽  
Tian ◽  
Shi ◽  
Huang ◽  
Li

In recent years, the industrial use of the Internet of Things (IoT) has grown steadily and is now widespread. Wireless sensor networks (WSNs) are a fundamental technology that has enabled this prevalent adoption of IoT in industry. WSNs can connect IoT sensors, monitor the working conditions of those sensors and of the overall environment, and detect unexpected system events in a timely and accurate manner. Monitoring the large amounts of unstructured data generated by IoT devices and collected by big-data analytics systems is a challenging task, and detecting anomalies within the vast amount of data collected in real time by a centralized monitoring system is an even bigger challenge. In the context of industrial IoT, solutions for monitoring anomalies in distributed data flows need to be explored. In this paper, a low-power distributed data-flow anomaly-monitoring model (LP-DDAM) is proposed to mitigate the communication-overhead problem. Because the monitoring system is only interested in anomalies, which are rare, and because the ordering of objects by the size of their attribute values remains stable within any given period, LP-DDAM treats multiple objects as one complete set, exploits the relationships among them, selects a single "representative" object for continuous monitoring, and establishes constraints that guarantee correctness; it trades the overhead of maintaining these constraints for a reduction in the number of monitored objects. Experiments on real data sets show that LP-DDAM can reduce communication overheads by approximately 70% compared with an equivalent method that continuously monitors all objects under the same conditions.
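A hedged sketch of the representative-object idea follows; it is not LP-DDAM's exact constraint scheme. Only the current maximum object streams its value to the coordinator, while every other node installs a local threshold and stays silent until it crosses it, at which point the representative is re-elected.

```python
# Hedged sketch (not the paper's exact constraints): monitor one
# "representative" object; other nodes report only on constraint violation.
class Node:
    def __init__(self, name: str, value: float):
        self.name, self.value, self.threshold = name, value, float("inf")

    def update(self, value: float) -> bool:
        """Return True if the local constraint is violated (must report)."""
        self.value = value
        return value > self.threshold

class Coordinator:
    def __init__(self, nodes: list[Node]):
        self.nodes = nodes
        self.elect()

    def elect(self) -> None:
        self.rep = max(self.nodes, key=lambda n: n.value)
        for n in self.nodes:
            if n is not self.rep:
                n.threshold = self.rep.value   # constraint: stay below the rep

    def on_report(self, node: Node) -> None:
        """Called only when a non-representative violates its constraint."""
        self.elect()

nodes = [Node("a", 3.0), Node("b", 7.0), Node("c", 5.0)]
coord = Coordinator(nodes)          # "b" becomes the representative
if nodes[2].update(9.0):            # "c" exceeds its threshold -> one message
    coord.on_report(nodes[2])
print(coord.rep.name)               # c
```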


Author(s):  
RICHARD BACHE ◽  
FABIO CRESTANI

The relationship between the distance travelled to an offence and the frequency of offending has traditionally been expressed as a (downward-sloping) decay function, and such a curve is typically fitted to empirical data. It is proposed here that the decay function should instead be viewed as a probability density function. It is then possible to construct generative models that assign probabilities to suspects drawn from a set of known offenders whose past crimes are stored in a police data archive. These probabilities can be used to prioritise suspects in an investigation and to calculate the probability that each is the culprit. Two functional forms of the decay function are considered: negative exponential and power. Both are shown empirically to outperform a basic model that simply ranks suspects by distance from the crime. The model is then extended to include each offender's preferred direction of travel; incorporating direction of travel makes the predictions more accurate. The generative decay model has two advantages over the basic model. First, it can incorporate other information, such as past frequency of offending. Second, it provides an estimate of suspect likelihood, indicating how trustworthy any inference made by the model is.
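A minimal sketch of this scoring idea, with purely illustrative parameter values and suspect distances: treat the decay curve as a density over distance, evaluate it at each suspect's distance from the crime, and normalise to obtain a ranking.

```python
# Hedged sketch: rank suspects by a distance-decay density. All numbers are
# illustrative, not values estimated in the paper.
import math

def exponential_density(d: float, lam: float = 0.5) -> float:
    """Negative-exponential decay: f(d) = lam * exp(-lam * d)."""
    return lam * math.exp(-lam * d)

def power_density(d: float, alpha: float = 1.5, d_min: float = 0.1) -> float:
    """Power decay: f(d) proportional to d**(-alpha), truncated below d_min."""
    return max(d, d_min) ** (-alpha)

def rank_suspects(distances: dict[str, float], density) -> list[tuple[str, float]]:
    scores = {s: density(d) for s, d in distances.items()}
    total = sum(scores.values())
    posterior = {s: v / total for s, v in scores.items()}  # normalise
    return sorted(posterior.items(), key=lambda kv: kv[1], reverse=True)

suspects = {"A": 0.8, "B": 3.2, "C": 7.5}   # km from the offence (illustrative)
print(rank_suspects(suspects, exponential_density))
```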


1991 ◽  
Vol 48 (8) ◽  
pp. 1431-1436 ◽  
Author(s):  
Victor R. Restrepo ◽  
Reg A. Watson

We present an approach to the analysis of crustacean egg production ogives, with emphasis on detecting seasonal trends. The relationship between the proportion of gravid females (by size) and season is a prerequisite for estimating the egg production potential of populations. The basic method consists of relating, for each sample, the proportion of berried females to their size through a three-parameter logistic function in which the asymptote may be less than 1. We then provide guidance for detecting seasonal trends in the parameter estimates for the individual samples. This is accomplished by restricting the basic model so that some parameters are either fixed across all samples or are simple functions of time or of environmental variables such as temperature. Parameter estimates are obtained by maximum likelihood, and comparisons between alternative models are presented graphically and using likelihood ratio tests. We illustrate the approach and its application with data for a tropical shrimp, Penaeus esculentus, from northern Australia.
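A hedged sketch of the core fitting step: a three-parameter logistic with asymptote at or below 1, fitted to proportions of berried females by size class via a binomial maximum likelihood. The size classes, counts, and starting values below are illustrative, not the paper's data.

```python
# Hedged sketch: fit p(L) = c / (1 + exp(-(L - L50) / b)), 0 < c <= 1,
# by maximising the binomial log-likelihood. Data are illustrative.
import numpy as np
from scipy.optimize import minimize

size = np.array([20.0, 24.0, 28.0, 32.0, 36.0, 40.0])  # carapace length
n    = np.array([30, 42, 38, 35, 25, 18])               # females examined
k    = np.array([1, 5, 14, 20, 17, 13])                 # berried females

def ogive(L, c, L50, b):
    return c / (1.0 + np.exp(-(L - L50) / b))

def neg_log_lik(theta):
    c, L50, b = theta
    p = np.clip(ogive(size, c, L50, b), 1e-9, 1 - 1e-9)
    return -np.sum(k * np.log(p) + (n - k) * np.log(1.0 - p))

fit = minimize(neg_log_lik, x0=[0.9, 30.0, 3.0],
               bounds=[(1e-3, 1.0), (15.0, 45.0), (0.5, 10.0)],
               method="L-BFGS-B")
c_hat, L50_hat, b_hat = fit.x
print(c_hat, L50_hat, b_hat)
```

Restricted models (e.g., one L50 shared by all samples versus one per sample) can then be compared with likelihood ratio tests on the fitted log-likelihoods.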


Author(s):  
Wonseok Choi ◽  
Akiko Inoue ◽  
Byeonghak Lee ◽  
Jooyoung Lee ◽  
Eik List ◽  
...  

Tweakable block ciphers (TBCs) have proven highly useful for boosting the security guarantees of authentication schemes. In 2017, Cogliati et al. proposed two MACs combining a TBC with universal hash functions: a nonce-based MAC called NaT and a deterministic MAC called HaT. While both constructions provide high security, their properties are complementary: NaT is almost fully secure when nonces are respected (i.e., n-bit security, where n is the block size of the TBC, with no security degradation in terms of the number of MAC queries when nonces are unique), and its security degrades gracefully to the birthday bound (n/2 bits) when nonces are misused. HaT has n-bit security and can be used naturally as a nonce-based MAC when a message contains a nonce; however, it does not achieve full security even if nonces are unique. This work proposes two highly secure and efficient MACs to fill the gap: NaT2 and eHaT. Both provide (almost) full security if nonces are unique and more than n/2-bit security when nonces can repeat. Building on NaT and HaT, we aim to achieve these properties in a modular way. Our first proposal, Nonce-as-Tweak2 (NaT2), is the sum of two NaT instances. Our second proposal, enhanced Hash-as-Tweak (eHaT), extends HaT by adding the output of an additional nonce-dependent call to the TBC and prepending the nonce to the message. Despite their conceptual simplicity, the security proofs are involved; for NaT2 in particular, we rely on the recent proof framework for Double-block Hash-then-Sum by Kim et al. from Eurocrypt 2020.
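The following is a structural sketch of the "sum of two instances" idea only. It assumes, for illustration, that NaT computes E_K(tweak = N, x = H(M)), i.e., the nonce is used as the tweak on a universal hash of the message (the construction's name suggests this, but the precise specification is in the original papers), and it stands in both the TBC and the hash with truncated HMAC-SHA256. It is emphatically not the real primitive or a secure implementation; it only shows how two independent instances are combined.

```python
# Structural sketch only: NaT2 as the XOR ("sum") of two independent NaT
# instances, with HMAC-SHA256 stand-ins for the TBC and the universal hash.
import hmac, hashlib

BLOCK = 16  # bytes of an assumed 128-bit TBC output

def prf(key: bytes, tweak: bytes, x: bytes) -> bytes:
    """Stand-in for a tweakable block cipher call E_key(tweak, x)."""
    return hmac.new(key, tweak + b"|" + x, hashlib.sha256).digest()[:BLOCK]

def uhash(key: bytes, msg: bytes) -> bytes:
    """Stand-in for the keyed (almost-universal) hash of the message."""
    return hmac.new(key, msg, hashlib.sha256).digest()[:BLOCK]

def nat(tbc_key: bytes, hash_key: bytes, nonce: bytes, msg: bytes) -> bytes:
    return prf(tbc_key, nonce, uhash(hash_key, msg))

def nat2(keys, nonce: bytes, msg: bytes) -> bytes:
    (k1, h1), (k2, h2) = keys                       # two independent key pairs
    t1 = nat(k1, h1, nonce, msg)
    t2 = nat(k2, h2, nonce, msg)
    return bytes(a ^ b for a, b in zip(t1, t2))     # sum of the two instances

keys = ((b"K1" * 8, b"H1" * 8), (b"K2" * 8, b"H2" * 8))
print(nat2(keys, nonce=b"N" * 8, msg=b"hello").hex())
```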


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Songtao Yang ◽  
Qingfeng Jiang

With the interaction of geographic data and social data, inference attacks have been mounting, calling for new privacy-protection technologies. Although spatial-temporal cloaking technologies have made many tangible contributions, traditional techniques are not enough to resist privacy intrusion: malicious attackers can still steal sensitive user information by analyzing the relationship between location and query semantics. To address these issues, oblivious transfer (OT) protocols are introduced to guarantee location privacy. OT is a cryptographic primitive between two parties and can be used as a building block for arbitrary multiparty computation protocols. Building on previous privacy-preserving technologies such as OT, in this work we first develop a novel region-query framework that provides robust privacy for location-dependent queries. We then design an OT-assisted privacy-aware protocol (OTPA) for location-based services, with rigorous security analysis. In short, the client's common query in our solution is divided into two parts, the region query R_q and the content query C_q, achieving location k-anonymity, location m-diversity, and query r-diversity, which ensure the privacy of the two parties (i.e., client and server). Lastly, we instantiate our OTPA protocol, and experiments show that the proposed protocol is reasonable and effective.
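A hedged sketch of the query-splitting idea only: the client builds a region query R_q as a cloaked set of k grid cells that contains its true cell (location k-anonymity) and keeps the content query C_q separate. The oblivious-transfer step, which lets the client fetch only the record it wants without revealing which one, is not implemented here; the grid size and k value are illustrative.

```python
# Hedged sketch: build R_q (cloaked k-anonymous cell set) and keep C_q separate.
import random

GRID = 16  # assume a 16 x 16 grid over the service area (illustrative)

def cell_of(x: float, y: float) -> tuple[int, int]:
    return int(x * GRID), int(y * GRID)

def region_query(x: float, y: float, k: int) -> list[tuple[int, int]]:
    """R_q: the true cell hidden among k candidate cells."""
    cells = {cell_of(x, y)}
    while len(cells) < k:
        cells.add((random.randrange(GRID), random.randrange(GRID)))
    return sorted(cells)          # ordering leaks nothing about the true cell

# C_q, the content query, is sent alongside R_q; the server answers for every
# cell in R_q, and the client keeps only the answer for its true cell.
r_q = region_query(0.42, 0.77, k=5)
c_q = "nearest pharmacy"
print(r_q, c_q)
```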


2021 ◽  
Vol 15 (1) ◽  
pp. 1-23
Author(s):  
Peng Li ◽  
Chao Xu ◽  
He Xu

To solve the problem that privacy-preserving algorithms based on slicing technology cannot cope with packet loss, this paper presents a redundancy algorithm for privacy preservation. The algorithm guarantees privacy by mixing in disturbance data and ensures redundancy by carrying hidden data. It selects the routing tree generated by the CTP protocol as the routing path for data transmission. Through division at the source node, the method adds hidden information and disturbance data; the hidden data and perturbation data improve privacy preservation, yet the original data can still be restored when part of the data is lost. Simulations with TOSSIM (the TinyOS simulator) show that, in the case of partial packet loss, the algorithm can completely restore the original data. The authors also compare the accuracy of the proposed algorithm, the probability of data reduction, the data fitting degree, the communication overhead, and the packet loss rate (PLR). As a result, the algorithm improves the reliability and privacy of data transmission while ensuring data redundancy.
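A hedged sketch of the general slice-and-redundancy idea, not the paper's exact algorithm: a reading is split into additive slices that individually look random (the disturbance), and one extra XOR parity slice is attached so the sink can rebuild the reading even if any single slice is lost on its path. Field size and slice count are illustrative.

```python
# Hedged sketch: additive slicing for privacy + one XOR parity slice for
# single-loss recovery. Not the paper's algorithm; an illustration only.
import secrets

M = 2 ** 32          # slices live in Z_M

def make_slices(value: int, n: int = 3) -> list[int]:
    shares = [secrets.randbelow(M) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % M)   # shares sum to value mod M
    parity = 0
    for s in shares:
        parity ^= s                            # redundancy slice
    return shares + [parity]

def reconstruct(received: list) -> int:
    *shares, parity = received                 # None marks a lost slice
    missing = [i for i, s in enumerate(shares) if s is None]
    if len(missing) == 1 and parity is not None:
        x = parity
        for s in shares:
            if s is not None:
                x ^= s
        shares[missing[0]] = x                 # recover the lost slice
    return sum(shares) % M

packets = make_slices(42)
packets[1] = None                 # simulate losing one slice in transit
print(reconstruct(packets))       # 42
```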


2020 ◽  
Vol 2020 ◽  
pp. 1-13
Author(s):  
Leandro Miguel Lopez ◽  
Charmae Franchesca Mendoza ◽  
Jordi Casademont ◽  
Daniel Camps-Mur

Vehicular communications will foster mobility services and enable mass adoption of future autonomous vehicles, which will interchange huge amounts of data acquired from their sensors. 3GPP Release 14 provides the first standard supporting V2X in LTE. It introduces several enhancements, including a new arrangement of the physical resource grid in which subchannels, rather than Resource Blocks, are the minimum resource unit. The resource grid is defined by several design parameters, some of them with constraints imposed by the 3GPP specifications, that affect the maximum message transmission rate and the efficiency of the system. Moreover, the optimum choice of these parameters is closely linked to the message length, which is itself variable. This paper analyzes the relationship between these design parameters (Resource Blocks per subchannel, Transport Block Size index, and coding rate), the message size, and the system's maximum capacity and efficiency. We do not consider channel reuse or radio transmission characteristics, because the focus of this paper is on finding the resource grid design parameters that optimize system capacity, a very important aspect for V2X operators.
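A back-of-the-envelope sketch of the dimensioning trade-off the paper formalizes: for a given channel width in Resource Blocks, the RBs-per-subchannel setting fixes how many subchannels fit side by side in a subframe, and the transport block a subchannel can carry must be at least the message size. All numeric values below are hypothetical placeholders, not entries from the 3GPP TBS tables.

```python
# Hypothetical numbers only: illustrates why subchannel size trades capacity
# against the ability to carry a given message in one subchannel.
def subchannels_per_subframe(channel_rbs: int, rbs_per_subchannel: int) -> int:
    """How many subchannels fit side by side in one subframe."""
    return channel_rbs // rbs_per_subchannel

def max_messages_per_second(channel_rbs: int, rbs_per_subchannel: int,
                            bytes_per_rb: float, message_bytes: int) -> int:
    """Messages/s if each message must fit in a single subchannel and one
    subframe lasts 1 ms (1000 subframes per second)."""
    capacity_per_subchannel = rbs_per_subchannel * bytes_per_rb
    if capacity_per_subchannel < message_bytes:
        return 0           # this configuration cannot carry the message
    return 1000 * subchannels_per_subframe(channel_rbs, rbs_per_subchannel)

# Larger subchannels waste capacity on small messages; smaller ones may not
# fit large messages: that is the dimensioning problem analyzed in the paper.
for rbs in (4, 5, 10, 25):     # candidate RBs-per-subchannel values
    print(rbs, max_messages_per_second(channel_rbs=50, rbs_per_subchannel=rbs,
                                       bytes_per_rb=36.0, message_bytes=190))
```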


2021 ◽  
Vol 75 (3) ◽  
pp. 158-166
Author(s):  
R.K. Uskenbayeva ◽  
A.K. Bolshibayeva ◽  
S.B. Rakhmetulayeva ◽  
...  

This article discusses how to organize a general model from local models. When organizing models into an ensemble, what matters is not only the structure of the organization but also the nature of the relationships between the models and the type of protocol (universal or unique), for example, the type of technology used to integrate data, information, knowledge, and rules (based on a BUS, AHI, or AII) and the interfaces, such as services or agents. The organization of local models also depends on the peculiarities of those models, so before considering their organization we reveal the essence of local models. It is clear that the properties of a business process cannot be fully represented by a single monolithic module. Therefore, the concept of a basic model is proposed, which is integrated from so-called local models. The work reveals the purpose and essence of the functions of local models (LM) and options for organizing the base model (GM) from local models.


2016 ◽  
Vol 8 (3) ◽  
pp. 74-77
Author(s):  
Svetlana Sazonova

In practice, there are three common forms of boundary conditions. The distinctive feature of the fourth form of boundary conditions is that the relationship between the nodal parameters in question cannot be defined, either in form or in content, before the problem is solved. This situation arises when a hydraulic system is decomposed. The basis of this mechanism is to transform the basic computational model into a secondary zone model using the method of functional reduction. This method is essentially a synthesis of known model-transformation methods based on the ideas of reduction, diakoptics, and cybernetic modeling.

