An Analytic Model for Reducing Authentication Signaling Traffic in an End-to-End Authentication Scheme

Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 4980
Author(s):  
Shadi Nashwan ◽  
Imad I. H. Nashwan

In an end-to-end authentication (E2EA) scheme, the physician, patient, and sensor nodes authenticate each other through the healthcare service provider in three phases: the long-term authentication phase (LAP), the short-term authentication phase (SAP), and the sensor authentication phase (WAP). Once the LAP has been executed between all communicating nodes, the SAP is executed (m) times between the physician and patient by deriving a new key from the PSij key generated by the healthcare service provider during the LAP. In addition, the WAP is executed between the connected sensor and the patient (m + 1) times without going back to the service provider. Thus, it is critical to determine an appropriate (m) value that maintains a specified security level while minimizing the cost of E2EA. Therefore, we propose an analytic model in which the authentication signaling traffic is represented by a Poisson process and derive an authentication signaling traffic cost function of the (m) value, wherein the authentication residence time follows one of three distributions: gamma, hypo-exponential, and exponential. Finally, through numerical analysis of the derived cost function, an optimal (m) value that minimizes the authentication signaling traffic cost of the E2EA scheme was determined.
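As a rough illustration of the final step, the sketch below numerically minimizes a hypothetical signaling-cost function C(m); the cost weights, the security penalty term, and the function shape are placeholders, not the cost function derived in the paper.

```python
# Hypothetical sketch: pick the (m) value that minimizes an assumed
# signaling-cost function C(m). The weights c_lap, c_sap, c_wap and the
# risk coefficient are illustrative placeholders, not the paper's values.
def signaling_cost(m, c_lap=10.0, c_sap=2.0, c_wap=1.0, risk=0.05):
    """Assumed cost per authentication cycle: one LAP amortized over m
    short-term runs, plus a penalty that grows with m to model the
    weakening security level as the derived key is reused longer."""
    amortized_lap = c_lap / m
    per_cycle = c_sap + (m + 1) / m * c_wap
    security_penalty = risk * m
    return amortized_lap + per_cycle + security_penalty

best_m = min(range(1, 101), key=signaling_cost)
print(best_m, signaling_cost(best_m))
```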

2013 ◽  
Vol 676 ◽  
pp. 235-241
Author(s):  
Ping Sun ◽  
Xiu Min Yu ◽  
Wei Dong

The equivalent consumption minimization strategy (ECMS) reduces the global minimization problem to an instantaneous minimization problem to be solved at each instant. The adaptive ECMS (A-ECMS) is a development of ECMS in which the equivalence factors are not pre-computed but calculated online. The optimal value of the equivalence factors, which minimizes the cost function while keeping the vehicle substantially charge sustaining, depends on the specific driving cycle. The method proposed in this paper is an important simplification for real-time implementation of A-ECMS and power delivery in energy management for hybrid electric vehicles (HEVs). The charging factor can be computed in real time once the discharging factor has been determined experimentally, and only a subset of (charging, discharging) factor pairs produces a state-of-charge trend close to zero, which indicates charge sustainability.
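For concreteness, the sketch below shows the standard instantaneous ECMS minimization with separate charging and discharging equivalence factors; the signal names, constants, and equivalent-fuel formulation are illustrative assumptions, not the formulation of this paper.

```python
# Minimal ECMS sketch: at each instant, pick the engine/battery power split
# that minimizes equivalent fuel consumption. All values are illustrative.
def equivalent_fuel_rate(p_engine, p_batt, s_chg=2.5, s_dis=2.8,
                         fuel_per_watt=7e-8, lhv=42.5e6):
    """Equivalent fuel rate [kg/s]: real fuel for the engine plus
    'virtual' fuel for battery use, weighted by the equivalence factor.
    A discharging battery (p_batt > 0) uses s_dis, a charging one s_chg."""
    m_dot_fuel = fuel_per_watt * max(p_engine, 0.0)
    s = s_dis if p_batt > 0 else s_chg
    return m_dot_fuel + s * p_batt / lhv

def ecms_split(p_demand, p_batt_candidates):
    """Instantaneous minimization over a discretized set of battery powers."""
    return min(p_batt_candidates,
               key=lambda pb: equivalent_fuel_rate(p_demand - pb, pb))

# Example: 20 kW demand, battery allowed between -10 kW (charge) and +10 kW.
candidates = [pb * 1000.0 for pb in range(-10, 11)]
print(ecms_split(20e3, candidates))   # battery power chosen at this instant
```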


2015 ◽  
Vol 5 (3) ◽  
pp. 153-160
Author(s):  
Dmitriy Afonichev

In this paper it is shown that the optimal offset of branch roads into the deep part of the forest resource base should be justified by at least the cost of hauling along the logging spur roads and haul roads, less the cost of construction, maintenance, repair, and removal of the deep plot line. A target cost function was composed and its derivative set equal to zero. As a result of algebraic manipulation, an analytical function was obtained for determining the optimal offset of branch roads into the deep part of the forest base. This dependence shows that the offset of the branches into the deep part of the forest base depends on the volume of timber removed from the catchment areas served by these branches.
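Schematically, the derivation described above amounts to the following optimality condition (the symbols are chosen here for illustration and are not the paper's notation):

```latex
% C(x): total cost as a function of the branch offset x into the deep part
% of the forest base; c_h(x): hauling cost along spur and haul roads;
% c_r(x): construction, maintenance, repair, and removal cost of the line.
\[
  C(x) = c_h(x) + c_r(x), \qquad
  \frac{dC}{dx} = \frac{dc_h}{dx} + \frac{dc_r}{dx} = 0
  \;\Longrightarrow\; x^{*} = x^{*}(Q),
\]
% where Q denotes the volume of timber removed from the catchment area.
```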


2016 ◽  
Vol 1 (2) ◽  
pp. 1-7
Author(s):  
Karamjeet Kaur ◽  
Gianetan Singh Sekhon

Underwater sensor networks (UWSNs) are envisioned to enable a broad category of underwater applications such as pollution tracking, offshore exploration, and oil-spill monitoring. Such applications require precise location information, as otherwise the sensed data might be meaningless. On the other hand, security is a critical issue, since underwater sensor networks are typically deployed in harsh environments. Localization is one of the most active research subjects in UWSNs, because many UWSN applications, e.g., event detection, depend on it. A large number of localization methods have been proposed for UWSNs; however, few of them consider stability or security criteria. The proposed work takes up underwater localization such that the wireless sensor nodes localize each other. An RSS-based localization technique is used to remove malicious nodes from the list of intermediate communication nodes based on an RSS threshold value. The proposed algorithm achieves higher throughput and lower end-to-end delay without increasing the energy dissipation at each node. The simulation is conducted in MATLAB and compares end-to-end delay with and without malicious nodes.
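A minimal sketch of such RSS-threshold screening, assuming a log-distance path-loss model and illustrative parameter values (the abstract does not give the exact filtering rule), is:

```python
import math

# Hypothetical RSS-threshold screening of intermediate nodes.
# Nodes whose reported RSS deviates too far from the value expected
# at their claimed distance are treated as malicious and dropped.
def expected_rss(distance_m, tx_power_dbm=0.0, path_loss_exp=2.0, ref_loss_db=40.0):
    """Log-distance path-loss model (illustrative parameters)."""
    return tx_power_dbm - ref_loss_db - 10.0 * path_loss_exp * math.log10(max(distance_m, 1.0))

def filter_intermediate_nodes(nodes, rss_threshold_db=6.0):
    """Keep only nodes whose measured RSS is within the threshold of the
    RSS expected at their claimed distance."""
    trusted = []
    for node_id, claimed_distance, measured_rss in nodes:
        if abs(measured_rss - expected_rss(claimed_distance)) <= rss_threshold_db:
            trusted.append(node_id)
    return trusted

# Example: node 'n3' reports an RSS inconsistent with its claimed distance.
nodes = [("n1", 50.0, -74.0), ("n2", 120.0, -82.0), ("n3", 60.0, -55.0)]
print(filter_intermediate_nodes(nodes))   # 'n3' is removed
```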


2021 ◽  
Vol 11 (2) ◽  
pp. 850
Author(s):  
Dokkyun Yi ◽  
Sangmin Ji ◽  
Jieun Park

Artificial intelligence (AI) is achieved by optimizing a cost function constructed from learning data. Changing the parameters of the cost function is the AI learning process (or AI learning for convenience). If AI learning is performed well, then the value of the cost function reaches the global minimum. For learning to be well performed, the parameters should stop changing once the cost function attains its global minimum. One useful optimization method is the momentum method; however, the momentum method has difficulty stopping the parameters when the value of the cost function reaches the global minimum (the non-stop problem). The proposed method is based on the momentum method. In order to solve the non-stop problem of the momentum method, we incorporate the value of the cost function into the update rule. Therefore, as learning proceeds, the mechanism in our method reduces the amount of change in the parameters through the effect of the value of the cost function. We verified the method through a proof of convergence and numerical experiments against existing methods to ensure that learning works well.
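The exact update rule is not given in the abstract; the sketch below illustrates the general idea with a momentum update whose step is damped by the current cost value, so the accumulated momentum can no longer move the parameter once the cost is near its (zero) global minimum. The rule and constants are illustrative, not the authors' formulation.

```python
# Illustrative sketch (not the authors' exact rule): momentum descent in
# which the parameter change is damped by the current cost value, so the
# update vanishes as the cost approaches its (zero) global minimum.
def cost(w):                       # simple quadratic with minimum 0 at w = 3
    return (w - 3.0) ** 2

def grad(w):
    return 2.0 * (w - 3.0)

def damped_momentum(w=0.0, lr=0.1, beta=0.9, steps=500):
    v = 0.0
    for _ in range(steps):
        v = beta * v + lr * grad(w)
        # scale the step by cost(w): near the minimum the cost is ~0,
        # so the accumulated momentum can no longer push w past it
        w -= v * min(cost(w), 1.0)
    return w

print(damped_momentum())   # approaches the minimizer w = 3.0
```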


2020 ◽  
Vol 18 (02) ◽  
pp. 2050006 ◽  
Author(s):  
Alexsandro Oliveira Alexandrino ◽  
Carla Negri Lintzmayer ◽  
Zanoni Dias

One of the main problems in Computational Biology is to find the evolutionary distance among species. In most approaches, such distance only involves rearrangements, which are mutations that alter large pieces of the species’ genome. When we represent genomes as permutations, the problem of transforming one genome into another is equivalent to the problem of Sorting Permutations by Rearrangement Operations. The traditional approach is to consider that any rearrangement has the same probability to happen, and so, the goal is to find a minimum sequence of operations which sorts the permutation. However, studies have shown that some rearrangements are more likely to happen than others, and so a weighted approach is more realistic. In a weighted approach, the goal is to find a sequence which sorts the permutations, such that the cost of that sequence is minimum. This work introduces a new type of cost function, which is related to the amount of fragmentation caused by a rearrangement. We present some results about the lower and upper bounds for the fragmentation-weighted problems and the relation between the unweighted and the fragmentation-weighted approach. Our main results are 2-approximation algorithms for five versions of this problem involving reversals and transpositions. We also give bounds for the diameters concerning these problems and provide an improved approximation factor for simple permutations considering transpositions.
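The paper's fragmentation-weighted cost is not specified in the abstract; the sketch below only illustrates the setting, applying a reversal to a permutation and using the breakpoint count as a hypothetical proxy for fragmentation.

```python
# Illustrative sketch only: the paper's fragmentation-weighted cost is not
# given in the abstract. A reversal is applied to a permutation and a
# breakpoint-based proxy for "fragmentation" is measured before and after.
def breakpoints(perm):
    """Number of adjacent pairs that are not consecutive values
    (permutation extended with 0 at the front and n+1 at the end)."""
    ext = [0] + list(perm) + [len(perm) + 1]
    return sum(1 for a, b in zip(ext, ext[1:]) if abs(a - b) != 1)

def reversal(perm, i, j):
    """Reverse the segment perm[i..j] (0-indexed, inclusive)."""
    return perm[:i] + perm[i:j + 1][::-1] + perm[j + 1:]

p = (2, 1, 4, 3)
q = reversal(p, 0, 1)                  # -> (1, 2, 4, 3)
print(breakpoints(p), breakpoints(q))  # proxy cost before and after
```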


2005 ◽  
Vol 133 (6) ◽  
pp. 1710-1726 ◽  
Author(s):  
Milija Zupanski

Abstract A new ensemble-based data assimilation method, named the maximum likelihood ensemble filter (MLEF), is presented. The analysis solution maximizes the likelihood of the posterior probability distribution, obtained by minimization of a cost function that depends on a general nonlinear observation operator. The MLEF belongs to the class of deterministic ensemble filters, since no perturbed observations are employed. As in variational and ensemble data assimilation methods, the cost function is derived using a Gaussian probability density function framework. Like other ensemble data assimilation algorithms, the MLEF produces an estimate of the analysis uncertainty (e.g., analysis error covariance). In addition to the common use of ensembles in calculation of the forecast error covariance, the ensembles in MLEF are exploited to efficiently calculate the Hessian preconditioning and the gradient of the cost function. A sufficient number of iterative minimization steps is 2–3, because of superior Hessian preconditioning. The MLEF method is well suited for use with highly nonlinear observation operators, for a small additional computational cost of minimization. The consistent treatment of nonlinear observation operators through optimization is an advantage of the MLEF over other ensemble data assimilation algorithms. The cost of MLEF is comparable to the cost of existing ensemble Kalman filter algorithms. The method is directly applicable to most complex forecast models and observation operators. In this paper, the MLEF method is applied to data assimilation with the one-dimensional Korteweg–de Vries–Burgers equation. The tested observation operator is quadratic, in order to make the assimilation problem more challenging. The results illustrate the stability of the MLEF performance, as well as the benefit of the cost function minimization. The improvement is noted in terms of the rms error, as well as the analysis error covariance. The statistics of innovation vectors (observation minus forecast) also indicate a stable performance of the MLEF algorithm. Additional experiments suggest the amplified benefit of targeted observations in ensemble data assimilation.
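The abstract states that the cost function is derived in a Gaussian probability density framework with a general nonlinear observation operator H; the standard form of such a cost function (notation chosen here, not taken from the paper) is:

```latex
% Standard Gaussian-framework cost function with a nonlinear observation
% operator H (illustrative notation):
\[
  J(\mathbf{x}) =
    \tfrac{1}{2}\,(\mathbf{x}-\mathbf{x}_b)^{\mathrm T}\,
      \mathbf{P}_f^{-1}\,(\mathbf{x}-\mathbf{x}_b)
  + \tfrac{1}{2}\,\bigl[\mathbf{y}-H(\mathbf{x})\bigr]^{\mathrm T}\,
      \mathbf{R}^{-1}\,\bigl[\mathbf{y}-H(\mathbf{x})\bigr],
\]
% where x_b is the forecast (background) state, P_f the forecast error
% covariance estimated from the ensemble, y the observations, and R the
% observation error covariance.
```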


2000 ◽  
Vol 25 (2) ◽  
pp. 209-227 ◽  
Author(s):  
Keith R. McLaren ◽  
Peter D. Rossitter ◽  
Alan A. Powell

2021 ◽  
pp. 107754632110324
Author(s):  
Berk Altıner ◽  
Bilal Erol ◽  
Akın Delibaşı

Adaptive optics systems are powerful tools implemented to mitigate the effects of wavefront aberrations. In this article, the optimal actuator placement problem is addressed to improve the disturbance attenuation capability of adaptive optics systems, since actuator placement is directly related to system performance. For this purpose, a linear-quadratic cost function is chosen, so that the optimized actuator layouts can be specialized according to the type of wavefront aberration. The placement is then cast as a convex optimization problem, and the cost function is formulated for the disturbance attenuation case. The success of the presented method is demonstrated by simulation results.
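The plant model and the convex formulation are not given in the abstract; the sketch below scores candidate actuator layouts (column subsets of an assumed input matrix) by the linear-quadratic cost obtained from the continuous-time algebraic Riccati equation, purely as an illustration of LQ-based placement.

```python
import itertools
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative sketch: candidate actuator layouts (column subsets of a full
# input matrix) are scored by the LQR cost trace(P), where P solves the
# continuous-time algebraic Riccati equation for that layout.
rng = np.random.default_rng(0)
n, m_total, m_used = 6, 4, 2            # states, candidate actuators, budget
A = rng.standard_normal((n, n))
A -= (np.linalg.eigvals(A).real.max() + 1.0) * np.eye(n)   # make A stable
B_full = rng.standard_normal((n, m_total))
Q, R = np.eye(n), np.eye(m_used)

def lq_cost(cols):
    B = B_full[:, list(cols)]
    P = solve_continuous_are(A, B, Q, R)
    return np.trace(P)                  # expected cost for unit-covariance x0

best = min(itertools.combinations(range(m_total), m_used), key=lq_cost)
print("best actuator subset:", best, "cost:", round(lq_cost(best), 3))
```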


2014 ◽  
Vol 665 ◽  
pp. 643-646
Author(s):  
Ying Liu ◽  
Yan Ye ◽  
Chun Guang Li

A metalearning algorithm learns the base learning algorithm, with the aim of improving the performance of the learning system. The incremental delta-bar-delta (IDBD) algorithm is such a metalearning algorithm. On the other hand, sparse algorithms are gaining popularity due to their good performance and wide applicability. In this paper, we propose a sparse IDBD algorithm that takes the sparsity of the system into account. An ℓ1-norm penalty is incorporated into the cost function of the standard IDBD, which is equivalent to adding a zero attractor to the iterations and can thus speed up convergence if the system of interest is indeed sparse. Simulations demonstrate that the proposed algorithm is superior to competing algorithms in sparse system identification.
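A minimal sketch of such a zero-attractor IDBD filter is shown below; the IDBD part follows Sutton's incremental delta-bar-delta update, and the sign-based attractor is the term implied by an ℓ1 penalty. The constants and safeguards are illustrative, not the paper's.

```python
import numpy as np

# Sketch of a zero-attractor (sparse) IDBD adaptive filter. The IDBD part
# follows Sutton's incremental delta-bar-delta update; the rho * sign(w)
# term is the zero attractor implied by an l1 penalty.
# Constants (theta, rho, beta_init) are illustrative, not the paper's.
def sparse_idbd(x_seq, d_seq, n_taps, theta=0.01, rho=1e-4, beta_init=-5.0):
    w = np.zeros(n_taps)
    beta = np.full(n_taps, beta_init)     # per-tap log step sizes
    h = np.zeros(n_taps)                  # memory of recent weight changes
    for x, d in zip(x_seq, d_seq):
        err = d - w @ x                   # prediction error
        beta += theta * err * x * h       # meta-learning of step sizes
        alpha = np.exp(beta)
        w += alpha * err * x - rho * np.sign(w)   # IDBD step + zero attractor
        h = h * np.maximum(0.0, 1.0 - alpha * x * x) + alpha * err * x
    return w

# Identify a sparse 8-tap system from noisy data.
rng = np.random.default_rng(1)
w_true = np.zeros(8); w_true[[1, 5]] = [0.9, -0.5]
X = rng.standard_normal((5000, 8))
D = X @ w_true + 0.01 * rng.standard_normal(5000)
print(np.round(sparse_idbd(X, D, 8), 3))
```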

