verification algorithm — Recently Published Documents

TOTAL DOCUMENTS: 198 (five years: 63)
H-INDEX: 12 (five years: 2)

2022 ◽  
Vol 2022 ◽  
pp. 1-16
Author(s):  
Ping Li ◽  
Songtao Guo ◽  
Jiahui Wu ◽  
Quanjun Zhao

Compared with the classical single-controller structure in software-defined networking (SDN), a multi-controller topology provides a new type of cross-domain forwarding architecture with multiple centralized controllers and distributed forwarding devices. However, when the network spans multiple domains, the lack of trust among controllers makes it challenging to verify the correctness of cross-domain forwarding behaviors. In this paper, we propose BlockREV, a novel secure multi-controller rule enforcement verification mechanism for SDN that guarantees the correctness of cross-domain forwarding. We first adopt blockchain technology to provide immutability and privacy protection for forwarding behaviors. Furthermore, we present an address-based aggregate signature scheme with appropriate cryptographic primitives, which is provably secure in the random oracle model. Moreover, we design a verification algorithm based on hash values of forwarding paths to check the consistency of forwarding order. Finally, experimental results demonstrate that BlockREV is effective and suitable for multi-controller scenarios in SDN.
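The order-consistency check described above can be illustrated with a minimal sketch: chain-hash the hops of a forwarding path so that any reordering or omission changes the final digest. The hop names and hashing details here are illustrative assumptions, not the paper's actual construction.

```python
import hashlib

def path_digest(hops):
    """Chain-hash an ordered list of forwarding hops; any reordering
    or omission of a hop changes the final digest."""
    digest = b""
    for hop in hops:
        digest = hashlib.sha256(digest + hop.encode()).digest()
    return digest.hex()

def verify_order(reported_digest, hops):
    """Recompute the chain hash and compare it with the digest a
    controller committed (e.g. to the blockchain)."""
    return path_digest(hops) == reported_digest

# A packet forwarded s1 -> s2 -> s3 verifies only in that exact order.
committed = path_digest(["s1", "s2", "s3"])
assert verify_order(committed, ["s1", "s2", "s3"])
assert not verify_order(committed, ["s2", "s1", "s3"])
```

Because each hop's hash folds in all preceding hops, a verifier holding only the committed digest can detect a forged forwarding order without seeing the other domains' internal state.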


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Lin Yang

In recent years, cloud data has attracted growing attention. However, because users do not have absolute control over data stored on a cloud server, the server must provide evidence that the data is stored intact so that users retain control over it. When users are given full management rights, they can independently install operating systems and applications and choose self-service platforms and remote management tools to manage and control the host according to personal preference. This paper introduces a cloud data integrity verification algorithm for sustainable-computing accounting informatization, examines the advantages and disadvantages of existing data integrity proof mechanisms, and analyzes the new requirements of the cloud storage environment. We propose an LBT-based big data integrity proof mechanism that introduces a multibranch path tree as the underlying data structure, together with a ranked multibranch path structure and a data integrity detection algorithm. The proposed verification algorithm was compared in simulation against two other integrity verification algorithms. The results show that for 500 data blocks, the proposed scheme is about 10% faster than scheme 1 and about 5% faster than scheme 2 in computation time; as the number of operated data blocks grows, the execution time of schemes 1 and 2 increases while that of the proposed scheme remains unchanged, and its computational cost is likewise lower. The scheme not only verifies the integrity of cloud storage data but also offers clear verification advantages, making it relevant to big data integrity verification applications.
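The multibranch-tree idea can be sketched with a generic k-ary hash tree, a simplified stand-in for the paper's LBT structure (the branching factor, hash choice, and the fact that this toy verifier holds all tree levels rather than just the root are assumptions for illustration).

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(blocks, k=4):
    """Build a k-ary hash tree over the data blocks; returns the list
    of levels, leaves first, root last."""
    level = [h(b) for b in blocks]
    levels = [level]
    while len(level) > 1:
        level = [h(b"".join(level[i:i + k])) for i in range(0, len(level), k)]
        levels.append(level)
    return levels

def verify_block(block, index, levels, k=4):
    """Recompute the hashes from one block up to the root and compare
    each with the stored tree; a real verifier would hold only the
    root plus an authentication path."""
    digest = h(block)
    if digest != levels[0][index]:
        return False
    for lvl in range(len(levels) - 1):
        start = (index // k) * k
        digest = h(b"".join(levels[lvl][start:start + k]))
        index //= k
        if digest != levels[lvl + 1][index]:
            return False
    return True

blocks = [f"block-{i}".encode() for i in range(10)]
levels = build_tree(blocks)
assert verify_block(b"block-3", 3, levels)
assert not verify_block(b"tampered", 3, levels)
```

A wider branching factor shortens the tree, which is one reason multibranch trees can reduce verification cost relative to a binary Merkle tree.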


2021 ◽  
Vol 13 (6) ◽  
pp. 37-53
Author(s):  
Andrew R. Short ◽  
Τheofanis G. Orfanoudakis ◽  
Helen C. Leligou

The ever-increasing use of Artificial Intelligence applications has made it apparent that the quality of training datasets affects the performance of the resulting models. To this end, Federated Learning engages multiple entities to contribute to the learning process with locally maintained data, without requiring them to share the actual datasets. Since the parameter server does not have access to the actual training datasets, it becomes challenging to reward users by directly inspecting dataset quality. Instead, this paper focuses on ways to strengthen user engagement by offering “fair” rewards, proportional to the model improvement (in terms of accuracy) each user provides. Furthermore, to enable objective judgment of contribution quality, we devise a point system that records user performance with the help of blockchain technologies. More precisely, we have developed a verification algorithm that evaluates the performance of each user's contribution by comparing the resulting accuracy of the global model against a verification dataset, and we demonstrate how this metric can be used to improve the security of a Federated Learning process. Finally, we implement the solution in a simulation environment to assess its feasibility and collect baseline results using datasets of varying quality.
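The reward idea above can be sketched as a scoring loop: tentatively apply each client's update, measure accuracy on the verification dataset, and award points proportional to the gain. The `evaluate` callback and the client names are hypothetical placeholders for the actual model evaluation.

```python
def score_contributions(base_accuracy, evaluate, updates):
    """Award each client points proportional to the accuracy gain its
    update yields on a held-out verification dataset.
    `evaluate(update)` is assumed to return the global model's accuracy
    after tentatively applying the update."""
    points = {}
    for client, update in updates.items():
        gain = evaluate(update) - base_accuracy
        points[client] = max(gain, 0.0)  # no reward for harmful updates
    return points

# Toy stand-in: each "update" is just its precomputed post-update accuracy.
updates = {"alice": 0.82, "bob": 0.78, "mallory": 0.55}
points = score_contributions(0.75, lambda acc: acc, updates)
assert abs(points["alice"] - 0.07) < 1e-9
assert points["mallory"] == 0.0  # a degrading update earns nothing
```

Clamping negative gains to zero is also where the security benefit enters: a poisoned update that lowers verification accuracy simply earns no points and can be excluded from aggregation.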


Author(s):  
E. Toropov ◽  
L. Lymbina
The normative method (NM) of boiler thermal calculation, repeatedly confirmed and refined, contains a structure of ideas and methods that was retained and adapted during the transition to digital technologies. As applied to the analysis of the heat balance of a boiler with flare furnaces, this required the transformation of a large array of initial and reference data, which cannot be applied unchanged on a computer. This concerns graphical and tabular data, which make up as much as 80% of the volume of the NM. To obtain the correlation dependences, the authors use the simple and reliable method of unknown coefficients, supplemented by a verification algorithm; in the case of equidistant arguments these are the Gregory-Newton coefficients. As a preliminary analysis showed, for almost all dependencies a polynomial of the second degree, sometimes replaced by two polynomials, is sufficient. By varying the determining factors within ±20% of their nominal values, the model response was obtained in the form of a change in fuel consumption. Quantitatively, all material corresponds to the normative data, is presented in digital format, and is methodically aligned with the Mathcad 15 package. In contrast to well-known works in this area, all factors affecting the heat balance are represented by approximations that take the variability of temperature and pressure into account.
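The method of unknown coefficients with a verification step can be sketched as solving the Vandermonde system for a second-degree polynomial and then checking that the fit reproduces every tabulated point. The sample curve below is a hypothetical stand-in for a normative table, not actual NM data.

```python
import numpy as np

def fit_polynomial(x, y, degree=2):
    """Method of unknown coefficients: solve the Vandermonde system
    V c = y for the polynomial coefficients c (lowest power first)."""
    V = np.vander(x, degree + 1, increasing=True)
    coeffs, *_ = np.linalg.lstsq(V, y, rcond=None)
    return coeffs

def verify_fit(coeffs, x, y, tol=1e-2):
    """Verification algorithm: the fitted polynomial must reproduce
    every tabulated point within the tolerance."""
    V = np.vander(x, len(coeffs), increasing=True)
    return np.max(np.abs(V @ coeffs - y)) <= tol

# Hypothetical excerpt of a tabulated curve with equidistant arguments.
x = np.array([0.8, 0.9, 1.0, 1.1, 1.2])
y = 1.5 + 0.4 * x - 0.1 * x**2  # exactly quadratic in this toy case
c = fit_polynomial(x, y)
assert verify_fit(c, x, y)
```

If the verification step fails for a given table, that is the signal to split the range and fit two polynomials, as the abstract notes is sometimes necessary.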


2021 ◽  
Vol 4 (9(112)) ◽  
pp. 32-45
Author(s):  
Orken Mamyrbayev ◽  
Aizat Kydyrbekova ◽  
Keylan Alimhan ◽  
Dina Oralbekova ◽  
Bagashar Zhumazhanov ◽  
...  

The widespread use of biometric systems has drawn increased interest from cybercriminals aiming to develop attacks that crack them, so biometric identification systems must be developed with protection against such attacks in mind. New identification methods and algorithms based on presenting randomly generated key features from a biometric database of user templates help minimize the disadvantages of existing biometric identification methods. We present an implementation of a security system based on voice identification as an access-control key, with a verification algorithm built from MATLAB function blocks that can authenticate a person's identity by his or her voice. Our research has shown an accuracy of 90% for this user identification system on individual voice characteristics. It has been experimentally proven that traditional MFCCs with DNN, i-vector, and x-vector classifiers can achieve good results. The paper reviews and analyzes the best-known approaches in the literature to user identification by voice: dynamic programming methods, vector quantization, Gaussian mixture models, and hidden Markov models. The developed software package for biometric identification of users by voice, together with its method of forming user voice templates, reduces voice-identification errors in information systems by an average factor of 1.5. Our proposed system achieves better voice recognition in terms of accuracy, security, and complexity. Applying these results will improve the security of the identification process in information systems against various attacks.
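The final acceptance step shared by the embedding-based approaches mentioned above (i-vectors, x-vectors) can be sketched as a cosine-similarity comparison between an enrolled voice template and a probe. The feature extraction (MFCCs, DNN embedding) is abstracted away, and the four-dimensional vectors and the 0.8 threshold are illustrative assumptions.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def verify_speaker(enrolled, probe, threshold=0.8):
    """Accept the claimed identity if the probe embedding is close
    enough to the enrolled voice template."""
    return cosine_similarity(enrolled, probe) >= threshold

enrolled = [0.9, 0.1, 0.4, 0.3]   # template from enrollment utterances
same = [0.88, 0.12, 0.41, 0.28]   # new utterance, same speaker
other = [0.1, 0.9, 0.2, 0.7]      # utterance from a different speaker
assert verify_speaker(enrolled, same)
assert not verify_speaker(enrolled, other)
```

The threshold trades false acceptances against false rejections; systems like the one described tune it on a development set to reach their reported accuracy.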


PLoS ONE ◽  
2021 ◽  
Vol 16 (8) ◽  
pp. e0254219
Author(s):  
Pascal Hunold ◽  
Thomas Berg ◽  
Daniel Seehofer ◽  
Robert Sucher ◽  
Adam Herber ◽  
...  

Background: The model for end-stage liver disease (MELD) score was established for the allocation of liver transplants. The score is based on three medical laboratory parameters: bilirubin, creatinine, and the international normalized ratio (INR). A verification algorithm for laboratory MELD diagnostics was established, and the results from the first six years were analyzed.
Methods: We systematically investigated the validity of 7,270 MELD scores over a six-year period. The MELD score was requested electronically by the clinical physician through the laboratory system, then calculated and specifically validated by the laboratory physician in the context of previous and additional diagnostics.
Results: In 2.7% (193 of 7,270) of cases, MELD diagnostics did not fulfill the specified quality criteria. After consultation with the sender, 2.0% (145) of the MELD scores remained invalid for various reasons and could not be reported to the transplant organization. No cases of deliberate misreporting were identified. In 34 cases the dialysis status had to be corrected, and there were 24 cases of oral anticoagulation with an impact on MELD diagnostics.
Conclusion: Our verification algorithm for MELD diagnostics effectively prevented invalid MELD results and could be adopted by transplant centers to prevent diagnostic errors with possible adverse effects on organ allocation.
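The dialysis corrections mentioned in the results matter because of how the score is computed. A minimal sketch of the standard MELD formula with its usual plausibility rules follows; the clamping conventions shown are the common UNOS ones and may differ in detail from the rules of a specific transplant organization or from this laboratory's validation criteria.

```python
import math

def meld_score(bilirubin, creatinine, inr, on_dialysis=False):
    """MELD = 3.78*ln(bilirubin) + 11.2*ln(INR) + 9.57*ln(creatinine) + 6.43,
    with the usual plausibility rules: lab values below 1.0 are set to
    1.0, creatinine is capped at 4.0 mg/dL (and set to 4.0 for dialysis
    patients), and the final score is capped at 40."""
    bilirubin = max(bilirubin, 1.0)
    inr = max(inr, 1.0)
    creatinine = 4.0 if on_dialysis else min(max(creatinine, 1.0), 4.0)
    score = (3.78 * math.log(bilirubin)
             + 11.2 * math.log(inr)
             + 9.57 * math.log(creatinine)
             + 6.43)
    return min(round(score), 40)

# All labs at the lower bound give the minimum score of 6.
assert meld_score(1.0, 1.0, 1.0) == 6
# Dialysis forces creatinine to 4.0, raising the score substantially.
assert meld_score(2.0, 1.5, 1.5) < meld_score(2.0, 1.5, 1.5, on_dialysis=True)
```

An incorrect dialysis flag or an anticoagulation-inflated INR shifts the score directly, which is why these were the items most often corrected by the verification workflow.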


Author(s):  
Martin Blicha ◽  
Antti E. J. Hyvärinen ◽  
Jan Kofroň ◽  
Natasha Sharygina

The use of propositional logic and systems of linear inequalities over the reals is a common means of modeling software for formal verification. Craig interpolants constitute a central building block in this setting for over-approximating reachable states, e.g. as candidates for inductive loop invariants. Interpolants for a linear system can be computed efficiently from a Simplex refutation by applying Farkas' lemma. However, these interpolants do not always suit the verification task; in the worst case, they can even prevent the verification algorithm from converging. This work introduces decomposed interpolants, a fundamental extension of Farkas interpolants obtained by identifying and separating independent components of the interpolant structure using methods from linear algebra. We also present an efficient polynomial algorithm to compute decomposed interpolants and analyse its properties. We show experimentally that using decomposed interpolants in model checking yields immediate convergence on instances where state-of-the-art approaches diverge. Moreover, because it builds on the efficient Simplex method, the approach is very competitive in general.
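The Farkas construction the paper extends can be sketched concretely: the interpolant is the nonnegative combination of the A-part inequalities, weighted by the Farkas multipliers from the Simplex refutation. The inequalities and multipliers below are a made-up two-variable example, not one from the paper.

```python
def farkas_interpolant(a_rows, multipliers):
    """Combine the A-part inequalities (each `(coeffs, bound)` meaning
    coeffs . x <= bound) with their nonnegative Farkas multipliers
    from the refutation; the weighted sum is the Farkas interpolant.
    Decomposed interpolants instead split this sum into independent
    sub-sums and keep them as separate conjuncts."""
    n = len(a_rows[0][0])
    combo, bound = [0.0] * n, 0.0
    for (coeffs, b), k in zip(a_rows, multipliers):
        assert k >= 0, "Farkas multipliers must be nonnegative"
        combo = [c + k * ci for c, ci in zip(combo, coeffs)]
        bound += k * b
    return combo, bound  # interpolant: combo . x <= bound

# A-part: x <= 1 and y <= 1, each with multiplier 1, gives the single
# Farkas interpolant x + y <= 2; the decomposed interpolant would keep
# x <= 1 and y <= 1 as separate, strictly stronger conjuncts.
itp, b = farkas_interpolant([([1, 0], 1), ([0, 1], 1)], [1, 1])
assert itp == [1.0, 1.0] and b == 2.0
```

The example also shows why decomposition can help convergence: `x + y <= 2` admits states such as `x = 2, y = 0` that the component inequalities rule out.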


Electronics ◽  
2021 ◽  
Vol 10 (11) ◽  
pp. 1337
Author(s):  
Kuo-Kun Tseng ◽  
He Chen ◽  
Charles Chen ◽  
Charinrat Bansong

There is a long history of using handwritten signatures to verify or authenticate the signer of a document. With the development of Internet technology, many tasks, such as digital contracts and other important documents, are handled through document management systems, so more secure signature verification is in demand. Live handwritten signatures are therefore attracting growing interest for biometric human identification. In this paper, we propose a handwritten signature verification algorithm that uses four live waveform elements as verification features. A new Aho–Corasick Histogram (ACH) mechanism is proposed to perform this live signature verification. The main benefit of the ACH algorithm is its ability to convert time-series waveforms into short time-series patterns and then perform statistical counting on the Aho–Corasick machine to measure similarity. Since Aho–Corasick runs in linear time, the ACH method has a deterministic processing time. Our experimental results show that the proposed algorithm performs satisfactorily in terms of speed and accuracy, averaging 91% accuracy.
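The waveform-to-pattern-histogram pipeline can be sketched as follows: quantize a waveform into a small symbol alphabet, count short patterns (the counting an Aho–Corasick automaton performs in one linear pass), and compare histograms. The alphabet, pattern length, and similarity measure here are illustrative assumptions, not the paper's exact design.

```python
from collections import Counter

def to_symbols(waveform, step=0.1):
    """Quantize successive differences of a waveform into a small
    alphabet: 'u' (rising), 'd' (falling), 'f' (flat)."""
    return ''.join(
        'u' if cur - prev > step else 'd' if cur - prev < -step else 'f'
        for prev, cur in zip(waveform, waveform[1:]))

def pattern_histogram(symbols, n=3):
    """Count all length-n patterns in the symbol stream; an Aho-Corasick
    automaton would produce these counts in a single linear pass."""
    return Counter(symbols[i:i + n] for i in range(len(symbols) - n + 1))

def similarity(h1, h2):
    """Histogram intersection, normalized to [0, 1]."""
    shared = sum((h1 & h2).values())
    total = max(sum(h1.values()), sum(h2.values()))
    return shared / total if total else 0.0

ref = pattern_histogram(to_symbols([0, 1, 2, 1, 0, 1, 2, 1, 0]))
probe = pattern_histogram(to_symbols([0, 1, 2, 1, 0, 1, 2, 1]))
flat = pattern_histogram(to_symbols([0] * 9))
assert similarity(ref, probe) > similarity(ref, flat)
```

Each pattern lookup costs the same regardless of how many patterns are being matched, which is what gives the ACH approach its deterministic processing time.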

