deletion algorithm
Recently Published Documents

TOTAL DOCUMENTS: 24 (five years: 8)
H-INDEX: 4 (five years: 1)

PLoS ONE ◽  
2021 ◽  
Vol 16 (12) ◽  
pp. e0258464
Author(s):  
Lei Liu ◽  
Mingwei Cao ◽  
Yeguo Sun

E-documents are carriers of sensitive data, and their security in the open network environment has always been a common problem in the field of data security. Building on encryption schemes for secure access control, this paper proposes a fused data security protection scheme. The scheme achieves safe storage of data and keys by designing a hybrid symmetric encryption algorithm, a data security deletion algorithm, and a key separation storage method. It also uses file filter driver technology to design a user operation state monitoring method that enables real-time monitoring of user access behavior. In addition, the paper designs and implements a prototype system; verification and analysis of its usability and security show that the solution meets the data security protection requirements of sensitive E-documents in the open network environment.
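The abstract does not detail the data security deletion algorithm itself; a common building block for such schemes is overwriting a file's contents before unlinking it, so the plaintext cannot be trivially recovered from the disk. A minimal sketch of that idea (the function name and pass count are illustrative assumptions, not the paper's design):

```python
import os
import tempfile

def secure_delete(path: str, passes: int = 3) -> None:
    """Overwrite a file's contents with random bytes before unlinking it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())  # force the overwrite to reach the disk
    os.remove(path)

# Usage: create a throwaway file, then securely delete it.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"sensitive e-document contents")
secure_delete(path)
print(os.path.exists(path))  # False
```

Note that on journaling or copy-on-write filesystems an overwrite may not reach the original blocks, which is one reason production schemes (like the one described) combine deletion with encryption and key destruction.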


Entropy ◽  
2021 ◽  
Vol 23 (9) ◽  
pp. 1122
Author(s):  
Serafín Moral ◽  
Andrés Cano ◽  
Manuel Gómez-Olmedo

The Kullback–Leibler divergence KL(p,q) is the standard measure of error when a true probability distribution p is approximated by a probability distribution q. Its efficient computation is essential in many tasks, such as approximate inference or measuring error when learning a probability distribution. For high-dimensional distributions, such as those associated with Bayesian networks, a direct computation can be unfeasible. This paper considers the problem of efficiently computing the Kullback–Leibler divergence of two probability distributions, each coming from a different Bayesian network, possibly with different structures. The approach is based on an auxiliary deletion algorithm to compute the necessary marginal distributions, using a cache of operations with potentials in order to reuse past computations whenever possible. The algorithms are tested with Bayesian networks from the bnlearn repository. Computer code in Python is provided, built on pgmpy, a library for working with probabilistic graphical models.
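For intuition, the quantity being computed is KL(p,q) = Σ_x p(x) log(p(x)/q(x)) over the joint domain. The sketch below evaluates it directly for two tiny joint distributions over binary variables, standing in for the joints defined by two small Bayesian networks with different structures (the paper's contribution is avoiding exactly this brute-force enumeration via a deletion algorithm with cached potentials; the example distributions here are assumptions for illustration):

```python
import math
from itertools import product

def kl_divergence(p, q):
    """KL(p, q) = sum over x of p(x) * log(p(x) / q(x)), for discrete p, q."""
    return sum(px * math.log(px / q[x]) for x, px in p.items() if px > 0)

# p: a joint over binary (A, B) where A and B are independent,
# as in a Bayesian network with no edge between them.
p = {(a, b): pa * pb
     for (a, pa), (b, pb) in product([(0, 0.3), (1, 0.7)],
                                     [(0, 0.4), (1, 0.6)])}
# q: a joint where A and B are correlated, as in a network with an edge A -> B.
q = {(0, 0): 0.25, (0, 1): 0.05, (1, 0): 0.15, (1, 1): 0.55}

print(kl_divergence(p, q))
```

This direct sum has a cost exponential in the number of variables, which is why a deletion (variable elimination) algorithm over the networks' marginals is needed in high dimensions.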


2021 ◽  
Author(s):  
Cédric Laruelle ◽  
Romain Boman ◽  
Luc Papeleux ◽  
Jean-Philippe Ponthot

Modeling of Additive Manufacturing (AM) at the part scale involves non-linear thermo-mechanical simulations. Such a process also imposes a very fine discretization and requires altering the geometry of the models during the simulations to model the addition of matter, which is a computational challenge in itself. The first focus of this work is the addition of an additive manufacturing module to the fully implicit in-house Finite Element code Metafor [1], developed at the University of Liège. The implemented method for activating elements and for activating and deactivating boundary conditions during a simulation is adapted from the element deletion algorithm implemented in Metafor for crack propagation [2]. This algorithm is modified to allow the activation of elements based on a user-specified criterion (e.g. a geometrical or thermal criterion). The second objective of this work is to improve the efficiency of the AM simulations, in particular by using a dynamic remeshing strategy to reduce their computational cost. Remeshing is done using non-conformal meshes, where hanging nodes are handled via Lagrange multiplier constraints. The mesh data transfer used after remeshing is based on projection methods involving finite volumes [3]. The presented model is then compared against a 2D numerical simulation of Directed Energy Deposition of a High-Speed Steel thick deposit from the literature [4].
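The geometric activation criterion mentioned above can be sketched independently of Metafor: an inactive element is "born" once it falls within the deposited region, e.g. when its centroid lies at or below the current deposition front. The following is a minimal illustration under that assumption (element layout, names, and the centroid rule are hypothetical, not Metafor's actual implementation):

```python
def newly_active(elements, active, deposition_height):
    """Return ids of inactive elements whose centroid height lies at or
    below the current deposition front (a purely geometric criterion).
    `elements` maps an element id to the heights of its nodes."""
    born = set()
    for eid, node_heights in elements.items():
        if eid in active:
            continue
        centroid = sum(node_heights) / len(node_heights)
        if centroid <= deposition_height:
            born.add(eid)
    return born

# Four stacked elements described by their node heights; the deposition
# front has reached y = 1.0, so the second element should activate.
elements = {0: (0.0, 0.5), 1: (0.5, 1.0), 2: (1.0, 1.5), 3: (1.5, 2.0)}
active = {0}
active |= newly_active(elements, active, 1.0)
print(sorted(active))  # [0, 1]
```

In a real thermo-mechanical AM simulation the same hook could instead test a thermal criterion (e.g. a node temperature history), which is exactly the flexibility the user-specified criterion provides.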


Author(s):  
Andrés Cano ◽  
Manuel Gómez-Olmedo ◽  
Serafín Moral ◽  
Serafín Moral-García

Given a set of uncertain discrete variables with a joint probability distribution and a set of observations for some of them, the most probable explanation is a configuration of values for the non-observed variables that maximizes the conditional probability of these variables given the observations. This is a hard problem which can be solved by a deletion algorithm with max-marginalization, with a complexity similar to that of computing conditional probabilities. When this approach is unfeasible, an alternative is to carry out an approximate deletion algorithm, which can be used to guide the search for the most probable explanation using A* or branch and bound (the approximate+search approach). The most common approximation procedure has been the mini-bucket approach. This paper shows that representing potentials as probability trees, with pruning of branches with similar values, can improve the performance of this procedure. This is corroborated with an experimental study in which computation times are compared using randomly generated Bayesian networks and benchmark networks from UAI competitions.
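The core operation of a deletion algorithm with max-marginalization is eliminating a variable from a potential by taking the maximum (rather than the sum) over its values. A minimal table-based sketch of that single step (the tabular representation here is the one probability trees are meant to improve on; the function and variable names are illustrative):

```python
def max_marginalize(potential, variables, out):
    """Eliminate variable `out` from a potential by maximizing over its
    values. `potential` maps full assignments, ordered as `variables`,
    to probabilities; the result is a potential over the remaining variables."""
    idx = variables.index(out)
    kept = [v for v in variables if v != out]
    result = {}
    for assignment, value in potential.items():
        key = tuple(a for i, a in enumerate(assignment) if i != idx)
        if key not in result or value > result[key]:
            result[key] = value
    return kept, result

# A potential over binary variables (A, B); eliminate B by max-marginalization.
phi = {(0, 0): 0.1, (0, 1): 0.3, (1, 0): 0.4, (1, 1): 0.2}
vars_, psi = max_marginalize(phi, ["A", "B"], "B")
print(vars_, psi)  # ['A'] {(0,): 0.3, (1,): 0.4}
```

Applying this step along an elimination ordering, and recording which value of each eliminated variable achieved the maximum, yields the most probable explanation; the table grows exponentially with the clique size, which is what the mini-bucket and pruned-tree approximations address.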


Schwa deletion is an important factor in grapheme-to-phoneme conversion. In Hindi, each consonant carries a weak vowel, called the inherent schwa, which is deleted in some cases in pronunciation. The written and spoken forms of Indian languages therefore differ, and the schwa plays an important role in the spoken form: the deletion or retention of this weak vowel determines how words are pronounced, with word morphology being the main factor affecting pronunciation. This paper describes schwa handling, covering both deletion and retention rules, and develops a schwa deletion algorithm based on these rules. The algorithm has been tested on 6000 high-frequency words, achieving an accuracy of up to 80%. Based on this result, an application has been developed that provides a user interface for the text processing component of a text-to-speech system.
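The paper's full rule set is not reproduced in the abstract; the best-known single rule, however, is that a word-final inherent schwa is usually deleted in Hindi pronunciation (written "raama" is spoken "raam"). A minimal sketch of just that one rule over a phoneme list (the guard against very short words and the representation are assumptions, not the paper's algorithm):

```python
def delete_final_schwa(phonemes):
    """Apply the word-final schwa deletion rule: drop a trailing inherent
    schwa ('a') unless the word is too short to lose it. Real rule sets
    also consult morphology and consonant clusters, per the paper."""
    if len(phonemes) > 2 and phonemes[-1] == "a":
        return phonemes[:-1]
    return phonemes

# Written form of Hindi 'raama' keeps the inherent schwa; speech drops it.
print("".join(delete_final_schwa(["r", "aa", "m", "a"])))  # raam
# A short word retains its schwa.
print("".join(delete_final_schwa(["n", "a"])))  # na
```

Medial schwa deletion is considerably harder (it depends on syllable structure and morpheme boundaries), which is why rule-based systems like the one described top out around 80% word accuracy.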


2019 ◽  
Vol 11 (5) ◽  
pp. 1293 ◽  
Author(s):  
Zhiyuan Huang ◽  
Ruihua Xu ◽  
Wei Fan ◽  
Feng Zhou ◽  
Wei Liu

To further improve service quality and reduce safety risks in currently congested metro systems during peak hours, this paper presents a load balancing (LB) approach so that available capacity can be utilized more effectively in order to alleviate peak-hour congestion. A set of under-utilized yet effective alternative routes was searched using a deletion algorithm (DA) in order to share the passenger loads on overcrowded metro line segments. An optimization model was constructed based on an improved route generalized time utility function that considers the penalties of both in-vehicle congestion and transfers. A detailed load balancing solution was generated based on the proposed algorithm. A real-world example of three overloaded metro line segments in the Shanghai metro network was selected and used to verify the feasibility and validity of the developed load balancing method. The results show that the load balancing method can effectively reduce overcrowding. Finally, two prospective inducing schemes are discussed to help implement the load balancing solution in the actual metro system in an efficient and effective manner.
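A deletion algorithm for alternative routes typically works by finding a shortest route, deleting an overloaded segment from the network, and searching again, so the new route must avoid that segment. A toy sketch of that idea on an unweighted graph (the network, station names, and BFS shortest path are illustrative assumptions; the paper's version ranks routes by a generalized time utility rather than hop count):

```python
from collections import deque

def shortest_path(adj, src, dst, banned=frozenset()):
    """BFS shortest path on an undirected graph, ignoring edges in `banned`."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:  # reconstruct the route by walking predecessors back
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev and (u, v) not in banned and (v, u) not in banned:
                prev[v] = u
                queue.append(v)
    return None

# Toy metro network: the direct route A-B-D is overloaded on segment B-D,
# so delete that segment and search again for an alternative route.
adj = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
main = shortest_path(adj, "A", "D")
alt = shortest_path(adj, "A", "D", banned={("B", "D")})
print(main, alt)  # ['A', 'B', 'D'] ['A', 'C', 'D']
```

Repeating the delete-and-search step for each overcrowded segment yields the candidate set of alternative routes that the optimization model then loads passengers onto.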

