Decomposition problem
Recently Published Documents

TOTAL DOCUMENTS: 119 (five years: 25)
H-INDEX: 10 (five years: 1)

2021, Vol 7 (4), pp. 1-37
Author(s): Serafino Cicerone, Mattia D'Emidio, Daniele Frigioni, Filippo Tirabassi Pascucci

The cavity decomposition problem is a computational geometry problem, arising in the context of modern electronic CAD systems, that concerns detecting the generation and propagation of electromagnetic noise in multi-layer printed circuit boards. Algorithmically speaking, the problem can be formulated so as to contain, as sub-problems, the well-known polygon schematization and polygon decomposition problems. Given a polygon P and a finite set C of directions, polygon schematization asks for a C-oriented polygon P′ with "low complexity" and "high resemblance" to P, whereas polygon decomposition asks for partitioning P into as few basic polygonal elements (e.g., triangles) as possible. In this article, we present three different solutions for the cavity decomposition problem, obtained by suitably combining existing algorithms for polygon schematization and decomposition, by considering different input parameters, and by addressing both methodological and implementation issues. Since it is difficult to compare the three solutions on a theoretical basis, we present an extensive experimental study, employing both real-world and random data, conducted to assess their performance. We rank the proposed solutions according to the results of the experimental evaluation, and provide insights on natural candidates to be adopted, in practice, as modules of modern printed circuit board design software tools, depending on the observed performance and on the different constraints on the desired output.
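To illustrate the polygon decomposition sub-problem only (this is a generic ear-clipping triangulation sketch, not any of the algorithms evaluated in the article), a minimal Python version might look as follows; the polygon is assumed simple and given in counter-clockwise order:

```python
# Minimal ear-clipping sketch: decompose a simple CCW polygon into triangles.
# Illustrative only; assumes a simple polygon in counter-clockwise order.

def cross(o, a, b):
    """2D cross product of vectors OA and OB (positive if OAB turns left)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def point_in_triangle(p, a, b, c):
    """True if p lies inside (or on the border of) CCW triangle abc."""
    return cross(a, b, p) >= 0 and cross(b, c, p) >= 0 and cross(c, a, p) >= 0

def ear_clip(polygon):
    """Decompose a simple CCW polygon (list of (x, y)) into triangles."""
    verts = list(polygon)
    triangles = []
    while len(verts) > 3:
        n = len(verts)
        for i in range(n):
            prev, cur, nxt = verts[i - 1], verts[i], verts[(i + 1) % n]
            if cross(prev, cur, nxt) <= 0:          # reflex or degenerate vertex
                continue
            excluded = {i - 1 if i > 0 else n - 1, i, (i + 1) % n}
            others = [v for j, v in enumerate(verts) if j not in excluded]
            if any(point_in_triangle(p, prev, cur, nxt) for p in others):
                continue                            # not an ear: another vertex inside
            triangles.append((prev, cur, nxt))      # clip the ear
            del verts[i]
            break
    triangles.append(tuple(verts))
    return triangles

# Example: a unit square decomposes into 2 triangles.
print(ear_clip([(0, 0), (1, 0), (1, 1), (0, 1)]))
```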


Author(s): Ali Kaveh, Mohammad Reza Seddighian, Pouya Hassani

In this paper, an automatic data clustering approach based on concepts from graph theory is presented. Several Cluster Validity Indices (CVIs) are discussed, and the Davies-Bouldin (DB) index is adopted as the objective function of the meta-heuristic algorithms. Six finite element meshes, comprising both simple and complex two- and three-dimensional types, are decomposed. Six meta-heuristic algorithms are employed to determine the optimal number of clusters and to minimize the decomposition objective. Finally, the corresponding statistical results are compared.
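As a rough illustration of the Davies-Bouldin index used as the clustering objective (lower is better: compact, well-separated clusters), a plain NumPy sketch is shown below; function and variable names are illustrative only, not taken from the paper:

```python
# Davies-Bouldin (DB) index sketch: lower values indicate more compact,
# better-separated clusters. Illustrative only.
import numpy as np

def davies_bouldin(points, labels):
    """Compute the DB index for a labelled point set."""
    clusters = np.unique(labels)
    centroids = np.array([points[labels == c].mean(axis=0) for c in clusters])
    # Mean distance of each cluster's members to its centroid (scatter).
    scatter = np.array([
        np.linalg.norm(points[labels == c] - centroids[i], axis=1).mean()
        for i, c in enumerate(clusters)
    ])
    db = 0.0
    for i in range(len(clusters)):
        # Worst-case similarity of cluster i to any other cluster j.
        ratios = [
            (scatter[i] + scatter[j]) / np.linalg.norm(centroids[i] - centroids[j])
            for j in range(len(clusters)) if j != i
        ]
        db += max(ratios)
    return db / len(clusters)

# Tiny usage example: two well-separated clusters give a small DB value.
pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
print(davies_bouldin(pts, np.array([0, 0, 1, 1])))
```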


Author(s): Vasyl Ustimenko, Oleksandr Pustovit

Multivariate cryptography (MC), together with lattice-based, hash-based, code-based, and superelliptic-curve-based cryptography, forms the list of the main directions of post-quantum cryptography. Investigations in the framework of the standardization process of the National Institute of Standards and Technology (USA) indicate that the potential of classical MC, working with nonlinear maps of bounded degree and without the use of compositions of nonlinear transformations, is very restricted. Only a special case of Rainbow-like Unbalanced Oil and Vinegar digital signatures remains for further consideration. The remaining public keys for encryption procedures are not of a multivariate nature. The paper presents large semigroups and groups of transformations of a finite affine space of dimension n with the multiple composition property. In these semigroups the composition of n transformations is computable in polynomial time. Constructions of such families are given together with effectively computable homomorphisms between members of the family. These algebraic platforms allow us to define protocols for several generators of a subsemigroup of the affine Cremona semigroup with several outputs. Security of these protocols rests on the complexity of the word decomposition problem. Finally, the presented algebraic protocols are expanded to cryptosystems of El Gamal type, which are not public-key systems.
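As a toy illustration of the asymmetry the abstract relies on (composing many affine maps is cheap, while recovering the sequence of generators from the composed map is the word decomposition problem), the following sketch composes affine maps x -> Ax + b over a small prime field. It is not the authors' construction; the modulus, dimension, and names are assumptions made for the example:

```python
# Toy illustration: composing affine maps over Z_P is polynomial-time, while
# recovering the secret generator word from the composed map is hard.
import numpy as np

P = 101  # small prime modulus, illustrative only

def compose(f, g):
    """Return the affine map f(g(x)), where f = (A, b) and g = (C, d) over Z_P."""
    A, b = f
    C, d = g
    return (A @ C % P, (A @ d + b) % P)

def apply_word(generators, word, dim):
    """Compose the generators indexed by `word`, starting from the identity map."""
    result = (np.eye(dim, dtype=int), np.zeros(dim, dtype=int))
    for i in word:
        result = compose(result, generators[i])
    return result

rng = np.random.default_rng(0)
dim = 3
gens = [(rng.integers(0, P, (dim, dim)), rng.integers(0, P, dim)) for _ in range(4)]

secret_word = [0, 2, 1, 3, 0, 2]           # private sequence of generator indices
public_map = apply_word(gens, secret_word, dim)
print(public_map[0])                        # published matrix part; recovering
                                            # secret_word from it is the hard step
```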


2021, Vol 2021, pp. 1-13
Author(s): Liangjie Ming, Yunong Zhang, Jinjin Guo, Xiao Liu, Zhonghua Li

In this paper, by employing the Zhang neural network (ZNN) method, an effective continuous-time LU decomposition (CTLUD) model is first proposed, analyzed, and investigated for solving the time-varying LU decomposition problem. Then, for the convenience of digital hardware realization, this paper proposes three discrete-time models by using the Euler, 4-instant Zhang et al. discretization (ZeaD), and 8-instant ZeaD formulas to discretize the proposed CTLUD model, respectively. Furthermore, the proposed models are used to perform the LU decomposition of three time-varying matrices with different dimensions. Results indicate that the proposed models are effective for solving the time-varying LU decomposition problem, and the 8-instant ZeaD LU decomposition model has the highest precision among the three discrete-time models.
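For reference, the static problem that the ZNN models track over time is an ordinary LU factorization of each matrix sample A(t). A minimal Doolittle-style sketch (no pivoting, so the leading principal minors are assumed nonzero; this is not the paper's ZNN model) is:

```python
# Doolittle LU sketch: factor A = L U with unit lower-triangular L and
# upper-triangular U. No pivoting; illustrative only.
import numpy as np

def lu_doolittle(A):
    """Return (L, U) with A = L @ U."""
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros_like(A, dtype=float)
    for k in range(n):
        # k-th row of U from previously computed rows and columns.
        U[k, k:] = A[k, k:] - L[k, :k] @ U[:k, k:]
        # k-th column of L below the diagonal.
        L[k + 1:, k] = (A[k + 1:, k] - L[k + 1:, :k] @ U[:k, k]) / U[k, k]
    return L, U

# Example: one 3x3 time sample; the reconstruction residual should be ~0.
A = np.array([[4.0, 3.0, 2.0], [6.0, 3.0, 1.0], [8.0, 5.0, 7.0]])
L, U = lu_doolittle(A)
print(np.abs(L @ U - A).max())
```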


2021, Vol 28 (1), pp. 6-21
Author(s): Alexander V. Korostil, Andrei V. Nikolaev

We consider the Hamiltonian decomposition problem of partitioning a regular graph into edge-disjoint Hamiltonian cycles. It is known that verifying vertex non-adjacency in the 1-skeleton of the symmetric and asymmetric traveling salesperson polytopes is an NP-complete problem. On the other hand, a sufficient condition for two vertices to be non-adjacent can be formulated as a combinatorial problem of finding a Hamiltonian decomposition of a 4-regular multigraph. We present two backtracking algorithms for verifying vertex non-adjacency in the 1-skeleton of the traveling salesperson polytope and constructing a Hamiltonian decomposition: an algorithm based on simple path extension and an algorithm based on the chain edge fixing procedure. In computational experiments on undirected multigraphs, both backtracking algorithms were outperformed by the known general variable neighborhood search heuristic. However, for directed multigraphs, the algorithm based on chain edge fixing showed results comparable with the heuristic on instances that have a solution, and better results on instances where a Hamiltonian decomposition does not exist.
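A minimal backtracking sketch of the path-extension idea (not the paper's optimized algorithms) is shown below: it searches for one Hamiltonian cycle in a 4-regular multigraph and then checks whether the leftover edges form a second one. A complete algorithm would also backtrack over the choice of the first cycle; this sketch tries only the first cycle found.

```python
# Backtracking sketch: try to split a 4-regular multigraph into two
# edge-disjoint Hamiltonian cycles. Illustrative only.
from collections import Counter

def hamiltonian_cycle(n, edges):
    """Return a Hamiltonian cycle as a list of edges, or None (backtracking)."""
    adj = Counter(edges)            # edges as (u, v) with u < v, with multiplicity

    def neighbors(u):
        for (a, b), m in adj.items():
            if m > 0 and u in (a, b):
                yield (a, b), b if u == a else a

    def extend(path, visited, used):
        u = path[-1]
        if len(path) == n:          # all vertices visited: try to close the cycle
            e = (min(u, path[0]), max(u, path[0]))
            return used + [e] if adj[e] > 0 else None
        for e, v in list(neighbors(u)):
            if v in visited:
                continue
            adj[e] -= 1             # use this edge and recurse
            result = extend(path + [v], visited | {v}, used + [e])
            adj[e] += 1             # undo on backtrack
            if result is not None:
                return result
        return None

    return extend([0], {0}, [])

def decompose(n, edges):
    """Try to split the edge multiset into two Hamiltonian cycles."""
    first = hamiltonian_cycle(n, edges)
    if first is None:
        return None
    rest = list((Counter(edges) - Counter(first)).elements())
    second = hamiltonian_cycle(n, rest)
    return (first, second) if second is not None else None

# 4-regular multigraph on 4 vertices: two copies of the cycle 0-1-2-3.
edges = [(0, 1), (1, 2), (2, 3), (0, 3)] * 2
print(decompose(4, edges))
```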


2021, pp. 1-35
Author(s): Rishabh Singh, Jose C. Principe

This letter introduces a new framework for quantifying predictive uncertainty for both data and models. It relies on projecting the data into a Gaussian reproducing kernel Hilbert space (RKHS) and transforming the data probability density function (PDF) so that the flow of its gradient is quantified as a topological potential field over the entire sample space. This enables the decomposition of the PDF gradient flow, formulated as a moment decomposition problem using operators from quantum physics, specifically Schrödinger's formulation. We experimentally show that the higher-order modes systematically cluster the different tail regions of the PDF, thereby providing unprecedented discriminative resolution of data regions having high epistemic uncertainty. In essence, this approach decomposes local realizations of the data PDF in terms of uncertainty moments. We apply this framework as a surrogate tool for predictive uncertainty quantification of point-prediction neural network models, overcoming various limitations of conventional Bayesian-based uncertainty quantification methods. Experimental comparisons with established methods illustrate the performance advantages of our framework.
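As a rough sketch of the ingredients only (a Gaussian-kernel estimate of the data PDF and its gradient field, not the paper's quantum moment decomposition), one can estimate the density and evaluate its gradient at query points; the bandwidth and names below are assumptions for the example:

```python
# Gaussian-kernel density estimate and its gradient field at query points.
# Illustrative sketch of "PDF gradient flow"; not the authors' framework.
import numpy as np

def kde_and_gradient(x, data, sigma=0.5):
    """Return the KDE p(x) and its gradient at query points x."""
    d = data.shape[1]
    diffs = x[:, None, :] - data[None, :, :]                     # (Q, N, d)
    k = np.exp(-(diffs ** 2).sum(-1) / (2 * sigma ** 2))
    k /= (2 * np.pi * sigma ** 2) ** (d / 2)                     # kernel values (Q, N)
    p = k.mean(axis=1)                                           # density estimate
    grad = -(k[:, :, None] * diffs).mean(axis=1) / sigma ** 2    # PDF gradient field
    return p, grad

# Toy data: two clusters; gradients at the queries point toward the nearer mode.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
queries = np.array([[0.5, 0.5], [2.5, 2.5]])
p, grad = kde_and_gradient(queries, data)
print(p, grad, sep="\n")
```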

