Wilkinson’s Bus: Weak Condition Numbers, with an Application to Singular Polynomial Eigenproblems

2020 ◽  
Vol 20 (6) ◽  
pp. 1439-1473
Author(s):  
Martin Lotz ◽  
Vanni Noferini

We propose a new approach to the theory of conditioning for numerical analysis problems in which both classical and stochastic perturbation theory fail to predict the observed accuracy of computed solutions. To motivate our ideas, we present examples of problems that are discontinuous at a given input, and even have infinite stochastic condition number, yet whose solution is still computed to machine precision without relying on structured algorithms. Stimulated by the failure of classical and stochastic perturbation theory to capture such phenomena, we define and analyse a weak worst-case and a weak stochastic condition number. This new theory is a more powerful predictor of the accuracy of computations than existing tools, especially when the worst-case and the expected sensitivity of a problem to perturbations of the input are not finite. We apply our analysis to the computation of simple eigenvalues of matrix polynomials, including the more difficult case of singular matrix polynomials. In addition, we show how the weak condition numbers can be estimated in practice.
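For orientation, here are the textbook worst-case and stochastic condition numbers the abstract refers to (standard definitions, not quoted from the paper); the weak variants defined in the paper relax these quantities to bounds that hold outside a small exceptional set of perturbations:

```latex
% Classical worst-case (relative) condition number of a map f at input x:
\kappa(f,x) \;=\; \lim_{\varepsilon \to 0^{+}} \;\sup_{\|\delta x\| \le \varepsilon}\;
  \frac{\|f(x+\delta x)-f(x)\| \,/\, \|f(x)\|}{\|\delta x\| \,/\, \|x\|}

% Stochastic analogue: the supremum is replaced by an expectation over
% random perturbations of norm \varepsilon. The abstract's point is that
% both quantities can be infinite while computed solutions remain accurate,
% which is the gap the weak condition numbers are designed to close.
\kappa_{\mathrm{st}}(f,x) \;=\; \lim_{\varepsilon \to 0^{+}}
  \;\mathbb{E}_{\|\delta x\| = \varepsilon}
  \left[ \frac{\|f(x+\delta x)-f(x)\| \,/\, \|f(x)\|}{\|\delta x\| \,/\, \|x\|} \right]
```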

2019 ◽  
Vol 9 (1) ◽  
pp. 1-32
Author(s):  
Vincent Roulet ◽  
Nicolas Boumal ◽  
Alexandre d’Aspremont

We show that several classical quantities controlling compressed-sensing performance directly match classical parameters controlling algorithmic complexity. We first describe linearly convergent restart schemes for first-order methods solving a broad range of compressed-sensing problems, where sharpness at the optimum controls the convergence speed. We show that for sparse recovery problems this sharpness can be written as a condition number, given by the ratio between the true signal sparsity and the largest signal size that can be recovered by the observation matrix. In a similar vein, Renegar’s condition number is a data-driven complexity measure for convex programmes, generalizing classical condition numbers for linear systems. We show that for a broad class of compressed-sensing problems, the worst-case value of this algorithmic complexity measure taken over all signals matches the restricted singular value of the observation matrix, which controls robust recovery performance. Overall, this means that in both cases a single parameter directly controls both computational complexity and recovery performance in compressed-sensing problems. Numerical experiments illustrate these points using several classical algorithms.
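To make the restart mechanism concrete, here is a minimal Python sketch: an accelerated first-order method (FISTA on a LASSO instance, standing in for the broader problem class) is rerun from its own output with its momentum reset; under sharpness at the optimum, each restart contracts the error by a constant factor. Problem sizes and parameters are illustrative, not the authors' code:

```python
import numpy as np

def fista(A, b, lam, x0, iters):
    """Accelerated proximal gradient (FISTA) for the LASSO objective
    0.5*||Ax - b||^2 + lam*||x||_1, a standard first-order method."""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        g = y - (1.0 / L) * A.T @ (A @ y - b)     # gradient step on the smooth part
        x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)              # momentum
        x, t = x_new, t_new
    return x

def restarted_fista(A, b, lam, inner_iters=100, restarts=8):
    """Restart scheme: rerun FISTA from its own output, resetting the
    momentum. When the objective is sharp at the optimum, each restart
    contracts the error by a constant factor, i.e. linear convergence."""
    x = np.zeros(A.shape[1])
    for _ in range(restarts):
        x = fista(A, b, lam, x, inner_iters)
    return x

# Tiny sparse-recovery instance (hypothetical sizes, for illustration only)
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 120))
x_true = np.zeros(120); x_true[:5] = 1.0
x_hat = restarted_fista(A, A @ x_true, lam=1e-3)
print(np.linalg.norm(x_hat - x_true))             # small recovery error
```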


2019 ◽  
Vol 6 (1) ◽  
pp. 205316801983208 ◽  
Author(s):  
Cesar Zucco ◽  
Mariana Batista ◽  
Timothy J. Power

How do political actors value different portfolios? We propose a new approach to measuring portfolio salience: analysing paired comparisons with the Bradley–Terry model. Paired-comparison data are easy to collect using surveys that are user-friendly, rapid, and inexpensive. We implement the approach with serving legislators in Brazil, a particularly difficult case for assessing portfolio salience because of its large number of cabinet positions. Our estimates of portfolio values are robust to variations in the implementation of the method. Legislators and academics hold broadly similar views of the relative worth of cabinet posts. Respondents’ valuations of portfolios deviate considerably from what objective measures such as budget, policy influence, and opportunities for patronage would predict. Substantively, we show that portfolio salience varies greatly and affects the calculation of formateur advantage and of coalescence/proportionality rule measures.
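For readers unfamiliar with the model, here is a minimal Python sketch of fitting Bradley–Terry worths from paired-comparison counts via the classical MM (Zermelo) iteration; the counts below are made up, not the Brazilian survey data:

```python
import numpy as np

def bradley_terry(wins, iters=200):
    """Fit Bradley-Terry worths by the classical MM (Zermelo) iteration.
    wins[i, j] = number of times item i was preferred to item j;
    the fitted worths p satisfy P(i beats j) = p[i] / (p[i] + p[j])."""
    n = wins.shape[0]
    games = wins + wins.T                      # total comparisons per pair
    p = np.ones(n)
    for _ in range(iters):
        for i in range(n):
            denom = sum(games[i, j] / (p[i] + p[j]) for j in range(n) if j != i)
            p[i] = wins[i].sum() / denom       # MM update for item i
        p /= p.sum()                           # fix the scale (identifiability)
    return p

# Toy pairwise-preference counts for three hypothetical portfolios
wins = np.array([[0, 8, 9],
                 [2, 0, 6],
                 [1, 4, 0]], dtype=float)
print(bradley_terry(wins))                     # estimated relative worths
```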


2005 ◽  
Vol 128 (4) ◽  
pp. 874-883 ◽  
Author(s):  
Mian Li ◽  
Shapour Azarm ◽  
Art Boyars

We present a deterministic, non-gradient-based approach that uses robustness measures in multi-objective optimization problems where uncontrollable parameter variations cause variation in the objective and constraint values. The approach is applicable to cases with objective and constraint functions that are discontinuous with respect to the uncontrollable parameters, and can be used for objective robustness, feasibility robustness, or both together. In our approach, the known parameter tolerance region maps into sensitivity regions in the objective and constraint spaces. The robustness measures are indices calculated, using an optimizer, from the sizes of the acceptable objective and constraint variation regions and from worst-case estimates of the sensitivity regions’ sizes, resulting in an outer-inner structure. Two examples compare the new approach with a similar published approach that is applicable only to continuous functions. Both approaches work well with continuous functions; for discontinuous functions the new approach gives solutions near the nominal Pareto front, whereas the earlier approach does not.
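A minimal Python sketch of the inner, worst-case step under stated simplifications: the worst case over the parameter tolerance region is estimated by gradient-free sampling (the paper uses an optimizer), and the robustness index compares it to the acceptable variation; all names and the toy objective are illustrative:

```python
import numpy as np

def robustness_index(f, x, p_nom, p_tol, accept, n_samples=2000, seed=0):
    """Worst-case robustness index for one objective at design x: sample
    the parameter tolerance box (gradient-free, so discontinuous f is
    fine), estimate the size of the objective sensitivity region, and
    divide by the acceptable variation. An index <= 1 flags the design
    as robust. (Sketch of the inner step only; the paper estimates the
    worst case with an optimizer rather than plain sampling.)"""
    rng = np.random.default_rng(seed)
    f_nom = f(x, p_nom)
    worst = 0.0
    for _ in range(n_samples):
        p = p_nom + rng.uniform(-p_tol, p_tol)    # point in the tolerance region
        worst = max(worst, abs(f(x, p) - f_nom))  # worst-case objective variation
    return worst / accept

# Toy objective that is discontinuous in the uncontrollable parameter p[1]
f = lambda x, p: (x - p[0]) ** 2 + (1.0 if p[1] > 0.5 else 0.0)
print(robustness_index(f, x=1.0, p_nom=np.array([0.0, 0.3]),
                       p_tol=np.array([0.1, 0.3]), accept=2.0))
```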


Algorithms ◽  
2020 ◽  
Vol 13 (8) ◽  
pp. 183
Author(s):  
Canh V. Pham ◽  
Dung K. T. Ha ◽  
Quang C. Vu ◽  
Anh N. Su ◽  
Huan X. Hoang

The Influence Maximization (IM) problem, which seeks a set of k nodes (the seed set) in a social network that, after the propagation process, maximizes the number of influenced nodes, is an important problem in information propagation and social network analysis. However, previous studies ignored priority constraints, which leads to inefficient seed selections: in real campaigns, companies or organizations often prioritize influencing specific potential users during influence diffusion. Departing from these existing works, we propose a new problem, Influence Maximization with Priority (IMP): find a seed set of k nodes in a social network that influences the largest number of nodes, subject to the constraint that the influence spread to a specific set of nodes U (the priority set) is at least a given threshold T. We show that the problem is NP-hard under the well-known Independent Cascade (IC) model. We propose two efficient algorithms with provable theoretical guarantees, Integrated Greedy (IG) and Integrated Greedy Sampling (IGS). IG provides a (1−(1−1/k)^t)-approximate solution, where t ≥ 1 is an outcome of the algorithm; the worst-case approximation ratio, attained at t = 1, is 1/k. IGS is an efficient randomized approximation algorithm based on sampling that provides a (1−(1−1/k)^t−ϵ)-approximate solution with probability at least 1−δ, where ϵ > 0 and δ ∈ (0,1) are input parameters of the problem. We conduct extensive experiments on various real networks comparing our IGS algorithm to state-of-the-art IM algorithms. The results indicate that our algorithm finds solutions with better influence on the priority sets, reaching approximately two to ten times the threshold T, while its running time, memory usage, and overall influence spread remain competitive with the other methods.
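To make the priority-constrained greedy idea concrete, here is a minimal Python sketch under the IC model. The graph, the two-phase selection rule, and all names are illustrative; the paper's IG and IGS algorithms, not this sketch, carry the stated approximation guarantees:

```python
import random

def simulate_ic(graph, seeds):
    """One Independent Cascade run; graph[u] = list of (v, p) edges.
    Returns the set of nodes activated by the seed set."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        u = frontier.pop()
        for v, p in graph.get(u, []):
            if v not in active and random.random() < p:
                active.add(v)
                frontier.append(v)
    return active

def greedy_imp(graph, nodes, k, priority, threshold, runs=500):
    """Greedy sketch for IMP: while the expected influence on the priority
    set U is below the threshold T, pick the node that best improves it;
    afterwards pick nodes maximizing overall expected spread."""
    def expected(seeds, target=None):
        hits = [simulate_ic(graph, seeds) for _ in range(runs)]
        sets = hits if target is None else [h & target for h in hits]
        return sum(len(s) for s in sets) / runs

    seeds = set()
    while len(seeds) < k:
        below = expected(seeds, priority) < threshold
        gain = (lambda v: expected(seeds | {v}, priority)) if below \
               else (lambda v: expected(seeds | {v}))
        seeds.add(max(nodes - seeds, key=gain))   # best marginal node
    return seeds

# Toy usage on a four-node graph with priority node 4 (hypothetical data)
graph = {1: [(2, 0.6), (3, 0.6)], 2: [(4, 0.5)], 3: [(4, 0.5)]}
print(greedy_imp(graph, nodes={1, 2, 3, 4}, k=2, priority={4}, threshold=0.5))
```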


2005 ◽  
Vol 128 (1) ◽  
pp. 199-206 ◽  
Author(s):  
J. P. Merlet

Although the concepts of the Jacobian matrix, manipulability, and condition number have existed since the very beginning of robotics, their real significance is not always well understood. In this paper we revisit these concepts for parallel robots as accuracy indices in view of optimal design. We first show that the usual Jacobian matrix, derived from the input-output velocity equations, may not be sufficient to analyze the positioning errors of the platform. We then examine the concept of manipulability and show that its classical interpretation is erroneous. We then consider various common local dexterity indices, most of which are based on the condition number of the Jacobian matrix. We emphasize that even for a given robot in a particular pose there is a variety of condition numbers, and that their values are coherent neither with one another nor with what one may expect from an accuracy index. Global conditioning indices are then examined. Apart from being built on local accuracy indices that are themselves questionable, their calculation poses a computational problem that is neglected most of the time. Finally, we examine what other indices may be used for optimal design and show that their calculation is most challenging.
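A small Python illustration of the abstract's central caveat, using a made-up Jacobian: the condition number is norm-dependent, so "the" condition number of a pose is not a single number:

```python
import numpy as np

# Even for one fixed Jacobian, the condition number depends on the chosen
# norm, so dexterity indices built on different norms disagree. Toy 3x3
# Jacobian; a real parallel-robot Jacobian also mixes units (translation
# vs. rotation), making the values even less comparable.
J = np.array([[1.0, 0.8, 0.1],
              [0.0, 2.0, 0.3],
              [0.2, 0.1, 5.0]])
for p in (2, "fro", np.inf):
    print(f"condition number ({p}-norm): {np.linalg.cond(J, p):.3f}")
```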


Geophysics ◽  
2011 ◽  
Vol 76 (2) ◽  
pp. F123-F137 ◽  
Author(s):  
M. Zaslavsky ◽  
V. Druskin ◽  
S. Davydycheva ◽  
L. Knizhnerman ◽  
A. Abubakar ◽  
...  

Modeling controlled-source electromagnetic (CSEM) and single-well and crosswell electromagnetic (EM) configurations requires fine gridding to capture the 3D nature of the geometries encountered in these applications: geological structures with complicated shapes and large conductivity contrasts, such as seafloor bathymetry, land topography, and complex targets. Such problems significantly increase the computational cost of conventional finite-difference (FD) approaches, mainly because of the large condition numbers of the corresponding linear systems. To handle these problems, we employ a volume integral equation (IE) approach to arrive at an effective preconditioning operator for our FD solver. We refer to this new hybrid algorithm as the finite-difference integral equation method (FDIE). The FDIE preconditioning operator is divergence free and is based on a magnetic field formulation. Similar to the Lippmann-Schwinger IE method, this scheme allows us to use a background elimination approach to reduce the computational domain, resulting in a smaller stiffness matrix. Furthermore, it yields a linear system whose condition number is close to that of the conventional Lippmann-Schwinger IE approach, significantly reducing the condition number of the stiffness matrix of the FD solver. Moreover, the FD framework allows us to replace convolution operations with the inversion of banded matrices, which significantly reduces the computational cost per iteration compared to standard IE approaches. Also, well-established FD homogenization and optimal gridding algorithms make the FDIE better suited to discretizing strongly inhomogeneous media. Numerical studies illustrate the accuracy and effectiveness of the presented solver for CSEM, single-well, and crosswell EM applications.
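As a generic illustration of why preconditioning matters here (not the FDIE operator itself, which is IE-based and divergence free), the following Python sketch solves a toy 1D FD-style system with a large conductivity contrast, with and without an off-the-shelf ILU preconditioner standing in for "a good preconditioner":

```python
import numpy as np
from scipy.sparse import diags, csc_matrix
from scipy.sparse.linalg import gmres, spilu, LinearOperator

# Variable-coefficient 1D Laplacian with 4 decades of conductivity
# contrast: an ill-conditioned stand-in for an FD stiffness matrix.
n = 500
c = np.logspace(0, 4, n + 1)                 # interface conductivities
A = csc_matrix(diags([-c[1:-1], c[:-1] + c[1:], -c[1:-1]], [-1, 0, 1]))
b = np.ones(n)

for label, M in [("unpreconditioned", None),
                 ("ILU-preconditioned",
                  LinearOperator((n, n), matvec=spilu(A).solve))]:
    residuals = []
    gmres(A, b, M=M, maxiter=1000,
          callback=lambda rk: residuals.append(rk))
    print(f"{label}: {len(residuals)} GMRES iterations")
```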


2018 ◽  
Vol 25 (3) ◽  
pp. 251-256
Author(s):  
Sergey V. Morzhov ◽  
Mikhail A. Nikitinskiy

In this paper, the authors analyze their PreFirewall network application for the Floodlight software-defined network (SDN) controller. The application filters rules added to the firewall module of the Floodlight SDN controller in order to prevent anomalies among them. The rule-filtering method determines whether adding a new rule would cause an anomaly with already added rules; if an anomaly is detected while adding a new rule, PreFirewall must resolve it and report the detection. The PreFirewall application passed a number of tests. Stress testing showed that, with PreFirewall, the time to add a new rule grows substantially with the number of previously processed rules. Analysis of PreFirewall showed that adding a rule (the most frequent operation) requires, in the worst case, comparing it with all existing rules, which are stored as a two-dimensional array. The operation of adding a new rule is thus the most time-consuming and has the greatest impact on the application's performance, increasing its response time. A possible way of solving this problem is to choose a data structure for storing the rules in which adding a new rule is cheap. After analyzing the structure of the policy rules for the Floodlight SDN controller, the authors concluded that a tree is the most suitable data structure for storing them: it optimizes the memory used for the rules and, more importantly, it achieves constant complexity for the operation of adding a new rule, thereby solving PreFirewall's performance problem.
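A minimal Python sketch of the proposed storage idea, with hypothetical rule fields: keying rules by their match fields in a fixed-depth tree makes insertion cost independent of the number of stored rules, unlike a linear scan over a flat array. (Real anomaly detection must also handle wildcards and overlapping matches, which this toy exact-match check does not; it is not the PreFirewall implementation.)

```python
class RuleTree:
    """Fixed-depth tree keyed by rule match fields: inserting a rule
    costs O(number of fields), independent of how many rules are stored,
    whereas a flat array forces an O(n) anomaly scan per insertion."""

    def __init__(self):
        self.root = {}

    def insert(self, rule, action):
        """rule: tuple of field values, e.g. (src_ip, dst_ip, port).
        Returns True if the new rule conflicts with a stored one."""
        node = self.root
        for field in rule:
            node = node.setdefault(field, {})  # walk/extend one level per field
        conflict = "action" in node and node["action"] != action
        node["action"] = action                # store (or overwrite) the action
        return conflict

tree = RuleTree()
print(tree.insert(("10.0.0.1", "10.0.0.2", 80), "ALLOW"))  # False: no anomaly
print(tree.insert(("10.0.0.1", "10.0.0.2", 80), "DENY")) # True: conflicting rule
```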

