Improving Information Centrality of a Node in Complex Networks by Adding Edges

Author(s):  
Liren Shan ◽  
Yuhao Yi ◽  
Zhongzhi Zhang

The problem of increasing the centrality of a network node arises in many practical applications. In this paper, we study the optimization problem of maximizing the information centrality I_v of a given node v in a network with n nodes and m edges by creating k new edges incident to v. Since I_v is the reciprocal of the sum R_v of resistance distances between v and all other nodes, we instead consider the problem of minimizing R_v by adding k new edges linked to v. We show that the objective function is monotone and supermodular. We provide a simple greedy algorithm with approximation factor (1 − 1/e) and O(n^3) running time. To speed up the computation, we also present an algorithm that computes a (1 − 1/e − ε)-approximation of the resistance distance R_v after iteratively adding k edges, with running time Õ(mkε^-2) for any ε > 0, where the Õ(·) notation suppresses poly(log n) factors. We experimentally demonstrate the effectiveness and efficiency of the proposed algorithms.
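
As a concrete illustration of the greedy scheme, the sketch below implements a naive variant with NetworkX and NumPy: R_v is read off the Laplacian pseudoinverse, and each of the k rounds adds the edge incident to v that minimizes the resulting R_v. This is a minimal reading of the algorithm; the paper's stated running times rely on rank-1 updates and fast approximate solvers not shown here.

```python
import networkx as nx
import numpy as np

def sum_resistance(G, v):
    """R_v: sum of effective resistances between v and all other nodes,
    computed from the Moore-Penrose pseudoinverse of the Laplacian."""
    nodes = list(G.nodes())
    L = nx.laplacian_matrix(G, nodelist=nodes).toarray().astype(float)
    Lp = np.linalg.pinv(L)
    i = nodes.index(v)
    return sum(Lp[i, i] + Lp[j, j] - 2 * Lp[i, j]
               for j in range(len(nodes)) if j != i)

def greedy_add_edges(G, v, k):
    """Greedily add k edges incident to v, each minimizing the new R_v."""
    G = G.copy()
    added = []
    for _ in range(k):
        candidates = [u for u in G.nodes() if u != v and not G.has_edge(v, u)]
        if not candidates:
            break
        def r_after(u):                  # R_v if edge (v, u) were added
            H = G.copy()
            H.add_edge(v, u)
            return sum_resistance(H, v)
        best = min(candidates, key=r_after)
        G.add_edge(v, best)
        added.append(best)
    return added

# usage: an endpoint of a path graph gains two shortcut edges
print(greedy_add_edges(nx.path_graph(8), 0, 2))
```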

Author(s):  
Yi Zhang ◽  
Miaomiao Li ◽  
Siwei Wang ◽  
Sisi Dai ◽  
Lei Luo ◽  
...  

Gaussian mixture model (GMM) clustering has been extensively studied due to its effectiveness and efficiency. Though it demonstrates promising performance in various applications, it cannot effectively handle absent features in the data, which are not uncommon in practice. In this article, unlike existing approaches that first impute the missing values and then perform GMM clustering on the imputed data, we propose to integrate imputation and GMM clustering into a unified learning procedure. Specifically, the missing data are filled using the result of GMM clustering, and the imputed data are then used for GMM clustering. These two steps alternately negotiate with each other to reach an optimum. In this way, the imputed data can best serve GMM clustering. A two-step alternating algorithm with proven convergence is carefully designed to solve the resulting optimization problem. Extensive experiments have been conducted on eight UCI benchmark datasets, and the results validate the effectiveness of the proposed algorithm.
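
A simplified sketch of the alternating procedure, built on scikit-learn's GaussianMixture: missing entries (NaN) are first filled with column means, then each round refits the GMM and re-imputes every missing entry with its responsibility-weighted mixture of component means. The paper's unified objective is richer than this; the sketch only shows the alternation skeleton.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_cluster_impute(X, n_components=3, n_rounds=10, seed=0):
    """Alternate GMM fitting and imputation on data with NaN entries."""
    X = X.copy()
    miss = np.isnan(X)
    X[miss] = np.take(np.nanmean(X, axis=0), np.where(miss)[1])  # initial fill
    for _ in range(n_rounds):
        gmm = GaussianMixture(n_components=n_components,
                              random_state=seed).fit(X)
        resp = gmm.predict_proba(X)          # soft cluster assignments
        filled = resp @ gmm.means_           # responsibility-weighted means
        X[miss] = filled[miss]               # re-impute from the clustering
    return X, gmm.predict(X)

# usage: a toy dataset with one missing entry
X = np.array([[0.0, 0.1], [0.2, np.nan], [5.0, 5.1], [5.2, 4.9]])
X_imputed, labels = gmm_cluster_impute(X, n_components=2)
```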


2013 ◽  
Vol 23 (1) ◽  
pp. 43-57 ◽  
Author(s):  
Aleksandar Kartelj

In this paper, the electromagnetism (EM) metaheuristic is used for solving the NP-hard strong minimum energy topology problem (SMETP). The objective function is adapted to the problem so that it effectively prevents infeasible solutions. The proposed EM algorithm uses an efficient local search to reduce the overall running time. This approach is tested on two sets of randomly generated symmetric and asymmetric instances. EM reaches all known optimal solutions for these instances, and the solutions are obtained in reasonable running time even for problem instances of higher dimension.
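
For readers unfamiliar with the EM metaheuristic, here is a generic continuous sketch of its attraction-repulsion mechanism: better points attract, worse points repel, with charges derived from objective values. It is not the paper's SMETP-specific encoding or local search; treat the charge and step formulas as one common variant.

```python
import numpy as np

def em_metaheuristic(f, lb, ub, pop=20, iters=200, seed=0):
    """Attraction-repulsion EM sketch minimizing f over the box [lb, ub]."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    X = rng.uniform(lb, ub, size=(pop, lb.size))
    for _ in range(iters):
        vals = np.apply_along_axis(f, 1, X)
        ibest = int(np.argmin(vals))
        spread = max(vals.max() - vals.min(), 1e-12)
        q = np.exp(-lb.size * (vals - vals.min()) / spread)  # point charges
        F = np.zeros_like(X)
        for i in range(pop):
            for j in range(pop):
                if i == j:
                    continue
                diff = X[j] - X[i]
                coef = q[i] * q[j] / (diff @ diff + 1e-12)
                # attraction toward better points, repulsion from worse
                F[i] += coef * diff if vals[j] < vals[i] else -coef * diff
        step = rng.uniform(size=(pop, 1))
        norms = np.maximum(np.linalg.norm(F, axis=1, keepdims=True), 1e-12)
        moved = np.clip(X + step * F / norms, lb, ub)
        moved[ibest] = X[ibest]              # keep the incumbent unchanged
        X = moved
    vals = np.apply_along_axis(f, 1, X)
    return X[np.argmin(vals)], float(vals.min())

# usage: 5-dimensional sphere function
x, fx = em_metaheuristic(lambda z: float(z @ z), [-5.0] * 5, [5.0] * 5)
```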


2018 ◽  
Vol 28 (2) ◽  
pp. 153-169 ◽  
Author(s):  
Kayo Gonçalves-E-Silva ◽  
Daniel Aloise ◽  
Samuel Xavier-De-Souza ◽  
Nenad Mladenovic

The Nelder-Mead method (NM) for solving continuous non-linear optimization problems is probably the most cited and most used method in the optimization literature and in practical applications. It belongs to the direct search methods, which use neither first- nor second-order derivatives. The popularity of NM rests on its simplicity. In this paper we propose an even simpler algorithm for larger instances that follows the NM idea. We call it Simplified NM (SNM): instead of generating all n + 1 simplex points in R^n, we perform the search using just q + 1 vertices, where q is usually much smaller than n. Although the results cannot be better than those obtained with all n + 1 points as in NM, the significant speed-up allows SNM to be run many times from different starting solutions, usually yielding better results than NM within the same CPU time. Computational analysis is performed on 10 classical convex and non-convex instances, where the number of variables n can be arbitrarily large. The results show that SNM is more effective than the original NM, confirming that the less-is-more approach (LIMA) yields good results when solving continuous optimization problems.
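
One plausible way to realize the q + 1-vertex idea is sketched below: repeatedly run standard Nelder-Mead on a random q-dimensional coordinate subspace of R^n, keeping the incumbent between restarts. This is an illustrative reading, not the authors' exact SNM; SciPy's Nelder-Mead serves as the inner solver.

```python
import numpy as np
from scipy.optimize import minimize

def snm(f, x0, q=5, restarts=30, seed=0):
    """Run Nelder-Mead on random q-dimensional coordinate subspaces,
    keeping the best point found across restarts."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    fx = f(x)
    for _ in range(restarts):
        idx = rng.choice(x.size, size=min(q, x.size), replace=False)
        def on_subspace(y):              # f with only coordinates idx free
            z = x.copy()
            z[idx] = y
            return f(z)
        res = minimize(on_subspace, x[idx], method="Nelder-Mead")
        if res.fun < fx:
            x[idx] = res.x
            fx = res.fun
    return x, fx

# usage: a 100-variable convex test function
x, fx = snm(lambda z: float(np.sum((z - 1.0) ** 2)), np.zeros(100))
```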


2021 ◽  
Vol 13 (8) ◽  
pp. 204
Author(s):  
Oscar Rojas ◽  
Veronica Gil-Costa ◽  
Mauricio Marin

Web search engines are built from components capable of processing large numbers of user queries per second in a distributed way. Among them, the index service computes the top-k documents that best match each incoming query by means of a document ranking operation. To achieve high performance, dynamic pruning techniques such as the WAND and BM-WAND algorithms are used to avoid fully processing all of the documents related to a query during the ranking operation. Additionally, the index service distributes the ranking operations among clusters of processors, and within each processor multi-threading is applied to speed up query resolution. In this scenario, a query running time prediction algorithm has practical applications in the efficient assignment of processors and threads to incoming queries. We propose a prediction algorithm for the WAND and BM-WAND algorithms. Our proposal applies the discrete Fourier transform (DFT) to represent key features affecting query running time, and the resulting vectors are used to train a feed-forward neural network with back-propagation. We experimentally show that our proposal achieves accurate prediction results while significantly reducing execution time and memory consumption compared with an alternative prediction algorithm.
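
A minimal sketch of the prediction pipeline under stated assumptions: each query is represented by the magnitudes of the leading DFT coefficients of a per-query feature series (posting-list length per term is a stand-in here, not necessarily the paper's feature set), and scikit-learn's MLPRegressor plays the feed-forward network trained with back-propagation. The training data below is synthetic and purely illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def dft_features(series, n_coeffs=8):
    """Magnitudes of the leading DFT coefficients of a per-query series."""
    spectrum = np.fft.rfft(np.asarray(series, float), n=2 * n_coeffs)
    return np.abs(spectrum)[:n_coeffs]

# hypothetical training set: per-term posting-list lengths and a synthetic
# running time per query (purely illustrative, not real engine data)
rng = np.random.default_rng(0)
series_list = [rng.integers(1, 10_000, size=int(rng.integers(2, 12)))
               for _ in range(500)]
times = np.array([s.sum() * 1e-6 + rng.normal(0.0, 1e-3) for s in series_list])

X = np.vstack([dft_features(s) for s in series_list])
net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
net.fit(X, times)                        # back-propagation training
print(net.predict(X[:3]), times[:3])
```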


10.29007/2k64 ◽  
2018 ◽  
Author(s):  
Pat Prodanovic ◽  
Cedric Goeury ◽  
Fabrice Zaoui ◽  
Riadh Ata ◽  
Jacques Fontaine ◽  
...  

This paper presents a practical methodology developed for shape optimization studies of hydraulic structures using environmental numerical modelling codes. The methodology starts by defining the optimization problem and identifying relevant problem constraints. The design variables in shape optimization studies are the configuration of structures (such as the length or spacing of groins, or the orientation and layout of breakwaters) whose optimal configuration is not known a priori. The optimization problem is solved numerically by coupling an optimization algorithm to a numerical model. The coupled system is able to define, test and evaluate a multitude of new shapes, which are internally generated and then simulated using the numerical model. The developed methodology is tested on the optimum design of a fish passage, where the design variables are the length and position of the slots. An objective function is defined in which a target is specified and the numerical optimizer is asked to retrieve the target solution; this definition of the objective function is used to validate the developed tool chain. This work uses the numerical model TELEMAC-2D from the TELEMAC-MASCARET suite of numerical solvers for the shallow water equations, coupled with various numerical optimization algorithms available in the literature.
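
The target-matching validation described above can be mimicked with any black-box optimizer; the sketch below couples a stand-in simulator (a toy function in place of a TELEMAC-2D run) to SciPy's differential evolution. run_model, the bounds, and the target are all hypothetical; only the tool-chain pattern (simulate, compare to target, optimize) follows the paper.

```python
import numpy as np
from scipy.optimize import differential_evolution

def run_model(slot_length, slot_position):
    """Stand-in for a TELEMAC-2D simulation returning one scalar output
    of interest; hypothetical, chosen only so the example runs fast."""
    return 1.5 * np.exp(-((slot_length - 2.0) ** 2
                          + (slot_position - 5.0) ** 2) / 4.0)

TARGET = run_model(2.0, 5.0)   # known target: the optimizer should recover it

def objective(x):
    """Squared deviation of the simulated output from the target."""
    return (run_model(*x) - TARGET) ** 2

bounds = [(0.5, 4.0),          # slot length (m), hypothetical range
          (0.0, 10.0)]         # slot position along the passage (m)
result = differential_evolution(objective, bounds, seed=0)
print(result.x, result.fun)    # expect x near (2.0, 5.0) and fun near 0
```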


Author(s):  
Jing Tang ◽  
Xueyan Tang ◽  
Andrew Lim ◽  
Kai Han ◽  
Chongshou Li ◽  
...  

Monotone submodular maximization with a knapsack constraint is NP-hard. Various approximation algorithms have been devised to address this optimization problem. In this paper, we revisit the widely known modified greedy algorithm. First, we show that this algorithm can achieve an approximation factor of 0.405, which significantly improves the known factors of 0.357 given by Wolsey and (1 − 1/e)/2 ≈ 0.316 given by Khuller et al. More importantly, our analysis closes a gap in Khuller et al.'s proof of the extensively cited approximation factor of (1 − 1/√e) ≈ 0.393, clarifying a long-standing misconception on this issue. Second, we enhance the modified greedy algorithm to derive a data-dependent upper bound on the optimum. We empirically demonstrate the tightness of our upper bound with a real-world application. The bound enables us to obtain a data-dependent ratio, typically much higher than 0.405, between the solution value of the modified greedy algorithm and the optimum. It can also be used to significantly improve the efficiency of algorithms such as branch and bound.
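
The modified greedy algorithm discussed here is standard: grow the solution by marginal gain per unit cost while the budget allows, then return the better of the greedy set and the best feasible single element. A compact sketch with a weighted-coverage example:

```python
def modified_greedy(elements, f, cost, budget):
    """Cost-benefit greedy under a knapsack budget, then return the better
    of the greedy set and the best feasible single element."""
    S, spent = [], 0.0
    remaining = list(elements)
    while remaining:
        gain = lambda e: (f(S + [e]) - f(S)) / cost(e)   # marginal gain rate
        e = max(remaining, key=gain)
        remaining.remove(e)
        if spent + cost(e) <= budget and f(S + [e]) > f(S):
            S.append(e)
            spent += cost(e)
    singles = [e for e in elements if cost(e) <= budget]
    best_single = max(singles, key=lambda e: f([e]), default=None)
    if best_single is not None and f([best_single]) > f(S):
        return [best_single]
    return S

# usage: weighted coverage, where f(S) is the size of the covered union
sets = {'a': {1, 2, 3}, 'b': {3, 4}, 'c': {5}, 'd': {1, 2, 3, 4, 5}}
cost = {'a': 1.0, 'b': 1.0, 'c': 0.5, 'd': 2.6}
f = lambda S: len(set().union(*(sets[e] for e in S))) if S else 0
print(modified_greedy(sets, f, cost.get, budget=2.5))   # ['a', 'c', 'b']
```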


2021 ◽  
Vol 17 (4) ◽  
pp. 1-20
Author(s):  
Serena Wang ◽  
Maya Gupta ◽  
Seungil You

Given a classifier ensemble and a dataset, many examples can be confidently and accurately classified after only a subset of the base models in the ensemble has been evaluated. Dynamically deciding to classify early can reduce both mean latency and CPU usage without harming the accuracy of the original ensemble. To achieve such gains, we propose jointly optimizing the evaluation order of the base models and the early-stopping thresholds. Our proposed objective is a combinatorial optimization problem, but we provide a greedy algorithm that achieves a 4-approximation of the optimal solution under certain assumptions, which is also the best achievable polynomial-time approximation bound. Experiments on benchmark and real-world problems show that the proposed Quit When You Can (QWYC) algorithm can speed up average evaluation time by a factor of 1.8–2.7 even on jointly trained ensembles, which are more difficult to speed up than independently or sequentially trained ensembles. QWYC's joint optimization of ordering and thresholds also outperformed previous fixed orderings in our experiments, including gradient boosted trees' ordering.
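
For intuition, here is an early-exit evaluation sketch in the spirit of QWYC, under a simplifying assumption: instead of the jointly learned per-stage thresholds, it stops as soon as conservative worst-case bounds on the remaining models' scores cannot flip the decision. The model functions and score bounds in the usage example are made up.

```python
import numpy as np

def qwyc_style_predict(models, xs, score_bounds, threshold=0.0):
    """Evaluate an ordered ensemble example by example, exiting early once
    the remaining models' worst-case scores cannot flip the decision."""
    # tail[i] = largest possible |contribution| of models i..end
    tail = np.cumsum([0.0] + list(score_bounds[::-1]))[::-1]
    preds = []
    for x in xs:
        s = 0.0
        for i, m in enumerate(models):
            s += m(x)
            if s - tail[i + 1] > threshold or s + tail[i + 1] < threshold:
                break                    # decision already determined
        preds.append(1 if s > threshold else 0)
    return preds

# usage: three toy scorers on inputs in [-1, 1], with known score bounds
models = [lambda x: 0.6 * x, lambda x: -0.3 * x, lambda x: 0.2 * x]
print(qwyc_style_predict(models, [0.9, -0.9, 0.1], [0.6, 0.3, 0.2]))
```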


1991 ◽  
Vol 15 (3-4) ◽  
pp. 357-379
Author(s):  
Tien Huynh ◽  
Leo Joskowicz ◽  
Catherine Lassez ◽  
Jean-Louis Lassez

We address the problem of building intelligent systems that reason about linear arithmetic constraints. We develop, along the lines of Logic Programming, a unifying framework based on the concept of parametric queries and a quasi-dual generalization of the classical Linear Programming optimization problem. Variable (quantifier) elimination is the key underlying operation; it provides an oracle to answer all queries and plays a role similar to Resolution in Logic Programming. We discuss three methods for variable elimination, compare their feasibility, and establish their applicability. We then address practical issues of solvability and canonical representation, as well as dynamic updates and feedback. In particular, we show how the quasi-dual formulation can be used to achieve the discriminating characteristics of the classical Fourier algorithm regarding solvability, detection of implicit equalities and, in case of unsolvability, detection of minimal unsolvable subsets. We illustrate the relevance of our approach with examples from the domain of spatial reasoning and demonstrate its viability with empirical results from two practical applications: computation of canonical forms and convex hull construction.
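
Of the elimination methods discussed, the classical Fourier (Fourier-Motzkin) algorithm is easy to sketch: to eliminate a variable, combine every constraint with a positive coefficient on it against every constraint with a negative coefficient so the variable cancels. A small exact-arithmetic version:

```python
from fractions import Fraction

def eliminate(constraints, j):
    """Fourier-Motzkin elimination of variable j from constraints (a, b)
    encoding a[0]*x0 + ... + a[n-1]*x(n-1) <= b."""
    pos, neg, rest = [], [], []
    for a, b in constraints:
        (pos if a[j] > 0 else neg if a[j] < 0 else rest).append((a, b))
    out = list(rest)
    for ap, bp in pos:
        for an, bn in neg:
            lp, ln = -an[j], ap[j]       # multipliers that cancel x_j
            a = [lp * u + ln * w for u, w in zip(ap, an)]
            out.append((a, lp * bp + ln * bn))
    return out

# usage: x + y <= 4, -x + y <= 2, -y <= 0; eliminating x leaves
# -y <= 0 and 2y <= 6, i.e. 0 <= y <= 3
cons = [([Fraction(1), Fraction(1)], Fraction(4)),
        ([Fraction(-1), Fraction(1)], Fraction(2)),
        ([Fraction(0), Fraction(-1)], Fraction(0))]
print(eliminate(cons, 0))
```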


Author(s):  
Jimmy Ming-Tai Wu ◽  
Qian Teng ◽  
Shahab Tayeb ◽  
Jerry Chun-Wei Lin

The high average-utility itemset mining (HAUIM) model was established to provide a fairer measure than generic high-utility itemset mining (HUIM) for revealing satisfying and interesting patterns. In practical applications, the database changes dynamically as insertion/deletion operations are performed on it. Several works were designed to handle the insertion process, but fewer studies focused on the deletion process for knowledge maintenance. In this paper, we develop a PRE-HAUI-DEL algorithm that utilizes the pre-large concept on HAUIM for handling transaction deletion in dynamic databases. The pre-large concept serves as a buffer on HAUIM that reduces the number of database scans when the database is updated, particularly by transaction deletion. Two upper-bound values are also established to prune unpromising candidates early, which speeds up the computation. Experimental results show that the designed PRE-HAUI-DEL algorithm performs well compared with the Apriori-like model in terms of runtime, memory, and scalability on dynamic databases.
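
A support-based sketch of the pre-large maintenance idea (the paper itself works with average utilities and two upper bounds, which are not reproduced here): itemsets between the lower and upper thresholds form the buffer, and a safety bound on the number of deleted transactions tells us when a full rescan is unavoidable. The deletion-case bound below follows the common pre-large formula; treat it as an assumption.

```python
def prelarge_delete(counts, total_tx, deleted, upper=0.4, lower=0.3):
    """Maintain large and pre-large itemsets after deleting transactions.
    `counts` maps frozenset -> support count over the original database;
    itemsets with support in [lower, upper) form the pre-large buffer."""
    d = len(deleted)
    # safety bound (deletion case of the pre-large formula, an assumption
    # here): below it, no small itemset can become large without a rescan
    if d > (upper - lower) * total_tx / upper:
        raise RuntimeError("too many deletions: rescan the updated database")
    new_total = total_tx - d
    updated = {}
    for itemset, c in counts.items():
        c -= sum(1 for tx in deleted if itemset <= tx)
        if c / new_total >= lower:       # drop itemsets below the buffer
            updated[itemset] = c
    return updated, new_total

# usage: two maintained itemsets, two deleted transactions
counts = {frozenset({'a'}): 6, frozenset({'a', 'b'}): 4}
print(prelarge_delete(counts, 10, [{'a', 'b'}, {'c'}]))
```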


2013 ◽  
Vol 2013 ◽  
pp. 1-8
Author(s):  
Teng Li ◽  
Huan Chang ◽  
Jun Wu

This paper presents a novel algorithm that numerically decomposes mixed signals in a collaborative way, given supervision in the form of the labels each signal contains. The decomposition is formulated as an optimization problem incorporating a nonnegativity constraint. A nonnegative data factorization solution is presented to yield the decomposed results. It is shown that the optimization is efficient and decreases the objective function monotonically. Such a decomposition algorithm can be applied to multilabel training samples for pattern classification. Experimental results on real data show that the proposed algorithm can significantly improve multilabel image classification performance under weak supervision.
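
A label-supervised nonnegative factorization of this kind can be sketched with Lee-Seung multiplicative updates plus a mask that pins to zero the coefficients of labels absent from a sample; the masked updates still decrease the Frobenius objective monotonically since they amount to NMF on a restricted support. This is a generic sketch, not the authors' exact formulation.

```python
import numpy as np

def supervised_nmf(X, mask, n_iters=200, eps=1e-9, seed=0):
    """Factor X (features x samples) ~ W @ H with H[k, j] pinned to zero
    whenever label k is absent from sample j (mask[k, j] == 0)."""
    rng = np.random.default_rng(seed)
    W = rng.random((X.shape[0], mask.shape[0]))
    H = rng.random(mask.shape) * mask
    for _ in range(n_iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # Lee-Seung update for H
        H *= mask                              # enforce label supervision
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # Lee-Seung update for W
    return W, H

# usage: two mixed signals; label 1 is absent from the second sample
rng = np.random.default_rng(1)
X = rng.random((16, 2))
mask = np.array([[1.0, 1.0],
                 [1.0, 0.0]])
W, H = supervised_nmf(X, mask)
```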

