greedy methods
Recently Published Documents

TOTAL DOCUMENTS: 36 (FIVE YEARS: 11)
H-INDEX: 6 (FIVE YEARS: 1)

2021
Author(s): Arman Ferdowsi, Alireza Khanteymoori, Maryam Dehghan Chenary

In this paper, we introduce a new approach for detecting community structures in networks. The approach modifies one of the connectivity-based community quality functions to account for the influence that each community's most influential node exerts on the other vertices. Using the proposed quality measure, we devise a two-stage algorithm for detecting high-quality communities: a promising initial solution is first found with greedy methods, and the solution is then refined by local search. The performance of our algorithm has been evaluated on standard real-world networks as well as on artificial networks, and the experimental results are compared with several state-of-the-art algorithms. The experiments show that our approach is competitive with the other well-known techniques in the literature and in some cases outperforms them, so it can serve as a new community detection method in network analysis.
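The two-stage scheme can be illustrated with a generic quality function. The sketch below substitutes plain Newman modularity for the paper's modified influence-based measure (an assumption made only to keep the example self-contained): greedy agglomeration builds the initial solution, and a local search then moves individual nodes while the quality improves.

```python
from itertools import combinations

def modularity(adj, comms, m):
    """Newman modularity of a partition; adj maps node -> set of neighbours."""
    q = 0.0
    for c in comms:
        inside = sum(1 for u, v in combinations(sorted(c), 2) if v in adj[u])
        degree_sum = sum(len(adj[v]) for v in c)
        q += inside / m - (degree_sum / (2 * m)) ** 2
    return q

def detect_communities(adj):
    """Stage 1: greedy agglomeration.  Stage 2: local-search refinement."""
    m = sum(len(ns) for ns in adj.values()) // 2
    comms = [{v} for v in adj]
    while len(comms) > 1:
        q = modularity(adj, comms, m)
        best, best_q = None, q
        for i, j in combinations(range(len(comms)), 2):
            trial = [c for k, c in enumerate(comms) if k not in (i, j)]
            trial.append(comms[i] | comms[j])
            tq = modularity(adj, trial, m)
            if tq > best_q:
                best, best_q = (i, j), tq
        if best is None:              # no merge improves quality: stop
            break
        i, j = best
        merged = comms[i] | comms[j]
        comms = [c for k, c in enumerate(comms) if k not in (i, j)] + [merged]
    q = modularity(adj, comms, m)
    improved = True
    while improved:                   # move single nodes while quality improves
        improved = False
        for v in adj:
            src = next(c for c in comms if v in c)
            for dst in comms:
                if dst is src:
                    continue
                src.remove(v); dst.add(v)
                tq = modularity(adj, comms, m)
                if tq > q + 1e-12:
                    q, improved = tq, True
                    break
                dst.remove(v); src.add(v)
    return [c for c in comms if c]
```

On a toy graph of two triangles joined by one edge, the greedy stage already recovers the two triangles and the refinement stage confirms no single-node move helps.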


2021
Vol 2 (3), pp. 1-26
Author(s): Timothée Goubault De Brugière, Marc Baboulin, Benoît Valiron, Simon Martiel, Cyril Allouche

Linear reversible circuits represent a subclass of reversible circuits with many applications in quantum computing. These circuits can be efficiently simulated by classical computers, and their size is polynomially bounded by the number of qubits, making them a good target for methods that reduce computational costs. We propose a new algorithm for synthesizing any linear reversible operator, using an optimized version of the Gaussian elimination algorithm coupled with a tuned LU factorization. We also improve the scalability of purely greedy methods. Overall, on random operators, our algorithms improve on the state-of-the-art methods for specific ranges of problem sizes: the custom Gaussian elimination algorithm provides the best results for large problem sizes (n > 150), while the purely greedy methods provide quasi-optimal results when n < 30. On a benchmark of reversible functions, we significantly reduce the CNOT count and the depth of the circuit while keeping other important metrics (T-count, T-depth) as low as possible.
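The correspondence underlying this line of work: a linear reversible circuit on n qubits implements an invertible n-by-n matrix over GF(2), and each row operation of Gaussian elimination corresponds to one CNOT gate. Below is a minimal sketch of the textbook baseline only; the paper's optimized elimination and LU tuning are not reproduced here.

```python
import numpy as np

def synthesize_cnots(A):
    """Synthesize a CNOT circuit implementing the invertible GF(2) matrix A.

    Plain Gaussian elimination: each row operation M[i] ^= M[j] is a CNOT
    with control j and target i.
    """
    M = (A.copy() % 2).astype(np.uint8)
    n = len(M)
    gates = []                        # (control, target) pairs
    for col in range(n):
        if M[col, col] == 0:          # bring a 1 onto the diagonal
            pivot = next(r for r in range(col + 1, n) if M[r, col])
            M[col] ^= M[pivot]
            gates.append((pivot, col))
        for row in range(n):          # clear every other 1 in this column
            if row != col and M[row, col]:
                M[row] ^= M[col]
                gates.append((col, row))
    assert (M == np.eye(n, dtype=np.uint8)).all()
    # Elimination maps A to I, and CNOTs are self-inverse, so applying the
    # recorded gates in reverse order implements A itself.
    return gates[::-1]
```

Applying the returned (control, target) pairs in order to the identity matrix reproduces A, and the number of pairs is the CNOT count that the synthesis methods above seek to minimize.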


2021
Author(s): Ruhollah Shemirani, Gillian M Belbin, Keith Burghardt, Kristina Lerman, Christy L Avery, ...

Background: Groups of distantly related individuals who share a short identical-by-descent (IBD) segment of their genome can provide insights about rare traits and diseases in massive biobanks via a process called IBD mapping. Clustering algorithms play an important role in finding these groups. We set out to analyze the fitness of commonly used, fast, and scalable clustering algorithms for IBD mapping applications. We designed a realistic benchmark for local IBD graphs and used it to compare clustering algorithms in terms of statistical power. We also investigated the effectiveness of common clustering metrics as replacements for statistical power.
Results: We simulated 3.4 million clusters across 850 experiments with varying cluster counts, false-positive rates, and false-negative rates. The Infomap and Markov Clustering (MCL) community detection methods have high statistical power in most of the graphs, compared to greedy methods such as Louvain and Leiden. We demonstrate that standard clustering metrics, such as modularity, cannot predict the statistical power of algorithms in IBD mapping applications, though they can help with simulating realistic benchmarks. We extend our findings to real datasets by analyzing three populations in the Population Architecture using Genomics and Epidemiology (PAGE) Study, with ~51,000 members and 2 million shared segments on Chromosome 1, extracting ~39 million local IBD clusters across the three populations. We used cluster properties derived in PAGE to increase the accuracy of our simulations and comparisons.
Conclusions: Markov Clustering produces a 30% increase in statistical power compared to the current state-of-the-art approach while reducing runtime by three orders of magnitude, making it computationally tractable for modern large-scale genetic datasets. We provide an efficient implementation to enable clustering at scale for IBD mapping and population-based linkage across various populations and scenarios.
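For readers unfamiliar with Markov Clustering, its core loop alternates expansion (matrix squaring, which spreads flow) and inflation (elementwise powering, which sharpens flow). The dense toy sketch below illustrates the idea only; the scalable implementations benchmarked in the study are sparse and pruned.

```python
import numpy as np

def mcl(adj, inflation=2.0, iters=60, tol=1e-8):
    """Minimal Markov Clustering on a dense adjacency matrix (a sketch)."""
    n = len(adj)
    M = adj.astype(float) + np.eye(n)      # self-loops help convergence
    M /= M.sum(axis=0)                     # make columns stochastic
    for _ in range(iters):
        prev = M.copy()
        M = M @ M                          # expansion: flow spreads out
        M = M ** inflation                 # inflation: strong flow dominates
        M /= M.sum(axis=0)
        if np.abs(M - prev).max() < tol:
            break
    # Nodes whose columns peak at the same "attractor" row form one cluster.
    clusters = {}
    for node in range(n):
        clusters.setdefault(int(M[:, node].argmax()), set()).add(node)
    return list(clusters.values())
```

On a graph of two triangles joined by a single bridge edge (a stand-in for a local IBD graph with one false-positive edge), the loop separates the two triangles.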


2021
Vol 15 (2), pp. 1-32
Author(s): Michael Austin Langford, Betty H. C. Cheng

Data-driven learning-enabled systems are limited by the quality of available training data, particularly when trained offline. For systems that must operate in real-world environments, the space of possible conditions is vast and difficult to predict comprehensively at design time. Environmental uncertainty arises when run-time conditions diverge from design-time training conditions. To address this problem, automated methods can generate synthetic data to fill gaps in training and test data coverage. We propose an evolution-based technique to assist developers with uncovering limitations in existing data when previously unseen environmental phenomena are introduced. This technique explores unique contexts for a given environmental condition, with an emphasis on diversity. Synthetic data generated by this technique may be used for two purposes: (1) to assess the robustness of a system to uncertain environmental factors and (2) to improve the system's robustness. This technique is demonstrated to outperform random and greedy methods for multiple adverse environmental conditions applied to image-processing Deep Neural Networks.
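A minimal sketch of the diversity-driven idea, using a generic novelty-search loop rather than the authors' actual technique: parameter vectors describing an environmental condition are evolved, and the most novel ones are archived as candidate contexts for synthetic data generation. All parameter names here are hypothetical.

```python
import random

def novelty(candidate, archive, k=3):
    """Diversity score: mean Euclidean distance to the k nearest archive members."""
    if not archive:
        return float("inf")
    dists = sorted(sum((a - b) ** 2 for a, b in zip(candidate, other)) ** 0.5
                   for other in archive)
    return sum(dists[:k]) / min(k, len(dists))

def evolve_contexts(dim=3, pop_size=20, generations=30, seed=0):
    """Evolve parameter vectors for an environmental condition (e.g. fog
    density, brightness, blur strength; hypothetical knobs in [0, 1]) so
    that the archived contexts are as diverse as possible."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(dim)] for _ in range(pop_size)]
    archive = []
    for _ in range(generations):
        ranked = sorted(pop, key=lambda c: novelty(c, archive), reverse=True)
        archive.extend(ranked[:2])           # keep the most novel contexts
        parents = ranked[:pop_size // 2]
        pop = [[min(1.0, max(0.0, g + rng.gauss(0, 0.1)))   # mutate a parent
                for g in rng.choice(parents)] for _ in range(pop_size)]
    return archive
```

Each archived vector would then be rendered into a synthetic input (for example, by applying the corresponding image perturbations) for robustness testing or retraining.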


Author(s): Giorgos Borboudakis, Ioannis Tsamardinos

Most feature selection methods identify only a single solution. This is acceptable for predictive purposes, but it is not sufficient for knowledge discovery if multiple solutions exist. We propose a strategy to extend a class of greedy methods to efficiently identify multiple solutions, and we show under which conditions it identifies all solutions. We also introduce a taxonomy of features that takes the existence of multiple solutions into account. Furthermore, we explore different definitions of statistical equivalence of solutions, as well as methods for testing equivalence. A novel algorithm for compactly representing and visualizing multiple solutions is also introduced. In experiments we show that (a) the proposed algorithm is significantly more computationally efficient than the TIE* algorithm, the only alternative approach with similar theoretical guarantees, while identifying similar solutions to it, and (b) the identified solutions have similar predictive performance.
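The strategy of branching a greedy method whenever several candidates are statistically equivalent can be sketched as follows. The tie test below is a simple numeric tolerance standing in for the statistical equivalence tests discussed in the paper, and the feature names are hypothetical.

```python
def forward_selections(score, features, selected=(), tol=1e-9):
    """Enumerate greedy forward-selection paths, branching whenever several
    features tie (within tol) for the best score gain.  `score(subset)` is
    assumed higher-is-better."""
    base = score(selected)
    gains = {f: score(selected + (f,)) - base
             for f in features if f not in selected}
    best = max(gains.values(), default=0.0)
    if best <= tol:                   # no feature improves: one solution
        return {frozenset(selected)}
    solutions = set()
    for f, g in gains.items():
        if best - g <= tol:           # f is equivalent to the best candidate
            solutions |= forward_selections(score, features,
                                            selected + (f,), tol)
    return solutions
```

With two interchangeable features f1 and f2 (each carrying the same information about the target), the enumeration returns both {f1, f3} and {f2, f3} instead of an arbitrary one of them, which is the knowledge-discovery benefit the abstract describes.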


Author(s): Kaveh Sheibani

In recent years, there has been growing interest in the development of systematic search methods for solving problems in operational research and artificial intelligence. This chapter introduces a new idea for integrating approaches to hard combinatorial optimisation problems. The proposed methodology evaluates objects in a way that combines fuzzy reasoning with a greedy mechanism; in other words, a fuzzy solution space is exploited using greedy methods, which appears to be superior to the standard greedy version. The chapter consists of two main parts. The first part describes the theory and mathematics of the so-called fuzzy greedy evaluation concept. The second part demonstrates, through computational experiments, the effectiveness and efficiency of the proposed concept on hard combinatorial optimisation problems.
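As a toy illustration of the general idea (not the chapter's own evaluation function): each object receives fuzzy membership scores for several criteria, the scores are aggregated with a fuzzy AND, and a greedy mechanism repeatedly picks the most desirable object. Here the objects are jobs with hypothetical processing times and due dates.

```python
def fuzzy_greedy_sequence(jobs):
    """Order jobs by a fuzzy greedy evaluation: each job's desirability is
    the fuzzy AND (min) of 'short processing time' and 'urgent due date'
    memberships, and the greedy step always takes the most desirable job.
    jobs: list of (processing_time, due_date) pairs."""
    pmax = max(p for p, _ in jobs)
    dmax = max(d for _, d in jobs)

    def desirability(job):
        p, d = job
        mu_short = 1.0 - p / pmax     # shorter jobs score higher
        mu_urgent = 1.0 - d / dmax    # earlier due dates score higher
        return min(mu_short, mu_urgent)   # fuzzy AND of the two criteria

    remaining = list(jobs)
    order = []
    while remaining:                  # greedy choice over the fuzzy scores
        best = max(remaining, key=desirability)
        remaining.remove(best)
        order.append(best)
    return order
```

The fuzzy aggregation lets the greedy step trade off criteria smoothly instead of committing to a single crisp priority rule, which is the intuition behind exploring a fuzzy solution space greedily.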


2020
pp. ijoo.2019.0041
Author(s): Rajan Udwani

We consider the problem of multiobjective maximization of monotone submodular functions subject to a cardinality constraint, often formulated as [Formula: see text]. Although it is widely known that greedy methods work well for a single objective, the problem becomes much harder with multiple objectives. In fact, it is known that when the number of objectives m grows as fast as the cardinality k, that is, [Formula: see text], the problem is inapproximable (unless P = NP). On the other hand, when m is constant, there exists a randomized [Formula: see text] approximation whose runtime (number of queries to the function oracle) scales as [Formula: see text]. We focus on finding a fast algorithm that has (asymptotic) approximation guarantees even when m is superconstant. First, through a continuous-greedy-based algorithm, we give a [Formula: see text] approximation for [Formula: see text]. This demonstrates a steep transition from constant-factor approximability to inapproximability around [Formula: see text]. Then, using multiplicative weight updates (MWUs), we find a much faster [Formula: see text]-time asymptotic [Formula: see text] approximation. Although these results are all randomized, we also give a simple deterministic [Formula: see text] approximation with runtime [Formula: see text]. Finally, we run synthetic experiments using Kronecker graphs and find that our MWU-inspired heuristic outperforms existing heuristics.
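For context, the single-objective baseline the abstract contrasts with is the classic greedy algorithm, which achieves a (1 - 1/e) approximation for one monotone submodular objective under a cardinality constraint. A sketch with a set-coverage objective (the multiobjective algorithms in the paper are considerably more involved):

```python
def greedy_submodular(f, ground, k):
    """Classic greedy: repeatedly add the element with the largest marginal
    gain until k elements are chosen or no element helps."""
    S = set()
    for _ in range(k):
        best = max((x for x in ground if x not in S),
                   key=lambda x: f(S | {x}) - f(S), default=None)
        if best is None or f(S | {best}) - f(S) <= 0:
            break
        S.add(best)
    return S
```

Coverage functions (size of the union of chosen sets) are monotone and submodular, so the greedy guarantee applies to the example below.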


2020
Vol 10 (21), pp. 7462
Author(s): Jesús Enrique Sierra-García, Matilde Santos

In this work, a pitch controller for a wind turbine (WT) inspired by reinforcement learning (RL) is designed and implemented. The control system consists of a state estimator, a reward strategy, a policy table, and a policy update algorithm. Novel reward strategies related to the energy deviation from the rated power are defined, designed to improve the efficiency of the WT. Two new categories of reward strategies are proposed: "only positive" (O-P) and "positive-negative" (P-N) rewards. The relationship of these categories to the exploration-exploitation dilemma, the use of ϵ-greedy methods, and learning convergence is also introduced and linked to the WT control problem. In addition, an extensive analysis of the influence of the different rewards on controller performance and learning speed is carried out. The controller is compared with a proportional-integral-derivative (PID) regulator for the same small wind turbine, obtaining better results. The simulations show how the P-N rewards improve the performance of the controller, stabilize the output power around the rated power, and reduce the error over time.
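A sketch of the generic tabular RL pieces the abstract refers to. The exact state estimator and reward shapes are the paper's own; the P-N reward below is only our reading of the description (positive inside a small band around rated power, increasingly negative outside it).

```python
import random

def epsilon_greedy(q_row, eps, rng):
    """With probability eps explore (random action); otherwise exploit the
    best-valued action in this row of the policy table."""
    if rng.random() < eps:
        return rng.randrange(len(q_row))
    return max(range(len(q_row)), key=q_row.__getitem__)

def pn_reward(power, rated, band=0.05):
    """A 'positive-negative' style reward (assumed shape): +1 when output
    power is within `band` of rated power, negative and growing with the
    relative deviation otherwise."""
    dev = abs(power - rated) / rated
    return 1.0 if dev < band else -dev

def q_update(Q, s, a, r, s2, alpha=0.1, gamma=0.9):
    """Standard tabular Q-learning update of the policy table."""
    Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
```

In a pitch-control loop, the state would be a discretized power error, the actions small pitch-angle increments, and ϵ would be decayed over time to shift from exploration to exploitation.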


2019
Vol 9 (3), pp. 4154-4158
Author(s): D. A. Pamplona, C. J. P. Alves

Congestion is a problem at major airports around the world. Airports, especially high-traffic ones, tend to be the bottleneck of the air traffic control system. The problem that arises for the airspace planner is how to mitigate air congestion and its consequent delay, which increases costs for airlines and discomfort for passengers. Most congestion problems are resolved on the day of operations in a tactical manner, using operational enhancement measures. The Collaborative Trajectory Options Program (CTOP) aims to improve air traffic management (ATM) by considering National Airspace System (NAS) users' business goals, the particularities faced by each flight, and airspace restrictions, making the process more flexible and financially stable for those involved. In CTOP, airlines share their route preferences with the air traffic control authority, combining delay and rerouting. When a CTOP is created, each airline must decide its strategy without knowledge of other airlines' flights. Current solutions for this problem are based on greedy methods and game theory, and there is room for improvement. This paper examines CTOP and identifies important strategic changes for ATM adopting this philosophy, particularly in Brazil.

