Greedy Heuristics
Recently Published Documents

TOTAL DOCUMENTS: 90 (last five years: 22)
H-INDEX: 14 (last five years: 1)

2022, Vol. 18 (1), pp. 1-41
Author(s): Pamela Bezerra, Po-Yu Chen, Julie A. McCann, Weiren Yu

As sensor-based networks become more prevalent, scaling to unmanageable numbers or being deployed in difficult-to-reach areas, real-time failure localisation is becoming essential for continued operation. Network tomography, a system- and application-independent approach, has been successful in localising complex failures (i.e., those observable only by end-to-end global analysis) in traditional networks. Applying network tomography to wireless sensor networks (WSNs), however, is challenging. First, WSN topologies change due to environmental interactions (e.g., interference). Additionally, the selection of devices for running network monitoring processes (monitors) is an NP-hard problem. Monitors observe end-to-end in-network properties to identify failures, and their placement determines the number of identifiable failures. Since monitoring consumes additional in-node resources, it is essential to minimise the number of monitors while maintaining network tomography's effectiveness. Unfortunately, state-of-the-art solutions solve this optimisation problem using time-consuming greedy heuristics. In this article, we propose two solutions for efficiently applying network tomography in WSNs: a graph compression scheme that enables faster monitor placement by reducing the number of edges in the network, and an adaptive monitor placement algorithm that recovers the monitor placement after topology changes. The experiments show that our solution is at least 1,000× faster than state-of-the-art approaches and efficiently copes with topology variations in large-scale WSNs.
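For readers unfamiliar with the two ideas, the sketch below illustrates them in their most generic form: a graph compression step that contracts degree-2 "chain" nodes to reduce the edge count, followed by a simple degree-based greedy monitor selection. Both functions are illustrative assumptions, not the paper's actual compression scheme or placement algorithm.

```python
# Illustrative sketch only: contract degree-2 chain nodes so the remaining
# graph has fewer edges before a monitor-placement heuristic runs.
from collections import defaultdict

def compress_chains(edges):
    """Repeatedly remove a degree-2 node and connect its two neighbours."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    changed = True
    while changed:
        changed = False
        for node in list(adj):
            if len(adj[node]) == 2:
                a, b = adj[node]
                adj[a].discard(node)
                adj[b].discard(node)
                adj[a].add(b)
                adj[b].add(a)
                del adj[node]
                changed = True
    return {(u, v) for u in adj for v in adj[u] if u < v}

def greedy_monitors(edges, budget):
    """Pick monitors by descending degree -- a common greedy baseline."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sorted(deg, key=deg.get, reverse=True)[:budget]

if __name__ == "__main__":
    topo = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1), (3, 6)]
    small = compress_chains(topo)
    print(small, greedy_monitors(small, 2))
```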


Author(s): Gayathri Devi K

Abstract: Job shop scheduling has long been one of the most studied problems in combinatorial optimization. Job Shop Scheduling Problems (JSSP) are categorised as NP-hard. In recent years, metaheuristic algorithms have proved effective at solving such hard problems. The Firefly Algorithm is one such metaheuristic, inspired by the flashing behaviour of fireflies. Its potential can be enhanced further by hybridising it with other known evolutionary algorithms, thereby obtaining improved results in less computational time. In this paper we propose a new hybrid technique, christened HyFA, which hybridises the Firefly Algorithm (FA) with simulated annealing (SA) and a greedy heuristics approach (GHA). The hybrid combines the advantages of all three algorithms in such a way that a better solution is obtained more quickly. Our proposed HyFA is coded in MATLAB with the objective of minimising the makespan (Cm). The novel hybrid technique is then evaluated on Lawrence benchmark problems 1-25 taken from the literature. The results show that the proposed technique is more effective not only in obtaining optimal results but also in significantly reducing computational time. Keywords: Scheduling, Optimisation, Job shop scheduling, Meta-heuristics, Firefly, Simulated Annealing, Greedy heuristics Approach.
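As a point of reference for the greedy component, the sketch below shows a plain greedy dispatching rule for a toy job-shop instance (always schedule the operation that would finish earliest) and computes the resulting makespan. It is only a baseline in the spirit of a greedy heuristics approach, not the HyFA algorithm itself; the instance data are made up.

```python
# Hedged sketch: earliest-completion-time greedy dispatching for JSSP.
def greedy_jssp(jobs):
    """jobs: list of jobs, each a list of (machine, duration) operations."""
    job_ready = [0.0] * len(jobs)   # when each job can start its next operation
    mach_ready = {}                 # when each machine becomes free
    next_op = [0] * len(jobs)       # index of the next unscheduled op per job
    remaining = sum(len(j) for j in jobs)
    while remaining:
        best = None
        for j, ops in enumerate(jobs):
            if next_op[j] >= len(ops):
                continue
            m, d = ops[next_op[j]]
            start = max(job_ready[j], mach_ready.get(m, 0.0))
            finish = start + d
            if best is None or finish < best[0]:
                best = (finish, j, m)
        finish, j, m = best
        job_ready[j] = finish
        mach_ready[m] = finish
        next_op[j] += 1
        remaining -= 1
    return max(job_ready)           # makespan

if __name__ == "__main__":
    # two jobs, two machines: each operation is (machine_id, processing_time)
    inst = [[(0, 3), (1, 2)], [(1, 2), (0, 4)]]
    print("makespan:", greedy_jssp(inst))
```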


2021, Vol. 20 (5s), pp. 1-25
Author(s): Michael Canesche, Westerley Carvalho, Lucas Reis, Matheus Oliveira, Salles Magalhães, ...

Coarse-grained reconfigurable architecture (CGRA) mapping involves three main steps: placement, routing, and timing. The mapping is an NP-complete problem, and a common strategy is to decouple the process into these independent steps. This work focuses on the placement step, and its aim is to propose a technique that is both reasonably fast and leads to high-performance solutions. Furthermore, a near-optimal placement simplifies the subsequent routing and timing steps. Exact solutions cannot find placements in reasonable execution time as input designs grow in size. Heuristic solutions include meta-heuristics, such as Simulated Annealing (SA), and fast, straightforward greedy heuristics based on graph traversal. However, as these approaches are probabilistic and the design space is large, it is not easy to provide both run-time efficiency and good solution quality. We propose a graph traversal heuristic that provides the best of both: high-quality placements similar to SA with the execution time of graph traversal approaches. Our placement introduces novel ideas based on a “you only traverse twice” (YOTT) approach that performs a two-step graph traversal. The first traversal generates annotated data to guide the second step, which greedily performs the placement, node by node, aided by the annotated data and target architecture constraints. We introduce three new concepts to implement this technique: I/O and reconvergence annotation, degree matching, and look-ahead placement. Our analysis of this approach explores the placement execution time/quality trade-offs, and we point out insights on how to analyze graph properties during dataflow mapping. Our results show that YOTT is 60.6×, 9.7×, and 2.3× faster than a high-quality SA, bounding-box SA (VPR), and multi-single traversal placements, respectively. Furthermore, YOTT reduces the average wire length and the maximal FIFO size (an additional timing requirement on CGRAs to avoid delay mismatches in fully pipelined architectures).
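The sketch below illustrates the generic "traverse twice" idea in the simplest possible form: a first traversal annotates each node with its depth, and a second traversal greedily assigns each node to the free grid cell nearest to its already-placed predecessors. The grid model and the depth-only annotation are assumptions for illustration; YOTT's actual annotations (I/O, reconvergence, degree matching, look-ahead) are richer.

```python
# Hedged sketch of a generic two-pass greedy placement, not the YOTT code.
from collections import deque

def annotate_depths(graph, sources):
    """Pass 1: BFS depth of every node of the dataflow graph."""
    depth = {s: 0 for s in sources}
    q = deque(sources)
    while q:
        u = q.popleft()
        for v in graph.get(u, []):
            if v not in depth:
                depth[v] = depth[u] + 1
                q.append(v)
    return depth

def place(graph, sources, rows, cols):
    """Pass 2: visit nodes by depth and grab the free cell closest to the
    centroid of the node's already-placed predecessors.
    Assumes rows * cols >= number of nodes."""
    depth = annotate_depths(graph, sources)
    preds = {}
    for u, vs in graph.items():
        for v in vs:
            preds.setdefault(v, []).append(u)
    free = {(r, c) for r in range(rows) for c in range(cols)}
    pos = {}
    for node in sorted(depth, key=depth.get):
        placed = [pos[p] for p in preds.get(node, []) if p in pos]
        if placed:
            cr = sum(r for r, _ in placed) / len(placed)
            cc = sum(c for _, c in placed) / len(placed)
        else:
            cr, cc = 0, 0
        cell = min(free, key=lambda rc: abs(rc[0] - cr) + abs(rc[1] - cc))
        pos[node] = cell
        free.remove(cell)
    return pos

if __name__ == "__main__":
    g = {"a": ["c"], "b": ["c"], "c": ["d"]}
    print(place(g, ["a", "b"], rows=2, cols=2))
```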


Author(s): Lucas Martínez-Bernabéu, José Manuel Casado-Díaz

Labour market areas (LMAs) are a type of functional region (FR) defined on the basis of commuting flows and used in many countries as the territorial reference for regional studies and local-level policy making. Existing methods rely on manual adjustments of the results to ensure high quality, making them difficult to monitor, hard to transfer to different territories, and onerous to produce in terms of required work-hours. We propose an approach that automatises all stages of the delineation procedure and improves the final results, building upon a state-of-the-art stochastic search procedure that ensures optimal allocation of municipalities/counties to LMAs while keeping good global indicators: a pre-processing layer clusters adjoining municipalities with strong commuting flows to constrain the initial search space of the stochastic search, and a multi-criteria heuristic corrects common deficiencies that derive from global maximisation approaches or simple greedy heuristics. The approach produces high-quality LMAs with optimal local characteristics. To demonstrate the methodology and assess the improvement achieved, we apply it to define LMAs in Spain based on the latest commuting data.
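A minimal sketch of what such a pre-processing layer might look like is given below: adjacent municipalities whose mutual commuting share exceeds a threshold are greedily merged with a union-find structure before any global search runs. The flow matrix, adjacency list, and threshold are illustrative assumptions, not the paper's data or parameters.

```python
# Hedged sketch: greedy pre-clustering of municipalities by commuting share.
def precluster(flows, adjacency, threshold=0.15):
    """flows[i][j]: commuters living in unit i and working in unit j (i != j).
    Greedily union adjacent pairs with a strong symmetric flow share."""
    n = len(flows)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    out = [sum(row) for row in flows]       # total out-commuters per unit
    pairs = []
    for i, j in adjacency:
        share = (flows[i][j] + flows[j][i]) / max(1, out[i] + out[j])
        pairs.append((share, i, j))
    for share, i, j in sorted(pairs, reverse=True):
        if share >= threshold:
            parent[find(i)] = find(j)       # merge into one proto-LMA
    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

if __name__ == "__main__":
    flows = [[0, 40, 2], [35, 0, 3], [1, 2, 0]]
    print(precluster(flows, adjacency=[(0, 1), (1, 2)]))
```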


Algorithms, 2021, Vol. 14 (3), pp. 79
Author(s): Salim Bouamama, Christian Blum

This paper presents a performance comparison of greedy heuristics for a recent variant of the dominating set problem known as the minimum positive influence dominating set (MPIDS) problem. This APX-hard combinatorial optimization problem has applications in social networks. Its aim is to identify a small subset of key influential individuals in order to facilitate the spread of positive influence in the whole network. In this paper, we focus on the development of a fast and effective greedy heuristic for the MPIDS problem, because greedy heuristics are an essential component of more sophisticated metaheuristics; well-working greedy heuristics therefore support the development of efficient metaheuristics. Extensive experiments conducted on a wide range of social and complex networks confirm the overall superiority of our greedy algorithm over its competitors, especially as the problem size grows. Moreover, we compare our algorithm with the integer linear programming solver CPLEX. While the performance of CPLEX is very strong for small and medium-sized networks, it reaches its limits when applied to the largest networks. However, even in the context of small and medium-sized networks, our greedy algorithm is only 2.53% worse than CPLEX.
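To make the problem concrete, the following sketch shows a plain greedy baseline for MPIDS: every node v must end up with at least ⌈deg(v)/2⌉ neighbours in the chosen set, and the heuristic repeatedly adds the node that satisfies the most unmet demand. It is a generic greedy in the spirit of the heuristics compared in the paper, not the authors' specific algorithm.

```python
# Hedged sketch: generic greedy baseline for the MPIDS problem.
import math

def greedy_mpids(adj):
    """adj: dict node -> set of neighbours (undirected graph)."""
    need = {v: math.ceil(len(adj[v]) / 2) for v in adj}   # unmet demand per node
    chosen = set()
    while any(n > 0 for n in need.values()):
        # gain of adding u = number of its neighbours whose demand is still unmet
        best = max(
            (u for u in adj if u not in chosen),
            key=lambda u: sum(1 for w in adj[u] if need[w] > 0),
        )
        chosen.add(best)
        for w in adj[best]:
            need[w] = max(0, need[w] - 1)
    return chosen

if __name__ == "__main__":
    g = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3, 5}, 5: {4}}
    print(greedy_mpids(g))
```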


Author(s): Christoph Gebhardt, Antti Oulasvirta, Otmar Hilliges

Abstract: How do people decide how long to continue in a task, when to switch, and to which other task? It is known that task interleaving adapts situationally, showing sensitivity to changes in expected rewards, costs, and task boundaries. However, the mechanisms that underpin the decision to stay in a task versus switch away are not thoroughly understood. Previous work has explained task interleaving by greedy heuristics and a policy that maximizes the marginal rate of return. However, it is unclear how such a strategy would allow for adaptation to environments that offer multiple tasks with complex switch costs and delayed rewards. Here, we develop a hierarchical model of supervisory control driven by reinforcement learning (RL). The core assumption is that the supervisory level learns to switch using task-specific approximate utility estimates, which are computed on the lower level. We show that a hierarchically optimal value function decomposition can be learned from experience, even in conditions with multiple tasks and arbitrary and uncertain reward and cost structures. The model also reproduces well-known key phenomena of task interleaving, such as the sensitivity to costs of resumption and immediate as well as delayed in-task rewards. In a demanding task interleaving study with 211 human participants and realistic tasks (reading, mathematics, question-answering, recognition), the model yielded better predictions of individual-level data than a flat (non-hierarchical) RL model and an omniscient-myopic baseline. Corroborating emerging evidence from cognitive neuroscience, our results suggest hierarchical RL as a plausible model of supervisory control in task interleaving.
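A toy sketch of the hierarchical idea, under heavy simplifying assumptions (stationary reward distributions, a fixed switch cost, a simple running-average utility estimate per task), is shown below. It is not the paper's model, only an illustration of a supervisory rule driven by task-specific utility estimates.

```python
# Hedged toy sketch: supervisory switching based on per-task utility estimates.
import random

def supervisory_control(task_rewards, switch_cost=0.5, steps=50, alpha=0.2):
    estimates = {t: 0.0 for t in task_rewards}    # learned per-task utilities
    current = random.choice(list(task_rewards))
    trace = []
    for _ in range(steps):
        # stay in the current task, or switch if another task looks better
        # even after paying an assumed switch cost
        current = max(
            estimates,
            key=lambda t: estimates[t] - (t != current) * switch_cost,
        )
        reward = task_rewards[current]()          # sample an in-task reward
        estimates[current] += alpha * (reward - estimates[current])
        trace.append(current)
    return trace, estimates

if __name__ == "__main__":
    tasks = {
        "reading": lambda: random.gauss(1.0, 0.3),
        "math": lambda: random.gauss(0.6, 0.3),
    }
    trace, est = supervisory_control(tasks)
    print(est)
```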


Author(s): Vaibhav Rajan, Ziqi Zhang, Carl Kingsford, Xiuwei Zhang

Abstract
Motivation: The study of the evolutionary history of biological networks enables deep functional understanding of various bio-molecular processes. Network growth models, such as the Duplication–Mutation with Complementarity (DMC) model, provide a principled approach to characterizing the evolution of protein–protein interactions (PPIs) based on duplication and divergence. Current methods for model-based ancestral network reconstruction primarily use greedy heuristics and yield sub-optimal solutions.
Results: We present a new Integer Linear Programming (ILP) solution for maximum likelihood reconstruction of ancestral PPI networks under the DMC model. We prove the correctness of our solution, which is designed to find the optimal solution. It can also use efficient heuristics from general-purpose ILP solvers to obtain multiple optimal and near-optimal solutions that may be useful in many applications. Experiments on synthetic data show that our ILP obtains solutions with higher likelihood than those from previous methods and is robust to noise and model mismatch. We evaluate our algorithm on two real PPI networks, with proteins from the families of bZIP transcription factors and the Commander complex. On both networks, solutions from our ILP have higher likelihood and are in better agreement with independent biological evidence from other studies.
Availability and implementation: A Python implementation is available at https://bitbucket.org/cdal/network-reconstruction.
Contact: [email protected]
Supplementary information: Supplementary data are available at Bioinformatics online.
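For contrast with the ILP, the sketch below shows the kind of greedy baseline such methods improve upon: repeatedly merge the node pair with the most similar neighbourhoods, treating it as the most recent duplication. Scoring by Jaccard similarity and the simple merge rule are illustrative simplifications, not the DMC likelihood used in the paper.

```python
# Hedged sketch: greedy ancestral network shrinking by neighbourhood similarity.
def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def greedy_ancestral(adj, target_size):
    """adj: dict node -> set of neighbours; shrink until target_size nodes remain."""
    adj = {u: set(vs) for u, vs in adj.items()}
    while len(adj) > target_size:
        u, v = max(
            ((u, v) for u in adj for v in adj if u < v),
            key=lambda p: jaccard(adj[p[0]] - {p[1]}, adj[p[1]] - {p[0]}),
        )
        # merge v into u: the ancestral node inherits the union of the links
        for w in adj[v]:
            adj[w].discard(v)
            if w != u:
                adj[w].add(u)
                adj[u].add(w)
        del adj[v]
        adj[u].discard(v)
        adj[u].discard(u)
    return adj

if __name__ == "__main__":
    g = {1: {3, 4}, 2: {3, 4}, 3: {1, 2}, 4: {1, 2}}
    print(greedy_ancestral(g, target_size=3))
```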


2020, Vol. 21 (S6)
Author(s): Sriram P. Chockalingam, Jodh Pannu, Sahar Hooshmand, Sharma V. Thankachan, Srinivas Aluru

Abstract
Background: Alignment-free methods for sequence comparison have become popular in many bioinformatics applications, particularly for estimating sequence similarity measures used to construct phylogenetic trees. Recently, the average common substring measure, ACS, and its k-mismatch counterpart, ACS_k, have been shown to produce results as effective as those of multiple-sequence-alignment-based methods for phylogeny reconstruction. Since computing ACS_k takes O(n log^k n) time and is hence impractical for large datasets, multiple heuristics that approximate ACS_k have been introduced.
Results: In this paper, we present a novel linear-time heuristic to approximate ACS_k, which is faster than computing the exact ACS_k while being closer to the exact ACS_k values than previously published linear-time greedy heuristics. Using four real datasets, containing both DNA and protein sequences, we evaluate our algorithm in terms of accuracy and runtime, and demonstrate its applicability to phylogeny reconstruction. Our algorithm provides better accuracy than previously published heuristic methods, while being comparable in its applications to phylogeny reconstruction.
Conclusions: Our method produces a better approximation of ACS_k and is applicable to the alignment-free comparison of biological sequences at highly competitive speed. The algorithm is implemented in the Rust programming language and the source code is available at https://github.com/srirampc/adyar-rs.
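To fix the definition being approximated, here is a naive quadratic reference implementation of the plain ACS measure (k = 0 mismatches): for each position of the first sequence, take the longest prefix of its suffix that occurs anywhere in the second sequence, and average those lengths. The paper's contribution, a linear-time heuristic for the k-mismatch variant, is not reproduced here.

```python
# Hedged sketch: naive O(n^2)-ish reference for the ACS measure (k = 0).
def acs(x, y):
    """Average, over positions i of x, of the longest prefix of x[i:]
    that occurs as a substring of y."""
    total = 0
    for i in range(len(x)):
        length = 0
        while i + length < len(x) and x[i : i + length + 1] in y:
            length += 1
        total += length
    return total / len(x)

if __name__ == "__main__":
    print(acs("ACGTACGT", "ACGTTT"))   # 2.5 for this toy pair
```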


2020, Vol. 68 (5), pp. 1517-1537
Author(s): Hussein Hazimeh, Rahul Mazumder

In several scientific and industrial applications, it is desirable to build compact, interpretable learning models in which the output depends on a small number of input features. Recent work has shown that such best-subset selection-type problems can be solved with modern mixed integer optimization solvers. Despite their promise, such solvers often come at a steep computational price compared with open-source, efficient specialized solvers based on convex optimization and greedy heuristics. In “Fast Best-Subset Selection: Coordinate Descent and Local Combinatorial Optimization Algorithms,” Hussein Hazimeh and Rahul Mazumder push the frontiers of computation for best-subset-type problems. Their algorithms deliver near-optimal solutions for problems with up to a million features, in times comparable with those of fast convex solvers. Their work suggests that principled optimization methods play a key role in devising tools central to interpretable machine learning, and can help in gaining a deeper understanding of the statistical properties of such models.
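The sketch below illustrates the general flavour of such specialised solvers: cyclic coordinate descent with a hard-thresholding step for the L0-penalised least-squares objective ||y - Xb||^2 + lam * ||b||_0. It is a generic illustration under simple assumptions (no ridge or L1 term, fixed penalty), not the algorithm from the paper.

```python
# Hedged sketch: coordinate descent with hard thresholding for L0 regression.
import numpy as np

def l0_coordinate_descent(X, y, lam, n_iter=100):
    n, p = X.shape
    col_sq = (X ** 2).sum(axis=0)           # per-coordinate curvature
    beta = np.zeros(p)
    r = y - X @ beta                         # residual kept up to date
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * beta[j]           # remove coordinate j's contribution
            bj = X[:, j] @ r / col_sq[j]     # unpenalised one-dimensional minimiser
            # keep the coordinate only if it lowers the penalised loss by more than lam
            beta[j] = bj if col_sq[j] * bj ** 2 > lam else 0.0
            r -= X[:, j] * beta[j]
    return beta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 50))
    true = np.zeros(50)
    true[:3] = [2.0, -1.5, 1.0]
    y = X @ true + 0.1 * rng.standard_normal(200)
    print(np.nonzero(l0_coordinate_descent(X, y, lam=1.0))[0])
```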


2020, Vol. 12 (14), pp. 2285
Author(s): Joshua E. Hammond, Cory A. Vernon, Trent J. Okeson, Benjamin J. Barrett, Samuel Arce, ...

Remote sensing with unmanned aerial vehicles (UAVs) facilitates photogrammetry for environmental and infrastructural monitoring. Models can be created at lower computational cost by reducing the number of photos required. Optimal camera locations for reducing the number of photos needed for structure-from-motion (SfM) are determined through eight mathematical set-covering algorithms, subject to constraints on solve time. The algorithms examined are: traditional greedy, reverse greedy, carousel greedy (CG), linear programming, particle swarm optimization, simulated annealing, genetic, and ant colony optimization. Coverage and solve time are investigated for these algorithms. CG is the best method for choosing optimal camera locations, as it balances the number of photos required against the time required to calculate camera positions, as shown through an analysis similar to a Pareto front. CG obtains a statistically significant 3.2 fewer cameras per modeled area than the base greedy algorithm while requiring just one additional order of magnitude of solve time. For comparison, linear programming is capable of using fewer cameras than base greedy but takes at least three orders of magnitude longer to solve. A grid independence study serves as a sensitivity analysis of the CG algorithm's α (iteration number) and β (percentage to be recalculated) parameters, which adjust the traditional greedy heuristic, and a case study at the Rock Canyon collection dike in Provo, UT, USA, compares the results of all eight algorithms and the uniqueness (in terms of percentage comparisons based on location/angle metadata and qualitative visual comparison) of each selected set. Though this specific study uses SfM, the principles could apply to other instruments such as multi-spectral cameras or aerial LiDAR.
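The sketch below illustrates the two ingredients on a toy instance: plain greedy set cover, and a simplified carousel-greedy refinement that retires the oldest choices and re-selects greedily for a few extra rounds, controlled by α and β. The candidate camera sets and the exact α/β handling are illustrative assumptions, not the study's implementation.

```python
# Hedged sketch: greedy set cover plus a simplified carousel-greedy refinement.
def greedy_cover(universe, candidates, chosen=None):
    chosen = list(chosen or [])
    covered = set().union(*(candidates[c] for c in chosen)) if chosen else set()
    while covered != universe:
        best = max(candidates, key=lambda c: len(candidates[c] - covered))
        if not candidates[best] - covered:
            break                                   # remaining points uncoverable
        chosen.append(best)
        covered |= candidates[best]
    return chosen

def carousel_greedy(universe, candidates, alpha=2, beta=0.2):
    chosen = greedy_cover(universe, candidates)
    keep = chosen[: int(len(chosen) * (1 - beta))]  # drop the newest beta share
    for _ in range(alpha * len(chosen)):
        keep = keep[1:]                             # retire the oldest pick
        covered = set().union(*(candidates[c] for c in keep)) if keep else set()
        best = max(
            (c for c in candidates if c not in keep),
            key=lambda c: len(candidates[c] - covered),
        )
        keep.append(best)
    return greedy_cover(universe, candidates, chosen=keep)

if __name__ == "__main__":
    pts = set(range(8))
    cams = {"A": {0, 1, 2}, "B": {2, 3, 4}, "C": {4, 5, 6},
            "D": {6, 7, 0}, "E": {1, 3, 5, 7}}
    print(carousel_greedy(pts, cams))
```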

