NP-Hard Problem
Recently Published Documents


TOTAL DOCUMENTS: 398 (five years: 183)

H-INDEX: 13 (five years: 3)


2022 ◽  
Vol 13 (2) ◽  
pp. 1-22
Author(s):  
Sarab Almuhaideb ◽  
Najwa Altwaijry ◽  
Shahad AlMansour ◽  
Ashwaq AlMklafi ◽  
AlBandery Khalid AlMojel ◽  
...  

The Maximum Clique Problem (MCP) is a classical NP-hard problem that has gained considerable attention due to its numerous real-world applications and theoretical complexity. It is inherently computationally complex, and so exact methods may require prohibitive computing time. Nature-inspired meta-heuristics have proven their utility in solving many NP-hard problems. In this research, we propose a simulated annealing-based algorithm, which we call the Clique Finder algorithm, to solve the MCP. Our algorithm uses a logarithmic cooling schedule and two moves that are selected in an adaptive manner. The objective (error) function is the total number of missing links in the clique, which is to be minimized. The proposed algorithm was evaluated using benchmark graphs from the open-source library DIMACS, and results show that the proposed algorithm had a high success rate.
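
As a concrete illustration of the kind of approach this abstract describes, the snippet below is a minimal simulated-annealing search for a clique of a given size, using a logarithmic cooling schedule and the missing-links error as the objective. The function names, the single swap move, and all parameter values are illustrative assumptions; the paper's two adaptive moves and tuning are not reproduced here.

```python
import math
import random

def missing_links(graph, subset):
    """Objective (error): number of vertex pairs in `subset` not joined by an
    edge in `graph` (a dict mapping each vertex to its adjacency set).
    A value of 0 means `subset` is a clique."""
    nodes = list(subset)
    return sum(1 for i in range(len(nodes)) for j in range(i + 1, len(nodes))
               if nodes[j] not in graph[nodes[i]])

def sa_clique_search(graph, k, steps=20000, c=1.0, seed=0):
    """Simulated-annealing search for a clique of size k with a logarithmic
    cooling schedule T_t = c / log(1 + t). The move simply swaps one vertex of
    the current subset for an outside vertex."""
    rng = random.Random(seed)
    vertices = list(graph)
    current = set(rng.sample(vertices, k))
    err = missing_links(graph, current)
    best, best_err = set(current), err
    for t in range(1, steps + 1):
        temp = c / math.log(1 + t)
        out_v = rng.choice(list(current))
        in_v = rng.choice([v for v in vertices if v not in current])
        candidate = (current - {out_v}) | {in_v}
        cand_err = missing_links(graph, candidate)
        # Accept downhill moves always; uphill moves with Boltzmann probability.
        if cand_err <= err or rng.random() < math.exp((err - cand_err) / temp):
            current, err = candidate, cand_err
            if err < best_err:
                best, best_err = set(current), err
            if best_err == 0:
                break
    return best, best_err
```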


Mathematics ◽  
2022 ◽  
Vol 10 (2) ◽  
pp. 274
Author(s):  
Álvaro Gómez-Rubio ◽  
Ricardo Soto ◽  
Broderick Crawford ◽  
Adrián Jaramillo ◽  
David Mancilla ◽  
...  

In the world of optimization, especially concerning metaheuristics, solving complex problems represented by large, heavily constrained instances can be difficult. This is mainly because efficient solutions that can solve such optimization problems, which arise in many industries, in adequate time are hard to implement. Big data techniques have demonstrated their efficiency in addressing many information-management concerns. In this paper, an approach based on multiprocessing is proposed wherein clustering and parallelism are used together to improve the search process of metaheuristics when solving large instances of complex optimization problems, incorporating collaborative elements that enhance the quality of the solution. The proposal uses machine learning algorithms to improve the segmentation of the search space. In particular, two different clustering methods drawn from machine learning are implemented within bio-inspired algorithms to intelligently initialize their solution populations and thereby organize the search from the outset. The results show that this approach is competitive with other techniques in solving a large set of cases of a well-known NP-hard problem without incorporating too much additional complexity into the metaheuristic algorithms.
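
The sketch below shows one way clustering can be used to segment the search space and seed a metaheuristic's initial population, in the spirit of the approach described above. It uses scikit-learn's K-means; the pool size, number of clusters, and per-cluster selection rule are assumptions for illustration, not the paper's configuration.

```python
import numpy as np
from sklearn.cluster import KMeans

def clustered_initial_population(objective, bounds, pop_size,
                                 n_clusters=4, oversample=10, seed=0):
    """Sample a large pool of random candidates, cluster it with K-means to
    segment the search space, and build the initial population from the best
    candidates of each cluster so every region contributes members."""
    rng = np.random.default_rng(seed)
    low, high = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    pool = rng.uniform(low, high, size=(pop_size * oversample, low.size))
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(pool)
    population = []
    per_cluster = pop_size // n_clusters
    for c in range(n_clusters):
        members = pool[labels == c]
        # Keep the best `per_cluster` candidates of this cluster by objective value
        # (a full implementation would top up any shortfall with random candidates).
        order = np.argsort([objective(x) for x in members])
        population.extend(members[order[:per_cluster]])
    return np.array(population[:pop_size])
```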


Animals ◽  
2022 ◽  
Vol 12 (2) ◽  
pp. 201
Author(s):  
Maoxuan Miao ◽  
Jinran Wu ◽  
Fengjing Cai ◽  
You-Gan Wang

Selecting the minimal best subset out of a huge number of candidate factors influencing the response is a fundamental and very challenging NP-hard problem: many redundant genes easily lead to over-fitting, missing an important gene can be even more detrimental to predictions, and exhaustive search is computationally prohibitive. We propose a modified memetic algorithm (MA) based on an improved splicing method to overcome the weak exploitation capability of the traditional genetic algorithm and to reduce the dimension of the predictor variables. The new algorithm accelerates the search for the minimal best subset of genes by incorporating the improved splicing method into a new local search operator. The improvement also rests on two further novel aspects: (a) subsets of genes are updated iteratively by splicing until the loss function no longer decreases, which increases the probability of selecting the true subset of genes; and (b) add and del operators based on backward sacrifice are introduced into the splicing method to limit the size of the gene subsets. The splicing method also replaces the mutation operator to enhance exploitation and is used to improve the initial individuals, making the search more efficient. A dataset of the body weight of Hu sheep was used to evaluate the modified MA against the genetic algorithm. According to our experimental results, the proposed optimizer obtains a better minimal subset of genes within a few iterations than all considered algorithms, including the most advanced adaptive best-subset selection algorithm.
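
A hypothetical, heavily simplified sketch of a splicing-style local search for gene-subset selection is shown below: genes are exchanged in and out of the current subset, an exchange is kept only when it lowers the loss, and the search stops when no exchange improves it further. The function and its stopping rule are illustrative assumptions and do not reproduce the paper's add/del operators or backward sacrifice.

```python
def splicing_local_search(loss, selected, candidates):
    """Simplified splicing-style local search: `loss` maps a set of gene
    indices to a validation loss, `selected` is the starting subset, and
    `candidates` is the full pool of gene indices."""
    selected = set(selected)
    improved = True
    while improved:
        improved = False
        best_loss = loss(selected)
        for out_gene in list(selected):
            for in_gene in candidates:
                if in_gene in selected:
                    continue
                trial = (selected - {out_gene}) | {in_gene}
                trial_loss = loss(trial)
                # Keep an exchange only when it strictly reduces the loss.
                if trial_loss < best_loss:
                    selected, best_loss, improved = trial, trial_loss, True
    return selected
```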


2022 ◽  
Vol 13 (1) ◽  
pp. 0-0

The synergistic confluence of pervasive sensing, computing, and networking is generating heterogeneous data at unprecedented scale and complexity. Cloud computing has emerged in the last two decades as a unique storage and computing resource supporting a diverse assortment of applications, and numerous organizations are migrating to the cloud to store and process their information. When cloud infrastructures and resources are insufficient to satisfy end-users' requests, scheduling mechanisms are required. Task scheduling, especially in a distributed and heterogeneous system, is an NP-hard problem, since various task parameters must be considered for appropriate scheduling. In this paper, we propose a hybrid particle swarm optimization (PSO) and extremal optimization-based approach to task scheduling in the cloud. The algorithm optimizes makespan, an important criterion when scheduling a number of tasks on different virtual machines. Experiments on synthetic and real-life workloads show that the method successfully schedules tasks and outperforms many state-of-the-art methods.
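
For readers unfamiliar with the objective, the sketch below computes the makespan of a candidate task-to-VM assignment, the quantity a hybrid PSO/extremal-optimization search of this kind would try to minimize. The task lengths and VM speeds are made-up figures for illustration.

```python
import numpy as np

def makespan(assignment, task_lengths, vm_speeds):
    """Makespan of a schedule: each task runs on its assigned VM, a VM executes
    its tasks sequentially, and the makespan is the completion time of the
    busiest VM."""
    finish = np.zeros(len(vm_speeds))
    for task, vm in enumerate(assignment):
        finish[vm] += task_lengths[task] / vm_speeds[vm]
    return finish.max()

# Toy usage: 6 tasks on 3 VMs (assumed figures, not from the paper).
lengths = np.array([40.0, 25.0, 60.0, 10.0, 30.0, 55.0])
speeds = np.array([1.0, 2.0, 1.5])
print(makespan([0, 1, 2, 0, 1, 2], lengths, speeds))
```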


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Amal F. A. Iswisi ◽  
Oğuz Karan ◽  
Javad Rahebi

The damaged areas of brain tissue can be extracted using segmentation methods, most of which are based on the integration of machine learning and data mining techniques. An important segmentation approach is to use clustering techniques, especially the fuzzy C-means (FCM) clustering technique, which is sufficiently accurate and not overly sensitive to imaging noise. The FCM technique is therefore appropriate for multiple sclerosis diagnosis, although segmentation is affected by the selection of cluster centers, which are difficult to select because doing so optimally is an NP-hard problem. In this study, the Harris Hawks optimization (HHO) algorithm was used for the optimal selection of cluster centers in the segmentation and FCM algorithms. HHO is more accurate than other conventional algorithms such as the genetic algorithm and particle swarm optimization. In the proposed method, every membership matrix is treated as a hawk, i.e., an HHO member. The next step is to generate a population of hawks (membership matrices), the most optimal of which is selected to find the optimal cluster centers and decrease the multiple sclerosis clustering error. According to tests conducted on a number of brain MRIs, the proposed method outperformed FCM clustering and other techniques such as the k-NN algorithm, support vector machines, and hybrid data mining methods in accuracy.
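
The sketch below evaluates the standard fuzzy C-means objective for one candidate solution ("hawk") encoded as a membership matrix, which is the fitness an optimizer such as HHO would minimize in the scheme described above. It covers only the objective; the HHO update rules are not reproduced, and the toy data are assumptions.

```python
import numpy as np

def fcm_fitness(U, X, m=2.0):
    """Fitness of one hawk: U is a fuzzy membership matrix (clusters x samples,
    columns summing to 1), X holds the data points (samples x features).
    Cluster centers are derived from U in the usual FCM way and the standard
    FCM objective J_m is returned."""
    Um = U ** m                                   # fuzzified memberships
    centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
    # Squared distances between every data point and every cluster center.
    d2 = ((X[None, :, :] - centers[:, None, :]) ** 2).sum(axis=2)
    return float((Um * d2).sum())

# Toy usage with random intensities standing in for MRI voxels (assumed data).
rng = np.random.default_rng(1)
X = rng.random((200, 1))                          # 200 one-dimensional samples
U = rng.random((3, 200))
U /= U.sum(axis=0, keepdims=True)                 # normalize columns to sum to 1
print(fcm_fitness(U, X))
```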


2021 ◽  
Vol 2 (2) ◽  
pp. 165-185
Author(s):  
Md Moin Uddin Chowdhury ◽  
Ismail Guvenc ◽  
Walid Saad ◽  
Arupjyoti Bhuyan

To integrate unmanned aerial vehicles (UAVs) in future large-scale deployments, a new wireless communication paradigm, namely, the cellular-connected UAV has recently attracted interest. However, the line-of-sight dominant air-to-ground channels along with the antenna pattern of the cellular ground base stations (GBSs) introduce critical interference issues in cellular-connected UAV communications. In particular, the complex antenna pattern and the ground reflection (GR) from the down-tilted antennas create both coverage holes and patchy coverage for the UAVs in the sky, which leads to unreliable connectivity from the underlying cellular network. To overcome these challenges, in this paper, we propose a new cellular architecture that employs an extra set of co-channel antennas oriented towards the sky to support UAVs on top of the existing down-tilted antennas for ground user equipment (GUE). To model the GR stemming from the down-tilted antennas, we propose a path-loss model, which takes both antenna radiation pattern and configuration into account. Next, we formulate an optimization problem to maximize the minimum signal-to-interference ratio (SIR) of the UAVs by tuning the up-tilt (UT) angles of the up-tilted antennas. Since this is an NP-hard problem, we propose a genetic algorithm (GA) based heuristic method to optimize the UT angles of these antennas. After obtaining the optimal UT angles, we integrate the 3GPP Release-10 specified enhanced inter-cell interference coordination (eICIC) to reduce the interference stemming from the down-tilted antennas. Our simulation results based on the hexagonal cell layout show that the proposed interference mitigation method can ensure higher minimum SIRs for the UAVs over baseline methods while creating minimal impact on the SIR of GUEs.
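
A generic genetic-algorithm skeleton for the described up-tilt search is sketched below: each chromosome holds one up-tilt angle per up-tilted antenna, and fitness is the minimum UAV SIR returned by a user-supplied evaluator. The evaluator `min_sir`, the angle range, and the GA operators and settings are placeholders, not the paper's configuration.

```python
import random

def ga_optimize_uptilt(min_sir, n_antennas, angle_range=(0.0, 45.0),
                       pop_size=30, generations=100, mut_rate=0.1, seed=0):
    """Maximize the minimum UAV SIR by evolving vectors of up-tilt angles.
    `min_sir(angles)` is a user-supplied simulator returning the minimum SIR
    over all UAVs for the given antenna up-tilt angles (degrees)."""
    rng = random.Random(seed)
    lo, hi = angle_range
    pop = [[rng.uniform(lo, hi) for _ in range(n_antennas)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=min_sir, reverse=True)
        parents = scored[:pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_antennas) if n_antennas > 1 else 0
            child = a[:cut] + b[cut:]               # one-point crossover
            if rng.random() < mut_rate:             # random-reset mutation
                child[rng.randrange(n_antennas)] = rng.uniform(lo, hi)
            children.append(child)
        pop = parents + children
    return max(pop, key=min_sir)
```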


2021 ◽  
Vol 11 (23) ◽  
pp. 11202
Author(s):  
Xiaojuan Ran ◽  
Xiangbing Zhou ◽  
Mu Lei ◽  
Worawit Tepsan ◽  
Wu Deng

With the development of cities, urban congestion is a nearly unavoidable problem for almost every large-scale city. Road planning is an effective means of alleviating urban congestion; it is a classical non-deterministic polynomial-time (NP) hard problem and has become an important research hotspot in recent years. The K-means clustering algorithm is an iterative clustering analysis algorithm that scholars have regarded as an effective means of solving urban road planning problems for the past several decades; however, it is very difficult to determine the number of clusters, and the algorithm is sensitive to the initialization of the cluster centers. In order to solve these problems, a novel K-means clustering algorithm based on a noise algorithm is developed in this paper to capture urban hotspots. The noise algorithm is employed to randomly enhance the attribution of data points and the output of the clustering by adding a noise judgment, in order to automatically obtain the number of clusters for the given data and initialize the cluster centers. Four unsupervised evaluation indexes, namely DB, PBM, SC, and SSE, are used directly to evaluate and analyze the clustering results, and the nonparametric Wilcoxon statistical analysis method is employed to verify the distribution states of and differences between the clustering results. Finally, five taxi GPS datasets from Aracaju (Brazil), San Francisco (USA), Rome (Italy), Chongqing (China), and Beijing (China) are used to test and verify the effectiveness of the proposed noise K-means clustering algorithm by comparing it with fuzzy C-means, K-means, and K-means++ approaches. The comparative experimental results show that the noise algorithm can reasonably obtain the number of clusters and initialize the cluster centers, and that the proposed noise K-means clustering algorithm demonstrates better clustering performance, accurately obtains clustering results, and effectively captures urban hotspots.
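
As an illustrative stand-in for automatic selection of the number of clusters (the paper's noise algorithm itself is not reproduced here), the snippet below picks k for K-means by maximizing the silhouette coefficient (SC), one of the four unsupervised indexes used in the study.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def pick_k_by_silhouette(points, k_range=range(2, 11), seed=0):
    """Run K-means for each candidate k and keep the clustering with the best
    silhouette coefficient; the centers of the winning clustering can be read
    as candidate hotspot locations."""
    best = None
    for k in k_range:
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(points)
        sc = silhouette_score(points, km.labels_)
        if best is None or sc > best[0]:
            best = (sc, k, km.cluster_centers_)
    return best  # (silhouette, chosen k, hotspot centers)
```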


Author(s):  
Rim van Wersch ◽  
Steven Kelk ◽  
Simone Linz ◽  
Georgios Stamoulis

Phylogenetic trees are leaf-labelled trees used to model the evolution of species. Here we explore the practical impact of kernelization (i.e. data reduction) on the NP-hard problem of computing the TBR distance between two unrooted binary phylogenetic trees. This problem is better known in the literature as the maximum agreement forest problem, where the goal is to partition the two trees into a minimum number of common, non-overlapping subtrees. We have implemented two well-known reduction rules, the subtree and chain reductions, and five more recent, theoretically stronger reduction rules, and compare the reduction achieved with and without the stronger rules. We find that the new rules yield smaller reduced instances and thus have clear practical added value. In many cases they also cause the TBR distance to decrease in a controlled fashion, which can further facilitate solving the problem in practice. Next, we compare the achieved reduction to the known worst-case theoretical bounds of 15k - 9 and 11k - 9, respectively, on the number of leaves of the two reduced trees, where k is the TBR distance, observing in both cases a far larger reduction in practice. As a by-product of our experimental framework we obtain a number of new insights into the actual computation of TBR distance. We find, for example, that very strong lower bounds on TBR distance can be obtained efficiently by randomly sampling certain carefully constructed partitions of the leaf labels, and we identify instances which seem particularly challenging to solve exactly. The reduction rules have been implemented within our new solver Tubro, which combines kernelization with an Integer Linear Programming (ILP) approach. Tubro also incorporates a number of additional features, such as a cluster reduction and a practical upper-bounding heuristic, and it can leverage combinatorial insights emerging from the proofs of correctness of the reduction rules to simplify the ILP.


2021 ◽  
Author(s):  
Ali Fattahi ◽  
Sriram Dasu ◽  
Reza Ahmadi

We study a new parts-procurement planning problem that is motivated by a global auto manufacturer (GAM) that practices mass customization. Because of the astronomically large number of producible configurations, forecasting their demand is impossible. Instead, firms forecast demand for the options that constitute a vehicle. Requirements for many parts (up to 60%) are based on the combinations of options in a fully configured vehicle. The options' forecast, however, does not map to a unique configuration-level forecast. As a result, the options' forecast translates into ranges for many parts' requirements. The combined range of a set of parts is not always equal to the sum of the component ranges; it may be less. Determining parts ranges is a large-scale NP-hard problem. Large ranges, and inaccurate calculation of these ranges, can result in excess inventory, inventory shortages, and suboptimal flexibility levels. We model and analyze the problem of allocating parts to suppliers and accurately computing the ranges so as to minimize the procurement costs that arise because of the ranges. The range costs are assumed to be convex and increasing. We perform extensive numerical analysis using a large set of randomly generated instances as well as eight industrial instances received from GAM to establish the quality of our approximation framework. Our proposed approach significantly reduces the error in range estimates relative to current industry practice. In addition, the proposed approach for allocating parts to suppliers reduces joint-parts ranges by an average of 29.87% relative to current practice. This paper was accepted by Jeannette Song, operations management.
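
A toy numerical example of the key observation, that the combined range of a set of parts can be much tighter than the sum of the component ranges, is given below. All figures are assumed for illustration and are not GAM data.

```python
# 100 vehicles are planned; the option forecast says 60 take option A and 50
# take option B. Part P1 is used only when A and B occur together, part P2
# only when A occurs without B.
total, fA, fB = 100, 60, 50

p1_min, p1_max = max(0, fA + fB - total), min(fA, fB)   # bounds on A-and-B count
p2_min, p2_max = max(0, fA - fB), min(fA, total - fB)   # bounds on A-without-B count
print(f"P1 range: [{p1_min}, {p1_max}]")                # [10, 50]
print(f"P2 range: [{p2_min}, {p2_max}]")                # [10, 50]

# Jointly, P1 + P2 is simply the number of A-vehicles, which the forecast fixes
# at 60, so the combined range [60, 60] is much tighter than the naive sum
# [20, 100] of the individual ranges.
print(f"combined range: [{fA}, {fA}] vs naive sum [{p1_min + p2_min}, {p1_max + p2_max}]")
```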

