A note on no-free-lunch theorem

Author(s):  
Lidong Wu

The No-Free-Lunch theorem is an interesting and important theoretical result in machine learning. Drawing on the philosophy behind the No-Free-Lunch theorem, we discuss at length the limitations of data-driven approaches to solving NP-hard problems.

2022 ◽  
Vol 65 (1) ◽  
pp. 76-85
Author(s):  
Lance Fortnow

Advances in algorithms, machine learning, and hardware can help tackle many NP-hard problems once thought impossible.


Machine learning and artificial intelligence have evolved beyond simple hype and have integrated themselves in business and in popular conversation as an increasing number of smart applications profoundly transform the way we work and live. This article defines machine learning in terms of potential benefits and pitfalls for a nontechnical audience, and gives examples of popular and powerful machine learning algorithms: k-means clustering, principal component analysis, and artificial neural networks. Three important philosophical challenges of machine learning are introduced: the no free lunch theorem, the curse of dimensionality, and the bias–variance trade-off.
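As an illustration of the first algorithm this abstract names, the following is a minimal pure-Python sketch of k-means clustering (Lloyd's algorithm). The data points, the deterministic initialisation, and the iteration count are illustrative assumptions, not details taken from the article.

```python
# Minimal k-means (Lloyd's algorithm) on made-up 2-D points.
# Two clear clusters: one near (0, 0), one near (5, 5).

def kmeans(points, k, iters=20):
    # Naive deterministic initialisation: evenly spaced points from the list.
    step = max(1, len(points) // k)
    centroids = [points[i * step] for i in range(k)]
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: (p[0] - centroids[c][0]) ** 2
                                                + (p[1] - centroids[c][1]) ** 2)
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster.
        for c, members in enumerate(clusters):
            if members:
                centroids[c] = (sum(p[0] for p in members) / len(members),
                                sum(p[1] for p in members) / len(members))
    return centroids

pts = [(0.0, 0.1), (0.2, -0.1), (0.1, 0.0), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
cents = kmeans(pts, k=2)
```

On this toy data the two centroids converge to the means of the two point groups, roughly (0.1, 0.0) and (5.03, 5.0).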


2019 ◽  
Vol 28 (2) ◽  
pp. 121-134 ◽  
Author(s):  
ANDREAS HOLZINGER ◽  
MARKUS PLASS ◽  
KATHARINA HOLZINGER ◽  
GLORIA CERASELA CRIȘAN ◽  
CAMELIA-M. PINTEA ◽  
...  

The ultimate goal of the Machine Learning (ML) community is to develop algorithms that can automatically learn from data, extract knowledge, and make decisions without any human intervention. Specifically, automatic Machine Learning (aML) approaches show impressive success, e.g. in speech/image recognition or in autonomous driving and the smart-car industry. Recent results even demonstrate, intriguingly, that deep learning applied to automatic classification of skin lesions is on par with the performance of dermatologists and outperforms average human performance. As human perception is inherently limited to 3D environments, such approaches can discover patterns, e.g. that two objects are similar, in arbitrarily high-dimensional spaces, which no human is able to do. Humans can deal with only limited amounts of data at a time, whilst “big data” is not only beneficial but necessary for aML. However, in health informatics there are few large data sets, and aML approaches often suffer from insufficient training samples. Many problems are computationally hard, e.g. subspace clustering, k-anonymization, or protein folding. Here, interactive machine learning (iML) can be used successfully, as a human-in-the-loop helps to reduce a huge search space through heuristic selection of suitable samples. This can reduce the complexity of NP-hard problems through the knowledge brought in by a human agent involved in the learning algorithm. A further strong motivation for iML is that standard black-box approaches lack transparency and hence do not foster trust in, and acceptance of, ML among end-users. Above all, rising legal and privacy requirements, e.g. the European General Data Protection Regulation (GDPR), make black-box approaches difficult to use, because they often cannot explain why a decision has been made, e.g. why two objects are similar. All these reasons motivate the idea of opening the black box into a glass box.
In this paper, we present experiments that demonstrate the effectiveness of the iML human-in-the-loop model, in particular when using a glass-box instead of a black-box model, thus enabling a human to interact directly with a learning algorithm. We selected the Ant Colony System (ACS) algorithm and applied it to the Traveling Salesman Problem (TSP). The TSP is a good example because it is highly relevant to health informatics, for instance to the protein folding problem, and is thus of enormous importance for cancer research. Finally, fundamental ML research may also benefit from studies of learning from observation, i.e. of how humans extract so much from so little data.
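To make the ACS-on-TSP setup concrete, here is a minimal sketch of a single ant's pheromone-guided tour construction on a tiny 4-city instance. The distance matrix, parameters (alpha, beta), and uniform initial pheromone are illustrative assumptions; the paper's actual experiments, instance sizes, and human-in-the-loop interaction are not reproduced here.

```python
# One Ant Colony System-style tour construction for a toy 4-city TSP.
# Cities, distances, and parameters are made up for illustration.
import random

dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]
n = len(dist)
pheromone = [[1.0] * n for _ in range(n)]  # uniform initial pheromone trail
alpha, beta = 1.0, 2.0  # influence of pheromone vs. heuristic (1/distance)

def build_tour(rng):
    # Start at city 0; pick each next city with probability proportional to
    # pheromone^alpha * (1/distance)^beta, as in ant-colony methods.
    tour = [0]
    unvisited = set(range(1, n))
    while unvisited:
        i = tour[-1]
        cities = list(unvisited)
        weights = [pheromone[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                   for j in cities]
        j = rng.choices(cities, weights=weights)[0]
        tour.append(j)
        unvisited.remove(j)
    return tour

rng = random.Random(42)
tour = build_tour(rng)
length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
```

A full ACS would run many ants, evaporate and deposit pheromone between iterations, and (in the iML setting) let a human adjust the pheromone trails directly.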


2016 ◽  
Vol 28 (1) ◽  
pp. 216-228 ◽  
Author(s):  
David Gómez ◽  
Alfonso Rojas

A sizable amount of research has been done to improve mechanisms for knowledge extraction, such as machine learning classification and regression. Quite unintuitively, the no-free-lunch (NFL) theorem states that all optimization strategies perform equally well when averaged over all possible problems. This fact seems to clash with the effort put forth toward better algorithms. This letter explores empirically the effect of the NFL theorem on some popular machine learning classification techniques over real-world data sets.
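The averaging claim at the heart of the NFL theorem can be checked exhaustively at toy scale: enumerate every possible boolean target function on 3-bit inputs and compare the off-training-set accuracy of two different learners. The two learners and the train/test split below are illustrative choices, not those used in the letter.

```python
# NFL sanity check: averaged over ALL boolean target functions on 3-bit
# inputs, two very different learners achieve identical off-training-set
# accuracy. Learners and split are illustrative.
from itertools import product

inputs = list(product([0, 1], repeat=3))   # all 8 possible 3-bit inputs
train, test = inputs[:4], inputs[4:]       # fixed train/test split

def majority_learner(labels):
    # Predict the majority training label everywhere (ties -> 1).
    guess = 1 if sum(labels.values()) * 2 >= len(labels) else 0
    return lambda x: guess

def constant_learner(labels):
    # Ignore the training data entirely and always predict 0.
    return lambda x: 0

def avg_ots_accuracy(learner):
    total = 0.0
    # Enumerate every target function f: {0,1}^3 -> {0,1} (2^8 = 256 of them).
    for bits in product([0, 1], repeat=len(inputs)):
        f = dict(zip(inputs, bits))
        h = learner({x: f[x] for x in train})
        total += sum(h(x) == f(x) if callable(f) else h(x) == f[x]
                     for x in test) / len(test)
    return total / 2 ** len(inputs)

acc_major = avg_ots_accuracy(majority_learner)
acc_const = avg_ots_accuracy(constant_learner)
# Both averages come out to exactly 0.5: off the training set, every
# prediction is right for exactly half of all target functions.
```

Real-world data sets are of course not drawn uniformly from all possible problems, which is why, as the letter investigates, some algorithms do beat others in practice.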


Algorithms ◽  
2021 ◽  
Vol 14 (6) ◽  
pp. 187
Author(s):  
Aaron Barbosa ◽  
Elijah Pelofske ◽  
Georg Hahn ◽  
Hristo N. Djidjev

Quantum annealers, such as the device built by D-Wave Systems, Inc., offer a way to compute solutions of NP-hard problems that can be expressed in Ising or quadratic unconstrained binary optimization (QUBO) form. Although such solutions are typically of very high quality, problem instances are usually not solved to optimality due to imperfections of the current generation of quantum annealers. In this contribution, we aim to understand some of the factors contributing to the hardness of a problem instance, and to use machine learning models to predict the accuracy of the D-Wave 2000Q annealer for solving specific problems. We focus on the maximum clique problem, a classic NP-hard problem with important applications in network analysis, bioinformatics, and computational chemistry. By training a machine learning classification model on basic problem characteristics, such as the number of edges in the graph, or annealing parameters, such as the D-Wave's chain strength, we are able to rank certain features by their contribution to solution hardness, and we present a simple decision tree that predicts whether a problem will be solvable to optimality with the D-Wave 2000Q. We extend these results by training a machine learning regression model that predicts the clique size found by the D-Wave.
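For concreteness, this is a minimal sketch of the standard QUBO encoding of maximum clique that annealers of this kind accept: reward selecting vertices, and penalize selecting any non-adjacent pair. The 4-vertex graph and penalty weight are illustrative assumptions, and the minimum is found here by brute force rather than by calling any D-Wave API.

```python
# QUBO encoding of maximum clique on a toy graph, solved by brute force.
# Graph and penalty weight are illustrative, not from the paper.
from itertools import combinations, product

n = 4
edges = {(0, 1), (0, 2), (1, 2), (2, 3)}  # triangle 0-1-2 plus edge 2-3
penalty = 2.0  # must exceed 1 so that including a non-edge never pays off

# Q[i][i] = -1 rewards each selected vertex; Q[i][j] = penalty punishes
# selecting two vertices that are NOT connected (so the result is a clique).
Q = [[0.0] * n for _ in range(n)]
for i in range(n):
    Q[i][i] = -1.0
for i, j in combinations(range(n), 2):
    if (i, j) not in edges and (j, i) not in edges:
        Q[i][j] = penalty

def energy(x):
    # QUBO objective: x^T Q x over binary variables x_i in {0, 1}.
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

best = min(product([0, 1], repeat=n), key=energy)
clique = [i for i, xi in enumerate(best) if xi]
```

Minimizing the energy selects the triangle {0, 1, 2}, the maximum clique of this graph; an annealer samples low-energy states of the same objective instead of enumerating them.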


2010 ◽  
Vol 10 (1&2) ◽  
pp. 141-151
Author(s):  
S. Beigi

Although it is believed unlikely that NP-hard problems admit efficient quantum algorithms, it has been shown that a quantum verifier can solve NP-complete problems given a "short" quantum proof; more precisely, $\mathrm{NP} \subseteq \mathrm{QMA}_{\log}(2)$, where $\mathrm{QMA}_{\log}(2)$ denotes the class of quantum Merlin-Arthur games in which two unentangled provers send two logarithmic-size quantum witnesses to the verifier. The inclusion $\mathrm{NP} \subseteq \mathrm{QMA}_{\log}(2)$ was proved by Blier and Tapp, who gave a quantum Merlin-Arthur protocol for 3-coloring with perfect completeness and gap $1/24n^6$. Moreover, Aaronson et al. have shown the above inclusion with a constant gap by considering $\widetilde{O}(\sqrt{n})$ witnesses of logarithmic size. However, we still do not know whether $\mathrm{QMA}_{\log}(2)$ with a constant gap contains NP. In this paper, we show that 3-SAT admits a $\mathrm{QMA}_{\log}(2)$ protocol with gap $1/n^{3+\epsilon}$ for every constant $\epsilon > 0$.

