AutoFolio: An Automatically Configured Algorithm Selector (Extended Abstract)

Author(s):  
Marius Lindauer ◽  
Frank Hutter ◽  
Holger H. Hoos ◽  
Torsten Schaub

Algorithm selection (AS) techniques -- which involve choosing from a set of algorithms the one expected to solve a given problem instance most efficiently -- have substantially improved the state of the art in solving many prominent AI problems, such as SAT, CSP, ASP, MAXSAT and QBF. Although several AS procedures have been introduced, not too surprisingly, none of them dominates all others across all AS scenarios. Furthermore, these procedures have parameters whose optimal values vary across AS scenarios. In this extended abstract of our 2015 JAIR article of the same title, we summarize AutoFolio, which uses an algorithm configuration procedure to automatically select an AS approach and optimize its parameters for a given AS scenario. AutoFolio allows researchers and practitioners across a broad range of applications to exploit the combined power of many different AS methods and to automatically construct high-performance algorithm selectors. We demonstrate that AutoFolio was able to produce new state-of-the-art algorithm selectors for 7 well-studied AS scenarios and matches state-of-the-art performance statistically on all other scenarios. Compared to the best single algorithm for each AS scenario, AutoFolio achieved average speedup factors between 1.3 and 15.4.

2019 ◽  
Vol 27 (1) ◽  
pp. 3-45 ◽  
Author(s):  
Pascal Kerschke ◽  
Holger H. Hoos ◽  
Frank Neumann ◽  
Heike Trautmann

It has long been observed that for practically any computational problem that has been intensely studied, different instances are best solved using different algorithms. This is particularly pronounced for computationally hard problems, where in most cases, no single algorithm defines the state of the art; instead, there is a set of algorithms with complementary strengths. This performance complementarity can be exploited in various ways, one of which is based on the idea of selecting, from a set of given algorithms, for each problem instance to be solved the one expected to perform best. The task of automatically selecting an algorithm from a given set is known as the per-instance algorithm selection problem and has been intensely studied over the past 15 years, leading to major improvements in the state of the art in solving a growing number of discrete combinatorial problems, including propositional satisfiability and AI planning. Per-instance algorithm selection also shows much promise for boosting performance in solving continuous and mixed discrete/continuous optimisation problems. This survey provides an overview of research in automated algorithm selection, ranging from early and seminal works to recent and promising application areas. Different from earlier work, it covers applications to discrete and continuous problems, and discusses algorithm selection in context with conceptually related approaches, such as algorithm configuration, scheduling, or portfolio selection. Since informative and cheaply computable problem instance features provide the basis for effective per-instance algorithm selection systems, we also provide an overview of such features for discrete and continuous problems. Finally, we provide perspectives on future work in the area and discuss a number of open research challenges.
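The per-instance selection idea described in this survey can be sketched with a deliberately tiny example: store training instances with known features and per-algorithm runtimes, then, for a new instance, run the best algorithm of the most similar training instance (a 1-nearest-neighbour selector). All feature values, runtimes, and solver names below are invented for illustration; real selection systems use far richer features and models.

```python
# Minimal per-instance algorithm selection sketch (illustrative only;
# features, runtimes, and solver names are invented for this example).

def nearest_neighbor_selector(train, features):
    """Pick the best algorithm of the nearest training instance (1-NN)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(train, key=lambda inst: dist(inst["features"], features))
    runtimes = nearest["runtimes"]
    return min(runtimes, key=runtimes.get)  # algorithm with lowest runtime

# Training data: instance features plus measured runtimes per algorithm.
train = [
    {"features": (0.9, 0.1), "runtimes": {"solver_A": 2.0, "solver_B": 30.0}},
    {"features": (0.8, 0.2), "runtimes": {"solver_A": 3.0, "solver_B": 25.0}},
    {"features": (0.1, 0.9), "runtimes": {"solver_A": 40.0, "solver_B": 1.5}},
    {"features": (0.2, 0.8), "runtimes": {"solver_A": 35.0, "solver_B": 2.5}},
]

print(nearest_neighbor_selector(train, (0.85, 0.15)))  # instance like the first two
print(nearest_neighbor_selector(train, (0.15, 0.85)))  # instance like the last two
```

Informative and cheap-to-compute instance features, as the survey stresses, are what make even such a simple scheme effective.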


2015 ◽  
Vol 53 ◽  
pp. 745-778 ◽  
Author(s):  
Marius Lindauer ◽  
Holger H. Hoos ◽  
Frank Hutter ◽  
Torsten Schaub

Algorithm selection (AS) techniques -- which involve choosing from a set of algorithms the one expected to solve a given problem instance most efficiently -- have substantially improved the state of the art in solving many prominent AI problems, such as SAT, CSP, ASP, MAXSAT and QBF. Although several AS procedures have been introduced, not too surprisingly, none of them dominates all others across all AS scenarios. Furthermore, these procedures have parameters whose optimal values vary across AS scenarios. This holds specifically for the machine learning techniques that form the core of current AS procedures, and for their hyperparameters. Therefore, to successfully apply AS to new problems, algorithms and benchmark sets, two questions need to be answered: (i) how to select an AS approach and (ii) how to set its parameters effectively. We address both of these problems simultaneously by using automated algorithm configuration. Specifically, we demonstrate that we can automatically configure claspfolio 2, which implements a large variety of different AS approaches and their respective parameters in a single, highly-parameterized algorithm framework. Our approach, dubbed AutoFolio, allows researchers and practitioners across a broad range of applications to exploit the combined power of many different AS methods. We demonstrate AutoFolio can significantly improve the performance of claspfolio 2 on 8 out of the 13 scenarios from the Algorithm Selection Library, leads to new state-of-the-art algorithm selectors for 7 of these scenarios, and matches state-of-the-art performance (statistically) on all other scenarios. Compared to the best single algorithm for each AS scenario, AutoFolio achieves average speedup factors between 1.3 and 15.4.


1992 ◽  
Vol 36 (5) ◽  
pp. 821-828 ◽  
Author(s):  
K. H. Brown ◽  
D. A. Grose ◽  
R. C. Lange ◽  
T. H. Ning ◽  
P. A. Totta

2021 ◽  
Vol 14 (4) ◽  
pp. 1-28
Author(s):  
Tao Yang ◽  
Zhezhi He ◽  
Tengchuan Kou ◽  
Qingzheng Li ◽  
Qi Han ◽  
...  

Field-programmable Gate Arrays (FPGAs) are a high-performance computing platform for Convolutional Neural Network (CNN) inference. The Winograd algorithm, weight pruning, and quantization are widely adopted to reduce the storage and arithmetic overhead of CNNs on FPGAs. Recent studies strive to prune the weights in the Winograd domain; however, this results in irregular sparse patterns, leading to low parallelism and reduced utilization of resources. Moreover, few works discuss a suitable quantization scheme for Winograd. In this article, we propose a regular sparse pruning pattern for Winograd-based CNNs, namely the Sub-row-balanced Sparsity (SRBS) pattern, to overcome the challenge of irregular sparse patterns. We then develop a two-step hardware co-optimization approach to improve model accuracy under the SRBS pattern. Based on the pruned model, we implement mixed-precision quantization to further reduce the computational complexity of bit operations. Finally, we design an FPGA accelerator that exploits both the SRBS pattern, to eliminate low-parallelism computation and irregular memory accesses, and the mixed-precision quantization, to obtain a layer-wise bit width. Experimental results on VGG16/VGG-nagadomi with CIFAR-10 and ResNet-18/34/50 with ImageNet show up to 11.8×/8.67× and 8.17×/8.31×/10.6× speedup, and 12.74×/9.19× and 8.75×/8.81×/11.1× energy-efficiency improvement, respectively, compared with the state-of-the-art dense Winograd accelerator [20], with negligible loss of model accuracy. We also show that our design achieves a 4.11× speedup over the state-of-the-art sparse Winograd accelerator [19] on VGG16.
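As background for the Winograd-domain pruning discussed above, the sketch below shows the standard 1D Winograd minimal-filtering instance F(2,3), which produces two outputs of a 3-tap filter from a four-sample tile using four multiplications instead of six. The transform matrices are the standard ones from the minimal-filtering literature; the data values are arbitrary, and none of the article's pruning or quantization machinery is reproduced here.

```python
import numpy as np

# 1D Winograd F(2,3): two outputs of a 3-tap correlation (the "convolution"
# used in CNNs) from a 4-element tile, with 4 multiplies instead of 6.
# Standard minimal-filtering transform matrices.
BT = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=float)
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]])
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)

def winograd_f23(d, g):
    """d: input tile of 4 samples, g: 3-tap filter -> 2 outputs."""
    return AT @ ((G @ g) * (BT @ d))  # elementwise product in Winograd domain

d = np.array([1., 2., 3., 4.])
g = np.array([1., 0., -1.])

# Reference: direct valid correlation, out[i] = sum_k d[i+k] * g[k]
direct = np.array([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
                   d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])
print(winograd_f23(d, g), direct)  # the two results agree
```

Pruning "in the Winograd domain" means zeroing entries of the transformed filter `G @ g`; the SRBS pattern constrains where those zeros may fall so the hardware can exploit them regularly.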


1967 ◽  
Vol 71 (677) ◽  
pp. 342-343
Author(s):  
F. H. East

The Aviation Group of the Ministry of Technology (formerly the Ministry of Aviation) is responsible for spending a large part of the country's defence budget, both in research and development on the one hand and production or procurement on the other. In addition, it has responsibilities in many non-defence fields, mainly, but not exclusively, in aerospace. Few developments have been carried out entirely within the Ministry's own Establishments; almost all have required continuous co-operation between the Ministry and Industry. In the past the methods of management and collaboration and the relative responsibilities of the Ministry and Industry have varied with time, with the type of equipment to be developed, with the size of the development project and so on. But over the past ten years there has been a growing awareness of the need to put some system into the complex business of translating a requirement into a specification and a specification into a product within reasonable bounds of time and cost.


2020 ◽  
Author(s):  
Fei Qi ◽  
Zhaohui Xia ◽  
Gaoyang Tang ◽  
Hang Yang ◽  
Yu Song ◽  
...  

As an emerging field, Automated Machine Learning (AutoML) aims to reduce or eliminate manual operations that require expertise in machine learning. In this paper, a graph-based architecture is employed to represent flexible combinations of ML models, which provides a larger search space than tree-based and stacking-based architectures. Based on this representation, an evolutionary algorithm is proposed to search for the best architecture, with mutation and heredity operators as the key drivers of architecture evolution. Combined with Bayesian hyper-parameter optimization, the proposed approach can automate the machine-learning workflow. On the PMLB dataset, the proposed approach shows state-of-the-art performance compared with TPOT, Autostacker, and auto-sklearn. Some of the optimized models have complex structures that are difficult to obtain through manual design.
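The evolutionary search loop can be illustrated with a toy (1+1)-style sketch. Here a "pipeline" is just a list of operation names and the fitness function is a stand-in for cross-validated model quality, so everything below (operation names, target recipe, mutation operators) is invented for illustration; the paper's graph representation and heredity operators are not reproduced.

```python
import random

# Toy evolutionary architecture search (illustrative only). Fitness rewards
# matching a hidden "best" recipe, standing in for validation accuracy.

OPS = ["scale", "select", "poly", "linear", "tree"]
TARGET = ["scale", "poly", "linear"]  # hypothetical best pipeline

def fitness(pipeline):
    # Positional matches with the target, penalized by length mismatch.
    return sum(a == b for a, b in zip(pipeline, TARGET)) - abs(len(pipeline) - len(TARGET))

def mutate(pipeline):
    p = list(pipeline)
    op = random.choice(["swap", "add", "drop"])
    if op == "add" or not p:
        p.insert(random.randrange(len(p) + 1), random.choice(OPS))
    elif op == "drop" and len(p) > 1:
        p.pop(random.randrange(len(p)))
    else:
        p[random.randrange(len(p))] = random.choice(OPS)
    return p

random.seed(0)
best = [random.choice(OPS)]
for _ in range(500):
    child = mutate(best)
    if fitness(child) >= fitness(best):  # (1+1)-EA style acceptance
        best = child
print(best, fitness(best))
```

Acceptance of equally fit children lets the search drift across plateaus, which is one common design choice in such evolutionary loops.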


Semiotica ◽  
2019 ◽  
Vol 2019 (228) ◽  
pp. 223-235
Author(s):  
Winfried Nöth

Abstract The paper begins with a survey of the state of the art in multimodal research, an international trend in applied semiotics, linguistics, and media studies, and goes on to compare its approach to verbal and nonverbal signs with Charles S. Peirce’s approach to signs and their classification. The author introduces the concept of transmodality to characterize the way in which Peirce’s classification of signs reflects the modes of multimodality research and argues that Peirce’s classification of signs takes modes and modalities into consideration in two different respects: (1) from the perspective of the sign and (2) from that of its interpretant. While current research in multimodality focuses on the (external) sign in a communicative process, Peirce additionally considers the multimodality of the interpretants, i.e., the mental icons and indexical scenarios evoked in the interpreters’ minds. The paper illustrates and comments on the Peircean method of studying the multi- and transmodality of signs in an analysis of Peirce’s close reading of Luke 19:30 in MS 599, Reason’s Rules, of c. 1902. As a sign, this text is “monomodal” insofar as it consists of printed words only. The study shows in which respects the interpretants of this text evince trans- and multimodality.


2020 ◽  
Vol 46 (2) ◽  
pp. 299-311
Author(s):  
Giorgio (Georg) Orlandi

Abstract The book under review is a significant contribution to the field of Trans-Himalayan linguistics. Designed as a vade mecum for readers with little linguistic background in these three languages, Nathan W. Hill’s work attempts, on the one hand, a systematic exploration of the shared history of Burmese, Tibetan and Chinese, and, on the other, a general introduction for the reader interested in obtaining an overall understanding of the state of the art of the historical phonology of these three languages. Whilst the book in question has the potential to be a solid contribution to the field, a few minor issues can also be addressed.


2019 ◽  
Vol 6 (2) ◽  
pp. 20-32
Author(s):  
Daniel Osezua Aikhuele

In this article, the effectiveness of the intuitionistic fuzzy TOPSIS model (IF-TOPSISEF) is tested for addressing, capturing, and resolving the effect of correlation between attributes, otherwise called the dependency of attributes. This was achieved by using several normalization methods in the implementation of the IF-TOPSISEF model. Furthermore, the results of the computation are compared with those obtained when the same normalization methods are implemented in a traditional TOPSIS model. The study contributes to and extends the state of the art in TOPSIS research by addressing, capturing, and resolving the effect of correlation (dependency) between attributes.
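For reference, classical (non-fuzzy) TOPSIS with vector normalization, the baseline against which such normalization studies are usually framed, can be sketched as follows. The decision matrix, weights, and criterion types below are invented, and the article's intuitionistic fuzzy extension is not reproduced.

```python
import numpy as np

# Minimal classical TOPSIS sketch (illustrative data only).
def topsis(X, weights, benefit):
    """X: alternatives x criteria matrix; benefit[j] True if larger-is-better."""
    # Vector normalization -- one of several normalization schemes whose
    # choice such studies compare.
    R = X / np.sqrt((X ** 2).sum(axis=0))
    V = R * weights
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)  # closeness coefficient, higher is better

X = np.array([[250., 16., 12.],
              [200., 16.,  8.],
              [300., 32., 16.]])
w = np.array([0.3, 0.4, 0.3])
benefit = np.array([False, True, True])  # first criterion is a cost
scores = topsis(X, w, benefit)
print(scores.round(3))  # the third alternative ranks best here
```

Correlated criteria distort these distance computations, which is precisely the dependency effect the article studies.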


Author(s):  
Jason P.C. Chiu ◽  
Eric Nichols

Named entity recognition is a challenging task that has traditionally required large amounts of knowledge in the form of feature engineering and lexicons to achieve high performance. In this paper, we present a novel neural network architecture that automatically detects word- and character-level features using a hybrid bidirectional LSTM and CNN architecture, eliminating the need for most feature engineering. We also propose a novel method of encoding partial lexicon matches in neural networks and compare it to existing approaches. Extensive evaluation shows that, given only tokenized text and publicly available word embeddings, our system is competitive on the CoNLL-2003 dataset and surpasses the previously reported state-of-the-art performance on the OntoNotes 5.0 dataset by 2.13 F1 points. By using two lexicons constructed from publicly available sources, we establish new state-of-the-art performance with an F1 score of 91.62 on CoNLL-2003 and 86.28 on OntoNotes, surpassing systems that employ heavy feature engineering, proprietary lexicons, and rich entity linking information.
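The idea of turning lexicon matches into per-token features can be sketched with plain BIOES-style tagging over exact matches; the lexicon, sentence, and tagging details here are illustrative, not the paper's partial-match encoding.

```python
# Illustrative lexicon-match feature encoding (invented lexicon and sentence;
# the paper's partial-match scheme is more involved than this exact matcher).
def lexicon_features(tokens, lexicon):
    """Tag tokens B/I/E/S where they fall inside a lexicon match, else O."""
    tags = ["O"] * len(tokens)
    for entry in lexicon:                      # entry is a tuple of tokens
        n = len(entry)
        for i in range(len(tokens) - n + 1):
            if tuple(tokens[i:i + n]) == entry:
                if n == 1:
                    tags[i] = "S"              # single-token match
                else:
                    tags[i] = "B"              # begin of multi-token match
                    for j in range(i + 1, i + n - 1):
                        tags[j] = "I"          # inside
                    tags[i + n - 1] = "E"      # end
    return tags

lexicon = {("New", "York", "City"), ("London",)}
tokens = ["She", "moved", "from", "London", "to", "New", "York", "City"]
print(lexicon_features(tokens, lexicon))
# -> ['O', 'O', 'O', 'S', 'O', 'B', 'I', 'E']
```

Such tags would then be embedded and concatenated with word and character features before being fed to the sequence model.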

