Learning-Based Abstractions for Nonlinear Constraint Solving

Author(s):  
Sumanth Dathathri ◽  
Nikos Arechiga ◽  
Sicun Gao ◽  
Richard M. Murray

We propose a new abstraction refinement procedure based on machine learning to improve the performance of nonlinear constraint solving algorithms on large-scale problems. The proposed approach decomposes the original set of constraints into smaller subsets and uses learning algorithms to propose sequences of abstractions in the form of conjunctions of classifiers. The core procedure is a refinement loop that keeps improving the learned results based on counterexamples obtained from partial constraints that are easy to solve. Experiments show that the proposed techniques significantly improve the performance of state-of-the-art constraint solvers on many challenging benchmarks. The mechanism also produces intermediate symbolic abstractions, which are important for many applications and for understanding the internal structure of hard constraint-solving problems.
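As a rough illustration of the refinement loop described above, the Python sketch below trains a simple classifier-based abstraction and refines it on counterexamples where the abstraction disagrees with an easy-to-check constraint. The unit-disc constraint, the uniform sampler, and the least-squares classifier are illustrative stand-ins, not the paper's implementation.

```python
import numpy as np

def constraint(x):
    # Hypothetical nonlinear constraint: points inside the unit disc.
    return bool(x[0] ** 2 + x[1] ** 2 <= 1.0)

def train_classifier(X, y):
    # Least-squares separator over quadratic features: a stand-in for the
    # learned conjunctions of classifiers in the paper.
    feats = np.column_stack([X, X ** 2, np.ones(len(X))])
    w, *_ = np.linalg.lstsq(feats, 2.0 * y - 1.0, rcond=None)
    return lambda x: bool(np.concatenate([x, x ** 2, [1.0]]) @ w >= 0.0)

def refine(n_rounds=20, rng=np.random.default_rng(0)):
    X = rng.uniform(-2, 2, size=(200, 2))
    y = np.array([constraint(x) for x in X], dtype=float)
    clf = train_classifier(X, y)
    for _ in range(n_rounds):
        # Look for counterexamples: samples where the abstraction disagrees
        # with the (easy-to-check) partial constraint.
        cand = rng.uniform(-2, 2, size=(500, 2))
        cex = [x for x in cand if clf(x) != constraint(x)]
        if not cex:
            break                      # abstraction consistent on all samples
        X = np.vstack([X, cex])
        y = np.append(y, [float(constraint(x)) for x in cex])
        clf = train_classifier(X, y)   # refine on the augmented data
    return clf

abstraction = refine()
```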

Algorithms ◽  
2021 ◽  
Vol 14 (5) ◽  
pp. 146
Author(s):  
Aleksei Vakhnin ◽  
Evgenii Sopov

Modern real-valued optimization problems are complex and high-dimensional; they are known as "large-scale global optimization (LSGO)" problems. Classic evolutionary algorithms (EAs) perform poorly on this class of problems because of the curse of dimensionality. Cooperative Coevolution (CC) is a high-performing framework that decomposes large-scale problems into smaller and easier subproblems by grouping objective variables. The efficiency of CC strongly depends on the size of the groups and the grouping approach. In this study, an improved CC (iCC) approach for solving LSGO problems is proposed and investigated. iCC changes the number of variables in subcomponents dynamically during the optimization process. The SHADE algorithm is used as the subcomponent optimizer. We have investigated the performance of iCC-SHADE and CC-SHADE on fifteen problems from the LSGO CEC'13 benchmark set provided by the IEEE Congress on Evolutionary Computation. The numerical experiments show that iCC-SHADE outperforms, on average, CC-SHADE with a fixed number of subcomponents. We have also compared iCC-SHADE with several state-of-the-art LSGO metaheuristics; the results show that the proposed algorithm is competitive with other efficient metaheuristics.
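The following Python sketch illustrates a cooperative-coevolution loop with a subcomponent size that grows during the run, in the spirit of iCC. Plain DE/rand/1/bin stands in for SHADE, and the sphere objective, population size, and doubling schedule are illustrative assumptions.

```python
import numpy as np

def sphere(x):
    # Illustrative separable objective; the CEC'13 LSGO functions are far harder.
    return float(np.sum(x * x))

def optimize_group(idx, context, f_obj, rng, pop_size=10, gens=15, F=0.5, CR=0.9):
    # Plain DE on the variables in `idx`; all other variables stay frozen at
    # the context vector (the cooperative-coevolution contract).
    d = len(idx)
    def evaluate(part):
        full = context.copy()
        full[idx] = part
        return f_obj(full)
    pop = rng.uniform(-5, 5, (pop_size, d))
    fit = np.array([evaluate(p) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            trial = np.where(rng.random(d) < CR, a + F * (b - c), pop[i])
            f = evaluate(trial)
            if f < fit[i]:
                pop[i], fit[i] = trial, f
    return pop[np.argmin(fit)]

def icc(dim=16, cycles=6, rng=np.random.default_rng(1)):
    context = rng.uniform(-5, 5, dim)          # best-known full solution
    group_size = 2
    for _ in range(cycles):
        for chunk in np.array_split(rng.permutation(dim), dim // group_size):
            context[chunk] = optimize_group(chunk, context, sphere, rng)
        group_size = min(group_size * 2, dim)  # iCC: enlarge subcomponents over time
    return context

print(sphere(icc()))   # close to 0 on this toy problem
```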


2004 ◽  
Vol 19 (1) ◽  
pp. 1-25 ◽  
Author(s):  
SARVAPALI D. RAMCHURN ◽  
DONG HUYNH ◽  
NICHOLAS R. JENNINGS

Trust is a fundamental concern in large-scale open distributed systems. It lies at the core of all interactions between the entities that have to operate in such uncertain and constantly changing environments. Given this complexity, these components, and the ensuing system, are increasingly being conceptualised, designed, and built using agent-based techniques and, to this end, this paper examines the specific role of trust in multi-agent systems. In particular, we survey the state of the art and provide an account of the main directions along which research efforts are being focused. In so doing, we critically evaluate the relative strengths and weaknesses of the main models that have been proposed and show how, fundamentally, they all seek to minimise the uncertainty in interactions. Finally, we outline the areas that require further research in order to develop a comprehensive treatment of trust in complex computational settings.


2016 ◽  
Vol 2016 ◽  
pp. 1-12 ◽  
Author(s):  
Jiusheng Chen ◽  
Xiaoyu Zhang ◽  
Kai Guo

A large vector-angular region and margin (LARM) approach is presented for novelty detection based on imbalanced data. The key idea is to construct the largest vector-angular region in the feature space to separate normal training patterns while maximizing the vector-angular margin between the surface of this optimal vector-angular region and the abnormal training patterns. To improve the generalization performance of LARM, the vector-angular distribution is optimized by maximizing the vector-angular mean and minimizing the vector-angular variance, which separates the normal and abnormal examples well. However, the inherent quadratic programming (QP) solver takes O(n³) training time and at least O(n²) space, which can be computationally prohibitive for large-scale problems. Using a (1 + ε)- and (1 − ε)-approximation algorithm, a core-set-based LARM algorithm is proposed for fast LARM training. Experimental results on imbalanced datasets validate the favorable efficiency of the proposed approach in novelty detection.
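A minimal sketch of the core-set idea behind this kind of speed-up: instead of solving a QP over all n points, iteratively grow a small core set whose enclosing region (1 + ε)-approximates the full solution. The Badoiu–Clarkson minimum-enclosing-ball update below is a standard stand-in for the paper's vector-angular QP, used here only to show the core-set mechanics.

```python
import numpy as np

def core_set_ball(X, eps=0.1):
    # Approximate center/radius of the minimum enclosing ball of X,
    # built from O(1/eps^2) core-set points rather than a full QP.
    rng = np.random.default_rng(0)
    c = X[rng.integers(len(X))].astype(float)
    core = []
    iters = int(np.ceil(1.0 / eps ** 2))
    for t in range(1, iters + 1):
        far = int(np.argmax(np.linalg.norm(X - c, axis=1)))
        core.append(far)                 # farthest point joins the core set
        c = c + (X[far] - c) / (t + 1)   # Badoiu-Clarkson step
    r = float(np.max(np.linalg.norm(X - c, axis=1)))
    return c, r, sorted(set(core))

# Novelty detection: flag points that fall outside the learned ball.
normal = np.random.default_rng(1).normal(0, 1, (1000, 5))
c, r, core = core_set_ball(normal, eps=0.1)
test = np.array([[0.0] * 5, [6.0] * 5])
print([bool(np.linalg.norm(x - c) > r) for x in test])   # [False, True]
```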


2017 ◽  
Vol 26 (08) ◽  
pp. 1730015 ◽  
Author(s):  
Vanessa Graber ◽  
Nils Andersson ◽  
Michael Hogg

Neutron stars are astrophysical laboratories for many extremes of physics. Their rich phenomenology provides insights into the state and composition of matter at densities that cannot be reached in terrestrial experiments. Since the core of a mature neutron star is expected to be dominated by superfluid and superconducting components, observations also probe the dynamics of large-scale quantum condensates. The testing and understanding of the relevant theory tend to focus on the interface between astrophysical phenomenology and nuclear physics, while the connections with low-temperature experiments tend to be ignored. However, there has been dramatic progress in understanding laboratory condensates (from the different phases of superfluid helium to the entire range of superconductors and cold-atom condensates). In this review, we provide an overview of these developments, compare and contrast the mathematical descriptions of laboratory condensates and neutron stars, and summarize the current experimental state of the art. This discussion suggests novel ways of making progress in understanding neutron star physics using low-temperature laboratory experiments.


Author(s):  
Wenjun Tang ◽  
Rong Chen ◽  
Shikai Guo

In recent years, crowdsourcing has gradually become a promising way of engaging netizens to accomplish small tasks, or even complex jobs, through crowdsourcing workflows that decompose them into small tasks published sequentially on crowdsourcing platforms. One of the significant challenges in this process is determining the parameters for task publishing. Some existing techniques apply constraint solving to select optimal task parameters so that the total cost of completing all tasks is minimized. However, experimental results show that computational complexity makes these tools unsuitable for large-scale problems because of their excessive execution time. Taking into account the real-time requirements of crowdsourcing, this study uses a heuristic algorithm with four heuristic strategies to solve the problem and thereby reduce execution time. The experimental results show that the proposed heuristic strategies produce good-quality approximate solutions in an acceptable timeframe.
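The paper's four heuristic strategies are not detailed in the abstract; the sketch below only illustrates the general shape of such a heuristic, trading optimality for speed. For each task it greedily picks the cheapest parameter setting whose predicted quality and completion time meet the requirements. The candidate option tuples and thresholds are invented for illustration.

```python
def choose_parameters(tasks, min_quality=0.8, max_time=60.0):
    # Greedy heuristic: cheapest feasible option per task, no backtracking.
    plan, total = {}, 0.0
    for task, options in tasks.items():
        feasible = [o for o in options
                    if o["quality"] >= min_quality and o["time"] <= max_time]
        if not feasible:
            raise ValueError(f"no feasible parameters for {task}")
        best = min(feasible, key=lambda o: o["price"])
        plan[task] = best
        total += best["price"]
    return plan, total

tasks = {
    "label_images": [{"price": 0.05, "quality": 0.75, "time": 30},
                     {"price": 0.10, "quality": 0.85, "time": 30}],
    "verify_labels": [{"price": 0.08, "quality": 0.90, "time": 45}],
}
plan, cost = choose_parameters(tasks)
print(cost)   # 0.18
```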


Author(s):  
Michael Perrot ◽  
Ulrike von Luxburg

We consider the problem of classification in a comparison-based setting: given a set of objects, we only have access to triplet comparisons of the form "object A is closer to object B than to object C." In this paper we introduce TripletBoost, a new method that can learn a classifier from such triplet comparisons alone. The main idea is to aggregate the triplet information into weak classifiers, which can subsequently be boosted into a strong classifier. Our method has two main advantages: (i) it is applicable to data from any metric space, and (ii) it can deal with large-scale problems using only passively obtained and noisy triplets. We derive theoretical generalization guarantees and a lower bound on the number of necessary triplets, and we empirically show that our method is both competitive with state-of-the-art approaches and resistant to noise.
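A minimal sketch of the boosting idea: each weak classifier is a pair of labeled reference points (a, b) and predicts a's label when the triplet oracle says the query is closer to a than to b. Here the oracle is simulated with Euclidean distances for the sake of a runnable example; the comparison-based setting itself would only ever see the oracle's answers. The AdaBoost-style aggregation is a generic stand-in for the paper's scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)

def oracle(x, a, b):
    # Simulated triplet comparison: "x is closer to object a than to object b".
    return np.linalg.norm(x - X[a]) < np.linalg.norm(x - X[b])

def weak_predict(a, b, x):
    return y[a] if oracle(x, a, b) else y[b]

# AdaBoost-style aggregation over randomly drawn reference pairs.
w = np.ones(len(X)) / len(X)
ensemble = []
for _ in range(30):
    a, b = rng.choice(len(X), 2, replace=False)
    pred = np.array([weak_predict(a, b, x) for x in X])
    err = float(np.sum(w[pred != y]))
    if err >= 0.5:
        continue                        # skip learners no better than chance
    alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
    ensemble.append((alpha, a, b))
    w *= np.exp(-alpha * y * pred)      # upweight misclassified examples
    w /= w.sum()

def classify(x):
    return int(np.sign(sum(al * weak_predict(a, b, x) for al, a, b in ensemble)))

print(classify(np.array([-2.0, 0.0])), classify(np.array([2.0, 0.0])))  # typically -1 1
```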


2020 ◽  
Vol 34 (05) ◽  
pp. 7293-7300
Author(s):  
Weixun Wang ◽  
Tianpei Yang ◽  
Yong Liu ◽  
Jianye Hao ◽  
Xiaotian Hao ◽  
...  

A lot of effort has been devoted to investigating how agents can learn effectively and achieve coordination in multiagent systems. However, this remains challenging in large-scale multiagent settings due to the complex dynamics between the environment and the agents and the explosion of the state-action space. In this paper, we design a novel Dynamic Multiagent Curriculum Learning (DyMA-CL) approach that solves large-scale problems by starting from a multiagent scenario with a small number of agents and progressively increasing it. We propose three transfer mechanisms across curricula to accelerate the learning process. Moreover, because the state dimension varies across curricula, existing network structures cannot be applied in such a transfer setting, since their input sizes are fixed. We therefore design a novel network structure called Dynamic Agent-number Network (DyAN) to handle the dynamic size of the network input. Experimental results show that DyMA-CL using DyAN greatly improves the performance of large-scale multiagent learning compared with state-of-the-art deep reinforcement learning approaches. We also investigate the influence of the three transfer mechanisms through extensive simulations.
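A minimal sketch of the variable-agent-number idea: encode each observed agent with a shared network, pool with a permutation-invariant sum, and feed the pooled vector to a fixed-size head. The layer sizes and the forward-only numpy implementation are illustrative assumptions, not the paper's DyAN architecture. Because the same weights apply regardless of agent count, they can be transferred unchanged between curriculum stages.

```python
import numpy as np

rng = np.random.default_rng(0)
OBS, HID, ACTIONS = 4, 16, 5
W_enc = rng.normal(0, 0.1, (OBS, HID))        # shared per-agent encoder
W_head = rng.normal(0, 0.1, (HID, ACTIONS))   # fixed-size Q-value head

def q_values(agent_obs):
    # agent_obs: (n_agents, OBS); n_agents may change across curricula.
    h = np.tanh(agent_obs @ W_enc)    # encode each agent independently
    pooled = h.sum(axis=0)            # permutation-invariant aggregation
    return pooled @ W_head

print(q_values(rng.normal(size=(3, OBS))).shape)   # (5,) with 3 agents
print(q_values(rng.normal(size=(8, OBS))).shape)   # (5,) with 8 agents
```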


Algorithms ◽  
2020 ◽  
Vol 14 (1) ◽  
pp. 6
Author(s):  
Joonas Hämäläinen ◽  
Tommi Kärkkäinen ◽  
Tuomo Rossi

Two new initialization methods for K-means clustering are proposed. Both proposals apply a divide-and-conquer approach to the K-means‖ type of initialization strategy. The second proposal also uses multiple lower-dimensional subspaces produced by the random projection method for the initialization. The proposed methods are scalable and can be run in parallel, which makes them suitable for initializing large-scale problems. In the experiments, the proposed methods are compared to the K-means++ and K-means‖ methods using an extensive set of reference and synthetic large-scale datasets. For the latter, a novel high-dimensional clustering data generation algorithm is given. The experiments show that the proposed methods compare favorably to the state-of-the-art by improving clustering accuracy and the speed of convergence. We also observe that the currently most popular K-means++ initialization behaves like random initialization in very high-dimensional cases.
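A minimal sketch of the divide-and-conquer initialization: run a seeding pass independently on disjoint chunks of the data, then seed again on the pooled chunk centers to obtain the final K initial centers. Using K-means++ at both levels (instead of K-means‖) and the fixed chunk count are simplifications for illustration.

```python
import numpy as np

def kmeans_pp(X, k, rng):
    # Standard K-means++ seeding: sample proportionally to squared distance
    # from the nearest already-chosen center.
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centers], axis=0)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centers)

def divide_and_conquer_init(X, k, n_chunks=8, rng=np.random.default_rng(0)):
    idx = rng.permutation(len(X))
    candidates = [kmeans_pp(X[chunk], k, rng)          # per-chunk seeding (parallelizable)
                  for chunk in np.array_split(idx, n_chunks)]
    return kmeans_pp(np.vstack(candidates), k, rng)    # reduce step over candidates

X = np.random.default_rng(1).normal(size=(4000, 10))
print(divide_and_conquer_init(X, k=5).shape)   # (5, 10)
```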

