Constant-Time Algorithms for Continuous Optimization Problems

2021 ◽  
pp. 31-45
Author(s):  
Yuichi Yoshida

Abstract: In this chapter, we consider constant-time algorithms for continuous optimization problems. Specifically, we consider quadratic function minimization and tensor decomposition, both of which have numerous applications in machine learning and data mining. The key component in our analysis is graph limit theory, which was originally developed to study graphs analytically.
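As a rough illustration of how coordinate sampling can give constant-time value estimates for quadratic minimization, the sketch below solves a randomly sampled sub-problem and rescales its optimum. The exact problem form, scaling, and guarantees in the chapter differ; the function name, the sample size k, and the rescaling factor here are assumptions for illustration only.

```python
import numpy as np

def sampled_quadratic_min(A, b, k, rng=None):
    """Hypothetical sketch: estimate min_v v^T A v + b^T v by solving a
    k-coordinate sub-problem and heuristically rescaling its value."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    S = rng.choice(n, size=k, replace=False)       # sample k coordinates
    A_S = A[np.ix_(S, S)]
    b_S = b[S]
    # Closed-form minimizer of the restricted quadratic (A_S symmetric PSD).
    v_S = np.linalg.lstsq(2 * A_S, -b_S, rcond=None)[0]
    val_S = v_S @ A_S @ v_S + b_S @ v_S
    # Heuristic rescaling of the sampled objective value back to dimension n.
    return (n / k) ** 2 * val_S

# Usage sketch on a random positive semidefinite instance.
rng = np.random.default_rng(0)
M = rng.standard_normal((500, 500))
A = M @ M.T / 500
b = rng.standard_normal(500)
print(sampled_quadratic_min(A, b, k=50))
```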

2021 ◽  
Vol 8 (4) ◽  
pp. 041418
Author(s):  
Blake A. Wilson ◽  
Zhaxylyk A. Kudyshev ◽  
Alexander V. Kildishev ◽  
Sabre Kais ◽  
Vladimir M. Shalaev ◽  
...  

2015 ◽  
Vol 137 (7) ◽  
Author(s):  
Jong-Chen Chen

Continuous optimization plays an increasingly significant role in everyday decision-making situations. Our group previously developed a multilevel system called the artificial neuromolecular system (ANM), whose structural richness allows variation and/or selection operators to act on it and generate a broad range of dynamic behaviors. In this paper, we used the ANM system to control the motions of a wooden walking robot named Miky. The robot was used to investigate the ANM system's capability to deal with continuous optimization problems through self-organized learning. An evolutionary learning algorithm was used to train the system and generate appropriate control. The experimental results showed that Miky was capable of learning continually in a physical environment. A further experiment was conducted in which Miky's physical structure was altered in order to observe the system's capability to cope with the change. Detailed analysis of the experimental results showed that Miky responded to the change by appropriately adjusting its leg movements in space and time, demonstrating that the ANM system possesses continuous optimization capability in coping with such changes. These empirical findings may offer another perspective on how to design intelligent systems that are friendlier than traditional ones in assisting humans to walk.
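The abstract does not spell out the evolutionary learning algorithm; the minimal sketch below shows one plausible variation-and-selection loop of the kind described, with a stand-in fitness function in place of an actual walking-distance measurement. The function name, population size, and mutation scale are assumptions, not the ANM system's actual procedure.

```python
import numpy as np

def evolve_controller(fitness, dim, generations=100, population=20,
                      sigma=0.1, rng=None):
    """A minimal (1+lambda) evolutionary loop: mutate controller parameters,
    score them with a user-supplied fitness, and keep improvements."""
    rng = np.random.default_rng(rng)
    best = rng.standard_normal(dim)
    best_score = fitness(best)
    for _ in range(generations):
        # Variation: perturb the current best controller parameters.
        offspring = best + sigma * rng.standard_normal((population, dim))
        scores = np.array([fitness(o) for o in offspring])
        # Selection: adopt the fittest offspring if it beats the parent.
        i = int(np.argmax(scores))
        if scores[i] > best_score:
            best, best_score = offspring[i], scores[i]
    return best, best_score

# Toy stand-in for a walking-distance measurement of a gait parameter vector.
best, score = evolve_controller(lambda p: -np.sum((p - 0.5) ** 2), dim=8)
```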


2020 ◽  
Vol 34 (05) ◽  
pp. 7111-7118
Author(s):  
Moumita Choudhury ◽  
Saaduddin Mahmud ◽  
Md. Mosaddek Khan

Distributed Constraint Optimization Problems (DCOPs) are a widely studied constraint-handling framework. The objective of a DCOP algorithm is to optimize a global objective function that can be described as the aggregation of several distributed constraint cost functions. In a DCOP, each of these functions is defined over a set of discrete variables. However, in many applications, such as target tracking or sleep scheduling in sensor networks, continuous-valued variables are better suited than discrete ones. Considering this, Functional DCOPs (F-DCOPs) have been proposed, which can explicitly model problems containing continuous variables. Nevertheless, state-of-the-art F-DCOP approaches incur onerous memory or computation overhead. To address this issue, we propose a new F-DCOP algorithm, namely Particle Swarm based F-DCOP (PFD), which is inspired by the meta-heuristic Particle Swarm Optimization (PSO). Although PSO has been successfully applied to many continuous optimization problems, its potential has not been utilized in F-DCOPs. Specifically, PFD devises a distributed method of solution construction while significantly reducing the computation and memory requirements. Moreover, we theoretically prove that PFD is an anytime algorithm. Finally, our empirical results indicate that PFD outperforms the state-of-the-art approaches in terms of solution quality and computation overhead.
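PFD itself builds a distributed solution-construction scheme on top of PSO; the sketch below shows only the underlying, centralized PSO update rules (inertia, cognitive, and social terms) that the algorithm draws on. Parameter names and defaults are illustrative, not the paper's settings.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, rng=None):
    """Plain particle swarm optimization over box-constrained variables."""
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # particle positions
    v = np.zeros_like(x)                                # particle velocities
    pbest = x.copy()                                    # personal bests
    pbest_val = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()              # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity update: inertia + cognitive pull + social pull.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# Usage: minimize a sphere function in 5 dimensions.
best_x, best_f = pso_minimize(lambda p: np.sum(p ** 2),
                              bounds=(-5 * np.ones(5), 5 * np.ones(5)))
```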


Symmetry ◽  
2018 ◽  
Vol 10 (8) ◽  
pp. 337 ◽  
Author(s):  
Chui-Yu Chiu ◽  
Po-Chou Shih ◽  
Xuechao Li

The novel global harmony search (NGHS) algorithm, proposed in 2010, is an improved algorithm that combines harmony search (HS), particle swarm optimization (PSO), and a genetic algorithm (GA). However, NGHS uses a fixed genetic mutation probability, whereas appropriate parameter settings can enhance the searching ability of a metaheuristic algorithm, and their importance has been described in many studies. Inspired by the adjustment strategy of the improved harmony search (IHS) algorithm, this paper introduces a dynamic adjusting novel global harmony search (DANGHS) algorithm, which combines NGHS with dynamic adjustment strategies for the genetic mutation probability. Extensive computational experiments and comparisons are carried out on 14 benchmark continuous optimization problems. The results show that the proposed DANGHS algorithm performs better than other HS algorithms on most problems and is more efficient than previous methods. Finally, different strategies are suitable for different situations. Among them, the most interesting and exciting is the periodic dynamic adjustment strategy: for a specific problem, it can outperform decreasing or increasing strategies. These results inspire us to further investigate this kind of periodic dynamic adjustment strategy in future experiments.
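To make the dynamic-adjustment idea concrete, here is a simplified, hypothetical NGHS-style loop in which the genetic mutation probability follows a periodic schedule. The improvisation step and all parameter names and defaults are assumptions for illustration, not the paper's exact DANGHS procedure or settings.

```python
import numpy as np

def danghs_sketch(f, bounds, hms=10, iters=2000, pm_max=0.2, pm_min=0.01,
                  period=200, rng=None):
    """NGHS-style harmony search with a periodically adjusted mutation
    probability pm(t), as one example of a dynamic adjustment strategy."""
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = lo.size
    memory = rng.uniform(lo, hi, size=(hms, dim))        # harmony memory
    values = np.array([f(h) for h in memory])
    for t in range(iters):
        # Periodic dynamic adjustment of the genetic mutation probability.
        pm = pm_min + 0.5 * (pm_max - pm_min) * (1 + np.cos(2 * np.pi * t / period))
        best, worst = memory[np.argmin(values)], memory[np.argmax(values)]
        # Simplified NGHS-style move: push the worst harmony toward (and past) the best.
        target = np.clip(2.0 * best - worst, lo, hi)
        new = worst + rng.random(dim) * (target - worst)
        # Genetic mutation: with probability pm, reset a dimension uniformly.
        mask = rng.random(dim) < pm
        new[mask] = rng.uniform(lo[mask], hi[mask])
        new_val = f(new)
        # Replace the worst stored harmony if the improvised one is better.
        worst_idx = int(np.argmax(values))
        if new_val < values[worst_idx]:
            memory[worst_idx], values[worst_idx] = new, new_val
    i = int(np.argmin(values))
    return memory[i], values[i]

# Usage: a Rastrigin-like benchmark in 5 dimensions.
f = lambda x: np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)
best_x, best_f = danghs_sketch(f, bounds=(-5.12 * np.ones(5), 5.12 * np.ones(5)))
```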

