domain optimization
Recently Published Documents


TOTAL DOCUMENTS: 160 (FIVE YEARS 24)

H-INDEX: 16 (FIVE YEARS 1)

2022 ◽ Author(s): Gabriel Lasry ◽ Yaniv Brick ◽ Timor Melamed

2021 ◽ Vol 8 (5) ◽ pp. 871 ◽ Author(s): Anang Firdaus ◽ Ahmad Muklason ◽ Vicha Azthanty Supoyo

An organization sometimes needs solutions to cross-domain optimization problems, that is, problems with different characteristics across domains such as scheduling, vehicle routing, bin packing, and SAT. Such optimization supports an organization's decision making, and solving these problems requires computational search methods. In the literature, almost all optimization problems in the NP-hard class are solved with meta-heuristic approaches. However, meta-heuristics have a drawback: parameter tuning is required for each different problem domain, which makes the approach less effective. Therefore, a new approach is needed, namely hyper-heuristics. A hyper-heuristic is an approximate computational search method that can solve cross-domain optimization problems in less time. Six problem domains are addressed: satisfiability (SAT), one-dimensional bin packing, permutation flow shop, personnel scheduling, the travelling salesman problem (TSP), and the vehicle routing problem (VRP). To improve performance, this study examines the effect of adapting a Reinforcement Learning (RL) algorithm as the low-level heuristic (LLH) selection strategy, combined with the Late Acceptance algorithm as the move-acceptance criterion, hereafter called the Reinforcement Learning-Late Acceptance (RL-LA) algorithm. To assess its effectiveness, the proposed RL-LA is compared with the Simple Random-Late Acceptance (SR-LA) algorithm. The results show that RL-LA outperforms SR-LA on 4 of the 6 test problem domains, namely SAT, personnel scheduling, TSP, and VRP, while performance decreases on the remaining domains, bin packing and flow shop. More specifically, RL-LA improves the search for optimal solutions on 18 of the 30 instances (64%), and in terms of median and minimum objective values RL-LA is 28% better than SR-LA. The main contribution of this study is a performance study of a hybrid reinforcement learning and late acceptance algorithm within a hyper-heuristics framework for solving cross-domain optimization problems.
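The abstract does not include an implementation; the following is a minimal Python sketch of the RL-LA idea under simple assumptions: `llhs` is a list of low-level heuristic functions that each return a perturbed copy of a solution, `evaluate` is the domain's cost function, and a bandit-style value update stands in for the reinforcement-learning selection. All names, parameters, and the exact update rule are illustrative, not the authors' code.

```python
import random

def rl_la_hyper_heuristic(initial_solution, llhs, evaluate, iterations=10000,
                          history_length=5, alpha=0.1, epsilon=0.1):
    """Selection hyper-heuristic sketch: a simple value-based RL rule picks
    which low-level heuristic (LLH) to apply, and the Late Acceptance
    criterion decides whether to keep the perturbed solution."""
    current = initial_solution
    current_cost = evaluate(current)
    best, best_cost = current, current_cost

    # Late Acceptance list: costs of solutions accepted history_length steps ago.
    la_list = [current_cost] * history_length
    # One running value estimate per LLH (the "RL" part, here a bandit-style update).
    q_values = [0.0] * len(llhs)

    for it in range(iterations):
        # Epsilon-greedy selection over LLHs based on the learned values.
        if random.random() < epsilon:
            idx = random.randrange(len(llhs))
        else:
            idx = max(range(len(llhs)), key=lambda i: q_values[i])

        candidate = llhs[idx](current)
        candidate_cost = evaluate(candidate)

        # Reward the chosen LLH when it improves the current solution.
        reward = 1.0 if candidate_cost < current_cost else 0.0
        q_values[idx] += alpha * (reward - q_values[idx])

        # Late Acceptance: accept if no worse than the cost recorded
        # history_length iterations ago, or than the current cost.
        slot = it % history_length
        if candidate_cost <= la_list[slot] or candidate_cost <= current_cost:
            current, current_cost = candidate, candidate_cost
        la_list[slot] = current_cost

        if current_cost < best_cost:
            best, best_cost = current, current_cost

    return best, best_cost
```

The late-acceptance list is what lets temporarily worse solutions through, which is the escape mechanism that distinguishes this acceptance rule from plain hill climbing.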


Author(s): Zhaoxue Deng ◽ Xinxin Wei ◽ Xingquan Li ◽ Shuen Zhao ◽ Sunke Zhu

Magnetorheological (MR) dampers have mostly been optimized for their individual performance, without considering how changes in their structural parameters affect vehicle performance. Therefore, a multi-objective optimization scheme for the MR damper based on a vehicle dynamics model was proposed. The finite element method was used to analyze the magnetic flux density distribution in the tapered damping channel under different structural parameters. Furthermore, the damping force expression of the tapered flow-mode MR damper was derived, and the damping force was introduced into the vehicle dynamics model. To improve the ride comfort and handling stability of the vehicle, a collaborative optimization platform combining magnetic circuit finite element analysis with the vehicle dynamics model was established. Based on this platform, the optimal design variables were determined through comfort and stability sensitivity analyses. A time-domain optimization objective and a frequency-domain optimization objective are proposed simultaneously to overcome the limitations of a time-domain objective alone. The results show that, compared with time-domain optimization and the initial design, the suspension dynamic deflection, tire dynamic load, and vehicle body vertical acceleration all decrease after the time-frequency optimization. At the same time, in the frequency domain, the amplitude of vibration acceleration under each working condition is significantly reduced.
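As a purely illustrative sketch of how a time-domain and a frequency-domain objective can be folded into one scalar cost for such a scheme, the Python function below combines weighted RMS values of the simulated suspension responses with the body-acceleration amplitude in a vibration-sensitive band; the function name, weights, and 4-8 Hz band are assumptions for illustration, not values from the paper.

```python
import numpy as np

def suspension_cost(body_acc, susp_deflection, tire_load, dt,
                    time_weights=(1.0, 1.0, 1.0), freq_weight=1.0,
                    freq_band=(4.0, 8.0)):
    """Illustrative combined time/frequency-domain cost for ride comfort
    and stability. body_acc, susp_deflection, tire_load are 1-D arrays of
    simulated responses sampled every dt seconds. Weights and band edges
    are placeholders, not values from the paper."""
    def rms(x):
        # Root-mean-square of a response history.
        return float(np.sqrt(np.mean(np.asarray(x) ** 2)))

    # Time-domain objective: weighted RMS of the three responses.
    w1, w2, w3 = time_weights
    time_obj = w1 * rms(body_acc) + w2 * rms(susp_deflection) + w3 * rms(tire_load)

    # Frequency-domain objective: body-acceleration amplitude in the chosen band.
    spectrum = np.abs(np.fft.rfft(body_acc)) / len(body_acc)
    freqs = np.fft.rfftfreq(len(body_acc), d=dt)
    band = (freqs >= freq_band[0]) & (freqs <= freq_band[1])
    freq_obj = freq_weight * float(spectrum[band].sum())

    return time_obj + freq_obj
```

An optimizer over the damper's structural parameters would call a function of this shape once per candidate design, after simulating the vehicle model with that design.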


2021 ◽ pp. 102699 ◽ Author(s): Maria Ferrara ◽ Jean Christophe Vallée ◽ Louena Shtrepi ◽ Arianna Astolfi ◽ Enrico Fabrizio

2021 ◽ Vol 30 (3) ◽ pp. 584-594 ◽ Author(s): LI Na ◽ ZHANG Shufang ◽ ZHANG Jingbo ◽ HUAI Shuaiheng ◽ JIANG Yi

Author(s): Wenhui Zhang ◽ Chenyu Wang ◽ Wenjie Lin ◽ Jiming Lin

Improved ant colony optimization (ACO) algorithms for continuous-domain optimization have been widely applied in recent years, but these methods perceive changes in environmental information poorly and rely only on residual pheromones along the path to guide the evolution of the colony. In this paper, we propose an ant colony algorithm based on a reinforcement learning model (RLACO). RLACO acquires more environmental information by computing the diversity of the ant colony and uses this diversity, together with other basic information about the colony, to build a reinforcement learning model. At different stages of evolution, the algorithm chooses the strategy that maximizes the expected reward, improving the colony's global search ability and convergence speed. Experimental results on the CEC 2017 test functions show that the proposed algorithm is superior to other continuous-domain optimization algorithms in convergence speed, accuracy, and global search ability.
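For intuition, here is a hedged Python sketch of a continuous-domain ACO (in the spirit of the solution-archive variant often called ACO_R) in which a coarse diversity state and a small Q-table choose between exploratory and exploitative sampling. The archive mechanics, state binning, and reward are assumptions for illustration and not the RLACO algorithm as published.

```python
import numpy as np

def rl_aco_continuous(objective, bounds, n_ants=30, archive_size=30,
                      iterations=200, epsilon=0.1, alpha=0.1, seed=0):
    """Continuous-domain ACO sketch with a diversity-driven strategy choice.

    An archive of solutions plays the role of pheromones; new ants are
    sampled from Gaussians centred on archive members. The colony's
    diversity (normalized spread of the archive) is binned into a coarse
    state, and a small Q-table picks an exploratory or exploitative
    sampling width. Illustrative only, not the published RLACO."""
    rng = np.random.default_rng(seed)
    low, high = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = low.size

    # Initialise the archive with random solutions, sorted by cost.
    archive = rng.uniform(low, high, size=(archive_size, dim))
    costs = np.array([objective(x) for x in archive])
    order = np.argsort(costs)
    archive, costs = archive[order], costs[order]

    q = np.zeros((3, 2))        # rows: diversity state, columns: strategy
    widths = (0.5, 0.05)        # exploratory vs. exploitative sampling width

    for _ in range(iterations):
        # Coarse diversity state of the colony (0 = converged, 2 = spread out).
        diversity = archive.std(axis=0).mean() / (high - low).mean()
        state = min(2, int(diversity / 0.1))

        # Epsilon-greedy strategy selection from the Q-table.
        action = rng.integers(2) if rng.random() < epsilon else int(np.argmax(q[state]))
        sigma = widths[action] * (high - low)

        # Sample ants around archive members (better members chosen more often).
        weights = np.linspace(2.0, 1.0, archive_size)
        weights /= weights.sum()
        centres = archive[rng.choice(archive_size, size=n_ants, p=weights)]
        ants = np.clip(centres + rng.normal(0.0, sigma, size=(n_ants, dim)), low, high)
        ant_costs = np.array([objective(x) for x in ants])

        # Reward the strategy if it improved on the best archived solution.
        reward = 1.0 if ant_costs.min() < costs[0] else 0.0
        q[state, action] += alpha * (reward - q[state, action])

        # Merge ants into the archive and keep the best solutions.
        merged = np.vstack([archive, ants])
        merged_costs = np.concatenate([costs, ant_costs])
        keep = np.argsort(merged_costs)[:archive_size]
        archive, costs = merged[keep], merged_costs[keep]

    return archive[0], costs[0]
```

For example, `rl_aco_continuous(lambda x: float(np.sum(x ** 2)), ([-5.0] * 10, [5.0] * 10))` minimizes a 10-dimensional sphere function under these assumptions.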

