Scale Optimization
Recently Published Documents

TOTAL DOCUMENTS: 580 (five years: 145)
H-INDEX: 36 (five years: 8)

Author(s):  
Douglas L Zentner ◽  
Joshua K Raabe ◽  
Timothy K Cross ◽  
Peter C Jacobson

Scale and hierarchy have received less attention in aquatic systems than in terrestrial systems. Walleye Sander vitreus spawning habitat offers an opportunity to investigate the importance of scale. We estimated lake-, transect-, and quadrat-scale influences on nearshore walleye egg deposition in 28 Minnesota lakes from 2016 to 2018. Random forest models (RFM) estimated the importance of predictor variables for walleye egg deposition. Predictive accuracies of a multi-scale classification tree (CT) and a quadrat-scale CT were compared. RFM results suggested that five of our variables were unimportant when predicting egg deposition. The multi-scale CT was more accurate than the quadrat-scale CT when predicting egg deposition. Both model results suggest that in-lake egg deposition by walleye is regulated by hierarchical abiotic processes and that silt/clay abundance at the transect scale (reef scale) is more important than abundance at the quadrat scale (within-reef). Our results show that machine learning can be used for scale optimization and potentially to determine cross-scale interactions. Further incorporation of scale and hierarchy into studies of aquatic systems will increase our understanding of species-habitat relationships, especially in lentic systems, where multi-scale approaches are rarely used.
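To make the two modelling steps concrete, here is a minimal, hypothetical Python sketch (using scikit-learn): a random forest ranks multi-scale predictors, and a multi-scale classification tree is compared against a quadrat-scale-only tree by cross-validation. All variable names and the simulated data are invented for illustration; this is not the study's code or data.

```python
# Hypothetical sketch: rank multi-scale habitat predictors with a random forest,
# then compare a multi-scale vs. quadrat-scale-only classification tree.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    # invented lake-scale, transect-scale (reef), and quadrat-scale predictors
    "lake_area_ha":       rng.lognormal(6, 1, n),
    "transect_silt_clay": rng.uniform(0, 1, n),
    "quadrat_silt_clay":  rng.uniform(0, 1, n),
    "quadrat_gravel":     rng.uniform(0, 1, n),
})
# Simulated response: eggs present where reef-scale fines are low
df["eggs_present"] = (df["transect_silt_clay"] < 0.3).astype(int)

X, y = df.drop(columns="eggs_present"), df["eggs_present"]

# Variable importance from a random forest
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
for name, imp in sorted(zip(X.columns, rf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name:20s} importance = {imp:.3f}")

# Multi-scale tree vs. quadrat-scale-only tree, compared by cross-validation
multi = cross_val_score(DecisionTreeClassifier(max_depth=3, random_state=0), X, y, cv=5)
quad = cross_val_score(DecisionTreeClassifier(max_depth=3, random_state=0),
                       X[["quadrat_silt_clay", "quadrat_gravel"]], y, cv=5)
print(f"multi-scale CT accuracy:   {multi.mean():.2f}")
print(f"quadrat-scale CT accuracy: {quad.mean():.2f}")
```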


Author(s):  
Chenhua Geng ◽  
Hong-Ye Hu ◽  
Yijian Zou

Differentiable programming is a new programming paradigm that enables large-scale optimization through the automatic calculation of gradients, also known as auto-differentiation. The concept emerged from deep learning and has since been generalized to tensor network optimization. Here, we extend differentiable programming to tensor networks with isometric constraints, with applications to the multiscale entanglement renormalization ansatz (MERA) and tensor network renormalization (TNR). By introducing several gradient-based optimization methods for isometric tensor networks and comparing them with the Evenbly-Vidal method, we show that auto-differentiation performs better in both stability and accuracy. We numerically test our methods on the 1D critical quantum Ising spin chain and the 2D classical Ising model. We calculate the ground-state energy of the 1D quantum model, the internal energy of the classical model, and the scaling dimensions of scaling operators, and find that they all agree well with theory.
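As a toy illustration of the general idea (not the paper's code), the following Python/PyTorch sketch optimizes a single isometric matrix by auto-differentiation: an unconstrained parameter matrix is retracted onto the isometric manifold with a QR decomposition, and gradients of the cost flow through the retraction automatically. The cost tr(W^T H W) for a random symmetric H stands in for a MERA/TNR energy; its minimum over isometries equals the sum of the d_in smallest eigenvalues of H, which gives an easy correctness check.

```python
# Minimal sketch: gradient-based optimization of an isometric tensor via
# auto-differentiation, with the isometry enforced by a QR retraction
# (an illustration of the idea, not the Evenbly-Vidal method or the paper's code).
import torch

torch.manual_seed(0)
d_in, d_out = 4, 8                      # W maps a 4-dim space into an 8-dim one
H = torch.randn(d_out, d_out)
H = 0.5 * (H + H.T)                     # toy "Hamiltonian" (random symmetric matrix)

A = torch.randn(d_out, d_in, requires_grad=True)   # unconstrained parameters
opt = torch.optim.Adam([A], lr=0.05)

for step in range(500):
    W, _ = torch.linalg.qr(A)           # retraction: W is exactly isometric, W^T W = I
    energy = torch.trace(W.T @ H @ W)   # cost function tr(W^T H W)
    opt.zero_grad()
    energy.backward()                   # gradients via auto-differentiation
    opt.step()

# The exact optimum is the sum of the d_in smallest eigenvalues of H.
W, _ = torch.linalg.qr(A)
print("optimized:", torch.trace(W.T @ H @ W).item(),
      " exact:", torch.linalg.eigvalsh(H)[:d_in].sum().item())
```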


2022 ◽  
Author(s):  
Chnoor M. Rahman ◽  
Tarik A. Rashid ◽  
Abeer Alsadoon ◽  
Nebojsa Bacanin ◽  
Polla Fattah ◽  
...  

The dragonfly algorithm was developed in 2016. It is one of the algorithms researchers have used to optimize an extensive range of applications in various areas, and at times it offers superior performance compared to the most well-known optimization techniques. However, the algorithm faces several difficulties when used to solve complex optimization problems. This work addresses the robustness of the method for solving real-world optimization problems and its deficiencies in handling complex optimization problems. This review paper presents a comprehensive investigation of the dragonfly algorithm in the engineering area. First, an overview of the algorithm is given. The modifications of the algorithm are then examined, including hybrid forms that merge it with other techniques and the changes made so that the algorithm performs better. Additionally, a survey of engineering applications that have used the dragonfly algorithm is provided; these cover mechanical engineering problems, electrical engineering problems, optimal parameter selection, economic load dispatch, and loss reduction. The algorithm is tested and evaluated against the particle swarm optimization algorithm and the firefly algorithm. To evaluate the ability of the dragonfly algorithm and the other participating algorithms, a set of traditional benchmarks (TF1-TF23) was used. Moreover, to examine the algorithm's ability to solve large-scale optimization problems, the CEC-C2019 benchmarks were used. A comparison between the algorithm and other metaheuristic techniques shows its ability to handle various problems. The outcomes reported in previous works that used the dragonfly algorithm, together with the results on the benchmark test functions, show that, compared with the participating algorithms (GWO, PSO, and GA), the dragonfly algorithm offers excellent performance, especially for small to intermediate applications. Moreover, the shortcomings of the technique and some future works are presented. The authors conducted this research to help other researchers who want to study the algorithm and use it to optimize engineering problems.
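For readers unfamiliar with the method, the sketch below is a heavily simplified, hypothetical Python rendering of the dragonfly update on the sphere benchmark (TF1): every dragonfly treats the whole swarm as its neighbourhood, the Levy-flight branch is omitted, and the behaviour weights are drawn more crudely than in the reference implementation. It only illustrates the five behaviours (separation, alignment, cohesion, attraction to food, distraction from the enemy), not the full algorithm or the paper's experiments.

```python
# Simplified dragonfly-style update on TF1 (sphere function), global neighbourhood,
# no Levy flight; weights and limits are illustrative choices, not the reference code.
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sum(x**2)                      # TF1, the sphere function
dim, n, iters, lb, ub = 10, 30, 200, -100.0, 100.0

X = rng.uniform(lb, ub, (n, dim))               # dragonfly positions
dX = np.zeros((n, dim))                         # step vectors

for t in range(iters):
    w = 0.9 - t * (0.5 / iters)                 # inertia weight, decreasing
    rand_w = lambda: 2 * rng.random() * (1 - t / iters)  # shrinking random weights
    s_w, a_w, c_w, f_w, e_w = (rand_w() for _ in range(5))

    fit = np.array([f(x) for x in X])
    food, enemy = X[fit.argmin()].copy(), X[fit.argmax()].copy()

    for i in range(n):
        S = -np.sum(X[i] - X, axis=0)           # separation
        A = dX.mean(axis=0)                     # alignment
        C = X.mean(axis=0) - X[i]               # cohesion
        F = food - X[i]                         # attraction to food
        E = enemy + X[i]                        # distraction from enemy (published formula)
        dX[i] = np.clip(s_w*S + a_w*A + c_w*C + f_w*F + e_w*E + w*dX[i], -6, 6)
        X[i] = np.clip(X[i] + dX[i], lb, ub)

print("best TF1 value found:", min(f(x) for x in X))
```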


2021 ◽  
pp. 1-14
Author(s):  
Zhaoming Lv ◽  
Rong Peng

The grasshopper optimization algorithm (GOA) has received extensive attention from scholars across various real-world applications in recent years because it avoids local optima better than many other metaheuristic algorithms. However, the small steps taken by grasshoppers lead to slow convergence, a shortcoming that must be addressed when solving larger-scale optimization problems. In this paper, an enhanced grasshopper optimization algorithm based on the difference between the solitarious and gregarious states is proposed. The algorithm consists of three stages: the first stage simulates the behavior of the solitarious population learning from the gregarious population; the second stage merges the learned population into the gregarious population and updates each grasshopper; and the third stage applies a local operator to the best position of the current generation. Experiments on benchmark functions show that the proposed algorithm is better than four representative GOA variants and other metaheuristic algorithms in the majority of cases. Experiments on the ontology matching problem show that the proposed algorithm outperforms all metaheuristic-based methods and beats most of the state-of-the-art systems.
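For context, here is a minimal, hypothetical Python sketch of the standard GOA position update that the enhanced algorithm builds on; the three-stage solitarious/gregarious mechanism proposed in the paper is not reproduced. The pairwise distance is remapped into a small interval before applying the social-force function s(r), a common implementation trick that keeps the forces well scaled.

```python
# Standard GOA position update on the sphere function (illustrative sketch only).
import numpy as np

rng = np.random.default_rng(1)
obj = lambda x: np.sum(x**2)                    # sphere test function
dim, n, iters, lb, ub = 10, 30, 300, -10.0, 10.0

X = rng.uniform(lb, ub, (n, dim))
target = X[np.array([obj(x) for x in X]).argmin()].copy()  # best solution so far

s = lambda r: 0.5 * np.exp(-r / 1.5) - np.exp(-r)   # social force, f = 0.5, l = 1.5

c_max, c_min = 1.0, 1e-5
for t in range(1, iters + 1):
    c = c_max - t * (c_max - c_min) / iters     # shrinks exploration over time
    X_new = np.empty_like(X)
    for i in range(n):
        social = np.zeros(dim)
        for j in range(n):
            if i == j:
                continue
            d = np.linalg.norm(X[j] - X[i])
            unit = (X[j] - X[i]) / (d + 1e-12)
            d_mapped = 2.0 + d % 2.0            # remap distance into [2, 4) before s(r)
            social += c * (ub - lb) / 2.0 * s(d_mapped) * unit
        X_new[i] = np.clip(c * social + target, lb, ub)
    X = X_new
    fits = np.array([obj(x) for x in X])
    if fits.min() < obj(target):
        target = X[fits.argmin()].copy()

print("best sphere value found:", obj(target))
```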

